
The Indian Computer Emergency Response Team (CERT-In) published a new advisory on the security implications of AI language-based applications.  

In the advisory, the cyber security agency under the Ministry of Electronics and Information Technology said that AI language-based models such as ChatGPT, Bing AI and Bard AI are gaining wide recognition and being discussed for their beneficial impact, but that they can also be used by threat actors to target individuals and organizations.  

CERT-In listed various uses of AI language-based applications in its advisory, noting that people are using them to understand and interpret cyber security contexts, review security events and logs, and analyse malicious code and malware samples. 

The advisory stated that these applications can potentially be used for vulnerability scanning, translating security code from one programming language to another or converting code into natural language, performing security audits of code, vulnerability assessment and penetration testing (VAPT), or integration with SOC and SIEM tools for monitoring, reviewing and generating alerts. 

However, according to CERT-In, AI-based applications can also be used by threat actors for various malicious activities, such as: 

  • Writing malicious code to exploit a vulnerability, conduct scanning, and perform privilege escalation and lateral movement, or to construct malware or ransomware for a targeted system.  
  • Generating text output that reads as if written by a human. 
  • Requesting promotional emails, shopping notifications or software-update messages in the attacker's native language and receiving a well-crafted response in English. 
  • Creating fake websites and web pages, using domains similar to those of AI-based applications, to host and distribute malware to users through malicious links or attachments. 
  • Creating fake applications impersonating AI-based applications. 
  • Using AI language models to scrape information from the internet, such as articles, websites, news and posts, and potentially collecting personally identifiable information without the owners' explicit consent. 

CERT-In also stated advisory measures that can be followed to minimize adversarial threats from AI applications: 

  • Educate developers and users about the risks and threats associated with interacting with AI language models. 
  • Verify domains and URLs to identify those impersonating AI language-based applications. 
  • Implement appropriate controls to preserve the security and privacy of data. 
  • Ensure that the generated text is not used for illegal or unethical activities. 
  • Use content filtering and moderation techniques within the organization to prevent the dissemination of malicious links, inappropriate content, or harmful information through such applications. 
  • Secure systems and conduct regular security audits and assessments of them. 
  • Organizations may continuously monitor user interactions with AI language-based applications for suspicious or malicious activity. 
  • Organizations may prepare an incident response plan and establish the set of activities that may be followed in case of an incident. 
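The domain-verification measure above can be illustrated with a minimal sketch that flags look-alike (typosquatted) domains by string similarity. The domain list and the 0.8 threshold are illustrative assumptions for this example, not part of the CERT-In advisory:

```python
from difflib import SequenceMatcher

# Illustrative list of legitimate AI-application domains (an assumption,
# not an official allow-list).
KNOWN_DOMAINS = ["chat.openai.com", "bard.google.com", "bing.com"]

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity ratio between two domain strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_lookalike(domain: str, threshold: float = 0.8) -> bool:
    """Flag a domain that closely resembles, but does not equal, a known domain."""
    return any(
        similarity(domain, known) >= threshold and domain.lower() != known
        for known in KNOWN_DOMAINS
    )

# "chat-openai.com" differs from "chat.openai.com" by one character,
# so it is flagged; the genuine "bing.com" is not.
print(flag_lookalike("chat-openai.com"))  # True
print(flag_lookalike("bing.com"))         # False
```

In practice, organisations would combine such heuristics with certificate checks and threat-intelligence feeds rather than rely on string similarity alone.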
