There is growing evidence that cybercriminal groups are leveraging artificial intelligence in their cyberattacks, specifically large language models (LLMs) such as ChatGPT, despite the restrictions OpenAI has put in place. Some LLMs, such as WormGPT, are marketed directly to cybercriminals. WormGPT is a blackhat AI tool developed specifically for malicious use that can perform similar tasks to ChatGPT but without any ethical restrictions. The tool can generate convincing phishing and business email compromise emails in flawless English, free from the spelling mistakes and grammatical errors often found in these emails.

It is not only cybercriminal groups that are using these AI tools. Nation state hacking groups are exploring how these tools can help them gain initial access to targeted networks. Recently published research from Microsoft and OpenAI confirmed that threat actors from Russia, China, Iran, and North Korea are using AI tools to support their malicious activities. Microsoft and OpenAI found the most common uses of LLMs by nation state actors were for translation, finding coding errors, running basic coding tasks, and querying open-source information. While it does not appear that they are using LLMs to generate new methods of attack or write new malware variants, these tools are being used to improve and accelerate many aspects of their campaigns.

The threat actor tracked by Microsoft as Crimson Sandstorm, which is affiliated with the Islamic Revolutionary Guard Corps (IRGC), a multi-service primary branch of the Iranian Armed Forces, has been using LLMs to improve its phishing campaigns to gain initial access to victims’ networks. Microsoft and OpenAI also report that the hacking group has been using LLMs to enhance its scripting techniques to help it evade detection. The North Korean APT group Emerald Sleet is well known for conducting spear phishing and social engineering campaigns and is using LLMs to research think tanks and key individuals that can be impersonated in its spear phishing campaigns. Threat groups linked to the People’s Republic of China, such as Charcoal Typhoon and Salmon Typhoon, have been using LLMs to obtain information on high-profile individuals, regional geopolitics, US influence, and internal affairs, and to generate content for socially engineering targets. OpenAI says it has terminated the accounts of five malicious state actors and has worked with Microsoft to disrupt their activities. The two companies have also been sharing data with other AI service providers so they can act to prevent malicious uses of their tools.

It should come as no surprise that cybercriminals and nation state actors are using AI to improve the productivity and effectiveness of their campaigns and are probing the capabilities of AI-based tools. While this is a cause for concern, there are steps that businesses can take to avoid falling victim to AI-assisted attacks. The best way to combat AI-assisted attacks is to leverage AI for defensive purposes. SpamTitan has AI and machine learning capabilities that can detect zero-day and AI-assisted phishing, spear phishing, and business email compromise attacks, providing a better defense against AI-assisted email campaigns.
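Commercial filters such as SpamTitan combine trained machine learning models with many proprietary signals. As a rough, purely illustrative sketch of the simplest end of content scoring (a hypothetical rule-based indicator check, not SpamTitan’s actual method), a few classic phishing markers can be flagged like this:

```python
import re

# Hypothetical indicator patterns for illustration only; real products
# combine far more signals with trained machine learning models.
INDICATORS = {
    "urgency": re.compile(r"\b(urgent|immediately|within 24 hours|act now)\b", re.I),
    "credential_lure": re.compile(r"\b(verify your account|password|login|sign in)\b", re.I),
    "payment_request": re.compile(r"\b(wire transfer|invoice|payment|gift cards?)\b", re.I),
    "suspicious_link": re.compile(r"https?://\d{1,3}(\.\d{1,3}){3}", re.I),  # raw-IP URLs
}

def phishing_score(email_text):
    """Return (score, matched_indicators) for a plain-text email body.

    The score is simply the fraction of indicator categories that matched.
    """
    hits = [name for name, rx in INDICATORS.items() if rx.search(email_text)]
    return len(hits) / len(INDICATORS), hits
```

Note that AI-generated phishing emails are precisely the ones that defeat the simplest of these signals (spelling and grammar cues), which is why layered, ML-driven detection matters.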

With fewer spelling mistakes and grammatical errors in phishing emails, businesses need to ensure they provide their workforce with comprehensive training to help employees recognize email- and web-based attacks. The SafeTitan security awareness training and phishing simulation platform is an ideal choice for conducting training and phishing simulations and improves resilience to a range of security threats. TitanHQ’s data shows that susceptibility to phishing attacks can be reduced by up to 80% through SafeTitan training and phishing simulations. Given the quality of the phishing content that AI tools can generate, businesses should also ensure that all accounts are protected with multi-factor authentication, that cybersecurity best practices are followed, and that cybersecurity frameworks are adopted. The most important advice we can give is to act now and proactively improve your defenses, as malicious uses of AI are only likely to increase.
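The multi-factor authentication recommended above commonly uses time-based one-time passwords (TOTP, RFC 6238). As a minimal sketch of how an authenticator app derives a code from a shared secret, assuming a base32-encoded secret and the default six-digit, 30-second parameters:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of time steps since the Unix epoch, packed as a big-endian counter.
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)
    # HMAC-SHA1 over the counter, then RFC 4226 dynamic truncation.
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Against the RFC 6238 test secret (the ASCII string “12345678901234567890”, base32-encoded), this reproduces the published test vectors, e.g. the code “287082” at Unix time 59. Because each code is valid for only one short window, a phished password alone is not enough to take over the account.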