Has AI surpassed humans at writing phishing emails? A team of researchers at IBM decided to put that to the test, and the results are now in: humans still have the edge, but AI is not far behind and may soon overtake them.

There has been a lot of press coverage recently about the capabilities of AI, and significant concern has been voiced about the threats AI-based systems pose. While there are legitimate long-term concerns that AI systems could turn against humans, one of the most pressing immediate cybersecurity concerns is that cybercriminals could use generative AI tools to devastating effect in their cyberattacks.

Many security researchers have demonstrated that generative AI chatbots such as ChatGPT can write perfect phishing emails, free of spelling mistakes and grammatical errors, and can also create convincing lures to trick humans into opening a malicious email attachment or visiting a malicious website. ChatGPT and other generative AI tools can also be used to write malware code, and there have been demonstrations of AI tools being used to create functional polymorphic malware and ransomware code. One of the key advantages of AI tools such as ChatGPT is the speed at which phishing emails, social engineering lures, and malware code can be generated, which could greatly improve the efficiency and even the quality of a range of malicious campaigns.

Tools such as ChatGPT have guardrails in place to prevent them from being used for malicious purposes such as writing malware or phishing emails. If you ask ChatGPT to write ransomware code or a phishing email, it will refuse, as doing so violates OpenAI’s terms and conditions of use. Those controls can, however, be easily bypassed, and generative AI tools developed specifically for cybercriminal use, such as WormGPT and FraudGPT, have already emerged.

Are Cybercriminals Using AI in Their Campaigns?

Security researchers have shown that it is possible to use generative AI tools for offensive cybersecurity purposes, but are cybercriminals actually using them? The evidence on the extent of their use is limited, but it is clear that they are being put to work. An August 2023 report by the U.S. cyber defense and threat intelligence firm Mandiant explored this question and found that threat actors are certainly interested in generative AI, but that adoption remains limited. The main area where these tools are currently being used is information operations, specifically to scale activity beyond threat actors’ inherent means and to produce more realistic content.

Financially motivated threat actors have been using generative AI, including deepfake and face swap tools, to increase the effectiveness of their social engineering, fraud, and extortion operations. The main focus currently is on social engineering, particularly phishing, where these tools generate convincing email lures and greatly reduce the time spent researching potential targets.


Are Generative AI Tools Better than Humans at Phishing?

An IBM X-Force team of social engineering experts recently went head-to-head with a generative AI chatbot to see which could create the better phishing email. The researchers would typically take around two days to construct a phishing campaign, with most of that time spent researching targets to identify potential social engineering lures, such as topics relevant to specific industries and suitable people to impersonate, and then crafting convincing emails.

They developed five simple prompts to get a generative AI chatbot to do the same, and the entire campaign was created in just five minutes, saving a would-be attacker around two days of work. The good news is that the researchers’ human-written email performed better, achieving a higher click rate and a lower reporting rate, but the margins were very small. Humans still have the edge when it comes to emotional manipulation in social engineering, but AI is not far behind and is likely to overtake humans at some point.

How to Combat AI-generated Phishing

Generative AI can save cybercriminals a great deal of time, and the content it generates is almost as good as human-written content, certainly good enough to fool many users. The best defense is twofold: provide more extensive and regular security awareness training to employees to improve resilience to phishing attempts, and put cybersecurity solutions in place that incorporate AI and machine learning tools.
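To make the idea of machine-learning-based phishing detection concrete, below is a minimal, illustrative sketch in Python. It is not any vendor's actual implementation: the tiny dataset, the feature choice (TF-IDF), and the model (logistic regression via scikit-learn) are assumptions for demonstration only. Production email filters train on vast labeled corpora and combine many more signals, such as headers, URLs, sender reputation, and attachment behavior.

```python
# Illustrative sketch only: text classification of the kind ML email
# filters build on. TF-IDF features + logistic regression, using a toy
# dataset that stands in for the large labeled corpora real filters use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set: 1 = phishing, 0 = legitimate.
emails = [
    "Your account has been suspended, verify your password immediately",
    "Urgent: invoice overdue, click here to avoid penalties",
    "Reminder: team meeting moved to 3pm on Thursday",
    "Quarterly report attached for your review before Friday",
]
labels = [1, 1, 0, 0]

# Word and word-pair features feed a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Score a new message: probability that it is phishing.
new_email = ["Action required: confirm your login details to keep access"]
print(model.predict_proba(new_email)[0][1])
```

A classifier like this outputs a phishing probability rather than a hard verdict, which is why real-world filters pair such scores with configurable thresholds, quarantine policies, and additional layers such as sandboxing.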

TitanHQ’s email security solution, SpamTitan, has AI and machine learning capabilities that are used to detect previously unseen phishing threats, including those generated by AI tools. SpamTitan also sends email attachments to an email sandbox for deep behavioral analysis, allowing it to detect and block zero-day malware threats. TitanHQ can also help with security awareness training: SafeTitan is an easy-to-use security awareness training and phishing simulation platform that has been shown to reduce susceptibility to phishing by up to 80%. Combined with multifactor authentication and endpoint detection tools, these solutions can help organizations improve their defenses against cyberattacks that leverage generative AI.