Toward the end of 2022, a new AI-based chatbot was released to the public and quickly proved popular for creating written content. Concern is now growing that the tool could be used by cybercriminals to craft new phishing lures and to rapidly code new malware.

ChatGPT was developed by OpenAI and released to the public on 30 November 2022 as part of the testing process. Within a few days of its release, the chatbot had reached a million users, who were using it to write emails, articles, essays, wedding speeches, poems, songs, and all manner of written content. The chatbot is built on OpenAI's GPT-3.5 family of large language models and can produce human-like text. The underlying model was trained on a massive dataset of text from the Internet and generates responses to questions or prompts that users enter into the web-based interface.
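As an illustration of this prompt-driven generation, the same model family can also be queried programmatically. Below is a minimal sketch using the OpenAI Python client of that era (pre-1.0); the model name, prompt, and parameters are illustrative assumptions, not a recommendation:

```python
# Minimal sketch of prompt-driven text generation via the OpenAI API.
# Assumes the pre-1.0 "openai" Python package and an API key in the
# OPENAI_API_KEY environment variable. The model name is illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3.5-era completion model
    prompt="Write a short thank-you email to a conference speaker.",
    max_tokens=200,
    temperature=0.7,
)

# The generated text is returned in the first completion choice.
print(response.choices[0].text.strip())
```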

While articles written with the chatbot are unlikely to win any awards, the content is grammatically correct, contains no spelling mistakes, and in many cases is far better than what you might expect from an average high school student. One problem is that while the content may superficially appear correct, the model reflects the biases of its training data and can include factual errors. That said, the generated content is reportedly accurate enough to pass the Bar exam for U.S. lawyers and the U.S. Medical Licensing Examination, although only just. Unsurprisingly, many school districts have already banned students from using ChatGPT.

To get ChatGPT to generate content, you just need to tell it what you want it to create. It is no surprise that the tool has proven so popular, considering it can write content better than many humans could. While there are many legitimate uses for AI chatbots that generate human-like text, there is growing concern that these natural language AI tools could be put to malicious purposes, such as social engineering scams, phishing campaigns, and business email compromise (BEC) attacks.

The potential for misuse has prompted many security researchers to put ChatGPT to the test to see whether it is capable of generating malicious emails. OpenAI has put certain controls in place to prevent misuse, but those controls can be bypassed. For instance, asking ChatGPT outright to write a phishing email generates a message saying the request violates the terms and conditions, but by experimenting with how the query is phrased it is possible to get the chatbot to produce the desired content.

Further, it is possible to write a phishing email and then generate many different variations that are all unique, grammatically correct, and free from spelling errors. The text is human-like and far better than many of the phishing emails used in real campaigns. The speed of generation has allowed security researchers to spin up an entire email chain for a convincing spear phishing attack, and it has been demonstrated that the technology can be rapidly trained to mimic a specific style of writing, highlighting its potential for use in convincing BEC attacks. WithSecure conducted these tests before ChatGPT's public release and before additional anti-misuse controls were implemented, and continued the research after the restrictions were added; the results clearly demonstrate the potential for misuse.


The potential for misuse does not stop there. The technology underlying the chatbot can also be used to generate code, and researchers have demonstrated that ChatGPT and the related Codex model are capable of generating functional malware. Researchers at CyberArk were able to bypass the restrictions and generate a new strain of polymorphic malware, then rapidly produce many unique variations of the code. Researchers at Check Point similarly generated malicious code; in fact, they recreated a full infection chain, from the spear phishing email to a malicious Excel document that downloads a payload, and the payload itself – a reverse shell.

At present, generating working malicious code requires well-crafted textual prompts, which calls for a certain level of knowledge, but even in its current form the technology could rapidly accelerate malware coding and improve the quality of phishing emails. There are already signs that the tool is being misused: posts on hacking forums include samples of malware allegedly written with the technology, such as a new information stealer and an encryptor for ransomware.

With malicious emails likely to be generated using these tools, and with the potential for new malware to be rapidly coded and released, it has never been more important to ensure that email security defenses are up to scratch. Email security solutions capable of detecting computer-generated malware should be put in place. SpamTitan includes signature-based detection mechanisms for identifying known malware, along with email sandboxing. The sandbox is an isolated, secure testing environment where suspicious email attachments are subjected to behavioral analysis. This next-gen sandbox means SpamTitan can detect zero-day malware variants that would otherwise go undetected because their signatures have not yet been added to blocklists. SpamTitan also uses machine learning to detect zero-day phishing threats based on deviations from the typical messages a company receives.
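To make the layered approach concrete, here is a minimal conceptual sketch (not SpamTitan's actual implementation; the hash set and sandbox function are hypothetical stand-ins) of how a signature check can hand off unknown attachments to behavioral analysis:

```python
# Conceptual sketch of layered attachment triage: a signature
# (hash blocklist) check first, with unknown files handed to an
# isolated sandbox for behavioral analysis. All names are hypothetical.
import hashlib

KNOWN_BAD_SHA256: set[str] = set()  # loaded from threat-intel feeds in practice

def submit_to_sandbox(data: bytes) -> str:
    """Placeholder: a real sandbox detonates the file in isolation,
    observes its behavior, and returns 'malicious' or 'benign'."""
    return "benign"

def triage_attachment(data: bytes) -> str:
    digest = hashlib.sha256(data).hexdigest()
    if digest in KNOWN_BAD_SHA256:
        return "block"  # known malware caught by its signature
    # Zero-day variants have no signature yet, so behavioral
    # analysis in the sandbox is the fallback.
    return "block" if submit_to_sandbox(data) == "malicious" else "deliver"

print(triage_attachment(b"%PDF-1.7 harmless test bytes"))
```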

TitanHQ also recommends implementing multifactor authentication, web filtering to block access to malicious websites, and security awareness training for employees. The quality of phishing emails may improve, but there will still be red flags that employees can be trained to recognize.
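As a simple illustration of the web filtering concept (a conceptual sketch, not TitanHQ's implementation; the blocklist entries are hypothetical), a filter only needs to compare a requested host against a list of known-malicious domains:

```python
# Conceptual sketch of domain-based web filtering. The blocklist
# entries are hypothetical; real filters use curated threat feeds.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"malicious.example", "phish.example"}

def is_allowed(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    # Block the listed domain and any of its subdomains.
    return not any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(is_allowed("https://login.phish.example/reset"))  # False -> blocked
print(is_allowed("https://example.com/"))               # True  -> allowed
```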