AI Chatbots – significant opportunities must be balanced against significant risks

Published by: SecAlliance
Published on: June 12, 2023

Nicholas Vidal from SecAlliance discusses how much of a security risk chatbots pose.

The use of generative AI tools for malicious purposes is a growing concern within the cybersecurity community, and security researchers are raising the alarm about the risks these tools pose in the hands of malicious actors.

While some have sought to downplay these concerns, recent developments in large language model (LLM)-enabled generative AI are changing the state of play.

While chatbots have been available for years, recent design breakthroughs have driven a rapid increase in their sophistication. In November 2022, OpenAI released ChatGPT, a generative pre-trained transformer (GPT) model that quickly gained attention for its ability to produce highly detailed and complex responses to a wide variety of text prompts.

ChatGPT reached over 100 million monthly users within two months of its public release -- a faster rate of adoption than Instagram, TikTok, or Facebook achieved. Competing models, such as Microsoft's Bing Chat and Google's Bard, have also entered the market to great interest.

This has led to what many have labelled an AI arms race, not only between Western companies seeking to bring these products to market, but also between nation states, primarily the US and China, both of which aim to cement their advantage in this strategic domain.

One of the primary concerns is that generative AI tools, such as OpenAI's GPT-3, could give low-skilled threat actors the ability to generate low- to moderate-complexity malicious code without significant programming experience or resources. By lowering the barrier to entry, this could increase the overall volume of cyberattacks and erode the security of the systems they target.

Another concern is that generative AI tools could enable motivated threat actors to generate semi-reliable text at scale for tasks associated with phishing campaigns and coordinated inauthentic behaviour operations, making it easier to trick individuals into divulging sensitive information or engaging in harmful behaviour.

Traditionally, it was often easy to spot a phishing message by its spelling and grammar mistakes or misuse of English vocabulary. But given GPT models' remarkable ability to generate highly convincing human-language output from relatively simple prompts, non-native English-speaking threat actors may now be able to draft clearer email text at scale that is far harder to identify as fraudulent.

Meanwhile, the content filtering mechanisms developed by AI firms like OpenAI have been shown to be easily circumvented, either through prompt engineering or by interfacing directly with the GPT API. Despite efforts to prevent the misuse of generative AI tools, a significant risk remains that they will be used for malicious purposes.
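The second route is worth spelling out. The sketch below is a minimal illustration, assuming the pre-1.0 openai Python package, an OPENAI_API_KEY environment variable, and the gpt-3.5-turbo model (all assumptions for the example, not details from the article): when an integrator calls the API directly, OpenAI's Moderation endpoint is a separate, opt-in request rather than a filter automatically applied to every completion, which is one reason product-level safeguards do not necessarily travel with the underlying model.

```python
# Minimal sketch, assuming the pre-1.0 "openai" Python package and an
# OPENAI_API_KEY environment variable (assumptions, not from the article).
# The Moderation endpoint is a separate, opt-in call: nothing forces an
# integrator who calls the completion API directly to run it first.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def prompt_is_flagged(prompt: str) -> bool:
    """Return True if OpenAI's Moderation endpoint flags the prompt."""
    result = openai.Moderation.create(input=prompt)
    return result["results"][0]["flagged"]

prompt = "Draft a short reminder asking staff to rotate their passwords."
if prompt_is_flagged(prompt):
    print("Prompt rejected by the moderation check.")
else:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response["choices"][0]["message"]["content"])
```

The specific endpoint matters less than the architecture: safety controls layered onto a consumer product do not automatically constrain every route to the model behind it.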

While researchers have demonstrated generative AI use cases that could produce high-complexity malware, including so-called “polymorphic” variants, such use cases are unlikely to be widely leveraged in the near term, given the quality-control issues in current-generation LLMs and the level of programming ability still required to execute a successful campaign.

That said, since November 2022, white-hat researchers and hacking forum users have claimed to use ChatGPT to produce Python-based infostealers, encryption tools, cryptoclippers and cryptocurrency drainers, crypters, and malicious VBA code, among many other use cases.

Despite the current technical limitations of generative AI tools, the potential impact of their malicious use cannot be ignored. As these tools become more sophisticated, and threat actors become more experienced in exploiting them for malicious purposes, there will be an increasing need for AI-enabled detection and response capabilities.

Even so, it is essential to remain vigilant and proactive in developing measures to mitigate these risks. This may include building new detection and response capabilities that can help identify and neutralise threats arising from the misuse of generative AI tools like ChatGPT.
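As a purely illustrative sketch of what such a capability might look like (not a description of any existing product or of SecAlliance's tooling), the snippet below asks an LLM to score an inbound email for common phishing indicators before it reaches an analyst. The openai package, the gpt-3.5-turbo model, the prompt wording, and the scoring threshold are all assumptions made for the example.

```python
# Illustrative sketch only: LLM-assisted phishing triage. Assumes the pre-1.0
# "openai" Python package and an OPENAI_API_KEY environment variable; the
# prompt, model, and threshold below are placeholders, not a production design.
import json
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

TRIAGE_PROMPT = (
    "You are an email security analyst. Rate the message below for phishing "
    "indicators (urgency, credential requests, mismatched links, spoofed "
    'branding). Reply only with JSON: {"score": <0-100>, "reasons": [...]}.\n\n'
)

def triage_email(body: str) -> dict:
    """Ask the model for a phishing-likelihood score for a single email body."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": TRIAGE_PROMPT + body}],
        temperature=0,
    )
    # The model may deviate from the requested JSON format; production code
    # would validate and retry rather than parse blindly.
    return json.loads(response["choices"][0]["message"]["content"])

verdict = triage_email(
    "Your mailbox is almost full. Sign in here within 24 hours to avoid "
    "deletion: http://example.com/login"
)
if verdict["score"] >= 70:  # arbitrary illustrative threshold
    print("Quarantine for analyst review:", verdict["reasons"])
```

Approaches along these lines carry their own limitations in cost, latency, and model error, and would complement rather than replace conventional email security controls.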