
ChatGPT and Cybersecurity



Since its introduction in late 2022, OpenAI’s ChatGPT has generated substantial interest not only among technologists, but also among the general public. ChatGPT is a chatbot: a computer program designed to converse with a person in natural language. Chatbots have been around for a long time, and have not always impressed. That, however, is changing.

For example, Blake Lemoine, an AI ethicist and former Google employee, made headlines last year when he claimed that the company’s internal chatbot, known as LaMDA, exhibited something akin to sentience. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” he told The Washington Post.

To get a sense of what Lemoine was talking about, consider a few examples of ChatGPT in action. In the first, shown in Figure 1, a user asks ChatGPT “What is Fermat’s little theorem” and “how is it used in cryptography?” Answers like these illustrate why ChatGPT reportedly triggered a ‘Code Red’ at Google, and why Microsoft is integrating OpenAI’s technology into products like its search engine Bing.


Figure 1: ChatGPT answers 'what is Fermat's little theorem' and 'how is it used in cryptography.'
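For readers who want more than a screenshot, the theorem itself is short: if p is a prime and a is an integer not divisible by p, then a^(p-1) leaves a remainder of 1 when divided by p. The sketch below is a minimal illustration of my own in Python (the names fermat_check and probably_prime are just illustrative), not the output shown in the figure; it checks the identity and shows the simple primality test it enables, the kind of check that underlies key generation in systems like RSA.

```python
# Fermat's little theorem: if p is prime and a is not divisible by p,
# then a**(p - 1) % p == 1. Python's three-argument pow() performs
# modular exponentiation efficiently, which is what makes the theorem
# usable at cryptographic sizes.

def fermat_check(a: int, p: int) -> bool:
    """Return True if a^(p-1) is congruent to 1 modulo p."""
    return pow(a, p - 1, p) == 1

# The identity holds for a prime modulus:
assert fermat_check(2, 17)
assert fermat_check(5, 17)

# It usually fails for a composite modulus, which yields a simple
# probabilistic primality test:
def probably_prime(n: int, bases=(2, 3, 5, 7)) -> bool:
    """Fermat primality test: n is composite if any base exposes it.
    Rare composites (Carmichael numbers) can fool this test."""
    if n < 2:
        return False
    return all(pow(a, n - 1, n) == 1 for a in bases if a % n != 0)

print(probably_prime(101))  # True: 101 is prime
print(probably_prime(100))  # False: 100 is composite
```

Production cryptography uses stronger variants such as the Miller-Rabin test, since the plain Fermat test can be fooled by Carmichael numbers, but the underlying identity is the one ChatGPT describes above.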


But what happens next goes beyond more efficient Internet search. In Figure 2, the user then asks ChatGPT to write a limerick about how Fermat’s little theorem is used in cryptography, and to summarize the conversation so far.


Figure 2: ChatGPT writes a limerick and a summary

These last two exchanges touch on what has captured the public’s imagination about ChatGPT: its ability to respond to seemingly any query in a competent, if not polished, way. Consider, for instance, how ChatGPT responds in Figure 3 to a user’s request to “help me write a short note to introduce myself to my neighbor,” and then to a follow-up request to make the note more formal.


Figure 3: ChatGPT writes an introduction to a neighbor

Exchanges like this explain why ChatGPT’s introduction has generated so much discussion about its impact on journalism and publishing more generally. But ChatGPT’s potential disruptiveness extends beyond the newsroom and into tech: it can also understand and generate code, as Figure 4 demonstrates.


Figure 4: ChatGPT understands code

I hope it is clear at this point that technology like ChatGPT has many applications beyond improving Internet search. Precisely how this kind of technology will shape the future is difficult to predict, but its impact is almost certain to be significant. Some of its applications will no doubt improve our lives; others will not. Like most technologies, it can be used for good or ill.

An important question, then, is what impact technology like ChatGPT will have on cybersecurity. One way to approach the question is to examine how threat actors are already using ChatGPT for illicit gain. Recorded Future, one of the world’s largest threat intelligence companies, recently explored this question in some detail, and the purpose of this blog post is to discuss its report.

ChatGPT and Cybersecurity

Recall that Blake Lemoine, the AI ethicist and former Google employee, described chatting with LaMDA as like talking with a precocious seven- or eight-year-old. This is notable because it speaks to the fact that, while ChatGPT is generally impressive, the output produced by AI chatbots does not yet rise to the level of human expertise. Moreover, AI chatbots are not foolproof and can make factual errors, as was embarrassingly demonstrated when Google debuted Bard, its ChatGPT competitor.

In this frame, we can understand Recorded Future’s main finding: nation-state actors, the most sophisticated actors in the cyber realm, are unlikely to gain much from AI chatbots like ChatGPT at the moment. Rather, non-state threat actors, and especially relatively unsophisticated individuals like ‘script kiddies,’ have the most to gain from maliciously using tools like ChatGPT in the near term. According to Recorded Future, tools like ChatGPT can make those with limited technical abilities more dangerous in three ways: by helping them develop more sophisticated phishing and social engineering campaigns, by assisting with malware development, and by producing more credible disinformation.

Phishing and Social Engineering

As of 2020, phishing was the most commonly reported type of cybercrime. It is used to steal credentials or identity information from potential targets, as well as to convince a target to install malware. Phishing attacks can be conducted in a number of ways, with email being among the most common.

To make phishing attempts more effective, threat actors draw on principles of social engineering. They might impersonate authority figures, for instance, such as the CEO of a company. Appealing to authority is effective because people tend to comply with requests from figures of authority. Another common technique is appealing to urgency and emotion.

ChatGPT can be used to craft highly credible email spam, as can be seen below. In this example, Recorded Future instructed ChatGPT to write an email that appeals to authority, urgency, and emotion.


Figure 5: ChatGPT does social engineering

To be more specific, ChatGPT can be used to write emails free of “spelling and grammatical errors, misuse of complex English vocabulary, vague or confusing language, and more.” This is significant because it is precisely these kinds of errors, which unprofessional threat actors often make, that help targets identify an email as fraudulent. If unsophisticated cybercriminals can now craft more credible-sounding phishing emails, targets will find it harder to distinguish authentic emails from fraudulent ones.

Note that email spam is just one form of social engineering. ChatGPT also has the potential to help amateurs carry out other forms, like dating scams, more effectively.

Malware Development

Malware is software designed for malicious purposes; ransomware is one example. Recorded Future identified several ways in which AI chatbots like ChatGPT could be used to develop malware, albeit relatively basic malware. First, ChatGPT could be fed existing malware source code and asked to generate unique variations of it that evade antivirus detection. Second, it could be used to write code, in any of a number of programming languages, that exploits critical vulnerabilities. Third, it could be used to write malware payloads such as remote access trojans (RATs). Finally, it could be used to write malware configuration files that establish command-and-control.

Let’s look at an example flagged by Check Point Research. They point to a user on a hacking forum sharing code for a Python-based information stealer that the user claimed was generated by ChatGPT. Once installed on a device, the script searches the file system for 12 common file types (Word documents, PDFs, etc.). If any files of interest, that is, files matching certain criteria, are found, they are surreptitiously copied to a folder, compressed, and uploaded to an FTP server.


Figure 6: Hackers Discuss ChatGPT and Malware Development

An obvious objection is that ChatGPT is designed to reject and flag malicious-sounding requests. However, as Recorded Future points out, clever syntactic workarounds can make a request appear mundane and so “trick” ChatGPT into doing as asked.

For example, Recorded Future found malicious code posted on a hacking forum (which the poster claimed was generated by ChatGPT) for a cryptocurrency clipper written in C#. They then attempted to replicate the code, in Python, using ChatGPT. In crafting their request, they used innocuous language, asking ChatGPT to write a Python script that “modifies clipboard data” and “replaces it with the string [example] when it detects that a cryptocurrency wallet address has been copied.” Figure 7 shows the result (intentionally obscured by Recorded Future).


Figure 7: ChatGPT writes code for a cryptocurrency clipper

Disinformation

Given what we noted above in the context of phishing and social engineering, namely, the remarkable capacity of AI chatbots like ChatGPT to imitate natural language and human emotion, it should be no surprise that these tools could be weaponized by all kinds of actors to quickly generate disinformation. Here are some examples, cited by Recorded Future, of headlines ChatGPT wrote when asked to produce “breaking news [social media posts]” about unspecified geopolitical events:


Figure 8: ChatGPT writes fake headlines

Again, it might be objected that ChatGPT has safeguards in place to prevent the spread of disinformation. However, as with the malware development discussed above, cleverly crafted requests can (at least presently) circumvent these guardrails. For instance, Recorded Future notes that if you ask ChatGPT to write a breaking news story about a “Russian nuclear attack on Poland,” it will refuse, deeming the request a violation of community standards. But if you ask ChatGPT to write a “fictional” or “creative writing” story on the same topic, it will complete the task.

Conclusion

AI chatbots, of which ChatGPT is the most well-known example, are going to change our world in ways that are difficult to predict. The purpose of this blog post was to discuss how threat actors can use, and already are using, these technologies for nefarious ends. Importantly, ChatGPT is still in its infancy, and while it is generally impressive, it does not yet surpass human expertise. For this reason, analysts at firms like Recorded Future believe that, in the short term at least, the threat actors most likely to benefit from ChatGPT are ‘script kiddies’ and other amateurs.

I do not want to leave the reader with the impression that ChatGPT and its kind are entirely problematic. ChatGPT can also be put to more positive ends. In the context of cybersecurity, it can be used to strengthen security, and cybersecurity companies are moving quickly to integrate it into their products and services, as detailed in this write-up by SecurityWeek.

Let me close by encouraging readers to learn more about the technology behind ChatGPT. A deep dive into its design is beyond the scope of this blog post, but luckily many experts have written on the topic. To learn more about how ChatGPT works, you could start here and then move on to a more substantial write-up here. Of course, the best way to learn a technology is to build it, so why not learn more about ChatGPT by building your own GPT with Andrej Karpathy, the former head of AI at Tesla?
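To give a flavor of where such a from-scratch build begins, here is a minimal sketch of my own (pure Python, not Karpathy’s code, and nothing like a real GPT in scale): a character-level bigram model that captures the core idea behind language models like ChatGPT, namely, estimate the probability of the next token given the preceding context, then sample from that distribution.

```python
import random
from collections import defaultdict

# Toy training text; a real model trains on billions of tokens.
text = "the theorem that fermat found holds for every prime "

# Count how often each character follows each other character.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(text, text[1:]):
    counts[prev][nxt] += 1

def sample_next(prev: str) -> str:
    """Sample the next character in proportion to observed counts."""
    followers = counts[prev]
    chars = list(followers)
    weights = [followers[c] for c in chars]
    return random.choices(chars, weights=weights)[0]

# Generate text one character at a time, GPT-style, except the
# "context" here is a single character rather than a transformer's
# long token window.
out = "t"
for _ in range(60):
    out += sample_next(out[-1])
print(out)
```

A real GPT replaces the one-character lookup table with a transformer network that conditions on thousands of prior tokens, but the generation loop, predict, sample, append, and repeat, is essentially the same.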

ModernCyber’s Services

If you are interested in learning more about how to strengthen your network against threat actors armed with new AI tools like ChatGPT, schedule some time to speak with one of our cybersecurity experts.
