ChatGPT and the Emerging Cyber Threats

As with all technologies, how artificial intelligence is used determines the impact it will have on society as a whole. AI, and in particular the generative AI behind ChatGPT, has seemingly stolen all the technology headlines of late. The technology has proven advantageous to the cybersecurity industry, but it has also proven to be a risk to both security firms and the IT infrastructures they are tasked with protecting.

This article focuses primarily on the risks posed by the technology, but it will also cover some of the benefits it can unlock for those defending networks and endpoints. Before going down those rabbit holes, however, some definitions are in order.

Starting with AI, one of the best working definitions comes from John McCarthy’s paper “What Is Artificial Intelligence?”, in which McCarthy states:

“It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”

With that in mind, IBM’s explanation of the current state of AI is incredibly apt:

“At its simplest form, artificial intelligence is a field, which combines computer science and robust datasets, to enable problem-solving. It also encompasses sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence. These disciplines are comprised of AI algorithms that seek to create expert systems which make predictions or classifications based on input data.”

And,

“Over the years, artificial intelligence has gone through many cycles of hype, but even to skeptics, the release of OpenAI’s ChatGPT seems to mark a turning point. The last time generative AI loomed this large, the breakthroughs were in computer vision, but now the leap forward is in natural language processing. And it’s not just language: Generative models can also learn the grammar of software code, molecules, natural images, and a variety of other data types.”

LMNTRIX’s Experience with ChatGPT

At LMNTRIX we have followed recent developments closely and have noticed that ChatGPT, and similar incarnations, can be used to supplement cyber research. Done properly, this can drastically reduce research times by handling tasks that benefit from some level of automation. Tasks like decoding Base64-encoded shellcode and discovering a known cross-site scripting vulnerability can be done easily and in record time, as the sketch below illustrates.
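To give a concrete sense of the sort of chore that benefits from automation, below is a minimal Python sketch of the Base64-decoding step described above. The sample payload and function name are purely illustrative, not taken from any real malware.

```python
import base64

def decode_shellcode(b64_blob: str) -> bytes:
    """Decode a Base64-encoded shellcode blob into raw bytes for analysis."""
    return base64.b64decode(b64_blob)

# Hypothetical sample: the Base64 encoding of b"\x90\x90\xcc" (two NOPs and an INT3)
sample = "kJDM"
raw = decode_shellcode(sample)
print(raw.hex())  # prints: 9090cc
```

An analyst would typically feed the resulting bytes into a disassembler rather than execute them; the point is that the boilerplate around chores like this is exactly what an assistant like ChatGPT can churn out in seconds.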

Carlo Minassian, CEO of LMNTRIX, said he and his team at the LMNTRIX CDC were able to get the program to perform several offensive and defensive cybersecurity tasks. Notably, researchers at the LMNTRIX CDC were able to task the AI with helping them write ransomware for the Windows operating system. If that sounds like something an attacker could exploit, spoiler alert: it can be. In the hands of the good guys, however, it helped the LMNTRIX team reverse engineer malware code, despite specific terms of use that prohibit the practice.

LMNTRIX staff also noted that the security risks posed by ChatGPT are not direct risks in the way the deployment of malware is. Rather, the risk lies in the vast amounts of information the tool can produce, which can be used maliciously to supplement attacks.

The Risks Associated With ChatGPT

As with all tools, it is the end user who determines whether it will be used for good or, dare I say it, evil. While security researchers and those tasked with defending IT infrastructure can use the technology for good, we have already begun to see threat actors put it to less noble purposes. So far, we have seen threat actors use ChatGPT to generate AI-powered phishing scams and to dupe the technology into writing malicious code.

AI-Generated Phishing Scams

Phishing remains the most common Internet threat users face. Phishing emails have traditionally been relatively easy to detect, as they are often littered with misspellings, poor grammar, and generally awkward phrasing, especially when the threat actor’s first language is not English. ChatGPT allows a threat actor to generate a campaign that is error-free, making detection a harder prospect than before. The tool’s ability to converse seamlessly, without spelling, grammar, or verb-tense mistakes, makes it seem as if there could very well be a real person on the other side of the chat window. For those composing the lures that are critical to a phishing campaign’s success, ChatGPT can be considered a godsend.

Just as threat actors have looked to leverage the tech, security researchers have already developed a “ChatGPT detector”. Ideally, such software would be used to automatically screen and flag emails that are AI-generated; a rough sketch of that idea follows below. Additionally, all employees need to be routinely trained and retrained on the latest cybersecurity awareness and prevention skills, with specific attention paid to AI-supported phishing scams. That said, the onus is on both the private sector and the wider public to continue advocating for advanced detection tools, rather than focusing only on AI’s expanding capabilities and benefits.
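As a rough illustration of how such screening might plug into a mail pipeline, here is a minimal Python sketch. It assumes the Hugging Face transformers library and the publicly released roberta-base-openai-detector model; that model was trained on GPT-2 output, so its verdicts on ChatGPT text are indicative at best, and the threshold used here is an arbitrary assumption.

```python
# Minimal sketch: flag possibly AI-generated email bodies for review.
# Assumes `pip install transformers torch` and the public
# roberta-base-openai-detector model (trained on GPT-2 output, so treat
# its verdicts on ChatGPT text as indicative only).
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

def flag_if_ai_generated(email_body: str, threshold: float = 0.9) -> bool:
    """Return True when the detector labels the text as machine-generated."""
    result = detector(email_body, truncation=True)[0]
    # Label names ("Fake" = machine-generated) follow the public model card.
    return result["label"] == "Fake" and result["score"] >= threshold

if flag_if_ai_generated("Dear customer, your account requires immediate verification..."):
    print("Quarantine for manual review")
```

In practice such a classifier would be one signal among many (sender reputation, link analysis, and so on), not a verdict on its own.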

Generating Malicious Code

In this regard, it is important to note that ChatGPT includes safeguards meant to prevent it from generating code it perceives as malicious. As the Harvard Business Review notes:

“ChatGPT is proficient at generating code and other computer programming tools, but the AI is programmed not to generate code that it deems to be malicious or intended for hacking purposes. If hacking code is requested, ChatGPT will inform the user that its purpose is to ‘assist with useful and ethical tasks while adhering to ethical guidelines and policies.’”    

However, it is possible to trick ChatGPT into supplying malicious code to a threat actor, and there have already been several instances of hackers using the tool to that end. In one instance, a post appeared on a well-known underground hacking forum from a hacker claiming to be testing the tool by recreating malware strains, much as a researcher might. A deeper dive revealed that ChatGPT had been used to write their initial scripts. On the same forum, another user uploaded Python code that he claimed could encrypt files and had been created using ChatGPT. The threat actor claimed these hacking tools and code were the first he had ever created.

In another instance, a hacker demonstrated how ChatGPT might be used to establish a Dark Web marketplace. The hacker revealed that he had used the tool to develop code that queries a third-party API for the latest Bitcoin prices, which could serve as part of the market’s payment mechanism.
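The actor’s actual code and API were not disclosed, but fetching live Bitcoin prices from a third-party feed is trivial. The Python sketch below uses CoinGecko’s public endpoint purely as a stand-in example of such an API.

```python
# Stand-in example of a third-party price lookup (the actor's real API is unknown).
import requests

def latest_bitcoin_price_usd() -> float:
    """Fetch the current BTC/USD spot price from CoinGecko's public API."""
    resp = requests.get(
        "https://api.coingecko.com/api/v3/simple/price",
        params={"ids": "bitcoin", "vs_currencies": "usd"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["bitcoin"]["usd"]

print(f"BTC/USD: {latest_bitcoin_price_usd():,.2f}")
```

The takeaway is not that this snippet is dangerous (it plainly is not), but that ChatGPT lowers the effort required to assemble the mundane plumbing of an illicit operation.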

Final Thoughts

Once again I turn to the Harvard Business Review with regard to the duality of AI tools like ChatGPT:

“…it’s important to remember that this same power is equally available to good actors. In addition to trying to prevent ChatGPT-related threats, cybersecurity training should also include instruction on how ChatGPT can be an important tool in the cybersecurity professionals’ arsenal. As this rapid technology evolution creates a new era of cybersecurity threats, we must examine these possibilities and create new training to keep up.”

As one of LMNTRIX’s team members pointed out, a tool like a screwdriver is always dual use: it can be used as intended, used to pry open a door for an intruder, or wielded as a weapon to cause bodily harm. What the tool does depends on who uses it. Ultimately, opportunity and motive draw the thin line between a technology (or a tool) being used for ethical or unethical purposes, and ChatGPT is no exception. The real question is: how are you going to create and integrate new threat models into your existing ecosystem? If you’re unsure, contact the LMNTRIX team; we’ll be happy to help shape your security program for years to come.
