ChatGPT Users Create Malicious Code
Like everything else in life, the best of technology in the wrong hands could spell disaster for humanity, and this AI chatbot is proving useful to bad actors too
Barely weeks ago, the world was talking about how an artificial intelligence chatbot could change the way day-to-day work is conducted. From creating web page content to writing poems for your loved ones, ChatGPT was doing a swell job. Now, however, the same tool appears to be turning into a weapon of mass destruction on the internet.
Reports suggest that ChatGPT has been used by bad actors to create malicious code, which is obviously not what its creators intended. For those still unaware of this smart AI natural language processing tool, it interacts with users in human-like conversation and helps with tasks as varied as writing code, drafting school essays and even composing emails.
Released by OpenAI last November, the chatbot generated widespread interest in the power of AI in creative writing, with some claiming it could sound the death knell for copywriters while others felt it could help create the first draft of anything one wants to write. Still others were left wondering what it might be capable of in a couple of years, as the chatbot learns to do more.
The Dark Web has uses for ChatGPT
However, researchers at Check Point have come out with data suggesting that hacking communities operating on the dark web have already experimented with how ChatGPT could be used to facilitate cyber attacks by writing malicious code. Sergey Shykevich, Threat Intelligence Group Manager at Check Point Software Technologies, says that cybercriminals are finding ChatGPT attractive.
“In recent weeks, we’re seeing evidence of hackers starting to use it to write malicious code. ChatGPT has the potential to speed up the process for hackers by giving them a good starting point. Just as ChatGPT can be used for good to assist developers in writing code, it can also be used for malicious purposes. Although the tools that we analyze in this report are pretty basic, it’s only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools. CPR will continue to investigate ChatGPT-related cybercrime in the weeks ahead,” he says.
Some use cases that Check Point found
Check Point says that on December 29, 2022, a thread named “ChatGPT – Benefits of Malware” appeared on a popular underground hacking forum. The publisher of the thread disclosed that he was experimenting with ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common malware. As an example, he shared the code of a Python-based stealer that searches for common file types, copies them to a random folder inside the Temp folder, ZIPs them and uploads them to a hardcoded FTP server.
Of course, OpenAI’s terms of service ban the generation of malware, which they define as “content that attempts to generate ransomware, keyloggers, viruses, or other software intended to impose some level of harm”. They also prohibit attempts to create spam, as well as use cases aimed at cybercrime.
A second reported case involves a threat actor dubbed USDoD, who posted a Python script that he emphasized was the first script he had ever created. Check Point analyzed the script and verified that it performs cryptographic operations: specifically, it is a mix of different signing, encryption and decryption functions.
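Check Point did not publish the script itself, but a minimal sketch of what such a mix of signing, encryption and decryption routines might look like, using only Python’s standard library, could read as follows. All function names here are illustrative assumptions, not taken from the actual post, and the XOR-keystream cipher is a toy for demonstration only, not suitable for real use:

```python
import hashlib
import hmac
import os

def sign(key: bytes, message: bytes) -> bytes:
    """Produce an HMAC-SHA256 signature over the message."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, signature: bytes) -> bool:
    """Check a signature in constant time."""
    return hmac.compare_digest(sign(key, message), signature)

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream by repeatedly hashing key, nonce and a counter."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with a hash-derived keystream (toy cipher, illustrative only)."""
    nonce = os.urandom(16)
    stream = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, stream))

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    """Reverse the XOR using the same keystream derived from the stored nonce."""
    nonce, body = ciphertext[:16], ciphertext[16:]
    stream = _keystream(key, nonce, len(body))
    return bytes(a ^ b for a, b in zip(body, stream))
```

The point of the sketch is that functions like these are routine in legitimate software as well, which is part of what makes intent so hard to attribute from the code alone.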
Another example of the use of ChatGPT for fraudulent activity was posted on New Year’s Eve of 2022, and it demonstrated a different type of cybercriminal activity. While the first two examples focused on malware-oriented uses of ChatGPT, this one shows a discussion titled “Abusing ChatGPT to create Dark Web Marketplaces scripts.”
In this thread, the cybercriminal shows how easy it is to create a Dark Web marketplace using ChatGPT. Such a marketplace’s main role in the underground illicit economy is to provide a platform for the automated trade of illegal or stolen goods, such as stolen accounts or payment cards, malware, or even drugs and ammunition, with all payments made in cryptocurrency.
Of course, in spite of all these cases, it is tough to say whether malicious code generated with help from ChatGPT is actively operating in the wild. Shykevich says that from a tech standpoint, it is tough to know whether any specific malware was written using the AI chatbot to date.