News & Analysis

Cybercrime and ChatGPT – A New Challenge


Using ChatGPT, the AI-powered chatbot, to generate malicious code has received considerable media attention, prompting OpenAI, which owns and operates the bot, to add restrictions intended to contain the activities of cybercriminals. However, new research suggests that dark-web actors are now using Telegram bots and scripts to bypass those restrictions.

Check Point Research was among the first to note that cybercriminals were leveraging the OpenAI platform to generate malicious content such as phishing emails and malware. It had also reported how ChatGPT had successfully conducted a full infection flow, from creating a convincing spear-phishing email to running a reverse shell that accepts commands in English.

Now its researchers have found an instance of cybercriminals using ChatGPT to enhance the code of a basic infostealer malware from 2019. Although the code is neither complicated nor difficult to create, the AI-powered bot nonetheless improved it, says a note shared by Check Point as part of its regular updates.

It said there are currently two ways to access and work with OpenAI models. The first is through the web user interface for ChatGPT or other OpenAI platforms; the second is through an API for building applications, processes and so on, where the developer can use their own user interface with the OpenAI models and data running in the background.
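The second access path can be sketched as follows. This is only an illustration of how an external application talks to the API directly, bypassing the ChatGPT web interface; the endpoint URL, model name and field names here are assumptions based on the GPT-3-era Completions API, not details taken from the report, and no request is actually sent.

```python
import json

# Assumed GPT-3-era Completions endpoint (illustrative, not from the report).
API_URL = "https://api.openai.com/v1/completions"

def build_request(prompt: str, api_key: str) -> tuple[dict, bytes]:
    """Build the headers and JSON body an external application would send
    to the OpenAI API. Sending it (e.g. via urllib or requests) is omitted."""
    headers = {
        "Content-Type": "application/json",
        # The API is authenticated per developer key, not per chat session,
        # which is why it sits outside the ChatGPT web UI's guardrails.
        "Authorization": f"Bearer {api_key}",
    }
    body = json.dumps({
        "model": "text-davinci-003",  # assumed GPT-3-era model name
        "prompt": prompt,
        "max_tokens": 256,
    }).encode("utf-8")
    return headers, body

headers, body = build_request("Summarize the benefits of MFA.", "sk-...")
```

The point the report makes is visible in the sketch: the request carries only an API key and a raw prompt, with none of the interface-level checks that the ChatGPT web UI applies before generating a response.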

The Check Point note said that, as part of its content policy, OpenAI created barriers and restrictions to stop malicious content creation on its platform. Several restrictions have been set within ChatGPT's user interface to prevent abuse of the models. For example, if you ask ChatGPT to write a phishing email impersonating a bank or to create malware, it will not generate it.

Bypassing limitations to create malicious content 

However, the report said cybercriminals are working around ChatGPT's restrictions, with active chatter on underground forums disclosing how to use the OpenAI API to bypass ChatGPT's barriers and limitations. This is done mostly by creating Telegram bots that use the API; these bots are advertised in hacking forums to increase their exposure.

The current version of OpenAI's API is used by external applications (for example, integrating OpenAI's GPT-3 model into Telegram channels) and has very few, if any, anti-abuse measures in place. As a result, it allows the creation of malicious content, such as phishing emails and malware code, without the limitations or barriers that ChatGPT enforces in its user interface.

In an underground forum, the researchers found a cybercriminal advertising a newly created service: a Telegram bot using the OpenAI API without any limitations or restrictions. Under its business model, cybercriminals get 20 free queries and are then charged US$5.50 for every 100 queries.

Sergey Shykevich, Threat Group Manager at Check Point Software, says: "As part of its content policy, OpenAI created barriers and restrictions to stop malicious content creation on its platform. However, we're seeing cybercriminals work their way around ChatGPT's restrictions, and there's active chatter in the underground forums disclosing how to use the OpenAI API to bypass ChatGPT's barriers and limitations.

"This is mostly done by creating Telegram bots that use the API, and these bots are advertised in hacking forums to increase their exposure. The current version of OpenAI's API is used by external applications and has very few anti-abuse measures in place. As a result, it allows malicious content creation, such as phishing emails and malware code, without the limitations or barriers that ChatGPT has set on its user interface. Right now, we're seeing continuous efforts by cybercriminals to find ways around ChatGPT's restrictions," he says.

In conclusion, cybercriminals continue to explore how to use ChatGPT for malware development and the creation of phishing emails. As the controls ChatGPT implements improve, cybercriminals find new ways to abuse OpenAI models, this time through the API, says the Check Point Research report.

 

"ChatGPT is an interesting experiment at the moment, but its wider availability certainly appears to present new challenges. I have been playing with it since its public availability in November of 2022, and it is quite easy to convince it to assist with creating very convincing phishing lures and responding in a conversational way that could advance romance scams and business email compromise attacks. OpenAI seems to be trying to limit the high-risk abuses of its service, but the cat is now out of the bag. Today the biggest risk is to English-speaking populations, but it is likely only a matter of time before it can generate believable text in most of the world's commonly spoken languages. We have reached a stage where humans are unlikely to be able to discern machine-generated prose from human-written prose in casual conversations with those we are not intimately familiar with, which will require security filters to aid in preventing humans from being victimized." – Chester Wisniewski, Field CTO Applied Research, Sophos
