You may have seen rumblings in the news about GhostGPT, a new AI tool that is being used by attackers.
What Is GhostGPT?
Well, it is what's being called "uncensored AI": a tool attackers can use without any of the guardrails or ethical filters that normally restrict prompts. In case you didn't realize, public AI services like ChatGPT have safeguards in place that filter certain queries for ethical reasons, such as requests for help with hacking and other nefarious schemes.
Supposedly, it has a "no-logs policy" so that conversations are untraceable, and it can be accessed through Telegram, a service attackers have long used for cybercrime activity. GhostGPT has been showing up in cybercrime forums, largely in connection with business email compromise (BEC) scams and other malicious activities.
Phishing and Malware
Abnormal Security researchers tested GhostGPT with a simple prompt asking it to draft a phishing email mimicking DocuSign. The results it produced were polished and convincing. As with most phishing attempts, the email urged the recipient to click a link to review a document, a common tactic for harvesting credentials.
However, this wasn't the extent of what GhostGPT was able to do. It can also write malware code and help attackers explore and develop exploits. A tool like this makes it much easier for hackers to build these capabilities, freeing up time and energy for other aspects of a cybercrime operation.
Is This "Dark AI"?
Yes, Dark AI is a real thing. It is a term that has emerged alongside the growing demand for malicious tools backed by AI.
You may have seen tools like WormGPT and FraudGPT just a couple of years ago. These tools lower the skill level needed to carry out sophisticated attacks, allowing even inexperienced attackers to conduct advanced phishing, run BEC scams, and even launch ransomware attacks.
GhostGPT appears to be the latest in a wave of tools built to support compromise, attack, extortion, and other malicious activities.
AI Use Is Increasing in Cybercrime
A recent report by Egress found that 75% of phishing kits sold on the dark web now include some type of AI capability. VIPRE Security found that 40% of business email compromise attempts involved AI-generated emails. Unfortunately, AI-generated content is also being used in ransomware campaigns.
Even legitimate tools like ChatGPT have been used for malicious purposes, but OpenAI has taken steps to disrupt malware developers and other threat actors attempting to use it for cybercrime.
These observations on GhostGPT come from Abnormal Security's report on the tool. You can find it here: How GhostGPT Empowers Cybercriminals with Uncensored AI | Abnormal.