Malicious Generative AI on the Dark Web in 2024: What to Expect?
The rapid evolution of Artificial Intelligence, particularly generative models, has been a remarkable aspect of recent technological advancements. These models, powered by deep learning architectures such as transformers and generative adversarial networks (GANs), are adept at producing highly realistic text, images, audio, and video.
This growth is attributed to factors such as increasing model sizes, advancements in pre-training and transfer learning, and continuous enhancements in architecture. However, alongside their numerous benefits, there’s a growing concern about their misuse by threat actors.
Technology on the dark web
The hidden parts of the web, also known as the deep and dark web, are often used for various types of illicit activities. Technology, including AI, plays a significant role in this ecosystem and is often adapted for malicious purposes.
AI generative models on the dark web
Since generative AI models such as ChatGPT went mainstream in late 2022, it has been clear that cybercriminals and other threat actors would put the new technology to malicious use, creating custom tools with different payment models: from one-time "lifetime access" purchases to "For Hire" programs, where threat actors rent out their tools for a monthly or annual fee.
Over time, different kinds of malicious generative AI solutions became popular, with subscription services offered for fees ranging from $90 to $200 a month.
Threat actors leverage these malicious models for a wide range of malicious activities, including:
- Deceptive and convincing phishing emails, whether to trick employees of organizations into revealing sensitive credentials or to deceive a CFO into transferring money.
- Functional, sophisticated malware and malicious code that can be delivered through phishing emails or used to exploit vulnerabilities.
- Fully operational and convincing smishing (SMS phishing) campaigns targeting large audiences.
- Discovery of new hacking-related sites and vulnerabilities, which they use to develop their attacks.
These types of AI-driven attacks have become more sophisticated, with services like WormGPT, DarkGPT, and FraudGPT emerging as popular tools for illegal activities.
The types of malicious AI models seen on the dark web
- ruGPT – an uncensored generative AI model aimed mainly at Russian-language users who want to bypass the restrictions of legitimate AI models to create malicious code and deceptive content.
- DarkLLaMa – inspired by Meta's LLaMA model, this unrestricted variant enables users to create malicious code, crack software, and generate fake content and emails.
- FraudGPT – a blackhat version of a generative AI bot that cybercriminals and fraudsters use to create phishing emails, deceptive messages, and fake websites that trick legitimate users into handing over their personal information.
- DarkGPT – a malicious AI model, available for $200 on a lifetime-access model, that lets threat actors generate custom malicious code and build phishing campaigns.
- WormGPT – first spotted in March 2023 on the well-known Hack Forums and fully operational since June 2023, this blackhat version of ChatGPT lets users create realistic and convincing text, develop custom code for malicious purposes, and even produce fake and fraudulent invoices.
How generative AI is used in malicious dark web operations
Since December 2022, we have seen rising demand for jailbroken versions of known AI models. Demand peaked in March 2023, with over 1,600 mentions on the deep and dark web.
From May to June 2023, we also saw many mentions of unrestricted generative AI models on known deep and dark web forums.
Looking at mentions of these malicious AI models over the last few months, however, we see a sharp turnaround: demand for malicious generative AI models has spiked once again.
It is hard to pinpoint the exact cause, but one possible explanation is what some would call the "end of year" effect: many companies are busy with customers, reports, and planning for the new year, making them less attentive to potential cyber dangers and more vulnerable to attacks.
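To make the trend analysis above concrete, here is a minimal sketch of how monthly mention counts can be scanned for spikes against a trailing baseline. The counts, window, and threshold below are invented placeholders for illustration, not our actual monitoring figures.

```python
from statistics import mean

# Hypothetical monthly mention counts of malicious AI models on
# deep and dark web sources (placeholder values, not real data).
mentions = {
    "2023-03": 1600, "2023-04": 900, "2023-05": 700, "2023-06": 650,
    "2023-07": 500, "2023-08": 450, "2023-09": 480, "2023-10": 520,
    "2023-11": 1100, "2023-12": 1400,
}

def flag_spikes(series, window=3, factor=1.5):
    """Flag months whose count exceeds `factor` times the mean
    of the preceding `window` months."""
    months = sorted(series)  # ISO month keys sort chronologically
    spikes = []
    for i in range(window, len(months)):
        baseline = mean(series[m] for m in months[i - window:i])
        if series[months[i]] > factor * baseline:
            spikes.append((months[i], series[months[i]], round(baseline)))
    return spikes

for month, count, baseline in flag_spikes(mentions):
    print(f"{month}: {count} mentions (trailing baseline ~{baseline})")
```

With these placeholder numbers, the script flags November and December 2023, mirroring the renewed end-of-year spike described above.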
Some threat actors sell "how-to" guides that teach others to use these malicious AI models for cybercrime.
For example, the next post, taken from a known hacking forum, was published by a threat actor teaching new members how to use unrestricted AI models to create malware and phishing emails for hacking and scamming others.
Others use generative AI models to create malware code, as can be seen in the next post.
This post was published on the well-known XSS forum by a threat actor who claims to have used AI to create a rootkit from scratch.
How fraudulent generative AI has changed the market
The introduction of fraudulent generative AI has significantly changed the cybersecurity landscape. The realism of AI-generated attacks, such as convincing phishing emails and deepfake scams, poses a greater threat than ever before.
Because generative AI is dynamic, cybersecurity defenses must constantly adapt and react quickly to the ever-changing AI-generated threat landscape. This ushers in a new era in how businesses approach cybersecurity and risk management.
The use of generative AI for malicious activities on the dark web in 2024
Looking ahead to 2024, the use of malicious AI generative models is expected to become more sophisticated, making the detection and mitigation of these threats more challenging.
Malicious AI is not just a passing trend but a new modus operandi for many threat actors on the deep and dark web. Keeping up with this ever-evolving hidden threat landscape is set to be an increasingly challenging task in 2024 and beyond. It is a critical task that requires dark web monitoring tools like Lunar, or a powerful dark web data solution, to identify, investigate, and react to any emerging risk.
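For teams building their own watchlist on top of a dark web data feed, the sketch below shows one possible shape of such a monitor. The endpoint, token, and response fields are hypothetical placeholders for illustration only; they do not describe the Lunar API or any other real service.

```python
import requests  # third-party: pip install requests

# Hypothetical dark web data feed - the URL, token, and response
# schema below are illustrative placeholders, not a real API.
FEED_URL = "https://example.com/darkweb/search"
API_TOKEN = "YOUR_TOKEN"

# Terms worth watching, per the models discussed in this post.
WATCHLIST = ["WormGPT", "FraudGPT", "DarkGPT", "ruGPT", "DarkLLaMa"]

def check_watchlist():
    """Query the (hypothetical) feed for each watched term and
    report posts mentioning it."""
    for term in WATCHLIST:
        resp = requests.get(
            FEED_URL,
            params={"q": term, "token": API_TOKEN},
            timeout=30,
        )
        resp.raise_for_status()
        for post in resp.json().get("posts", []):
            # In a real pipeline, deduplicate against seen post IDs
            # and route hits to an alerting or ticketing system.
            print(f"[{term}] {post.get('site')}: {post.get('title')}")

if __name__ == "__main__":
    check_watchlist()
```

Scheduled hourly or daily, even a simple loop like this turns the trends described above into actionable alerts the moment a watched term resurfaces.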