Many cybercriminals are skeptical about the use of AI-based tools such as ChatGPT to automate their malicious campaigns.
A new Sophos investigation sought to gauge cybercriminals' interest in AI by analyzing dark web forums. It found that tools such as ChatGPT have many safeguards in place that prevent hackers from automating the creation of malicious landing pages, phishing emails, malware code, and more.
That forced the hackers to do one of two things: try to compromise premium ChatGPT accounts (which, the research suggests, come with fewer restrictions), or pivot to ChatGPT derivatives – cloned AI writers that hackers built to circumvent the safeguards.
Poor results and plenty of skepticism
But many are wary of the derivatives, fearing that they might have been built just to trick them.
“While there’s been significant concern about the abuse of AI and LLMs by cybercriminals since the release of ChatGPT, our research has found that, so far, threat actors are more skeptical than enthused,” says Ben Gelman, senior data scientist, Sophos. “Across two of the four forums on the dark web we examined, we only found 100 posts on AI. Compare that to cryptocurrency where we found 1,000 posts for the same period.”
While the researchers did observe attempts at creating malware or other attack tools using AI-powered chatbots, the results were “rudimentary and often met with skepticism from other users,” said Christopher Budd, director, X-Ops research, Sophos.
“In one case, a threat actor, eager to showcase the potential of ChatGPT inadvertently revealed significant information about his real identity. We even found numerous ‘thought pieces’ about the potential negative effects of AI on society and the ethical implications of its use. In other words, at least for now, it seems that cybercriminals are having the same debates about LLMs as the rest of us,” Budd added.