Chief Information Security Officers (CISOs) are becoming increasingly concerned that the growing use of generative AI tools could lead to more cybersecurity incidents.

A new paper by security experts Metomic, surveying more than 400 CISOs in the UK and the US, found that security breaches linked to generative AI worry almost three-quarters (72%) of respondents.

But that’s not the only generative AI concern, the report warns: CISOs also fear employees will feed sensitive company data into the large language models (LLMs) that power these tools. Sharing data this way is a security risk, as a malicious third party could, at least in theory, extract that information later.

Spotting malware

CISOs have every reason to be worried. Data breaches and similar cybersecurity incidents have been rising quarter after quarter, year after year, and since the introduction of generative AI tools, some researchers say these attacks have grown even more sophisticated.

For example, poor writing, grammatical mistakes, and typos were once the easiest way to spot a phishing attack. Today, most hacking groups use AI to write convincing phishing emails for them, which not only makes the messages harder to spot but also significantly lowers the barrier to entry.

Another example is the writing of malicious code. Whether it’s for a phishing landing page or for malware itself, hackers are constantly finding new ways to abuse these tools. Generative AI developers are fighting back by putting guardrails in place to prevent this kind of misuse, but threat actors have so far always managed to find a way around them.

The good news is that AI can also be used in defense, and many organizations have already deployed advanced, AI-powered security solutions.
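To make that idea concrete, here is a minimal, hypothetical sketch of the kind of machine learning that underpins such defensive tooling: a simple text classifier that flags likely phishing emails. The scikit-learn calls are standard, but the tiny training set and the classifier choice are purely illustrative assumptions; production systems are trained on far larger corpora and use much richer signals.

```python
# Hypothetical sketch of an ML-based phishing-email classifier.
# The training examples below are illustrative assumptions, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples: 1 = phishing, 0 = legitimate.
emails = [
    "Urgent: verify your account now or it will be suspended",
    "Your invoice for last month's services is attached",
    "Click this link to claim your prize immediately",
    "Meeting moved to 3pm, agenda unchanged",
]
labels = [1, 0, 1, 0]

# TF-IDF text features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

# Score an incoming message: output is [P(legitimate), P(phishing)].
incoming = ["Please confirm your password by clicking here"]
print(model.predict_proba(incoming))
```

Real deployments layer classifiers like this with sender reputation, link analysis, and behavioral signals, but the core pattern of training a model to score incoming messages is the same.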
