Workers are more likely to divulge company secrets to a workplace AI tool than to their friends, a new report has claimed.

In a study of over 1,000 office employees from the US and UK, data analytics firm CybSafe found many are positive about generative AI tools, so much so that a third of both US and UK workers admitted that they would probably continue using them even if their company banned them. 

69% of all respondents also said that the benefits of such tools outweigh their security risks. US workers were the most sanguine, as 74% of them agreed with this statement.

AI dangers

Half of all respondents reported using AI at work, with a third using it weekly and 12% daily. Among US workers, the most common use cases are research (44%), copywriting (40%) and data analysis (38%). AI tools were also employed for other tasks, such as helping with customer service (24%) and code writing (15%).

CybSafe believes this is a cause for concern, as it claims that businesses are not properly alerting their employees to the dangers posed by using such tools.

In its report, CybSafe comments that, “as AI cyber threats rise, businesses are in danger. From phishing scams to accidental data leaks, employees need to be informed, guided, and supported.”

A worrying 64% of US workers have entered information pertaining to their work into generative AI tools, and a further 28% weren’t sure whether they had. CybSafe further claims that as many as 93% of workers are potentially sharing confidential information with AI. On top of that, 38% of US workers admit to sharing data with AI that they wouldn’t divulge “in a bar to a friend.”

“The emerging changes in employee behavior also need to be considered,” says Dr Jason Nurse, CybSafe’s director of science and research and an associate professor at the University of Kent.

“If employees are entering sensitive data, sometimes on a daily basis, this can lead to data leaks. Our behavior at work is shifting, and we are increasingly relying on generative AI tools. Understanding and managing this change is crucial.”

Another issue from a cybersecurity perspective is workers’ inability to distinguish between content created by a human and content created by an AI. 60% of all those surveyed said that they were confident they could do so accurately.

“We’re seeing cybercrime barriers crumble, as AI crafts ever more convincing phishing lures,” added Nurse. “The line between real and fake is blurring, and without immediate action, companies will face unprecedented cybersecurity risks.”

These concerns are amplified given that the uptake of AI at work is increasing at a rapid pace. A new report by management consulting firm McKinsey has labelled 2023 the breakout year for AI, with nearly 80% of respondents in its survey claiming to have had at least some exposure to the technology at home or at work.
