Artificial intelligence is already changing working patterns across almost every industry. It has the power to drastically reduce the time we spend on routine tasks and free us up to think more strategically in our day-to-day work.
The IT and cybersecurity sector is no different: at ISACA, our survey of business and IT professionals in Europe found that almost three quarters (73%) of businesses report that their staff use AI at work.
Yet the key issue with AI, as transformative as it can be, is ensuring we use it responsibly and securely. After all, LLMs are often trained on sensitive data, and we need proper guardrails on these tools so that data exposure and hallucinations do not undermine the integrity of our work. Despite this widespread use, only 17% of the organizations we surveyed have a formal, comprehensive AI policy in place that outlines the business's approach to these issues and provides best practices for use.
AI is changing the threat landscape
At the same time, cyber criminals also have access to AI, and they're using it to strengthen their criminal enterprises and capabilities, making their attacks more convincing and effective than ever before. This poses a significant risk not only to individuals but to businesses as well. Businesses are interconnected organizations with networks of suppliers and professional relationships: when one suffers a breach, every organization across the network is at risk.
The recent CrowdStrike IT outage highlights just how vulnerable businesses are should they experience even a single IT fault or cyberattack. When one service provider in the digital supply chain is affected, the whole chain can break, causing large-scale outages – a digital pandemic. One rogue update, the unfortunate result of a lack of foresight and expertise, sparked chaos across a number of critical industries, from aviation and healthcare to banking and broadcasting.
Sometimes such incidents are caused by unintentional mistakes when updating software, and sometimes they are the result of a cyberattack. The irony is that cybersecurity companies are themselves part of the supply chain, and the very firms fighting to establish cyber resilience may become victims too, affecting service continuity.
Cyber professionals are acutely aware of this: when we asked our survey respondents about generative AI's potential to be exploited by bad actors, 61% were extremely or very worried that this might happen. Compared with the data from last year's survey, that sentiment has barely improved.
Training and upskilling are the key to long-term resilience
AI cuts both ways: bad actors are weaponizing the technology to develop more sophisticated attacks, while cyber professionals are using it to keep pace with the evolving threat landscape and to detect and respond to those threats more effectively. Employees know that they need to keep pace with cyber criminals, upskill, and really get to grips with AI, yet when we asked our survey respondents how familiar they are with AI, almost three quarters (74%) said they were only somewhat familiar or not very familiar at all.
The CrowdStrike incident has brought the need for a more robust and resilient digital infrastructure to the fore, and the rise of AI will only make cyber threats more significant. It's important that, as an industry, we invest in upskilling and training to avoid similar crises in the future, and advancements in technologies like AI could be the key to working more efficiently. The right protocols must be established well ahead of time so that, when attacks and outages happen, organizations can move quickly to minimize the damage and disruption. But this isn't possible without people with the skills to establish bespoke security frameworks and to ensure everyone involved is trained on how to follow them.
If businesses are to protect both themselves and their partners in the long term, and to see the benefits of using AI, they need the right skills in place to identify new threat models, risks and controls. Training in AI across the cybersecurity sector is sorely needed – at the moment, 40% of businesses provide no training to employees in tech positions. Further, 34% of respondents believe they will need to increase their knowledge of AI within the next six months, and in total, an overwhelming 86% feel this training will be necessary within the next two years.
By taking an approach to AI that prioritizes training and comprehensive workplace policies, businesses and employees alike can rest assured that they are harnessing AI's potential securely and responsibly and keeping pace with cyber threats as they evolve, protecting both the business itself and every other enterprise within its wider network.