find-partner-btn-inner

London Tech Week Top Takeaways: Cybersecurity threats in the AI era – what is changing for businesses and what can be done?

Cybersecurity has hit the headlines again following the recent attacks on UK retailers Marks & Spencer, the Co-Op and Harrods. Whilst details of the precise methods and vulnerabilities exploited in these attacks may understandably be closely guarded at this stage, it’s fair to assume that suspected perpetrators such as Scattered Spider and/or DragonForce are increasingly leveraging AI to make their already powerful threat all the more potent. So what new dangers does AI pose to businesses, and what can they do to better protect themselves?

What new threats does AI pose?

Last month, the National Cyber Security Centre (NCSC) published an update to its January 2024 assessment of how AI will impact the efficacy of cyber operations and the implications for cyber threat from 2025–2027. Amongst its key judgments were the following:

1. Artificial intelligence will almost certainly continue to make elements of cyber intrusion operations more effective and efficient, leading to an increase in frequency and intensity of cyber threats. Cyber threat actors are almost certainly already using AI to enhance existing tactics, techniques and procedures (TTPs) in victim reconnaissance; vulnerability research and exploit development; access to systems through social engineering (including optimised phishing campaigns and deepfakes); basic malware generation; and processing exfiltrated data.

2. Proliferation of AI-enabled cyber tools will highly likely expand access to AI-enabled intrusion capability to an expanded range of actors. Cyber criminals are likely to be able to make available, or make use of, AI-enabled cyber tools “as a service”, meaning that hackers of all levels, from novices, opportunistic criminals and hacktivists, all the way to state actors and organised cyber crime groups, will be able to conduct cyber attacks which are more sophisticated, targeted and difficult to detect.

3. The growing incorporation of AI models and systems across the UK’s technology base, and particularly within critical national infrastructure (CNI), almost certainly presents an increased attack surface for adversaries to exploit. AI technology is increasingly connected to company systems, data and operational technology for tasks. Threat actors will almost certainly exploit this additional threat vector. Techniques such as “prompt injection” (i.e. where input data is manipulated to trick the AI system into making incorrect decisions or providing harmful outputs – including participation in cyber attacks) and supply chain attack are already capable of enabling exploitation of AI systems to facilitate access to wider systems.

What can businesses do to defend against emerging threats?

Faced with such predictions, businesses could be forgiven for feeling a little overawed at the prospect of keeping their systems and data secure. However, cybersecurity professionals warn against despondency, characterising the developments as an evolution rather than a revolution in cyber security terms. Some key practical steps that can assist include:

  1.  Embracing AI as part of your cybersecurity armoury. The flipside to the increased threat posed is that AI will, in turn, aid system owners and software developers in securing systems, enhancing and optimising their defences and responses. Businesses can build and leverage defensive AI agents to combat the criminals’ offensive AI agents, with future cyberwars taking place at lightning speed. The NCSC therefore correctly recognises that keeping pace with frontier AI cyber developments will almost certainly be critical to cyber resilience for the decade to come.
  2. Training to identify AI powered attacks. At this stage, the primary use case of AI in cyber attacks seems to lie in generative AI and social engineering: namely voice cloning, deepfakes, phishing, fake portal pages etc. Business personnel at all levels must therefore be better trained to recognise these AI-powered, changing attacks, with continual adjustments to existing practices also being essential.
  3. Proactivity and testing. Given the speed with which AI can assist in the identification and exploitation of system weaknesses, proactivity in finding and patching vulnerabilities quickly is crucial. To this end, the regular and secure engagement of red teams/ethical hackers will become increasingly important.
  4. Increased collaboration amongst businesses. One way in which a hacking group like Scattered Spider may currently have the upper hand in the cyber war is their open-source approach, sharing knowledge in their communities about vulnerabilities in their targets. Businesses, however, operate in a more competitive environment, and are reluctant to share information and learnings (even after these have been exposed) for fear of regulatory scrutiny, reputational damage and loss of competitive edge. However, collaboration through organisations such as Information Sharing & Analysis Centres (ISACs) can help to combat this, providing forums in which information can be shared securely.

Conclusion

AI undoubtedly poses increased risks to businesses. However, aspects of AI – such as machine learning – are already largely embedded on both the attack and defence side. As such, the latest developments in AI should not be viewed as requiring businesses to start from scratch. The principles of cybersecurity, including sound data governance and data protection compliance, remain the same.

If you would like to discuss this article in more detail, please get in touch with Ben Milloy.

Featured Insights