Lindy Cameron, CEO of the UK’s National Cyber Security Centre (NCSC), has emphasised the crucial need for artificial intelligence (AI) to be developed with security as a foundational element.
Speaking at the influential Chatham House Cyber 2023 conference, Cameron warned against designing AI systems that are vulnerable to attack and stressed the importance of incorporating security measures from the outset.
“We cannot rely on our ability to retro-fit security into the technology in the years to come nor expect individual users to solely carry the burden of risk. We have to build in security as a core requirement as we develop the technology,” Cameron said. She further advocated for a ‘secure by design’ approach, aligning with the Five Eyes security alliance’s emphasis on vendors taking greater responsibility for embedding cyber security into their technologies and supply chains from the beginning.
Cameron noted that the pace of AI development often relegates security to a secondary consideration. “AI developers must predict possible attacks and identify ways to mitigate them. Failure to do so will risk designing vulnerabilities into future AI systems,” she warned.
The UK is a global leader in AI, with an industry that contributes £3.7 billion to the economy and employs 50,000 people. The country plans to host the first-ever global AI Safety Summit later this year to establish international standards for the safe development of AI.
In her speech, Cameron identified three key areas of focus for the NCSC. First, helping organisations understand the cyber security risks associated with AI, such as adversarial attacks through manipulated machine learning data. Second, maximising the benefits of AI in cyber defence. And third, understanding how adversaries, including hostile states and cyber criminals, are exploiting AI.
“We can be in no doubt that our adversaries will be seeking to exploit this new technology to enhance and advance their existing tradecraft,” Cameron cautioned. She also noted that large language models (LLMs) present significant opportunities for hostile states and cyber criminals to lower the barriers to certain kinds of attack, such as spear-phishing.