Lindy Cameron, CEO of the UK’s National Cyber Security Centre (NCSC), has emphasised the crucial need for artificial intelligence (AI) to be developed with security as a foundational element.

Speaking at the influential Chatham House Cyber 2023 conference, Cameron warned against designing AI systems that are vulnerable to attack and stressed the importance of incorporating security measures from the outset.

“We cannot rely on our ability to retro-fit security into the technology in the years to come nor expect individual users to solely carry the burden of risk. We have to build in security as a core requirement as we develop the technology,” Cameron said. She further advocated a ‘secure by design’ approach, aligning with the Five Eyes security alliance’s call for vendors to take greater responsibility for embedding cyber security into their technologies and supply chains from the beginning.

Cameron noted that the pace of AI development often relegates security to a secondary consideration. “AI developers must predict possible attacks and identify ways to mitigate them. Failure to do so will risk designing vulnerabilities into future AI systems,” she warned.

The UK is a global leader in AI, with an industry that contributes £3.7 billion to the economy and employs 50,000 people. The country plans to host the first-ever global AI Safety Summit later this year to establish international standards for the safe development of AI.

In her speech, Cameron identified three key areas of focus for the NCSC. First, helping organisations understand the cyber security risks associated with AI, such as adversarial attacks through manipulated machine learning data. Second, maximising the benefits of AI in cyber defence. And third, understanding how adversaries, including hostile states and cyber criminals, are exploiting AI.

“We can be in no doubt that our adversaries will be seeking to exploit this new technology to enhance and advance their existing tradecraft,” Cameron cautioned. She also noted that large language models (LLMs) present significant opportunities for states and cyber criminals to lower the barriers to certain kinds of attacks, such as spear-phishing.

George has a degree in Cyber Security from Glasgow Caledonian University, has a keen interest in naval and cyber security matters, and has appeared on national radio and television to discuss current events. George is on Twitter at @geoallison

3 Comments

farouk · 7 months ago

At 2pm on the 23rd of March 2023, the UK government, after a review by the country’s National Cyber Security Centre, banned TikTok from all government devices. Why? TikTok is owned by ByteDance, a Chinese-headquartered company, and Western governments fear it could give Beijing’s spies access to sensitive data. Cabinet Office Minister Oliver Dowden told the House of Commons at lunchtime that the ban, which starts immediately, was “proportionate” and “prudent” and followed similar moves by allies. “It is clear that there could be a risk around how sensitive government data is accessed and used by certain platforms,”…

Simon · 7 months ago · Reply to farouk

Does not really bode well, does it!

Jon · 6 months ago

I’m all in favour of “secure by design”, which is why the Online Safety Bill’s provisions mandating “insecure by law” are a case of the experts being ignored in favour of the “won’t somebody think of the children” pearl clutchers.