The next generation of autonomous military systems will depend on who is able to ensure their cybersecurity, and by when.
This article was submitted to the UK Defence Journal by Paul Theron. Paul is a Professor of Cyber-secure Engineering Systems and Processes at the Manufacturing Informatics Centre, Cranfield University. Paul was previously director of the Aerospace Cyber Resilience research chair in France, funded by Thales, Dassault and the État-Major de l’Armée de l’Air. Paul is an active member of NATO’s IST 152 Research & Technology Group on Autonomous Intelligent Agents for Cyber Resilience.
IT systems are fast evolving towards autonomy and higher levels of complexity, in both the civil and military domains. From sensors to intelligent munitions and augmented cognition for fighters, through to central command and control systems, autonomy will be the key to overcoming enemy military forces in increasingly fast-paced fields of combat.
The current paradigm of cyber defence is based on centralisation and human operators. Intelligent as they are, human cybersecurity professionals are limited in number and in speed, and risk being overwhelmed by the sophistication, pace and volume of cyber events – particularly in the context of warfare.
The UK’s part in the race for effective autonomous cyber defence started with the first NATO-Industry workshop on Autonomous Cyber Defence, held at Cranfield University in March 2019 (details below).
Developing autonomous cyber defence systems can provide the next level of sophistication needed – providing the intelligent ‘goodware’ to take on the intelligent malware. The growing use of big data and machine learning techniques will provide the ‘always on’ supervision power that no number of skilled cyber professionals could match. But there is also a need for swarms of pro-active, self-learning cyber defence agents working across the web on the side of national infrastructure and lawful activities.
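To make the ‘always on’ supervision idea concrete, here is a minimal, hypothetical sketch in Python – the window size, threshold and injected spike are invented for illustration, not drawn from any real deployment. A streaming detector keeps a rolling baseline of event volume and flags deviations continuously, something no human shift pattern can do.

```python
# Toy 'always on' supervision: flag readings that sit far above a rolling
# baseline. All parameters here are illustrative assumptions.
from collections import deque

def always_on_monitor(stream, window=50, k=3.0):
    """Yield (time, value) for readings more than k std deviations above a rolling mean."""
    history = deque(maxlen=window)
    for t, value in enumerate(stream):
        if len(history) == window:
            mean = sum(history) / window
            std = (sum((x - mean) ** 2 for x in history) / window) ** 0.5
            if value > mean + k * std:
                yield t, value  # a spike worth a defence agent's attention
        history.append(value)

stream = [100.0] * 200
stream[150] = 400.0  # injected attack-like traffic spike
print(list(always_on_monitor(stream)))  # -> [(150, 400.0)]
```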
Multi-Agent Systems are made up of a set of individual agents. Multiple agents, each acting locally on the basis of its own knowledge and rules, cooperate towards a common goal, which requires some form of collective intelligence. Their behaviour is close to naturalistic models such as ant and bee colonies; their connectivity is in line with the doctrine of information superiority through high connectedness; their versatility supports a vast number of configurations and functions for a wide variety of issues; and they help decentralise, distribute and share resources and decisions.
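To make the concept concrete, here is a minimal, hypothetical sketch in Python; the agent names, thresholds, addresses and quorum rule are all invented for illustration. Each agent applies only its own local rule to the traffic it can see, and a simple quorum gives the collective a verdict that no single agent could reach alone.

```python
# Hypothetical multi-agent cooperation: local rules, collective verdict.
from collections import Counter

class SensorAgent:
    """An agent with its own local rule: the traffic rate it tolerates."""
    def __init__(self, name, threshold):
        self.name = name
        self.threshold = threshold  # packets/sec this agent considers normal

    def observe(self, host, packets_per_sec):
        """Return a local suspicion report, or None if traffic looks normal."""
        if packets_per_sec > self.threshold:
            return (host, self.name)
        return None

def collective_verdict(reports, quorum=2):
    """Common goal: flag a host only when several agents independently agree."""
    tally = Counter(host for host, _agent in reports)
    return [host for host, votes in tally.items() if votes >= quorum]

agents = [SensorAgent("edge-1", 800), SensorAgent("edge-2", 1000), SensorAgent("core", 1200)]
traffic = {"10.0.0.5": 1500, "10.0.0.9": 300}  # observed packets/sec per host

reports = []
for agent in agents:
    for host, pps in traffic.items():
        report = agent.observe(host, pps)
        if report is not None:
            reports.append(report)

print(collective_verdict(reports))  # -> ['10.0.0.5']
```

The quorum rule stands in for the collective intelligence described above: the decision is decentralised, and no single agent holds the whole picture.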
The agents embed their own methods, policies, self-management capabilities, resources, energy-generation features and capacities for hiding, detecting and understanding attacks and their various signals. They are capable of devising their own reaction plans, maintaining ‘situation awareness’ for sense-making, and changing or optimising reaction plans when and as circumstances require. They use local and distributed resources to perform or optimise tasks, collaborating with human operators as and if needed, while learning and improving their own capabilities.
As a result, such systems can be designed to recognise patterns of actual and potential attacks, with the agents managing the most appropriate counter-measures for each individual attack. Reports of their activity can then be used by experts to recommend and implement adaptations based on greater breadth and depth of knowledge. These autonomous agents will flag events only when expert human intervention or a key judgment call is needed – so requiring merely occasional oversight and input.
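A hypothetical sketch of that escalation logic, in Python, might look like the following; the pattern table, confidence threshold and counter-measure names are invented for illustration. The agent acts autonomously when it confidently matches a known attack pattern, and queues anything else for a human judgment call.

```python
# Illustrative escalation logic: act when confident, otherwise flag a human.
KNOWN_PATTERNS = {              # hypothetical pattern -> counter-measure table
    "port_scan":   "rate_limit_source",
    "brute_force": "lock_account",
}

def respond(event_pattern, confidence, human_queue):
    """Apply a stock counter-measure, or escalate to a human operator."""
    if event_pattern in KNOWN_PATTERNS and confidence >= 0.9:
        return f"auto: {KNOWN_PATTERNS[event_pattern]}"  # autonomous response
    human_queue.append(event_pattern)                    # key judgment call
    return "escalated to human operator"

queue = []
print(respond("port_scan", 0.95, queue))  # -> auto: rate_limit_source
print(respond("zero_day", 0.40, queue))   # -> escalated to human operator
```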
It’s an approach that needs serious testing before being put into practice on the live web. With this in mind, we at Cranfield University are creating a large-scale Internet of Things simulator, involving interactions with and between millions of objects. It will provide the kind of rich, complex and fast-moving cyber environment needed to replicate modern levels of IoT transactions – and the still more advanced levels to come.
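The real simulator is far larger, but a toy sketch in Python shows the basic shape of such an environment; the device count, message types and sizes below are invented for illustration. Simulated objects emit a timestamped event stream that defence agents can be trained and tested against.

```python
# Toy IoT traffic generator: a stand-in for a large-scale simulator.
import random

def simulate_iot_traffic(n_devices=1000, n_events=10_000, seed=42):
    """Yield timestamped messages from a population of simulated devices."""
    rng = random.Random(seed)
    message_types = ["telemetry", "heartbeat", "firmware_check", "command"]
    for t in range(n_events):
        yield {
            "time": t,
            "device": f"device-{rng.randrange(n_devices):04d}",
            "type": rng.choice(message_types),
            "bytes": rng.randint(32, 1024),
        }

events = list(simulate_iot_traffic())
print(len(events), events[0])
```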
Autonomous cyber defence is a medium-term prospect – we’re talking in terms of a working prototype in three years, operational systems within 10 to 15 years – but this approach needs to be part of cyber-defence planning now, to take a pro-active, forward-looking stance. We’ve identified a set of 12 particular research challenges that need to be overcome for autonomous cyber defence (ACyD); among the most critical are the development of sophisticated decision-making by agents, technical compatibility and inter-agent co-operation.
Early attention and involvement from a wide range of beneficiaries are important now: from governments with the key responsibility for defending national infrastructure and economic security, to state defence institutions, national intelligence agencies and the wider defence and security industry.
The first NATO-Industry workshop on Autonomous Cyber Defence was held on 19 March 2019 at Cranfield University; more information can be found here.
At the point where AI becomes more “expert” than the human, and response times become more critical, should we not start worrying? And at what point do we give autonomy to our nuclear deterrent?
Could AI (hypothetically) have saved HMS Sheffield by intervening/pulling rank in a time-critical event?
Will we end up totally dependent on a relatively fragile electronic infrastructure?
Will we always have in place a direct, hardwired button on the end of a pre-programmed inertial navigation system?
Evening
Whilst AI is definitely becoming more “expert”, as you put it, the human in the loop will always have the edge when it comes to the could-we/should-we dilemma that is often faced in warfare. When making decisions about what we do, the human goes through a series of processes which are understood by individuals and society in general – this means that when we carry out actions, we can explain to others why those actions were carried out and rationalise the reasons why we did it.
Machines making judgements based on intelligence that is artificial will always leave doubt in the society on whose behalf the machine has made a decision.
People are comfortable with established concepts, and they can understand emerging concepts and the place they have in the world. Revolutionary concepts are another matter: letting AI make life-or-death decisions is still some way off, and looking at what we currently use AI and other automation technologies for, I would suggest we are definitely not ready for AI-enabled warfare.