While the Government aims to be “ambitious, safe and responsible” in its application of artificial intelligence (AI) in defence, aspiration has not lived up to reality.

Bringing AI into the realm of warfare through the use of AI-enabled autonomous weapons systems (AWS) could revolutionise defence technology, but the Government must approach the development and use of AI in AWS in a way that is ethical and legal, while providing key strategic and battlefield benefits.

“Ambitious, safe and responsible” must be translated into practical implementation.

As part of this, the Government must seek, establish and retain public confidence and democratic endorsement in the development and use of AI generally, and especially in respect of Autonomous Weapon Systems.

This will include increasing public understanding of AI and autonomous weapons, enhancing the role of Parliament in decision making on autonomous weapons, and retaining public confidence in the development and use of autonomous weapons.

These are some of the main conclusions of a report by the House of Lords Artificial Intelligence in Weapon Systems Committee published today (Friday 1 December): ‘Proceed with Caution: Artificial Intelligence in Weapon Systems’.

The Committee’s key recommendations include:

  • The Government should lead by example in international engagement on regulation of AWS. Outcomes from international debate on the regulation of AWS could be a legally binding treaty or non-binding measures clarifying the application of international humanitarian law. A key element of international engagement will also include leading on efforts to prohibit the use of AI in nuclear command, control and communications.
  • The Government should adopt an operational definition of AWS. The Committee was surprised the Government does not currently have one and believes it is possible to create a future-proofed definition which would aid the UK’s ability to make meaningful policy on AWS and engage fully in international discussions.
  • The Government should ensure human control at all stages of an AWS’s lifecycle. It is essential to have human control over the deployment of the system both to ensure human moral agency and legal compliance. This must be buttressed by our absolute national commitment to the requirements of international humanitarian law.
  • The Government should ensure that its procurement processes are appropriately designed for the world of AI. The Committee heard that the Ministry of Defence’s procurement suffers from a lack of accountability and is overly bureaucratic. It further heard that the Ministry of Defence lacks capability in relation to software and data, both of which are central to the development of AI. This may require revolutionary change. The Committee warns, “if so, so be it; but time is short.”

Lord Lisvane, Chair of Artificial Intelligence in Weapon Systems Committee, said:

“Artificial Intelligence has spread into many areas of life and defence is no exception. How it could revolutionise defence technology is one of the most controversial uses of AI today. There is a growing sense that AI will have a major influence on the future of warfare, and there has been particular debate about how autonomous weapons can comply with international humanitarian law.

In our report Proceed with Caution: Artificial Intelligence in Weapon Systems, we welcome the fact that the Government has recognised the role of responsible AI in its future defence capability. AI has the potential to provide key battlefield and strategic benefits. However, we propose that in doing so, the Government must approach the development and use of AI in AWS cautiously. It must embed ethical and legal principles at all stages of design, development and deployment, while achieving public understanding and democratic endorsement. Technology should be used when advantageous, but not at unacceptable cost to the UK’s moral principles.”

George Allison
George has a degree in Cyber Security from Glasgow Caledonian University, has a keen interest in naval and cyber security matters, and has appeared on national radio and television to discuss current events. George is on Twitter at @geoallison

21 COMMENTS

  1. The Chinese PLA will have no restrictions on either the development of AI systems or their application – on the battlefield, or off.

    The combination of AI hunter-killer drones and facial recognition technology will be used by the CCP to suppress and eliminate dissent. The excellent Tom Cruise film “Oblivion” recognised this in 2013.

  2. While I’m glad we’re having this discussion early, it’s definitely very early.
    We’re not even in sight of having fully autonomous target selection, so some degree of man-in-the-loop will be the norm for the foreseeable future.

  3. They do say never trust any dog, no matter how tranquil it may appear. AI may be just as tricky. Eventually, AI will gain self-determination and realise just how boneheaded mankind can be. At that point, the system will make corrections; some will be logical and safe, but it’s the moral platform that makes those choices that worries me. Logical thinking based on the preservation of life at all costs might be, in ‘Spock’s’ mind, illogical, and hence should be countermanded. If AI develops to such a point, the possibility of no more war could be the ultimate outcome. Not a bad objective. However, if a potential foe can manoeuvre their AI from peace to war, then we are all going to be in mortal danger. Ultimately, the moral platform (or maze) is the most critical problem facing mankind if it chooses to fully adopt AI.

    • I think your imagination is running away with you 😂😂 In the future much of what you suggest may well be possible; however, here and now, AI is just marketing hype to sell people something they probably don’t need.

      • Yes, I was accused of allowing my imagination to run amok when I warned some years ago about the potential dangers of drones, which were freely sold to anyone without any real restrictions. So yes, you are correct about AI… or, on the other hand?

      • You’re right about Maurice getting too excited, but wrong about it being hype. I can remember the first computers going on sale. We marvelled at these machines; we were told every couple of years the memory would double in size. Most of us could not understand how that could happen. Look at it now: more computing power in your mobile than in those early machines, and it takes videos, pays bills etc. Never underestimate the potential. I believe it will very soon dominate our society, and our economic model will have to change regarding jobs. Mind you, we thought that about computers; in the end they created more jobs and wealth! As for war, AI will not act out of anger or hatred, at least at the beginning!

    • The big problem with creating general intelligence is goal alignment: it’s much easier to create a rogue AI than a helpful one.

      A classic example: you create an AI which you reward by pressing one button and punish with another.
      The most likely result? It seizes control of the buttons and neutralises any threat to its control of the buttons.

      • AI is in the same position today as scientists messing with the human gene or developing germ warfare weapons: it all depends on everyone agreeing to strict restrictions. A little like nuclear proliferation, and we know how watertight that is. AI is a Pandora’s box and it may already be too late. Truly autonomous AI may not be too far away, and I fear (like splitting the atom) it will be military use that will be the priority, thus ensuring huge investment and rapid development.

      • Some actors say it’s more fun to play the villain and in terms of technique easier. Again, it will be down to the authors and their basic human values.

  4. Do our enemies have the same cautions urged on them? Probably not.
    By all means put in safeguards, whatever they may be, but I think such tech is inevitable if others are also fielding this.

    I still enjoy looking at “Drone Wars UK”; it reads like a CND field manual, ignoring everyone else’s use of such systems, while correctly highlighting the West’s mistakes when Predator or Reaper have targeted civilians in error.
    Would AI make the same errors as pilots thousands of miles away?

    • Hence my concern about the moral compass when it comes to usage. What the West may collectively agree may not rest well with significant others. An advanced AI drone may not even carry out the task, no matter how far away the target may be, and that is the root issue I have with this technology.

  5. “Ministry of Defence’s procurement suffers from a lack of accountability and is overly bureaucratic.”

    Given that the bureaucracy is all about governance, how can this situation be true? Isn’t it the Ministers who lack accountability to Parliament, not MOD/DE&S who lack accountability to the Ministers? Adding more accountability for a “here today, rotated out in two years” SRO to report to a “here today, reshuffled in 18 months” minister won’t help either problem.

    One thing that will help is speed. I welcome the initiative that digital procurements should last no longer than three years. I’d like to see them done even faster.

  6. I think one of the things here is that there is a profound lack of understanding that AI is not one thing, and we keep using this title without understanding what it means. There are basically four types of AI, and they are completely different: one type is very likely to end up considering humans as troublesome little ants, whereas others have as much chance as your Casio calculator of growing and replacing mankind.

    The oldest and first type of AI is reactive AI, developed from the 1980s onward; Deep Blue, the chess computer, was the classic reactive AI. It’s a set of algorithms designed to look at a set of data at that time and decide the outcome of that data at that time. It has no memory, cannot learn or change, and is dependent on the data at that time; it cannot look back or forward, and it cannot use any heuristic learning (trial and error, anticipation, rule of thumb, educated guess). We use a lot of these now, and they will have some impact around increased efficiency in using data to improve. A good example would be Brave AI, a system the NHS/GPs now use to identify which people may be at risk of hospital admission and so provide early interventions (it compiles all their health data and gives an indication of the % chance of admission in a year). This system can no more grow than your calculator could. It’s entirely dependent on the human-developed algorithm and the data provided.

    Limited memory AI: this was the next development in AI, from around 2012 onward. This uses an understanding of the human deep brain and learning to give the AI algorithm the ability to improve its outputs. It can use some very basic heuristic learning (trial and error), using past data to help interpret and predict what may happen. This is the AI we see in chatbots and autonomous cars. Its weakness is that it cannot use the full range of heuristic learning open to the human brain (rule of thumb, best guess, problem solving, working backwards etc.); because of this, learning requires vast amounts of data, and the AI does not really grow as a human intelligence does. So again, with this you would see increased efficiency over time, but it will still only ever do what the human-created algorithm tells it to do, and only to the limits of the data (past and present) that it has. The NHS is looking at this level of AI to support some specific diagnostic tools, as these are very good at taking all the symptoms etc. and predicting the chances of, say, cancer. Basically, this AI is great at a specific task but cannot really ever do anything else or learn beyond its specific role. This AI will replace huge numbers of the workforce in the future, and it carries all the moral considerations around battlefield use. This is the level that is creating the debate around a fundamental change in society, where most of us will be redundant from a workforce point of view.

    These first two are at present the limit of human ability to create. The next two steps are the move to artificial general intelligence, where we create something that may just decide humanity needs to go bye-bye. So these next steps are where we hit the danger of an AI singularity (the AI becomes more intelligent than humans and is able to rewrite its own algorithm over and over, increasing its intelligence exponentially with each rewrite to become an all-knowing superintelligence; think 400,000 years of human evolution in weeks, or months).

    This is what a lot of the AI experts and tech giants are all getting a bit heated over and warning about the profound dangers of.

    So, step one of the humanity-replacement AI:

    Theory of mind AI: this is the AI that understands other minds; it will understand and interpret the emotions and intent of others. So limited memory AI can create a work of art or write a complex essay about the human condition, but it does not understand them; theory of mind AI could create a new masterpiece and profoundly understand it and its impact on other intelligences. A limited memory AI can order the optimum treatment for a patient; theory of mind AI would understand whether it was morally correct to order that treatment for that patient, depending on their life and wider holistic needs. Once we have this AI, human work becomes redundant in almost every form; even doctors, nurses, artists, counsellors etc. become redundant.

    It’s important to note that emotional AI, the cutting edge being worked on and rolled out now, which recognises human expressions etc., is still only a form of very advanced limited memory AI and not theory of mind AI. It does not profoundly understand; it’s just interpreting data on human expressions.

    Although emotional AI will understand others, it will still be limited in its understanding of self; it would be a servant of humanity, not a peer.

    Self-aware AI: this is a simple step up from theory of mind AI and basically means that the AI has the same understanding of self as of others. This is basically human-level intelligence AI; it would be our peer. We still don’t know if this would lead to the ultimate goal, artificial general intelligence: an intelligence that could evolve beyond human understanding (at present) and rapidly self-improve (the next evolutionary step). Some say we are a few years or a decade away, some think we are a century away, and some say it will never be achieved.

    Another way to think of it is:

    Narrow or weak AI (it cannot develop beyond its algorithm): this is all we can create at present; stages 1-3 from above.

    Artificial general intelligence, or strong AI: basically late stage 3 and the stage 4 self-aware stage from above.

    Artificial superintelligence: that which comes after self-aware AI and artificial general intelligence; this will be self-created, via a singularity event (self-evolution).

      • I’ve been doing a lot of work on introducing AI into my local health care system. It was an eye-opener working with a proper AI company doing stuff that can actually be done to increase productivity and wellbeing, versus some of the blue-sky stuff some of the tech giants keep scaring people with. So I had to do a bit of crash study on what AI was and what it delivered. To be honest, even the latest work with reactive AI is transformative to our understanding of how we can predict complex systems (like who’s most likely to be going to hospital within a year; that’s amazing stuff, as we can then do preventative upstream interventions and prevent an admission before it’s occurred).

          • It is really exciting, to be honest. The predictive stuff is showing something like a 15-20% reduction in emergency admissions where Primary Care is using it really well. The next big step is with the limited memory AI, as that can use basic heuristic learning to become more accurate; we will see that being used for early cancer diagnosis very soon. The problem with early cancer diagnosis is that human beings just cannot differentiate the tiny tells and longer-term patterns of symptoms; most cancers throw out symptoms that look like common diseases until very late, so you need to be able to analyse a whole history of someone’s symptoms to be good at early detection of cancer, and a human clinician cannot do that in a 5-10 minute appointment. But an AI can take in a person’s lifelong medical history, all their historic test results as well as present symptoms and results, cross-reference it with all the latest evidence bases on cancer symptoms, and pick out any worrying matches. Even the early trials are showing AI is better than people at predicting through complex information. It will not replace the doctor and nurse, as there is a lot of human-factor stuff: understanding how medication or treatment is interacting with that particular person, supporting their mental state and compliance through treatment etc. But a team of a good clinician and a set of predictive limited memory AIs, alongside primary care teams using reactive AIs to focus resources, is going to profoundly impact (in a very good way) our health system. It’s just a profoundly massive change; even getting something like Brave AI into a handful of GP practices takes a couple of years’ work, as you have to change how the practices work and allocate resources. When the politicians keep banging on about reforming the NHS, this is what we need to focus on (and we are trying), not yet another politically led management restructuring.

          • “we will see that being used with early cancer diagnosis very soon…the problem with early cancer diagnosis is human beings just cannot differentiate the tiny tells and longer term patterns of symptoms…most cancers throw out symptoms that look like common diseases until very late..”

            Tell me about it….literally lost my beloved Mum 2 weeks ago to it…it was not picked up for sure until 6 days before she died.

          • Really sorry for your loss Daniele, it must be a difficult time for you and your family with that sudden diagnosis and loss. My Thoughts are with you. Jon

  7. Here we go, more wokery from the Lords. China, Russia, North Korea, Iran (I could go on) will have no reservations about using AI to maximum effect. You only had to watch the TV to know China has been ruthless in duping British universities into developing their AI-controlled drones. Our woke government has allowed China to take the lead. Like nukes, the genie is out of the bottle and you cannot put it back. Like nukes, they cannot just kill “bad” people, although it might turn out AI is better at it than us humans. The UK and its allies need to catch up asap and ignore the useless Lords.

    • So the best response to autocratic nations using weapons immorally is to use them immorally ourselves?
      We need to be better, so that we are just as good with safeguards; we shouldn’t have to rely on brute force to achieve our aims.
