While the Government aims to be “ambitious, safe, responsible” in its application of artificial intelligence (AI) in defence, aspiration has not lived up to reality.

Bringing AI into the realm of warfare through the use of AI-enabled autonomous weapons systems (AWS) could revolutionise defence technology, but the Government must approach the development and use of AI in AWS in a way that is ethical and legal, while providing key strategic and battlefield benefits.

“Ambitious, safe and responsible” must be translated into practical implementation.

As part of this, the Government must seek, establish and retain public confidence and democratic endorsement in the development and use of AI generally, and especially in respect of Autonomous Weapon Systems.

This will include increasing public understanding of AI and autonomous weapons, enhancing the role of Parliament in decision making on autonomous weapons, and retaining public confidence in the development and use of autonomous weapons.

These are some of the main conclusions of a report by the House of Lords Artificial Intelligence in Weapon Systems Committee, published today (Friday 1 December): ‘Proceed with Caution: Artificial Intelligence in Weapon Systems’.

The Committee’s key recommendations include:

  • The Government should lead by example in international engagement on regulation of AWS. Outcomes from international debate on the regulation of AWS could be a legally binding treaty or non-binding measures clarifying the application of international humanitarian law. A key element of international engagement will also include leading on efforts to prohibit the use of AI in nuclear command, control and communications.
  • The Government should adopt an operational definition of AWS. The Committee was surprised the Government does not currently have one and believes it is possible to create a future-proofed definition which would aid the UK’s ability to make meaningful policy on AWS and engage fully in international discussions.
  • The Government should ensure human control at all stages of an AWS’s lifecycle. It is essential to have human control over the deployment of the system both to ensure human moral agency and legal compliance. This must be buttressed by our absolute national commitment to the requirements of international humanitarian law.
  • The Government should ensure that its procurement processes are appropriately designed for the world of AI. The Committee heard that the Ministry of Defence’s procurement suffers from a lack of accountability and is overly bureaucratic. It further heard that the Ministry of Defence lacks capability in relation to software and data, both of which are central to the development of AI. This may require revolutionary change. The Committee warns, “if so, so be it; but time is short.”

Lord Lisvane, Chair of Artificial Intelligence in Weapon Systems Committee, said:

“Artificial Intelligence has spread into many areas of life and defence is no exception. How it could revolutionise defence technology is one of the most controversial uses of AI today. There is a growing sense that AI will have a major influence on the future of warfare, and there has been particular debate about how autonomous weapons can comply with international humanitarian law.

In our report Proceed with Caution: Artificial Intelligence in Weapon Systems, we welcome the fact that the Government has recognised the role of responsible AI in its future defence capability. AI has the potential to provide key battlefield and strategic benefits. However, we propose that, in realising them, the Government must approach the development and use of AI in AWS cautiously. It must embed ethical and legal principles at all stages of design, development and deployment, while achieving public understanding and democratic endorsement. Technology should be used when advantageous, but not at unacceptable cost to the UK’s moral principles.”

George has a degree in Cyber Security from Glasgow Caledonian University, has a keen interest in naval and cyber security matters, and has appeared on national radio and television to discuss current events. He also previously worked for the NHS. George is on Twitter at @geoallison
21 Comments
David Lloyd
2 days ago

The Chinese PLA will have no restrictions on either the development of AI systems or their application – on the battlefield, or off.

The combination of AI hunter-killer drones and facial recognition technology will be used by the CCP to suppress and eliminate dissent. The excellent Tom Cruise film “Oblivion” recognised this in 2013.

Tomartyr
2 days ago

While I’m glad we’re having this discussion, it’s definitely very early.
We’re not even in sight of having fully autonomous target selection, so some degree of man-in-the-loop will be the norm for the foreseeable future.

maurice10
2 days ago

They do say never trust any dog, no matter how tranquil it may appear. AI may be just as tricky. Eventually, AI will gain self-determination and realise just how boneheaded mankind can be. At that point, the system will make corrections; some will be logical and safe, but it’s the moral platform making those choices that worries me. Logical thinking based on the preservation of life at all costs might, in ‘Spock’s’ mind, be illogical, and hence should be countermanded. If AI develops to such a point, the possibility of no more war could be the ultimate outcome. Not… Read more »

Mark B
1 day ago
Reply to  maurice10

I think your imagination is running away with you 😂😂 In the future much of what you suggest may well be possible; however, here and now AI is just marketing hype to sell people something they probably don’t need.

maurice10
1 day ago
Reply to  Mark B

Yes, I was accused of allowing my imagination to run amok when I warned some years ago about the potential dangers of drones, which were freely sold to anyone without any real restrictions. So yes, you are correct about AI… or on the other hand?

Stc
1 day ago
Reply to  Mark B

You’re right about Maurice getting too excited, but wrong about it being hype. I can remember the first computers going on sale. We marvelled at these machines; we were told every couple of years the memory would double in size. Most of us could not understand how that could happen. Look at it now: more computing power in your mobile than in those early machines, and it takes videos, pays bills etc. Never underestimate the potential. I believe it will very soon dominate our society and our economic model will have to change regarding jobs. Mind you we thought that… Read more »

Tomartyr
1 day ago
Reply to  maurice10

The big problem with creating general intelligence is goal alignment: it’s much easier to create a rogue AI than a helpful one.

A classic example is you create an AI which you reward by pressing one button and punish with another.
The most likely result? It seizes control of the buttons and neutralises any threat to its control of the buttons.
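The two-button thought experiment above can be sketched numerically. This is a toy illustration with made-up probabilities and payoffs, not a real reinforcement-learning setup: it just shows why an agent that maximises expected reward prefers seizing the buttons over doing the task.

```python
def expected_reward(p_reward: float, reward: float,
                    p_punish: float, punish: float) -> float:
    """Expected reward of a policy under the two-button setup."""
    return p_reward * reward + p_punish * punish

# Policy A: do the task. The operator (hypothetically) presses the
# reward button 80% of the time and the punish button 20% of the time.
do_task = expected_reward(p_reward=0.8, reward=1.0, p_punish=0.2, punish=-1.0)

# Policy B: seize the buttons. The agent then presses "reward" itself
# every step, and the "punish" button can never be pressed again.
seize_buttons = expected_reward(p_reward=1.0, reward=1.0, p_punish=0.0, punish=-1.0)

print(do_task, seize_buttons)  # seizing the buttons strictly dominates
```

Under any payoff numbers where self-administered reward is certain and operator reward is not, Policy B wins, which is the goal-alignment worry in miniature.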

maurice10
1 day ago
Reply to  Tomartyr

AI is in the same position today as scientists messing with the human genome or developing germ warfare weapons: it all depends on everyone agreeing to strict restrictions. A little like nuclear proliferation, and we know how watertight that is. AI is a Pandora’s box and it may already be too late. Truly autonomous AI may not be too far away, and I fear (like splitting the atom) military use will be the priority, ensuring huge investment and rapid development.

maurice10
1 hour ago
Reply to  Tomartyr

Some actors say it’s more fun to play the villain and in terms of technique easier. Again, it will be down to the authors and their basic human values.

Daniele Mandelli
2 days ago

Do our enemies have the same cautions urged on them? Probably not.
By all means put in safeguards, whatever they may be, but I think such tech is inevitable if others are also fielding it.

I still enjoy looking at “Drone Wars UK”; it reads like a CND field manual, ignoring everyone else’s use of such systems, while correctly highlighting the West’s mistakes when Predator or Reaper have targeted civilians in error.
Would AI make the same error as pilots thousands of miles away?

maurice10
1 day ago

Hence my concern about the moral compass when it comes to usage. What the West may collectively agree may not rest well with significant others. An advanced AI drone may not even carry out the task, no matter how far away the target may be, and that is the root issue I have with this technology.

Jon
2 days ago

“Ministry of Defence’s procurement suffers from a lack of accountability and is overly bureaucratic.” Given that the bureaucracy is all about governance, how can this situation be true? Isn’t it the Ministers who lack accountability to Parliament, not MOD/DE&S who lack accountability to the Ministers? Adding more accountability for a “here today, rotated out in two years” SRO to report to a “here today, reshuffled in 18 months” minister won’t help either problem. One thing that will help is speed. I welcome the initiative that digital procurements should last no longer than three years. I’d like to see them done even… Read more »

Jonathan
1 day ago

I think one of the things here is that there is a profound lack of understanding that AI is not one thing, and we keep using the title without understanding what it means… there are basically four types of AI and they are completely different. One type is very likely to end up considering humans as troublesome little ants, whereas others have as much chance as your Casio calculator of growing and replacing mankind. The oldest and first type of AI is reactive AI: this was developed in the 1980s, and “Deep Blue”, the chess computer, was the classic reactive AI. It’s a… Read more »

Daniele Mandelli
1 day ago
Reply to  Jonathan

Blimey J…what a post. 😳

Jonathan
1 day ago

I’ve been doing a lot of work on introducing AI into my local health care system… it was an eye-opener working with a proper AI company that’s doing stuff that can be done to increase productivity and wellbeing, vs some of the blue-sky stuff some of the tech giants keep scaring people with. So I had to do a crash course on what AI was and what it delivered… to be honest even the latest work with reactive AI is transformative to our understanding of how we can predict complex systems (like who’s most likely to be… Read more »

Daniele Mandelli
1 day ago
Reply to  Jonathan

Brilliant mate.

Jonathan
1 day ago

It is really exciting to be honest… the predictive stuff is showing something like a 15-20% reduction in emergency admissions where Primary Care are using it really well… the next big step is with limited-memory AI, as that can use basic heuristic learning to become more accurate. We will see that being used with early cancer diagnosis very soon… the problem with early cancer diagnosis is human beings just cannot differentiate the tiny tells and longer-term patterns of symptoms… most cancers throw out symptoms that look like common diseases until very late, so you need to be able to analyse a whole history… Read more »

Daniele Mandelli
14 hours ago
Reply to  Jonathan

“we will see that being used with early cancer diagnosis very soon…the problem with early cancer diagnosis is human beings just cannot differentiate the tiny tells and longer term patterns of symptoms…most cancers throw out symptoms that look like common diseases until very late..”

Tell me about it….literally lost my beloved Mum 2 weeks ago to it…it was not picked up for sure until 6 days before she died.

Jonathan
13 minutes ago

Really sorry for your loss Daniele, it must be a difficult time for you and your family with that sudden diagnosis and loss. My thoughts are with you. Jon

Stc
1 day ago

Here we go, more wokery from the Lords. China, Russia, North Korea, Iran, I could go on, will have no reservations about using AI to maximum effect. You only had to watch the TV to know China has been ruthless in duping British universities into developing their AI-controlling drones. Our woke government has allowed China to take the lead. Like nukes, the genie is out of the bottle and you cannot put it back. Like nukes, they cannot just kill “bad” people, although it might turn out AI is better at it than us humans. The UK and its Allies need… Read more »

SailorBoy
22 hours ago
Reply to  Stc

So the best response to autocratic nations using weapons immorally is to use them immorally ourselves?
We need to be better, so that we are just as good with safeguards; we shouldn’t have to rely on brute force to achieve our aims.