In a recent AI in Weapons Systems Committee session, experts debated the ethical, legal, and technical concerns of AI in weapons like the Royal Navy’s Phalanx system, discussing potential bans on specific autonomous systems.

The Artificial Intelligence in Weapons Systems Committee recently held a public evidence session, inviting experts to discuss ethical and legal concerns surrounding the use of AI in weaponry.

The session included testimony from Professor Mariarosaria Taddeo, Dr. Alexander Blanchard, and Verity Coyle, who examined the implications of AI in defence and security.

Professor Taddeo highlighted three main issues with the implementation of AI in weapons systems, stating, “We need to take a step back here, because it is important to understand that, when we talk about artificial intelligence, we are not just talking about a new tool like any other digital technology. It is a form of agency.”

She emphasised concerns regarding the limited predictability of outcomes, the difficulty of attributing responsibility, and the potential for AI systems to make mistakes more effectively than humans. Taddeo argued that the unpredictability issue is intrinsic to the technology itself and unlikely to be resolved.

Verity Coyle, a Senior Campaigner/Adviser at Amnesty International, emphasised the potential human rights concerns raised by autonomous weapons systems (AWS), saying, “The use of AWS, whether in armed conflict or in peacetime, implicates and threatens to undermine fundamental elements of international human rights law, including the right to life, the right to remedy, and the principle of human dignity.”

She argued that without meaningful human control over the use of force, AWS cannot be used in compliance with international humanitarian law (IHL) and international human rights law (IHRL).

During the session, Verity Coyle provided an example of an existing AWS, the Kargu-2 drones deployed by Turkey, which have autonomous functions that can be switched on and off. She warned that, “We are on a razor’s edge in terms of how close we are to these systems being operational and deadly.”

In response to questions about existing AI-driven defence systems, such as the Phalanx used by the Royal Navy, Coyle stated, “If it is targeting humans, yes,” indicating that any system targeting humans should be banned.

The experts recommended the establishment of a legally binding instrument that mandates meaningful human control over the use of force and prohibits certain systems, particularly those that target human beings.

George has a degree in Cyber Security from Glasgow Caledonian University and a keen interest in naval and cyber security matters. He has appeared on national radio and television to discuss current events. George is on Twitter at @geoallison
58 Comments
Marked
10 months ago

Come on, Phalanx is decades old, albeit upgraded; it’s still hardly AI though! There’s a world of difference between automated and AI.

John Clark
10 months ago
Reply to  Marked

I’m assuming an AI element has been added to the software to allow the system to better react to multiple incoming target scenarios, handing off targets to the next Phalanx mount and concentrating fire for maximum effect etc.

I can certainly see a positive case for its use in self-defence systems like Phalanx.

Hermes
10 months ago
Reply to  John Clark

It is not necessary to have an AI for a system like the Phalanx.
It is possible to have an automated system without any AI.
I mean, we could call the most basic algorithm AI if we wanted to, but that isn’t really AI.

John Clark
10 months ago
Reply to  Hermes

I disagree, Hermes. Automated systems could, for instance, continue to engage a threat that is clearly going to miss, or that has been disabled but is still on a vague trajectory towards the ship, while a second missile is ignored.

AI systems can make snap decisions based on a constantly revised threat assessment; automated systems simply can’t do that.

Hermes
10 months ago
Reply to  John Clark

Not necessarily, no.

AIs are not magic and are always data-driven, like any automated system…

You can easily assess a threat based on its speed, trajectory and possibly its size. It is unlikely that a missile hit by the Phalanx will keep its trajectory, velocity and size.
And these changes can easily be computed by non-AI algorithms.

Also, your example is precisely the kind of case where it is worth keeping a man in the loop.
And remember, I said AI wasn’t necessary, not that AI couldn’t be better.
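
A minimal sketch of the kind of non-AI, rule-based threat check Hermes is describing, purely for illustration; the track fields and every threshold below are invented assumptions, not anything from a real CIWS:

```python
# Illustration only: a hand-written, inspectable rule set with no learning
# involved. The Track fields and all thresholds are invented for the example.
from dataclasses import dataclass

@dataclass
class Track:
    speed_mps: float       # current speed reported by the tracking radar
    closing: bool          # still on a trajectory towards the ship?
    cross_section: float   # radar cross-section, a rough proxy for size

def still_a_threat(before: Track, now: Track) -> bool:
    """Plain if/then rules: a missile that has been hit usually slows,
    veers off, or breaks up, and all three show up in the track data."""
    if not now.closing:
        return False                          # no longer heading our way
    if now.speed_mps < 0.5 * before.speed_mps:
        return False                          # lost most of its energy
    if now.cross_section > 3 * before.cross_section:
        return False                          # likely broken into fragments
    return True                               # keep engaging
```

Every branch is a fixed, inspectable rule, which is the distinction being drawn here: automated, but not AI.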

John Clark
10 months ago
Reply to  Hermes

I still disagree, Hermes. Quite frankly, a man in the loop doesn’t work when critical decision-making might be measured in milliseconds…

My point being that reasoned decision-making by AI will have the edge over simple automated tracking and firing.

AI absolutely has a role in defensive systems, and it will no doubt be extensively used in UCAVs in years to come too.

Supportive Bloke
10 months ago
Reply to  Marked

Phalanx has been fully, properly computer-driven for ages. The early, unreliable versions were an unholy mess of things, and that wasn’t resolved until the very late 1980s, when computer power/memory/reliability radically improved. It is usually in automatic mode in a threat environment, as the times to react are so, relatively, small. There was, I don’t know if there still is, a red mushroom button that the WO station can hit to stop it. So there is a Man In The Loop. It also, I don’t know if the latest version(s) does, sounds a tone, as do most systems, as a…

Hermes
10 months ago

Having an automated system does not mean having an AI-driven system.
Also, AI means everything and nothing.
The most basic AI is just a complex algorithm.

For something like the Phalanx, you don’t need an AI, which is why it was done long before AI was a thing.
You don’t even need a complicated algorithm to do something like what the Phalanx does…

Supportive Bloke
10 months ago
Reply to  Hermes

Which is pretty much what I said.

Do you use AI to:-

– create and refine the software rules that are then inspectable; or
– do you use fuzzy AI to create a dynamic software mess that isn’t inspectable?

The former is a very good idea: the latter not so much.

I’m very aware that half the reason Phalanx became so reliable was actually its very (relatively) simple modular software.

Gunbuster
10 months ago
Reply to  Marked

Phalanx, Goalkeeper and Sea Wolf all had software where a human had to step in to stop them killing a target. Sea Wolf, from detection to shooting a missile: 3-5 seconds… All three do target threat assessment, work out the highest threat from many, and then develop a threat table. The info is shared between all the systems and each engages a target from the table, assessing which system has the highest hit probability and, once a target is hit, whether it is still a threat. If it isn’t, they move on to the next one in the table. Not AI, but cleverer than a…
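
A toy sketch of the threat-table logic Gunbuster outlines: rank contacts by urgency, then hand each to the system with the best estimated hit probability. All names and numbers below are invented for illustration, and re-assessment after each engagement is omitted; real combat-management software is far more involved:

```python
# Toy illustration of a shared threat table; every value here is invented,
# and re-assessment after each hit is omitted for brevity.

def build_threat_table(contacts):
    """Rank contacts so the most urgent threat is engaged first."""
    return sorted(contacts, key=lambda c: c["time_to_impact_s"])

def assign_weapons(threat_table, systems):
    """Hand each threat to the defensive system with the highest
    estimated hit probability against it."""
    return [
        (threat["id"], max(systems, key=lambda s: s["p_hit"](threat))["name"])
        for threat in threat_table
    ]

contacts = [
    {"id": "M1", "time_to_impact_s": 8.0},
    {"id": "M2", "time_to_impact_s": 4.5},
]
systems = [
    {"name": "CIWS-fwd", "p_hit": lambda t: 0.7 if t["time_to_impact_s"] < 6 else 0.4},
    {"name": "CIWS-aft", "p_hit": lambda t: 0.5},
]

print(assign_weapons(build_threat_table(contacts), systems))
# [('M2', 'CIWS-fwd'), ('M1', 'CIWS-aft')]
```

Nothing in that logic is learned from data; it is explicit, inspectable prioritisation, which fits the “not AI, but cleverer” description above.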

OkamsRazor
10 months ago
Reply to  Gunbuster

The problem is that most public discourse revolves around comic-book notions of AI which have little to do with actual AI. True AI is really difficult, and generative AI has a relationship to it similar to that between a bicycle and a fighter plane. All this nonsense about sentient systems, spurred by generative AI, is like believing in aliens. We have the top AI house on the planet in the UK and most people have never heard of them (hint: they have just been put in charge of all Google AI).

Rickster
10 months ago
Reply to  OkamsRazor

1) Not sure what you mean about believing in aliens… You say that like it’s something only stupid people do, when I think all top AI experts and intellectuals accept the mathematical probability that humans are not alone in the universe as almost certain. 2) Google’s the AI champ!? Where’ve you been? Not any more. By Google’s own admission they don’t have a special sauce to counter OpenAI’s efforts with GPT. Have you tried their Bard and seen how badly it sucks? Try as they might, their initial efforts at least are embarrassing.

Jack
10 months ago

I am not reading anymore today. This upset me so much. The thought of nasty men and their machines being shot down because they are trying to kill others? It is despicable that anyone would want to hurt someone or something because they are trying to kill you. I am now going into my safe space and not coming out for a whole day.

Daniele Mandelli
10 months ago

Is this the “usual suspects” again, who really should pay a visit to Russia, China, or to our own personnel when under threat?

I thought CIWS like Phalanx, although they have to be automatic due to the time pressure when under attack, still have a person in control who can stop the system if necessary, so not AI at all?

So the problem is?

AlexS
10 months ago

It is the usual suspects. You can see their ignorance when they don’t include all automated systems in the list.

AlexS
10 months ago
Reply to  AlexS

Sea Wolf, CAMM, Aster, the Trophy system on Challenger and many other systems from many countries around the world can work automatically, and will have to work automatically in a war.

Jonno
10 months ago
Reply to  AlexS

Right, you drop a Paveway and it’s AI all the way to the target. These people need to be very careful how they define AI, or with hypersonics there will be no time for a human to react and AI may be the only viable defence.

AlexS
10 months ago
Reply to  Jonno

On that I disagree: a Paveway needs a laser designator pointed at the target, which is often controlled by a human.

John
10 months ago

I understand the idealistic goals of such a move. However, the problem is that our enemies will have AI-equipped weapons.

Unfortunately AI is not a choice in warfare, unless you plan to be on the losing end.

Jim
10 months ago
Reply to  John

Global weapons bans and moratoriums have been working, more or less, since the 1920s. China is often just as wary as us about anything new in the weapons space, especially when it’s a technology they will be behind in, and rogue states like Iran and Russia are too thick to deploy anything at scale. We should begin talks now, because it will take some time for AI to become a threat, but starting the regulation process will make it easier to avoid problems; if China and NATO+ say no, then it won’t happen. Industry will further support this, as key AI players…

Frank62
10 months ago
Reply to  Jim

“Global weapons bans and moratoriums have been working more or less since the 1920’s.”

Only for the few decent law abiding nations. Most deranged dictatorships pretend to honour them while actually cheating big time.

Jim
10 months ago
Reply to  Frank62

Deranged dictatorships find weapons development pretty hard on the extreme high end.

Jonno
10 months ago
Reply to  Jim

Try Hitler! He had V2s in 1944, extreme high end if I recall. I think they are really quite ‘good’ at it.

Callum
10 months ago
Reply to  Jonno

He also had a major industrialised nation with a strong history of scientific development. Not sure you can say the same about IS or the Taliban.

Bob
10 months ago

Phalanx does not target humans, so no issue.

Coll
10 months ago
Reply to  Bob

Unless it’s a manned aircraft. But if it’s that close, the pilot is mighty brave or the ship’s air warfare operators are having a day off.

Jon
10 months ago
Reply to  Coll

Or it’s friendly with a broken IFF?

Frank62
10 months ago
Reply to  Coll

Or the system is OOS or switched off in error.

Gunbuster
10 months ago
Reply to  Bob

A manned aircraft. Surface contacts…

AlexS
10 months ago
Reply to  Bob

There are a couple of allied ships that found themselves on the receiving end of Phalanx rounds because the safe distances/fire sectors were not respected or set up correctly.

wil
10 months ago

Include in this discussion systems such as Blindfire Rapier. There certainly are others, with greater capability, from other suppliers available today.

Tom
10 months ago

So now sophisticated electronic weapons systems could be curtailed, or even ‘outlawed’, by non-military types? Is this the ‘soft world’ gone mad? A review of operational caveats, rules of engagement and other ‘restrictions’ placed on the Armed Forces is long overdue. These ‘humane restrictions’ have cost many lives over the past 15+ years, especially, on occasion, the lives of special ops forces, the most notable example being Operation Red Wings in 2005. I could go on and on; however, the main point here is that the main protagonists faced by the ‘west’ in 2023 do NOT comply with…

Frank62
10 months ago
Reply to  Tom

Or “memos of mass destruction”.

Coll
10 months ago

Did they define AI?

Rob N
10 months ago

I think they are living in a dream world. The fact is that in some cases humans are just too slow to respond to threats; some degree of automation is vital. By the time the ethics, health and safety, and legal factors are considered, the incoming hypersonic missile will have killed the target.

These folk need to allow the military to adapt to the modern world and the threat….

Supportive Bloke
10 months ago
Reply to  Rob N

It isn’t just hypersonics; even subsonic sea-skimmers just don’t give you the reaction time.

Frank62
10 months ago

The bounders! It’s just not cricket! What would Captain Mainwaring say?

Stc
10 months ago

When it comes to all-out war, most ethics go out the window. Russia, China etc. will develop AI. If we let these people govern our defence policy, we are lost. If they had their way, British soldiers would be armed with balloons on sticks. But you know what the real fear of AI is? That it would just obey its program and behave better than humans! Show us up: all our shortcomings as very flawed beings. AI is as ethical as its program. A program instigated in a laboratory, not decisions made in the harsh, frightening…

Jonno
10 months ago
Reply to  Stc

Read what General Frazer says about the rules of engagement brought in in 1945 by the UN on the Allies’ side. They cost 2nd Army Group many lives. A tank became a sitting target before it could fire on a likely gun position. I don’t know if these rules still apply. It’s Belgrano all over again, I suppose. Same useful idiots.

Rob N
10 months ago

They should host these committee meetings on a warship in a combat zone and then see if they would like the CIWS turned off!

Supportive Bloke
10 months ago
Reply to  Rob N

Good thinking.

Geoff Roach
10 months ago

I bet the Chinese and the Russians are worried sick about the ethics of their weapons.🙄

farouk
10 months ago

I find it very interesting how the people who protested against laser dazzle emitters in the Falklands, napalm, white phosphorus, AP mines, neutron bombs, DU rounds and UAVs, once they got their way in the West, remained silent when others (China, Russia, Iran etc.) deployed those banned weapons. I mean, look at UAVs: so many pressure groups, yet the likes of the three above have developed them further than the UK has, and yet they don’t say a word. In fact CND, which is openly against any form of splitting the atom, not only invited the Iranian ambassador to talk about…

Daniele Mandelli
10 months ago
Reply to  farouk

The enemy within, as usual.

Jonno
10 months ago

Absolutely.

Bill Masen
10 months ago

Amnesty International, best friends with Tehran, Beijing and Moscow.

Daniele Mandelli
10 months ago
Reply to  Bill Masen

The Shadow DS is a member, I believe. Worried? I am.

Frank62
10 months ago
Reply to  Bill Masen

That’s nonsense and untrue. Amnesty are exactly the free-speaking human rights defenders that all those nations’ regimes hate.

Jonno
10 months ago
Reply to  Frank62

There are times when human intervention seems to be absent in Amnesty’s thinking.

Quentin D63
10 months ago

Bit of a simplistic question, but if the “West” goes down a route of banning AI in weapon systems will our “friends” on the other side of the fence willingly do the same? Can’t see that happening at all.

Jon
10 months ago

Mines, sea and land, are dumb and sufficiently ethically problematic that Britain doesn’t use either. Perhaps a landmine with AI would be significantly less of a problem than one without. Is it really better for a landmine to predictably explode no matter who steps on it, or one that might be able to figure out it’s a farmer ploughing a field fifteen years after the war is over? Then there are loitering munitions, which aren’t even supposed to be defensive. They can be dumb or smart. I’d feel happier either way with a person in the loop, but absent that,…

Tom
10 months ago
Reply to  Jon

Bloomin’ good point, Jon! Mines have been the bane of civilians’ lives since… well, way back. Even in Roman times, caltrops were indiscriminately sown to impale horses and people.

Graham Moore
10 months ago
Reply to  Jon

Think we have only given up anti-personnel mines, not anti-tank mines.

Frank62
10 months ago

I don’t see the conflict/risk with a CIWS like Phalanx (1970s/80s technology), which enables a warship carrying 100+ crew to defend against missiles that may have allowed too short a detection/intercept time for any other system to deal with. I do, however, have huge issues with autonomous, higher-intelligence weapons systems which could prosecute offensive action against people and targets without firm moral human oversight. We really DO need to avoid a “Terminator”/Skynet scenario, which is perilously close. This sort of thing needs quality moral leadership and integrity, which appears extremely rare nowadays in any society. Imagine dictators or terrorists able…

Michael Warr
10 months ago

It was always my understanding that all weapons on Royal Navy vessels were under the control of the Weapons Officer and, higher up the chain, the Captain of the vessel, and that weapons would be activated on the orders of that chain of command should there be a threat to the vessel or to vessels within the naval group. Therefore a call to ‘Action Stations’ would be made and the weapons would be activated under human control. The only automated part would come once a weapon was switched on and its tracking system became active. Before that happened, the friend-or-foe ‘ident’ would…

James
10 months ago

So you ban them; then what is going to happen is that all the tyrants and warmongers of the world will ignore any such ban. So, in real-world terms, banning them is not an option, unless you want tyrants and warmongers like Iran, China and Russia, who ignore international law, to take over, as they will use them regardless of what international law says.

Cygnet261
10 months ago

At the moment, all weapons systems need to be switched on by an operator given an order by command. Hardly AI, is it? Before you armchair warriors get your knickers in a twist, get your facts right.