In a recent AI in Weapons Systems Committee session, experts debated the ethical, legal, and technical concerns of AI in weapons like the Royal Navy’s Phalanx system, discussing potential bans on specific autonomous systems.
The Artificial Intelligence in Weapons Systems Committee recently held a public evidence session, inviting experts to discuss ethical and legal concerns surrounding the use of AI in weaponry.
The session included testimony from Professor Mariarosaria Taddeo, Dr. Alexander Blanchard, and Verity Coyle, who examined the implications of AI in defence and security.
Professor Taddeo highlighted three main issues with the implementation of AI in weapons systems, stating, “We need to take a step back here, because it is important to understand that, when we talk about artificial intelligence, we are not just talking about a new tool like any other digital technology. It is a form of agency.”
She emphasised concerns regarding the limited predictability of outcomes, difficulty attributing responsibility, and the potential for AI systems to perpetrate mistakes more effectively than humans. Taddeo argued that the unpredictability issue is intrinsic to the technology itself and unlikely to be resolved.
Verity Coyle, a Senior Campaigner/Adviser at Amnesty International, emphasized the potential human rights concerns raised by autonomous weapons systems (AWS), saying, “The use of AWS, whether in armed conflict or in peacetime, implicates and threatens to undermine fundamental elements of international human rights law, including the right to life, the right to remedy, and the principle of human dignity.”
She argued that without meaningful human control over the use of force, AWS cannot be used in compliance with international humanitarian law (IHL) and international human rights law (IHRL).
During the session, Verity Coyle provided an example of an existing AWS, the Kargu-2 drones deployed by Turkey, which have autonomous functions that can be switched on and off. She warned that, “We are on a razor’s edge in terms of how close we are to these systems being operational and deadly.”
In response to questions about existing AI-driven defence systems, such as the Phalanx used by the Royal Navy, Coyle stated, “If it is targeting humans, yes,” indicating that any system targeting humans should be banned.
The experts recommended the establishment of a legally binding instrument that mandates meaningful human control over the use of force and prohibits certain systems, particularly those that target human beings.
Come on, Phalanx is decades old, albeit upgraded; it’s still hardly AI though! There’s a world of difference between automated and AI.
I’m assuming an AI element has been added to the software to allow the system to better react to multiple incoming target scenarios, handing off targets to the next Phalanx mount and concentrating fire for maximum effect etc.
I can certainly see a positive case for its use by self defense systems like Phalanx.
It is not necessary to have an AI for a system like the Phalanx.
It is possible to have an automated system without any AI.
I mean, we can call the most basic algorithm AI if we want, but that doesn’t make it AI.
I disagree Hermes, automated systems could for instance continue to engage a threat that is clearly going to miss, or has been disabled but is still on a vague trajectory towards the ship, while a second missile is ignored.
AI systems can make snap decisions based on a constantly revised threat assessment, automated systems simply can’t do that.
Not necessarily no.
AIs are not magic and are always data-driven, like any automated system…
You can easily assess a threat based on its speed, trajectory and possibly its size. It is unlikely that a missile hit by the Phalanx will keep its trajectory, velocity and size.
And these changes can easily be computed by non-AI algorithms (rough sketch below).
Also, your example is exactly the kind of case where it is worth keeping a man in the loop.
And remember, I said AI wasn’t necessary, not that AI couldn’t be better.
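To make that concrete, here is a rough non-AI sketch of the sort of rule-based re-assessment I mean. Every name and threshold below is invented purely for illustration and has nothing to do with any real CIWS:

    # Rough, non-AI sketch of rule-based threat re-assessment.
    # All names and thresholds are invented for illustration only.
    from dataclasses import dataclass

    @dataclass
    class Track:
        range_m: float          # current range to the ship
        closing_mps: float      # closing speed (positive = inbound)
        miss_distance_m: float  # predicted miss distance at closest approach
        rcs_m2: float           # observed radar cross-section

    def still_a_threat(t, keep_out_m=150.0, min_closing_mps=50.0, min_rcs_m2=0.01):
        # Plain if/then rules: nothing learned, fully inspectable.
        if t.closing_mps < min_closing_mps:   # no longer closing fast
            return False
        if t.miss_distance_m > keep_out_m:    # clearly going to miss
            return False
        if t.rcs_m2 < min_rcs_m2:             # broken up into debris
            return False
        return True

    def next_target(tracks):
        # Engage the live threat with the least time to go.
        live = [t for t in tracks if still_a_threat(t)]
        return min(live, key=lambda t: t.range_m / t.closing_mps, default=None)

Simple comparisons against a handful of thresholds: nothing learned, nothing opaque, which is exactly why it stays inspectable.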
I still disagree Hermes, quite frankly a man in the loop doesn’t work when critical decision making might be measured in milliseconds…
My point being that reasoned decision-making by AI will have the edge over simple automated tracking and firing.
AI absolutely has its role in defensive systems; it will no doubt be extensively used in UCAVs in years to come too.
Phalanx has been fully computer-driven for ages. The early, unreliable versions were an unholy mess of things, and that wasn’t resolved until the very late 1980s, when computer power/memory/reliability radically improved.
It is usually in automatic mode in a threat environment as the times to react are so, relatively, small.
There was, I don’t know if there still is, a red mushroom button that the WO station can hit to stop it. So there is Man In The Loop.
It also sounded a tone, as do most systems, as a target is identified and moves into the engagement envelope; I don’t know if the latest version(s) still does.
Sea Wolf in automatic mode was pretty similar although tone -> lock -> fire was appreciably slower. Where I witnessed it on a T22 the radar spotted the target well outside of the missile envelope.
The problem with AI is: what does machine learning actually mean? If the machine learning refines and optimises a set of identifiable and inspectable rules: fine. If it self-creates a mess of software that isn’t inspectable, that is not so fine, as it could be creating rabbit holes too.
It all depends how the AI is used?
Having an automated system does not mean having an AI-driven system.
Also, AI means everything and nothing.
The most basic AI is just a complex algorithm.
For something like the Phalanx, you don’t need an AI, which is why it was done long before AI was a thing.
You don’t even need a complicated algorithm to do something like what the Phalanx does…
Which is pretty much what I said.
Do you use AI to:-
– create and refine the software rules that are then inspectable; or
– do you use fuzzy AI to create a dynamic software mess that isn’t inspectable?
The former is a very good idea: the latter not so much.
I’m very aware that half the reason Phalanx became so reliable was actually very (relatively) simple modular software.
Phalanx, Goalkeeper and Sea Wolf all had software where a human had to step in to stop it killing a target. Sea Wolf: detection to shooting a missile, 3-5 seconds…
All 3 do target threat assessment, work out the highest threat from many and then develop a threat table. The info was shared between all systems, and they each engage a target on the table, assessing which system has the highest hit probability and then, once a target is hit, whether it is still a threat. If it isn’t, they move onto the next one in the table (rough sketch of the idea below).
Not AI, but cleverer than a Gunner or RP needing to make millisecond decisions.
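For what it’s worth, here is a toy sketch of that threat-table-plus-assignment idea. The mount names, hit probabilities and timings are all made up for illustration and are not taken from the real systems:

    # Toy sketch of a shared threat table with greedy engagement assignment.
    # All names, probabilities and timings are invented for illustration.
    def build_threat_table(tracks):
        # Highest threat first: least time to impact at the top.
        return sorted(tracks, key=lambda t: t["time_to_impact_s"])

    def assign_engagements(threat_table, mounts, p_hit):
        # Work down the table; give each threat to the free mount
        # with the highest hit probability against it.
        free = set(mounts)
        plan = {}
        for threat in threat_table:
            if not free:
                break
            best = max(free, key=lambda m: p_hit[m][threat["id"]])
            plan[best] = threat["id"]
            free.remove(best)
        return plan

    tracks = [
        {"id": "T1", "time_to_impact_s": 12.0},
        {"id": "T2", "time_to_impact_s": 30.0},
        {"id": "T3", "time_to_impact_s": 8.0},
    ]
    p_hit = {
        "fwd_mount": {"T1": 0.7, "T2": 0.4, "T3": 0.9},
        "aft_mount": {"T1": 0.5, "T2": 0.8, "T3": 0.2},
    }
    print(assign_engagements(build_threat_table(tracks), p_hit, p_hit))
    # -> {'fwd_mount': 'T3', 'aft_mount': 'T1'}

Again, nothing here needs machine learning; it is just a priority list and a lookup table.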
Problem is most public discourse revolves around comic-book notions of AI, which have little to do with AI. True AI is really difficult, and generative AI has a relationship to AI similar to that between a bicycle and a fighter plane. All this nonsense about sentient systems spurred by generative AI is like believing in aliens. We have the top AI house on the planet in the UK and most people have never heard of them (hint: they have just been put in charge of all Google AI).
1) Not sure what you mean about believing in aliens… you say that like that’s something only stupid people do when I think all top AI experts and intellectuals accept the mathematical probability that humans are not alone in the universe as almost certain. 2) Google’s the AI champ!?
Where’ve you been? Not any more. By Google’s own admission they don’t have a special sauce to counter OpenAI’s efforts with GPT. Have you tried their Bard and seen how badly it sucks? Try as they might their initial efforts at least are embarrassing.
I am not reading anymore today. This upset me so much. The thought of nasty men and their machines being shot down because they are trying to kill others? It is despicable that anyone would want to hurt someone or something because they are trying to kill you. I am now going into my safe space and not coming out for a whole day.
Is this the “usual suspects” again, who really should make a visit to Russia, China, or our own personnel when under threat ?
I thought the CIWS like Phalanx, although having to be auto due to time pressure when under attack, still have a person in control who can stop the system if necessary, so not AI at all?
So the problem is?
It is the usual suspects. You can see their ignorance when they don’t include all automated systems in the list.
Sea Wolf, CAMM, Aster, the Trophy system in Challenger and many other systems in the world from many countries can work automatically and will have to work automatically in a war.
Right, you drop a Paveway and it’s AI all the way to the target. These people need to be very careful how they define AI, or with hypersonics there will be no time for a human to react and AI may be the only viable defence.
On that I disagree: a Paveway needs a laser designator pointed at the target, and that is often a human in control.
I understand the idealistic goals of such a move. However, the problem is that our enemies will have AI-equipped weapons.
Unfortunately AI is not a choice in warfare, unless you plan to be on the losing end.
Global weapons bans and moratoriums have been working more or less since the 1920s. China is often just as wary as us about anything new in the weapons space, especially when it’s a technology they would be behind in; rogue states like Iran and Russia are too thick to deploy anything at scale.
We should begin talks now because it will take some time for AI to become a threat, but starting the regulation process will make it easier to avoid problems; if China and NATO+ say no then it won’t happen.
Industry will further support this as key AI players like Google don’t want to build weapons and key defence players like LM don’t want people out of the aircraft as there is no money in drones.
Everyone has watched Terminator; no serious person in the world wants to see AI weapons.
“Global weapons bans and moratoriums have been working more or less since the 1920’s.”
Only for the few decent law abiding nations. Most deranged dictatorships pretend to honour them while actually cheating big time.
Deranged dictatorships find weapons development pretty hard on the extreme high end.
Try Hitler! He had V2s in 1944, extreme high end if I recall. I think they are really quite ‘good’ at it.
He also had a major industrialised nation with a strong history of scientific development. Not sure you can say the same about IS or the Taliban
Phalanx does not target humans, so no issue.
Unless it’s a manned aircraft. But if it’s that close, the pilot is mighty brave or the ship’s air warfare operators are having a day off.
Or it’s friendly with a broken IFF?
Or the system is OOS or switched off in error.
A manned aircraft. Surface contacts….
There are a couple of allied ships that found themselves on the receiving end of Phalanx rounds because the safe distances/fire sectors were not respected or set up correctly.
Include in this discussion systems such as Blindfire Rapier. There certainly are others, with greater capability, from other suppliers available today.
So now sophisticated electronic weapons systems could be curtailed, or even ‘outlawed’, by non-military types?
Is this the ‘soft world’ gone mad?? A review of operational caveats, rules of engagement and other ‘restrictions’ placed on the Armed Forces, is long overdue.
These ‘humane restrictions’ have cost many lives over the past 15+ years, especially the lives of special ops forces, on occasion, the most notable of which being Operation Red Wings in 2005.
I could go on and on however, the main point here is that the main protagonists faced by the ‘west’ in 2023, do NOT comply with these rules, regulations and pathetic restrictions. If this situation continues, armed forces of the west will get to the point, where the only ‘weapon’ they will have against adversaries, will be ‘harsh language’.
Or “Memos of mass destruction”.
Did they define AI?
I think they are living in a dream world. The fact is, in some cases humans are just too slow to respond to threats – some degree of automation is vital. By the time the ethics, health and safety and legal factors are considered, the incoming hypersonic missile will have killed the target.
These folk need to allow the military to adapt to the modern world and the threat….
It isn’t just hypersonics; even subsonic sea skimmers just don’t give much reaction time.
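Back-of-the-envelope only, to put some rough numbers on that. The altitudes and speeds below are illustrative guesses, not real system data, and real detection usually happens well inside the radar horizon:

    # Rough reaction-window estimate using the standard radar-horizon
    # rule of thumb: d_km ~= 4.12 * (sqrt(h_radar_m) + sqrt(h_target_m)).
    # Altitudes and speeds are illustrative only.
    from math import sqrt

    def radar_horizon_km(radar_height_m, target_height_m):
        return 4.12 * (sqrt(radar_height_m) + sqrt(target_height_m))

    def reaction_time_s(radar_height_m, target_height_m, speed_mps):
        return radar_horizon_km(radar_height_m, target_height_m) * 1000 / speed_mps

    for name, alt_m, speed_mps in [
        ("subsonic sea-skimmer (~Mach 0.9)", 5, 300),
        ("supersonic sea-skimmer (~Mach 2.5)", 10, 850),
        ("hypersonic (~Mach 5)", 20, 1700),
    ]:
        t = reaction_time_s(20, alt_m, speed_mps)
        print(f"{name}: at best ~{t:.0f} s from radar horizon to impact")

That gives very roughly 90, 40 and 20 seconds respectively as an absolute best case, before detection, classification and decision time are subtracted – hence the argument for automation.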
The bounders! It’s just not cricket! What would Captain Mainwaring say?
When it comes to all-out war, most ethics goes out the window. Russia, China etc will develop AI. If we let these people govern our defence policy we are lost. If they had their way, British soldiers would be armed with balloons on sticks. But you know what the real fear of AI is? That it would just obey its program and behave better than humans! Show us up: all our shortcomings as very flawed beings. AI is as ethical as its program. A program instigated in a laboratory, not decisions made in the harsh, frightening, deadly, chaotic, heat-of-the-moment cauldron of war.
Read what General Frazer says about the rules of engagement brought in in 1945 by the UN on the Allies’ side. They cost 2nd Army Group many lives. A tank became a sitting target before it could fire on a likely gun position. I don’t know if these rules still apply. It’s Belgrano all over again, I suppose. Same useful idiots.
They should host these committee meetings on a warship in a combat zone and then see if they would like the CIWS turned off!
Good thinking.
I bet the Chinese and the Russians are worried sick about the ethics of their weapons.🙄
I mean, China also has concentration camps?
I find it very interesting how the people who protested against:
Laser dazzle emitters in the Falklands,
Napalm
White Phosphorus
AP Mines
Neutron bombs
DU rounds
UAVs
Once they got their way in the West, remained silent when others (China, Russia, Iran etc) deployed those banned weapons. I mean, look at UAVs: so many pressure groups, yet the likes of the three above have developed them further than the UK has, and yet they don’t say a word. In fact CND, which is openly against any form of splitting the atom, not only invited the Iranian ambassador to talk about nuclear disarmament at their 2006 AGM, they actually defended Iran’s right to a nuclear programme.
The enemy within, as usual.
Absolutely.
Amnesty International, best friends with Tehran, Beijing and Moscow.
The Shadow DS is a member I believe. Worried? I am.
That’s nonsense & untrue. Amnesty are exactly the free-speaking human rights defenders all those nations’ regimes hate.
There are times when human intervention seems to be absent in Amnesty’s thinking.
Bit of a simplistic question, but if the “West” goes down a route of banning AI in weapon systems will our “friends” on the other side of the fence willingly do the same? Can’t see that happening at all.
Mines, sea and land, are dumb and sufficiently ethically problematic that Britain doesn’t use either. Perhaps a landmine with AI would be significantly less of a problem than one without. Is it really better for a landmine to predictably explode no matter who steps on it, or one that might be able to figure out it’s a farmer ploughing a field fifteen years after the war is over?
Then there are loitering munitions, which aren’t even supposed to be defensive. They can be dumb or smart. I’d feel happier either way with a person in the loop, but absent that, surely better with AI than without.
It seems to me that the issue with these kinds of system isn’t AI. It’s the period of time between intent and effect, during which unexpected things could happen.
Bloomin’ good point Jon! Mines have been the bane of civilians’ lives since… well, way back. Even in Roman times, caltrops were indiscriminately sown to impale horses and people.
Think we have only given up anti-personnel mines, not anti-tank mines.
I don’t see the conflict/risk with a CIWS like Phalanx (1970s/80s technology), which enables a warship carrying 100+ crew to defend against missiles that may leave too short a detection/intercept time for any other system to deal with.
I do however have huge issues with autonomous, higher-intelligence weapons systems which could prosecute offensive action against people & targets without firm moral human oversight. We really DO need to avoid a “Terminator”-Skynet scenario, which is perilously close. This sort of thing needs quality moral leadership integrity, which appears extremely rare nowadays in whatever society.
Imagine dictators or terrorists able to program drone swarms to assassinate any & everybody they deem in need of purging, using facial recognition & small explosive charges. “Enemies of the state/people”, “counter-revolutionaries”, “infidels” etc. Or larger systems (aircraft, helicopter gunships, armoured vehicles, etc).
All these drones seem marvellous until they become subverted, corrupted or hacked by your enemy to their use.
It was always my understanding that all weapons on Royal Naval vessels were under the control of the Weapons Officer and, higher up the chain, the Captain of the vessel, and that weapons would be activated on the orders of that chain of command should there be a threat to the vessel or to vessels within the naval group. Therefore a call to ‘Action Stations’ would be made and the weapons would be activated under human control. The only automated part would come once the system was switched on and its tracking system became active. Before that happened the friend-or-foe ‘ident’ would have been made. These therefore are not full AI weapons systems.
So you ban them; then what is going to happen is that all the tyrants and warmongers of the world ignore any such ban. In real-world terms, banning them is not an option, unless you want the tyrants and warmongers like Iran, China and Russia, who ignore international law, to take over, as they will use them regardless of what international law says.
At the moment all weapons systems need to be switched on by an operator given an order by command. Hardly AI, is it? Before you armchair warriors get your knickers in a twist, get your facts right.