The Ministry of Defence has addressed the use of artificial intelligence (AI) in relation to the UK’s nuclear capabilities, following a written question from James Cartlidge, Conservative MP for South Suffolk.
Cartlidge sought clarification on whether AI has been used to support routine operations or inform policy decisions regarding the UK’s nuclear capability.
In his response, Defence Minister Luke Pollard stated that the “delivery of defence capabilities enabled by artificial intelligence (AI) will be ambitious, safe and responsible.” He noted that research is actively underway to “identify, understand, and mitigate against risks of applying AI for sensitive defence affairs.”
Pollard stressed that routine nuclear deterrence operations are “conducted in accordance with the highest standards and controls.” He further reassured that “regardless of any potential application of artificial intelligence in our strategic systems, we will ensure human political control of our nuclear weapons is maintained at all times.”
The response underscores the MoD’s commitment to maintaining human control over nuclear weapons, regardless of technological advancements in AI. This principle is aligned with longstanding policies to ensure that decisions involving nuclear force remain under direct political authority.
Pollard also indicated that further details on AI applications within the nuclear domain would not be disclosed, as doing so “would prejudice the capability, security and effectiveness of the Armed Forces.”
Do you want to play a game?
The old ones are the best... it’s funny that 40 years later it’s now a serious question.
AI involved in nuclear weapons… It’ll launch all our nukes to start WW3 and then mobilise drones to hunt down Sarah Connor!
Depressing how mainstream the confusion between LLMs and general intelligence is.
It’s like seeing people ask if their phone’s autocomplete will have the authority to launch the nukes.
Obviously work on other forms of AI, such as for swarming, is ongoing but the advancements in that area are mostly separate from the language model based systems that we’re familiar with now.
I think it’s because we don’t actually know what a general intelligence really is. The truth is we still don’t really know how the human mind works or what consciousness is. We haven’t even truly answered some basic questions, like what sleep actually is and why we do it, especially REM sleep. We know that without it our consciousness essentially collapses in a heap and we die, but as to how it works, we’re in the dark. There are structures in the brain we have very little idea about, and even the structures we have some good idea on, like the hippocampus, we really struggle with the how: how it compiles all the elements of memory from different parts of the brain and gives them emotional context (like some form of emotional date stamp). We really don’t know; just that it does it.
So really we struggle with a true definition because we still struggle with how and why our own consciousness works. But one thing we do know is that for a high-order consciousness like a human being, a fundamental requirement is to recognise, generate and manipulate language. So these LLMs are a first step, but a very, very long way from what true consciousness is.
But to be honest I think artificial general intelligence is a way off. In reality, to get a true general intelligence, neuromorphic computers are going to have to take a step forward yet. Latency is the killer for conventional computers, and neuromorphic computers are still bound by it, so even the best are only at 130–250 billion synapses versus the human brain’s roughly 800 trillion: around 3,200 times more in the human brain. And as you build in more synapses, latency starts to work against you.
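For what it’s worth, the ratio quoted above checks out as simple arithmetic. A minimal sketch, using the comment’s own figures (250 billion synapses for the best current neuromorphic hardware, 800 trillion for the brain; both are rough estimates, not authoritative numbers):

```python
# Sanity-check the synapse-count comparison from the comment above.
# These figures are the commenter's estimates, not measured values.
neuromorphic_synapses = 250e9   # upper end of the 130-250 billion range quoted
human_brain_synapses = 800e12   # rough estimate for the human brain

ratio = human_brain_synapses / neuromorphic_synapses
print(f"The brain has roughly {ratio:,.0f}x more synapses")  # roughly 3,200x
```

Using the lower end of the quoted range (130 billion) instead would give a gap of over 6,000x, so "around 3,200 times" is the best-case comparison.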