AI is already being melded with robotics – one outcome could be powerful new weapons.

Interest in the incorporation of robots into security, policing and military operations has been steadily increasing over the last few years.

It’s an avenue already being explored in both North America and Europe.


Written by Mark Tsagas, University of East London. This article is the opinion of the author and not necessarily that of the UK Defence Journal. If you would like to submit your own article on this topic or any other, please see our submission guidelines.


Robot integration into these areas could be seen as analogous to the inclusion of dogs in policing and military roles in the 20th century. Dogs have served as guards, sentries, message carriers and mine detectors, among other roles.

Utility robots, designed to play a support role to humans, are mimicking our four-legged companions not only in form, but in function as well. Mounted with surveillance technology and able to ferry equipment, ammunition and more as part of resupply chains, they could significantly minimise the risk of harm to human soldiers on the battlefield.

However, utility robots would undoubtedly take on a different dimension if weapons systems were added to them. Essentially, they would become land-based variants of the MQ-9 Reaper drone aircraft currently in use by the US military.

In 2021, the company Ghost Robotics showcased one of its four-legged robots, called Q-UGV, that had been armed with a Special Purpose Unmanned Rifle. The showcase event leaned into the weaponisation of utility robots.

It is important to note that each element of this melding of weaponry and robotics operates in a different way. Although the robot itself is semi-autonomous and can be controlled remotely, the mounted weapon has no autonomous capability and is fully controlled by an operator.

In September 2023, US Marines conducted a proof of concept test involving another four-legged utility robot. They measured its ability to “acquire and prosecute targets with an M72 Light Anti-Tank Weapon”.

The test reignited the ethics debate about the use of automated and semi-automated weapon systems in warfare. It would not be such a big step for either of these platforms to incorporate AI-driven threat detection and the capability to “lock on” to targets. In fact, sighting systems of this nature are already available on the open market.
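
To make the division of labour concrete, here is a minimal Python sketch of the kind of human-in-the-loop arrangement described above, in which software acquires and ranks tracks but a person controls engagement. It is purely illustrative: the detector stub, the Detection fields and the operator prompt are all invented for this example, and no real sighting system or weapons API is being described.

```python
# Illustrative sketch only: a hypothetical human-in-the-loop targeting loop.
# The detector, track fields and operator prompt are invented for this
# example; no real sighting-system or weapons API is described here.

from dataclasses import dataclass

@dataclass
class Detection:
    track_id: int      # a stable ID lets the system "lock on" to one track
    label: str         # e.g. "vehicle"
    confidence: float  # detector score in [0, 1]

def detect_threats(frame) -> list:
    """Stand-in for an AI detector (e.g. an off-the-shelf object-detection
    model). It only proposes candidate tracks; it decides nothing."""
    return [Detection(track_id=7, label="vehicle", confidence=0.91)]

def operator_confirms(det: Detection) -> bool:
    """The critical boundary: any engagement requires an explicit,
    affirmative human decision. Automation stops at recommendation."""
    answer = input(f"Track {det.track_id} ({det.label}, "
                   f"{det.confidence:.0%}). Engage? [y/N] ")
    return answer.strip().lower() == "y"

def control_loop(frame) -> None:
    for det in detect_threats(frame):
        if det.confidence < 0.8:
            continue  # the AI side only filters and ranks
        if operator_confirms(det):
            print(f"Operator authorised action on track {det.track_id}")
        else:
            print(f"Track {det.track_id} held: no human authorisation")

if __name__ == "__main__":
    control_loop(frame=None)
```

Much of the ethics debate turns on where that confirmation gate sits, and on how easily a future software update could remove it.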

US Marines test an anti-tank weapon mounted on a robot “goat”.

In 2022, a group of leading robotics companies signed an open letter hosted on the website of Boston Dynamics, which created a dog-like utility robot called Spot. In the letter, the companies came out against the weaponisation of commercially available robots.

However, the letter also said the companies did not take issue “with existing technologies that nations and their government agencies use to defend themselves and uphold their laws”. On that point, it’s worth considering whether the horse has already bolted with regards to the weaponisation of AI. Weapons systems with intelligent technology integrated into robotics are already being used in combat.

This month, Boston Dynamics publicised a video showing how the company had added the AI chatbot ChatGPT to its Spot robot. The machine can be seen responding to questions and conversation from one of the company’s engineers using several different “personalities”, such as an English butler. The responses come from the AI chatbot, but Spot mouths the words.

Boston Dynamics added ChatGPT to its robotic dog, Spot.
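
As a rough illustration of how such a pipeline might be wired together, here is a minimal Python sketch. It is an assumption-laden mock-up rather than Boston Dynamics’ implementation: the personality prompt, the model choice and the speak_and_animate stub are invented for this example, and the OpenAI client call is simply one way an LLM could be queried.

```python
# Hypothetical sketch of an LLM-with-personality pipeline for a robot.
# This is not Boston Dynamics' code: the personality prompt, model choice
# and speech stub are illustrative assumptions.

from openai import OpenAI  # assumes the openai package and an API key

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONALITY = (
    "You are a polite English butler giving a short guided tour. "
    "Answer in one or two sentences."
)

def ask_chatbot(question: str) -> str:
    """Every word of the reply comes from the chatbot, not the robot."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": PERSONALITY},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

def speak_and_animate(text: str) -> None:
    """Stand-in for the robot side: text-to-speech plus moving the
    gripper 'mouth' in time with the audio."""
    print(f"[robot mouths]: {text}")

speak_and_animate(ask_chatbot("What can you see from here?"))
```

The division of labour matters here: the language model only supplies the words, while the robot’s existing controllers handle motion, which is why a chatbot “personality” can be bolted on without granting the machine any new physical autonomy.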

It’s a fascinating step for the industry and, potentially, a positive one. But while Boston Dynamics may be maintaining its pledge not to weaponise its robots, other companies may not feel the same way. There’s also the potential for misuse of such robots by people or institutions that lack a moral compass. As the open letter hints: “When possible, we will carefully review our customers’ intended applications to avoid potential weaponisation.”

UK stance

The UK has already taken a stance on the weaponisation of AI with its Defence Artificial Intelligence Strategy, published in 2022. The document expresses the intent to rapidly integrate artificial intelligence into Ministry of Defence systems to strengthen security and modernise the armed forces.

Notably, however, an annex to the strategy document specifically recognises the potential challenges associated with lethal autonomous weapons systems.

For example, real world data is used to “train” AI systems, or improve them. With ChatGPT, this is gathered from the internet. While it helps AI systems become more useful, all that “real world” information can also pass on flawed assumptions and prejudices to the system itself. This can lead to algorithmic bias (where the AI favours one group or option over another) or inappropriate and disproportionate responses by the AI. As such, sample training data for weapons systems needs to be carefully scrutinised with ethical warfare in mind.
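
To see how skewed data alone can produce algorithmic bias, consider the small self-contained Python sketch below. The groups, scores and rates are entirely fabricated: the measured “score” for one group is shifted upward by the data-collection process, standing in for a flawed assumption inherited from real-world data, and a classifier trained on that data ends up wrongly flagging the affected group far more often.

```python
# Toy demonstration of algorithmic bias inherited from skewed data.
# Groups, scores and rates are all fabricated for illustration.

import random

random.seed(0)

def make_samples(group: str, n: int) -> list:
    """Generate (score, is_threat) rows. The measured score is shifted
    upward for group_a regardless of ground truth, standing in for a
    flawed assumption baked into the data-collection process."""
    rows = []
    for _ in range(n):
        is_threat = random.random() < 0.2
        base = 0.7 if is_threat else 0.3
        skew = 0.2 if group == "group_a" else 0.0
        rows.append((base + skew + random.gauss(0, 0.1), is_threat))
    return rows

train = make_samples("group_a", 500) + make_samples("group_b", 500)

# "Training": pick the single score threshold with the best accuracy.
threshold = max((t / 100 for t in range(101)),
                key=lambda t: sum((s >= t) == y for s, y in train))

# The same learned threshold treats the two groups very differently.
for group in ("group_a", "group_b"):
    rows = make_samples(group, 2000)
    innocents = [s for s, y in rows if not y]
    fpr = sum(s >= threshold for s in innocents) / len(innocents)
    print(f"{group}: false-positive rate {fpr:.1%}")
```

Nothing in the decision rule refers to the group; the disparity is carried entirely by the data, which is why the annex’s insistence on scrutinising training data is not a box-ticking exercise.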

This year, the House of Lords established an AI in Weapon Systems select committee. Its brief is to see how armed forces can reap the benefits of technological advances, while minimising the risks through the implementation of technical, legal and ethical safeguards. The sufficiency of UK policy and international policymaking is also being examined.

Robot dogs aren’t aiming weapons at opposing forces just yet. But all the elements are there for this scenario to become a reality, if left unchecked. The fast pace of development in both AI and robotics is creating a perfect storm that could lead to powerful new weapons.

The recent AI safety summit at Bletchley Park had a positive outcome for AI regulation, both in the UK and internationally. However, there were signs of a philosophical split between the summit goals and those of the AI in Weapon Systems committee.

The summit was geared towards defining AI, assessing its capabilities and limitations and creating a global consensus with regard to its ethical use. It sought to do so via a declaration, very much like the Boston Dynamics open letter. Neither, however, is binding. The committee, by contrast, seeks to integrate the technology rapidly and with clarity, albeit in accordance with ethics, regulations and international law.

Frequent use of the term “guard rails” in relation to the Bletchley summit and declaration suggests voluntary commitments. And UK prime minister Rishi Sunak has stated that countries should not rush to regulate.

The nobility of such statements wanes when set against the enthusiasm in some quarters for integrating the technology into weapons platforms.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Comments
AlexS · 4 months ago

The West is really behind for the next war. It seems to have a fetish for vectors: ships, hugely expensive complex aircraft that take 20 years to fly, tanks with only a 4km-range gun… Development moves at a snail’s pace, which is not surprising in fashionable, entropic societies. What is needed instead is long-range fires, missiles, drones, satellites and miniaturisation. The next Six-Day War will be a one-day war in which all airfields are attacked with tens of thousands of missiles and drones, and it will not be our side doing it. No need for F-35, Tempest, AEW, Type 45 etc… AI and robotics, I am afraid, are a big…

Jim · 4 months ago · Reply to AlexS

Yes, but to a certain extent the West looks at logistics while Russia and China build internet-meme weapons. We have had self-targeting munitions for 20 years. Stick a rocket launcher on the back of a robot dog and what is it going to achieve that a smart landmine can’t, for a fraction of the price and with no logistics hangover? How is that robot dog going to get charged up? How can you realistically guarantee it’s not going to blow up your own side? I can drop one sensor-fused bomb in a forest and it will find and kill…

AlexS · 3 months ago · Reply to Jim

Well, a huge part of the problem is that a lot of tech on the civilian side moves at a much faster pace than in the military. In the past the military was at the forefront of almost every technology; today it still is in certain extreme areas, but a big chunk of usable “democratic” technology now moves much faster on the civilian side. And you can win a war with that.

Jim · 3 months ago · Reply to AlexS

Yes, but it’s easy to look at civilian technology, take quadcopters, and extrapolate it into a wonder weapon; the battlefield is much more complicated, and logistics will often be much more important than simple tech. Self-targeting weapons are a game changer, but we have arguably had them since the first landmines were made. AI is not necessarily a factor, or that useful.

AlexS · 3 months ago · Reply to Jim

I did not specify which fast-paced tech weapon; it could be anything from a rocket, to a drone, to a missile, to a 2,000km ballistic missile with terminal guidance. All of these can inflict significant casualties on sides vulnerable to even small numbers of casualties.
Japan needed an air fleet, and a very highly trained one, to pull off a Pearl Harbor; today that could be accomplished with sufficient quantities of ballistic and cruise missiles fired from a distance.

Jim · 3 months ago · Reply to AlexS

Yes, but almost anything with range, including drones, instantly becomes a military issue due to the potential for misuse, much like nuclear material, and is only available to state actors. It takes a significant logistical footprint to operate such items at scale, especially if the other side has intel and is shooting back.

AlexS · 3 months ago · Reply to Jim

Hamas fired more than 10,000 rockets at Israel, some with 250km range. In 10 years a portion of those rockets could have guidance, unless Hamas prefers quantity for terror.
Now let’s do an exercise: those guided rockets could be concentrated to saturate an objective such as an air base, naval base or city centre.
What does that tell you?
The West favours the past, the vector: the ship, the aircraft, the tank. That was necessary when there was no reliable missile capability, but now their complexity, cost and time to enter service are so great that they undermine any combat staying power.

Jim · 3 months ago · Reply to AlexS

Yes, if you build a wall around a territory and let them build up capability for a decade, they can launch a major strike. Much the same is happening with suicide drones in the Russia-Ukraine war. But in a real war you’re not going to be able to stack up drones or missiles like this; they will be taken out. Again, all these capabilities exist today without AI. Getting back to the premise, I don’t think AI is the big deal it’s made out to be, because we already have self-targeting weapons, and having true AI comes with a series…