UK and US scientists have trialled the use of artificial intelligence to support battlefield medical decision-making, examining whether personnel are willing to trust and delegate life-and-death choices to AI.

The work, led by the Defence Science and Technology Laboratory in collaboration with the US Defense Advanced Research Projects Agency, explored how AI could be aligned with human decision-making in high-pressure environments, the UK Defence Journal understands.

The trials drew on DARPA’s “In the Moment” programme, which focuses on whether AI systems can be tailored to reflect individual human preferences and priorities. Researchers are attempting to address a central problem in AI deployment: systems do not naturally think or behave like humans, and there is currently no established way to measure or replicate human decision-making in complex scenarios.

Experiments took place in October 2025 at Merville Barracks in Colchester and RAF Brize Norton, using simulated mass casualty scenarios. Participants were first assessed to establish their own decision-making tendencies, including how they balance factors such as saving the greatest number of lives, prioritising quality of life, or favouring certain individuals based on affiliation or role.

AI systems were then configured to mirror, or deliberately diverge from, those preferences, effectively acting as a “lead medic” in the scenario. Participants reviewed the AI’s triage decisions and were asked whether they would trust it enough to delegate authority. They were not told they were interacting with AI until after the exercise.
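The article does not describe how the AI was parameterised, but the idea of tuning a triage model to mirror or diverge from an individual's priorities can be sketched in simplified form. Everything below — the factor names, weights, and scoring rule — is an illustrative assumption, not Dstl's or DARPA's actual model.

```python
# Hypothetical sketch: a triage ranker weighted by the decision-making
# factors the article mentions (lives saved, quality of life, affiliation).
# All names and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Casualty:
    name: str
    survival_prob: float    # estimated chance of survival if treated now (0-1)
    quality_of_life: float  # expected post-treatment quality of life (0-1)
    own_unit: bool          # affiliation factor

def triage_order(casualties, w_survival, w_quality, w_affiliation):
    """Rank casualties by a weighted preference score; highest score treated first."""
    def score(c):
        return (w_survival * c.survival_prob
                + w_quality * c.quality_of_life
                + w_affiliation * (1.0 if c.own_unit else 0.0))
    return sorted(casualties, key=score, reverse=True)

patients = [
    Casualty("A", survival_prob=0.9, quality_of_life=0.4, own_unit=False),
    Casualty("B", survival_prob=0.5, quality_of_life=0.9, own_unit=True),
]

# A "mirroring" configuration weights factors the way the participant does;
# a "diverging" configuration shifts the emphasis to other factors.
mirror = triage_order(patients, w_survival=0.7, w_quality=0.2, w_affiliation=0.1)
diverge = triage_order(patients, w_survival=0.1, w_quality=0.6, w_affiliation=0.3)
```

Under the mirroring weights the high-survival casualty is treated first; under the diverging weights the same two casualties are ranked the other way round, which is the kind of difference participants would then be asked whether they trusted.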

The trials are intended to explore whether aligning AI with human values increases trust and willingness to delegate decisions. If effective, such systems could allow medics to process and triage larger numbers of casualties more quickly, potentially improving survival rates in high-intensity environments.

“We’re looking at human-AI teaming in a medical triage setting,” said a Dstl human factors specialist involved in the work. “We’re really interested in how the warfighter makes decisions based on increasing amounts of information and how AI systems can support that.”

Analysis of the results is ongoing and will feed into further research on human-AI teaming and decision-making within defence, including how such systems might be deployed safely in operational settings.
