UK and US scientists have trialled the use of artificial intelligence to support battlefield medical decision-making, examining whether personnel are willing to trust and delegate life-and-death choices to AI.

The work, led by the Defence Science and Technology Laboratory in collaboration with the US Defense Advanced Research Projects Agency, explored how AI could be aligned with human decision-making in high-pressure environments, the UK Defence Journal understands.

The trials drew on DARPA’s “In the Moment” programme, which focuses on whether AI systems can be tailored to reflect individual human preferences and priorities. Researchers are attempting to address a central problem in AI deployment: systems do not naturally think or behave like humans, and there is currently no established way to measure or replicate human decision-making in complex scenarios.

Experiments took place in October 2025 at Merville Barracks in Colchester and RAF Brize Norton, using simulated mass casualty scenarios. Participants were first assessed to establish their own decision-making tendencies, including how they balance factors such as saving the greatest number of lives, prioritising quality of life, or favouring certain individuals based on affiliation or role.

AI systems were then configured to mirror, or deliberately diverge from, those preferences, effectively acting as a “lead medic” in the scenario. Participants reviewed the AI’s triage decisions and were asked whether they would trust it enough to delegate authority. They were not told they were interacting with AI until after the exercise.

The trials are intended to explore whether aligning AI with human values increases trust and willingness to delegate decisions. If effective, such systems could allow medics to process and triage larger numbers of casualties more quickly, potentially improving survival rates in high-intensity environments.

“We’re looking at human-AI teaming in a medical triage setting,” said a Dstl human factors specialist involved in the work. “We’re really interested in how the warfighter makes decisions based on increasing amounts of information and how AI systems can support that.”

Analysis of the results is ongoing and will feed into further research on human-AI teaming and decision-making within defence, including how such systems might be deployed safely in operational settings.

7 COMMENTS

  1. Nope. Nope Nope nope.

    I *never* want an AI making clinical decisions for me. You treat the patient, not the numbers, and AI will 100% treat the numbers, not the patient.

  2. This is worrying. Not just using AI for life and death decisions, but hiding the fact that the lead medic is an AI, prone to hallucinations. How did they get this past the Ethics Committee?

  3. AI should be kept as far away from life and death decision making as can be.. treatment planning involves huge numbers of ethical and moral decisions that revolve around a profound understanding of the human condition…just because you can, does not mean you should and sometimes just because you shouldn’t does not mean you don’t give it a go…

    This is essentially true in mass casualty events where the ethics of decision making is profound and on a knife edge… and when push comes to shove we sometimes as humans just find a little bit more and do a bit more.. that child gets an extra round of because they are a child… we don’t push when we know we are resuscitating for a few more hours of pain.. when we have 4 major casualties/ resuscitations come in and we only have 3 teams and 3 resuscitation bays.. we cobble together another team and a makeshift bay and give all four a shake at life…

    An AI could never be able to encompass all that you see in healthcare.. I have seen people die when everything said they shouldn’t.. I’ve saved lives because I simply had a feeling.. chatted to the discharging reg and said.. why don’t we keep her in? And the answer was she could and should go home.. but it was close to the edge and I just had enough red flags to make me want to be safe.. she was vague about mechanics of injury.. a bit frail.. etc.

    In my time I have seen people recover from terminal illness.. I have investigated cases in which the notes essentially read “rest in peace” then on the next page “recovering well, home the next day” and on the flip side I have reviewed cases of people with nothing but common flu-like symptoms who 2 hours later were dead as their lungs literally liquefied..

    AI is good for 2 things in healthcare..

    1) providing a risk profile of how likely a person is to be hospitalised and therefore provide a guide to help you prioritise prevention resources.
    2) complex diagnostic support.. so confirming likely potential provisional diagnoses..

    But in this they are tools, because health interventions are morally and ethically led decisions… let’s just remember we are in the process of having a serious look at AIs from a mental health support point of view because they keep ending up encouraging people to kill or harm themselves…

  4. Leaks from military sources are that army chief R George was fired because he did not block the promotion of 2 black and 2 female officers. Army Chaplain Green, a black, was the only chaplain ever fired. There might be a trend, 5 others had the same fate
