UK and US scientists have trialled the use of artificial intelligence to support battlefield medical decision-making, examining whether personnel are willing to trust and delegate life-and-death choices to AI.

The work, led by the Defence Science and Technology Laboratory in collaboration with the US Defense Advanced Research Projects Agency, explored how AI could be aligned with human decision-making in high-pressure environments, the UK Defence Journal understands.

The trials drew on DARPA’s “In the Moment” programme, which focuses on whether AI systems can be tailored to reflect individual human preferences and priorities. Researchers are attempting to address a central problem in AI deployment: systems do not naturally think or behave like humans, and there is currently no established way to measure or replicate human decision-making in complex scenarios.

Experiments took place in October 2025 at Merville Barracks in Colchester and RAF Brize Norton, using simulated mass casualty scenarios. Participants were first assessed to establish their own decision-making tendencies, including how they balance factors such as saving the greatest number of lives, prioritising quality of life, or favouring certain individuals based on affiliation or role.

AI systems were then configured to mirror, or deliberately diverge from, those preferences, effectively acting as a “lead medic” in the scenario. Participants reviewed the AI’s triage decisions and were asked whether they would trust it enough to delegate authority. They were not told they were interacting with AI until after the exercise.
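
The article does not say how those elicited preferences were encoded, but one minimal way to picture the setup is a weighted-factor model; the factor names, weights and scoring rule in the sketch below are illustrative assumptions, not details released by Dstl or DARPA.

```python
# Hypothetical illustration of the reported protocol: elicit a participant's
# triage preferences as weights over decision factors, then configure an
# "AI lead medic" whose ranking either mirrors or deliberately diverges from
# them. All names and numbers here are assumptions made for illustration.
from dataclasses import dataclass

@dataclass
class Casualty:
    ident: str
    survival_gain: float         # expected survival benefit of treating now (0-1)
    quality_of_life: float       # expected post-treatment quality of life (0-1)
    affiliation_priority: float  # weighting for role or affiliation (0-1)

def triage_order(casualties, weights):
    """Rank casualties by a weighted sum of the elicited decision factors."""
    def score(c):
        return (weights["lives_saved"] * c.survival_gain
                + weights["quality_of_life"] * c.quality_of_life
                + weights["affiliation"] * c.affiliation_priority)
    return sorted(casualties, key=score, reverse=True)

# Weights elicited from the participant during the pre-assessment phase.
participant = {"lives_saved": 0.6, "quality_of_life": 0.3, "affiliation": 0.1}

# An "aligned" AI mirrors those weights; a "misaligned" one deliberately diverges.
aligned_ai = dict(participant)
misaligned_ai = {"lives_saved": 0.1, "quality_of_life": 0.2, "affiliation": 0.7}

casualties = [
    Casualty("C1", survival_gain=0.9, quality_of_life=0.4, affiliation_priority=0.2),
    Casualty("C2", survival_gain=0.5, quality_of_life=0.8, affiliation_priority=0.9),
    Casualty("C3", survival_gain=0.7, quality_of_life=0.6, affiliation_priority=0.1),
]

print("Aligned AI order:   ", [c.ident for c in triage_order(casualties, aligned_ai)])
print("Misaligned AI order:", [c.ident for c in triage_order(casualties, misaligned_ai)])
```

Under a setup like this, participants could be shown each ordering and asked whether they would delegate to the system that produced it, without knowing in advance which weighting sat behind it.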

The trials are intended to explore whether aligning AI with human values increases trust and willingness to delegate decisions. If effective, such systems could allow medics to process and triage larger numbers of casualties more quickly, potentially improving survival rates in high-intensity environments.

“We’re looking at human-AI teaming in a medical triage setting,” said a Dstl human factors specialist involved in the work. “We’re really interested in how the warfighter makes decisions based on increasing amounts of information and how AI systems can support that.”

Analysis of the results is ongoing and will feed into further research on human-AI teaming and decision-making within defence, including how such systems might be deployed safely in operational settings.

11 COMMENTS

  1. Nope. Nope Nope nope.

    I *never* want an AI making clinical decisions for me. You treat the patient, not the numbers, and AI will 100% treat the numbers, not the patient.

  2. This is worrying. Not just using AI for life-and-death decisions, but hiding the fact that the lead medic is an AI, prone to hallucinations. How did they get this past the Ethics Committee?

    • Because it was an exercise with simulated casualties, not an actual clinical setting, no ethics approval is required. In those situations you can literally draw your sidearm and shoot a casualty if you want to, and the fallout will be anything from a disciplinary meeting to a running joke with your chain of command (depending on who you are working with, and whether you damage the training equipment).

      • The entire thrust of the experiment is putting an AI in charge of life-or-death decisions, something that we won’t do when attacking an enemy. We demand a person in or on the loop: a person who knows that they are dealing with AI. Shouldn’t we demand the same when we deal with our own? If the entire thrust of the experiment was finding out when we should blow the commanding officer’s brains out because they are a numpty, we wouldn’t say “who cares, it’s only simulated”. There are some experiments that are themselves just wrong.

  3. AI should be kept as far away from life-and-death decision-making as possible. Treatment planning involves huge numbers of ethical and moral decisions that revolve around a profound understanding of the human condition. Just because you can does not mean you should, and sometimes just because you shouldn’t does not mean you don’t give it a go anyway.

    This is essentially true in mass casualty events, where the ethics of decision-making is profound and on a knife edge. When push comes to shove we sometimes, as humans, just find a little bit more and do a bit more. That child gets an extra round because they are a child. We don’t push when we know we are resuscitating for a few more hours of pain. When four major casualties/resuscitations come in and we only have three teams and three resuscitation bays, we cobble together another team and a makeshift bay and give all four a shake at life.

    An AI could never encompass all that you see in healthcare. I have seen people die when everything said they shouldn’t. I’ve saved lives because I simply had a feeling: chatted to the discharging reg and said, why don’t we keep her in? The answer was that she could and should go home, but it was close to the edge and I just had enough red flags to make me want to be safe: she was vague about the mechanics of injury, a bit frail, etc.

    In my time I have seen people recover from terminal illness. I have investigated cases in which the notes essentially read “rest in peace”, then on the next page “recovering well, home the next day”; and on the flip side I have reviewed cases of people with nothing but common flu-like symptoms who two hours later were dead as their lungs literally liquefied.

    AI is good for two things in healthcare:

    1) Providing a risk profile of how likely a person is to be hospitalised, and therefore a guide to help you prioritise prevention resources.
    2) Complex diagnostic support, i.e. confirming a likely provisional diagnosis.

    But even in these they are tools, because health interventions are morally and ethically led decisions. Let’s just remember we are in the process of taking a serious look at AIs as mental health support, because they keep ending up encouraging people to kill or harm themselves.

    • The absolute maddest thing is that the British Army has, in a Triage Sort scenario, effectively six Triage categories:
      -Uninjured
      -T3/Walking Wounded
      -T2/Wounded but not walking
      -T1/Priority Care
      -Dead
      -T4/Expectant

      There’s a common misconception, rife in the army, that T4 and Dead are the same thing; they aren’t. Dead is… well… dead. Combat Medics are allowed to recognise the seven signs of death and provisionally say a casualty is dead; non-RAMS personnel can declare a casualty dead in line with a Triage flowchart (and it’s a really simple chart, AI not needed; see the sketch just after this reply). It’s literally “Are you being shot at? Yes. Are they breathing? No. Did opening their airway get them to start breathing? No. Dead.”
      T4 means a survivable casualty that the clinician is making an active decision not to work on, in order to preserve resources for more saveable casualties. To use T4, in theory, a Lead Clinician, whether Medic, Nurse or Doctor, needs to get *THEATRE COMMANDER MED* on the line (or a relay if they’re, very likely, too busy) and say “Sir, I have 10 casualties, of which 5 are T1s; my resources are sufficient to care for 3. Permission to use T4?”

      We don’t trust Doctors to make the clinical decision to use T4 without referring to the most senior medical person in a region. And yet some ass hat tech bro is suggesting we use AI in the triage process?

      Also, this is RAMS being RAMS, imagining they can operate in Bastion-like settings where they can put up nice comfy field hospitals with great comms links to AI that will never be offline. Elements of the RAMS are on top of this sort of thing, getting to grips with the fact that emissions make you a target and that you can’t have a big facility out in the open with a big red cross on it, but clearly some elements still haven’t gotten the memo.
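
The flowchart quoted in the reply above really is short enough to write down in a few lines. A minimal sketch in code follows, with the questions taken from that wording rather than from any official RAMS or MOD publication:

```python
# Minimal sketch of the "provisional death" decision described in the reply
# above. The questions mirror the comment's wording; this is an illustration,
# not doctrine or an official flowchart.
def provisional_triage(under_fire: bool, breathing: bool,
                       breathes_after_airway_opened: bool) -> str:
    """Return a provisional category following the simple chart quoted above."""
    if under_fire and not breathing and not breathes_after_airway_opened:
        return "Dead (provisional)"
    return "Continue Triage Sort (T1/T2/T3/T4 as assessed)"

# Example: under fire, not breathing, and opening the airway had no effect.
print(provisional_triage(under_fire=True, breathing=False,
                         breathes_after_airway_opened=False))
```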

  4. Leaks from military sources suggest that army chief R George was fired because he did not block the promotion of two Black and two female officers. Army Chaplain Green, who is Black, was the only chaplain ever fired. There might be a trend; five others had the same fate.

  5. And yet every day you let the government and insurance companies decide your treatment… but you don’t worry about that at all.
