The Royal Navy is using artificial intelligence for the first time at sea in an effort to help defeat missile attacks.
The Royal Navy says that leading-edge software is being tested at sea against live missiles during the largest exercise of its type, held off the coasts of Scotland and Norway.
“Involving more than 3,000 military personnel, Formidable Shield tests the ability of NATO warships to detect, track and defeat incoming missiles, from sea-skimming weapons travelling at twice the speed of sound just above the waterline, to ballistic missiles.
Three Royal Navy warships are taking part in the exercise, which runs until early June: destroyer HMS Dragon and two frigates, Lancaster and Argyll. HMS Lancaster and Dragon are trialling artificial intelligence and machine learning applications which offer a glimpse of the future of air defence at sea.”
It is claimed that experts from the Government’s defence laboratory Dstl and industry partners from Roke, CGI and BAE Systems are using the three-week exercise to test their ‘Startle’ and ‘Sycoiea’ systems.
“Startle is designed to help ease the load on sailors monitoring the ‘air picture’ in the operations room by providing real-time recommendations and alerts. Sycoiea builds upon this and is at the forefront of automated Platform and Force Threat Evaluation Weapon assignment, effectively allowing operations room teams to identify incoming missiles and advise on the best weapon to deal with them more quickly than even the most experienced operator.”
Above Water Tactician Leading Seaman Sean Brooks, aboard HMS Lancaster, is among those who were impressed by the software.
“I was able to identify missile threats more quickly than usual and even outwit the operations room!” he was quoted as saying.
Good to see this technology being tested and integrated so much more quickly. I was one of the first CIS ratings back in the day (re-trained from Comms, then subsequently leaving the Navy!) and we were a long way behind at that stage. I assume this is software running on the ships’ existing shared infrastructure; I expect it needs some serious computing power anyway (no cloud option!).
It will all be processed using high-powered graphics cards in servers. They are inherently massively parallel in their design and execution.
Well, iPhones are now doing a lot of their AI work on the phone, so I’m sure a specialist product of this nature can be pretty powerful, self-contained and surprisingly compact, I suspect. A long way, mind, from the Quantel-based graphics systems we used back in the early 80s, which originated from state-of-the-art defence silicon, with a price reflecting it, just to move images around in real time.
Yes, I totally agree.
Very little ‘AI’ software contains any actual AI!
Read that the French shot down a supersonic target, flying at more than 3,000 km/h (1,864 mph), using an MBDA Aster 30 surface-to-air missile on the same exercise.
Here’s a tweet from the French navy regarding the above, including the launch and shootdown:
https://twitter.com/MarineNationale/status/1397548486162276357?s=20
Impressive!
Interesting compared to the US failure to do the same recently using Aegis and Standard missiles, though they have declined to share more than basic details. Not sure how comparable the tests were. I do believe they have had previous successful interceptions, mind.
Not sure if you are referring to the US MDA test over the Pacific this weekend that failed to intercept the intended target. This was actually a test to intercept a medium-range ballistic missile target with a salvo of two SM-6 missiles, which as I understand it is significantly more challenging than intercepting a supersonic cruise missile, which is itself a great capability demonstration. Note that the USN has been intercepting this cruise missile target going back many years.
Brian: That’s O.K. The “missile attack” was artificial, also.
But was it intelligent?
I guess the answer is 42, ie you have to define the actual question. It’s all very subjective and humans tend to be very arrogant about how they define intelligence to overstate our own form of it.
Maybe start with the Turing test?
Very little open source information on how these tools work and to what extent they are actually ‘AI’. The original request for proposals provides a bit more detail:
https://www.gov.uk/government/publications/competition-intelligent-ship-phase-2/competition-document-intelligent-ship-phase-2
Sea Wolf was doing fully auto shoots in the late 70s and early 80s.
967 picked up the target. FM1600B computers did the threat evaluation and allocated trackers. The 1600B program in those days was loaded by bootstrapping the computer with toggle-switch combinations on the cabinet and loading in reels and reels of punched tape. The memory was a ferrite core store.
Trackers used the same computers.
All in all it shot at a thing that met the threat parameters.
Was it AI?
No.
It was programmed in machine code, without all the bloat that modern computer programs have, so it ran quickly, in real time, even in a small ferrite core store.
But it was bloody good at what it did.
I re-programmed something similar in the ’90s. It is an art working with tiny amounts of storage. It requires a lot of reductionist thinking – how do I make that simpler?
It also helps if you are able to work in a high-level language (FORTRAN, COBOL etc.) that is well suited to complex calculations. Trouble is that these days everything is just masses of sloppily coded C++ which has not been modularised properly. And when it doesn’t run that well, the answer is just more cores and more memory….
It is quite surprising, when you pull a lot of niche programming apart, how often you find a lump of a PhD student’s untested code hiding in there, and how lumps of code just get aggregated.
FYI ferrite core memory lived on in MILSPEC longer than in civvy street as it was supposedly better at being EMP resistant.
It’s an art working with tiny amounts of storage but it’s even more of an art programming performance-sensitive real time systems with limited storage. Often in more general programming one can choose at least somewhat between code that maximises space efficiency at the expense of performance or code that maximises time efficiency (maximises performance) at the expense of code size. Needing to make something both time efficient and space efficient is where it gets really tricky.
I totally agree with that.
What I was programming had various maximum recycle times based around the hardware recycle times, measured in ms (milliseconds) and some in µs (microseconds).
The biggest problem was always to process the signal feed (effectively radar) in near real time as it would be a disaster for there to be time slippage. There being, at the time, no real way of storing the raw feed.
The first company I worked for back in 1982 had two computers in its computer room, a PDP-8 and an FM1600, the latter because we did a lot of work for the RN and so needed it to develop and support the software we provided to the RN. The software house I was working for might even have been the one that developed that Sea Wolf code. I was working in a different group so don’t know exactly what we did for the RN, although just before I left I was about to join an RN project and was going through Developed Vetting because I was going to need access to some very sensitive data. By that time we weren’t toggling in bootloaders to read in programs from paper tape, but in my earlier university days we were doing just that. So tedious!
At that company the Argentinian Navy was also using our software. I still remember one day in the office when a junior support guy came into my open plan area looking bemused trying to find a manager because his manager was out for the day. It was the middle of the Falklands war and he had just had a call from Argentina asking for software support for one of our products! My manager obviously advised him not to respond. Those were less sophisticated times but I suppose that in theory we could have delivered a malicious patch that at least appeared to fix the problem but also did some sort of damage. Who knows, maybe we did, it was my first job after university so any decision like that would have been well above my pay grade.
When I did my apprenticeship we did a huge swathe of courses to ground you in everything that you might come across. So fitting and turning, electronics, hydraulics, control engineering, radar principles, and computer coding. You started doing raw coding directly using machine code, then COBOL or CORAL66 I think. Once you grasped that you moved on to the nice and easy BASIC.
Made you appreciate the need to keep code short and compact. None of the modern-day cut and paste of vast tracts of bloated code. You had a finite memory store to run it in and it wouldn’t work if you didn’t keep it compact.
Wow. What a great apprenticeship. So well thought out and so well rounded to include giving you a solid exposure to coding too. In a way it makes me sad. I really do wish the MoD could somehow get the message out more widely about what a great career the Navy can be, in this case particularly for someone interested in and wanting to get a really solid understanding of all sorts of aspects of engineering, electronics, computing etc. At least those are my assumptions from what I hear and from seeing the depth of expertise that you and other people in or ex RN display in their posts. Maybe it’s not as good now with such tight budgetary constraints and under-staffing (back to the challenge of attracting good new recruits and training them well) but if that’s the case the government really should be trying to reverse the decline. Oh, for an advertising campaign that could really get that message across. I am happy with my career choices, a comp sci degree and then a lifetime in the commercial computing sector, but I do sometimes think that the Navy was a road not taken for me.
I had heard of CORAL66 but had never used it and didn’t know much about it, so I looked it up on Wikipedia. Interestingly, it was actually developed in Malvern, and the “R” originally stood for “radar”, later generalised to “real-time”. From the Wikipedia article: “CORAL, short for Computer On-line Real-time Applications Language, is a programming language originally developed in 1964 at the Royal Radar Establishment (RRE), Malvern, Worcestershire, in the United Kingdom. The R was originally for ‘radar’, not ‘real-time’.”
With that heritage it is no surprise that it was one of the languages you were exposed to.
If we do not get a grip on AI now, 100%, we will be in trouble!
You are never going to get a 100% grip on AI now, I’m afraid. It’s an ever-moving target: as it develops, it increasingly goes beyond a function of our input, or permutations of that original input, and becomes less predictable, much like the behaviour of the more natural biological life forms we are attempting, to a degree, to mimic. Maybe I’m misinterpreting your intent there (indicative itself of the problem), but surely the more AI develops, the more that concept is counter-intuitive. It would be easier to herd biologically intelligent cats. We are going to have to be extremely careful, mind, I do accept, but that simply presents complex problems in itself, especially in a competitive and suspicious threat-filled world.
That sounds like a good idea for a film or a series of films…😂
Yes. The term AI has been totally hijacked recently by some people calling almost any computer automation “AI”. I learned to add, subtract, multiply and divide at school and am at least reasonably good at doing mental arithmetic. Does the fact that my calculator can do calculations faster, and usually more accurately if not dealing with round numbers, mean that my calculator should be labelled as artificial intelligence? Actually one could get even more absurd and, following the same line of “faster than mental arithmetic” logic, call a slide rule (for those who remember those!) artificially intelligent. (I also worked in what I would consider “real” AI way back in the early part of my career.)
Well, if it were DeepMind-based it certainly would qualify, as its work is considered a true Turing machine and it learns without direct human input, through experience and analysis, and extremely quickly at that. But in this case I grant you that sort of technology is highly unlikely at this stage. Though I’m sure other companies are delving into similar techniques, or certainly soon will be, so I don’t share your negativity as this technology develops.
As it stands, it very much depends on how you define ‘artificial intelligence’. It’s artificial, and there are arguably indicators of basic intelligence, at least in specialist areas. Consider that, of all the levels of intelligence on Earth, only a tiny fraction is at the level of humans, and yet there are instances where life forms deemed far less intelligent than us can still, in specialist functions at least, show intelligent abilities superior to our own. So it’s all rather subjective, but it’s certainly developing extremely quickly.
I’m interested.
Why don’t you think a DeepMind-type system can be implemented? A T45 has plenty of space and power for a supercomputer.
BDR might be an issue. But there again it can be recompiled to run on a robust system.
If people start throwing hypersonic or ballistic missiles at ships, the reaction time will be down to seconds. So I can only see this as a positive; we need to also accelerate the necessary counters to these threats.