Deployed US service members are going to have to ditch their “geolocation devices” in response to a new memo from Deputy US Defense Secretary Patrick M. Shanahan.

This includes physical fitness aids, phone applications that track location, and other devices and apps that pinpoint and track the location of individuals.

After the fitness data service Strava revealed bases and patrol routes with an online heat map, the US military has reexamined its security policies for the social media age.

“Effective immediately, Defense Department personnel are prohibited from using geolocation features and functionality on government and nongovernment-issued devices, applications and services while in locations designated as operational areas,” Pentagon spokesman US Army Col. Robert Manning III told Pentagon reporters today.

Deployed personnel are in “operational areas,” and commanders will determine which other areas the policy may apply to.

The market for these devices has exploded over the past few years, with many service members incorporating them into their workout routines. They use the devices and applications to track their pace, running routes, calories burned and more. The devices then store the information and upload it to central servers, where it can be shared with third parties. That data can give adversaries insight into military operations.
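To illustrate why individually harmless workout uploads become revealing in aggregate, here is a minimal Python sketch of binning per-point GPS data into a heat-map grid. The data and function names are hypothetical; this is not Strava’s actual pipeline, just the general idea behind such a heat map.

```python
from collections import Counter

def heatmap_bins(track_points, cell_deg=0.001):
    """Count GPS fixes per grid cell (roughly 100 m at the equator).

    track_points: iterable of (latitude, longitude) tuples, e.g. one fix
    per second of every uploaded workout. Illustrative only.
    """
    counts = Counter()
    for lat, lon in track_points:
        cell = (round(lat / cell_deg), round(lon / cell_deg))
        counts[cell] += 1
    return counts

# A few laps of the same perimeter road, repeated by many users, is all
# it takes for a pattern to stand out once the points are aggregated.
sample = [(34.12345, 62.54321), (34.12346, 62.54330), (34.12345, 62.54321)]
print(heatmap_bins(sample).most_common(2))
```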

“The rapidly evolving market of devices, applications and services with geolocation capabilities presents a significant risk to the Department of Defense personnel on and off duty, and to our military operations globally,” Manning said, according to an official release.

These Global Positioning System capabilities can expose personal information, locations, routines and numbers of DoD personnel. Their use in overseas locations “potentially create[s] unintended security consequences and increased risk to the joint force and mission,” Manning said.

Personal phones and other portable devices also contain apps that rely on GPS technology, and they will be affected. Commanders will be responsible for implementing the policy, and they will be allowed to make exceptions only after conducting a thorough risk assessment.

George Allison
George has a degree in Cyber Security from Glasgow Caledonian University, has a keen interest in naval and cyber security matters, and has appeared on national radio and television to discuss current events. George is on Twitter at @geoallison

8 COMMENTS

  1. It’s very scary that even a forward-thinking governmental organ like the US DoD is so far behind the curve with regard to the future world view of the technology companies and their leaders. These guys (the technologists) are leading us into a brave new world that no one apart from them really gets or will truly profit from, and which the politicians, who should be the guardians of the future of our society, just can’t keep up with.

    • Agreed. In this case it’s quite ironic given how much of the US high tech industry was kick-started by US DoD-funded projects.

      I’m not even sure that they’ve fully closed the door on this one. I often get asked by my phone to turn on my WiFi because it can increase locational accuracy. Most modern phones can also triangulate their location from known WiFi access points, so even a device with GPS disabled could potentially get a location fix if it could see at least three suitable WiFi access points, especially if it were infected with malware. (https://www.scienceabc.com/innovation/how-does-turning-on-wifi-improve-the-location-accuracy-of-a-device.html)
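      As a rough illustration of the WiFi point above, here is a toy Python sketch of trilateration from three access points with known positions and distance estimates (as might be derived from signal strength). Real WiFi positioning services are far more sophisticated and rely on large databases of access-point locations; treat this purely as a sketch of the geometry.

      ```python
      import numpy as np

      def trilaterate(aps, dists):
          """Estimate a 2-D position from three or more access points.

          aps:   list of (x, y) access-point positions in metres (local frame)
          dists: list of estimated distances to each AP in metres
          Linearised least-squares solution; toy example only.
          """
          (x1, y1), d1 = aps[0], dists[0]
          A, b = [], []
          for (xi, yi), di in zip(aps[1:], dists[1:]):
              A.append([2 * (xi - x1), 2 * (yi - y1)])
              b.append(d1**2 - di**2 + xi**2 + yi**2 - x1**2 - y1**2)
          xy, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
          return xy

      # Three APs at known spots; distances as a phone might infer from signal
      # strength. Prints an estimate close to (25, 25).
      print(trilaterate([(0, 0), (50, 0), (0, 50)], [35.4, 35.4, 35.4]))
      ```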

  2. It’s mental the amount of information phones consume without people realising. I had a work colleague talking about buying a security camera for their house; then, without them ever googling it, when they visited a website the targeted advert from Amazon had a security camera on it. The only way we could figure it out was that the phone had picked up our conversation.
    Apparently David Cameron kicked a minister out of a cabinet meeting when his phone went off, because the policy was to put phones in a sealed box before entering the room (they know full well they can be recorded), and that was over five years ago.

    • Unless your colleague was a person of interest to the security services, who then decided that breaking the OSA to sell the intercepts on to commercial organisations for advertising purposes was a good idea, the security camera advert was either a coincidence (they do happen, I’ve had some amazing ones in my life) or your colleague had done something related to security cameras beforehand (maybe visited another site to check prices, or discussed them in Gmail). The technology simply doesn’t exist to do that level of mass surveillance and then post-process it to extract meaning with that level of accuracy, especially within a commercial organisation, even one the size of Amazon. The security services certainly could switch on the microphone (or camera) in a targeted mobile phone, as described in the “Smurfs” bit of the traitorous Snowden leaks, and I assume they still can, but doing that on random phones owned by the general public and then post-processing the captured speech to target ads at them? The likelihood is vanishingly small.

      • I’m not talking about government-level surveillance, but the likes of Apple, Google and Amazon definitely track and build their analytics on every individual user who has signed into an app on their phone. There is no reason to believe that microphones do not pick up every single conversation, listening for target words like “I need to buy a xxxxx”, and store it for marketing purposes. I know there has been a lot of change in the last couple of months regarding privacy and data sharing etc., but before that, and probably still today, every input and every bit of speech would have been taken by phone apps and processed for analytics.

        • Yes, but not talking about government surveillance makes it even more unlikely, because commercial organisations trying to use speech processing to target adverts would need to cover far more people, with correspondingly fewer computational resources per person monitored, than government-level surveillance does.

          A commercial organisation being able to capture all conversations overheard by presumably a huge number of people’s mobile phones, and doing the necessary natural language processing to reliably extract buying signals, is utter fantasy with today’s technology. It’s not down to privacy, it’s down to the technology not being there yet. Data analytics where the semantic context of the data has pretty much been pre-defined by its source, e.g. explicitly being location data because it comes from the GPS system, or explicitly being potential buying intent because it’s a record of a visit to a product page or a supplier’s web site, is one thing; extracting potential buying intent for specific products from general conversations, possibly in noisy environments with multiple voices, is not just an order of magnitude more difficult, it’s many, many orders of magnitude more difficult, to the extent that we don’t know how to do it yet, at least not in a robust real-world scenario.

          To give some very rough context, Amazon Echo (Alexa) and the other voice assistant devices can only recognise their wakeup words. After wakeup the speech data then has to be sent to cloud services (Amazon Voice Services in Alexa’s case – https://developer.amazon.com/alexa-voice-service) to provide the necessary computational power to do the speech recognition and natural language processing. And that’s for short, relatively clearly enunciated, relatively carefully phrased isolated utterances spoken to a 7-microphone (in the case of Amazon Echo) far-field echo/noise-cancelling array. Now translate that to your scenario where essentially everyone who owns a mobile phone has effectively inadvertently purchased a home assistant that they are constantly keeping awake and bombarding with the equivalent of thousands of ill-formed utterances every day (because it’s listening to their conversations, or perhaps just the TV, radio or traffic noise in the background), with other people in the room adding their own utterances on top or interrupting others mid way through, all making no attempt to phrase clear intentions with extensive use of potentially quite old prior conversational context and extensive use of both long and short distance anaphoric references, and all potentially in noisy environments spoken into a much cruder microphone system without all the fancy far-field noise-cancelling microphone array properties of an Amazon Echo or a Google Home device. You’re talking about maybe 50 times more devices each needing to process maybe 10 hours of acoustic signal per day (let’s assume silence is filtered out early, probably on the listening device itself) vs maybe 6 minutes per day for a home assistant device. That’s about 100 times the amount of acoustic data to process from each device each day and, due to the more challenging processing required, let’s say 100 times more backend computing power needed per second of data (and actually I’m doubtful it could be done at all with today’s software technology without creating an unmanageable number of false positive trigger signals but I need a number so have selected one that I think is actually way too low). Putting those rough assumptions together means that to do what you suggest might be happening would at a very rough guess mean that Amazon or whoever is doing it would need about a 50 x 100 x 100 = a half-million-fold increase in their back-end computing power compared to how they’re provisioned for Alexa right now. And, come to think of it, in your scenario who was extracting and analysing the conversation? Amazon? The mobile phone provider? Are they all at it?

          Sorry for the long rant, but this was actually my field. My PhD was in natural language processing (some fairly obscure aspects of the use of semantic tagging for disambiguation of multiple candidate syntactic parse trees) and I worked for quite a while afterwards in natural language research in commercial organisations before being seduced away into the more sales and ultimately general management aspects of the IT industry. It was the case while I was working in the field, and I think it still is, that because most people have spent their whole adult life seemingly effortlessly holding conversations and extracting information from those conversations, most people who haven’t worked in the field just do not realise what a monumental challenge it still is to get computers to do this in any sort of robust manner in an unrestricted domain. I’m not privy to what places like GCHQ and the NSA can accomplish with massive computing power focussed on a reasonably restricted set of people, but envisaging a scenario where everyone’s phone is being constantly listened to and advertising triggers reliably extracted without creating an unmanageable number of false positives is nonsense.
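          For anyone who wants to follow the 50 x 100 x 100 estimate in the comment above, here is the same back-of-envelope arithmetic written out in Python; every factor is the commenter’s own rough assumption, not a measured figure.

          ```python
          # Back-of-envelope check of the scaling argument above; all factors
          # are the commenter's stated assumptions rather than measured values.
          devices_factor = 50            # always-listening phones vs. deployed home assistants
          phone_hours_per_day = 10       # overheard speech per phone, silence already filtered
          assistant_minutes_per_day = 6  # typical daily utterances to a home assistant

          data_factor = (phone_hours_per_day * 60) / assistant_minutes_per_day  # = 100x
          compute_factor = 100           # extra back-end cost per second of messy audio

          total = devices_factor * data_factor * compute_factor
          print(f"Roughly a {total:,.0f}-fold increase in back-end capacity")   # ~500,000
          ```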

  3. This is the real problem: “These Global Positioning System capabilities can expose personal information, locations, routines and numbers of DoD personnel.” IIRC it was a European investigative team that used one or more of the sports apps, correlated with other social media sources like Facebook and LinkedIn, to personally identify individuals in sensitive intelligence and military job positions.

    I can’t find the original article to link to. However, the method used was to identify activity at a secure location, i.e. a military base or intelligence HQ, then identify the home location for that individual by observing exercise tracking from a residential location. With the address, the researchers were then able to identify a name, and they cross-checked it with LinkedIn to see what job description was posted to get a sense of seniority and expertise. Note that in some examples the app user had actually used their own name for tracking, which made the exercise even easier. The researchers were then able to corroborate this physically by observing an individual and his family. Another identified military individual from a European military location was tracked on a deployment to Mali.

    Clearly significant information was obtained that would be open to abuse.
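    As a purely illustrative sketch of the correlation described in this comment, the Python below flags pseudonymous users whose tracks pass near a known sensitive site and guesses a home area from their most common start point. The site names and coordinates are invented, and this is not the researchers’ actual tooling.

    ```python
    from math import hypot
    from collections import Counter

    # Hypothetical site list; coordinates are made up for illustration.
    SENSITIVE_SITES = {"example_base": (34.500, 69.200)}

    def near(p, q, radius_deg=0.01):
        """Crude flat-earth proximity test, adequate for a toy example."""
        return hypot(p[0] - q[0], p[1] - q[1]) < radius_deg

    def profile_user(tracks):
        """tracks: list of GPS tracks, each a list of (lat, lon) points."""
        visited = [name for name, site in SENSITIVE_SITES.items()
                   if any(near(pt, site) for track in tracks for pt in track)]
        # The most common start point is a rough proxy for a home location,
        # which is what makes the address-to-LinkedIn step possible.
        starts = Counter((round(t[0][0], 3), round(t[0][1], 3)) for t in tracks if t)
        home = starts.most_common(1)[0][0] if starts else None
        return {"sensitive_sites_visited": visited, "likely_home_cell": home}

    # One track skirting the hypothetical base, one from elsewhere.
    print(profile_user([[(34.501, 69.201), (34.498, 69.199)], [(51.500, -0.120)]]))
    ```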

  4. For the size of the US DoD, they should use their own OS and have their own apps for training, plus others for anything else needed, so all data can be kept on DoD-owned servers. This would give staff the technology to use apps and keep it private, and it could also be expanded into a form of reporting and monitoring for command. Trying to just ban it won’t work in the long run; giving them DoD versions of the most-used apps will let them get the same benefits without the risk.
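     A minimal sketch of what that could look like on the client side, assuming a hypothetical DoD-managed fitness app; the endpoint and field names below are invented for illustration rather than any real DoD system.

     ```python
     # Hypothetical client configuration for a DoD-managed fitness app: the
     # point is simply that telemetry stays on department-controlled servers.
     # Endpoint, fields and values are illustrative only.
     UPLOAD_CONFIG = {
         "upload_endpoint": "https://fitness.example.mil/api/v1/activities",
         "third_party_sharing": False,    # no external analytics or ad SDKs
         "public_heatmap_opt_in": False,  # aggregate maps stay internal
         "retention_days": 90,            # purge raw GPS tracks after review
     }
     ```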
