An unusual and alarming trend has emerged on Twitter: a noticeable increase in bot accounts using ChatGPT to reply to tweets, particularly in the defence and security sectors.
These bots, often masquerading as defence analysts or professionals, reply to established accounts and paraphrase the original tweets in an effort to build credibility and trust.
This sophisticated strategy raises serious concerns about the potential for disinformation and manipulation. What is the end goal?
Unmasking the Bots
The UK Defence Journal has observed a series of replies to our tweets that exemplify this trend. These replies, although seemingly genuine, are generated by bots using ChatGPT.
Here are a few examples:
- Jonas Klein (@CareFromDD): “UK enhances MLRS, adding strategic depth to Europe’s defense.”
- Hugo Scott (@globalsecnexus): “The Fujian shift in naval power is pivotal for UK defense analysis.”
- Gabriel Scott (@GScotCyberOps): “Key insights into the RN’s 2040 strategy for UK security at #SPC2024.”
- ArcticAnalyst (@arctictactician): “Royal Navy adopts VR for training, enhancing tactics for modern warfare.”
- Henry Sibley (@deftechhenry): “Shipyard protests: misguided or strategy deficit? Defense sector crucial.”
These accounts, whose usernames suggest individuals deeply involved in the defence sector, add a veneer of legitimacy to their responses. The subtlety and relevance of their paraphrased replies make them challenging to identify as automated.
The Research Behind the Trend
Recent research by Kai-Cheng Yang and Filippo Menczer at the Indiana University Observatory on Social Media has shed light on this issue.
They identified a Twitter botnet, dubbed “fox8”, doing something similar, albeit to a different audience. It comprises more than 1,140 accounts that use ChatGPT to generate human-like content, chiefly promoting cryptocurrency, blockchain, and NFT material.
https://twitter.com/CareFromDD/status/1789376727778570480
The discovery of this botnet was not the result of advanced forensic tools, the pair say, but rather a creative approach to detecting machine-generated text. The researchers used Twitter’s API to search for the phrase “as an AI language model” over a six-month period between October 2022 and April 2023.
This phrase is a common response generated by ChatGPT when it encounters prompts that violate its usage policies. By targeting this self-revealing text, the researchers were able to identify a significant number of tweets and accounts displaying patterns indicative of machine generation.
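For illustration, here is a minimal sketch of how such a phrase search might be run against Twitter’s full-archive search endpoint using the tweepy library. The credentials, time window, and paging details below are assumptions for the sake of the example; the researchers’ actual tooling has not been published, and full-archive search requires academic-level API access.

```python
import tweepy

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # placeholder credential

client = tweepy.Client(bearer_token=BEARER_TOKEN, wait_on_rate_limit=True)

# Exact-phrase match, excluding retweets to avoid counting duplicates.
query = '"as an AI language model" -is:retweet'

suspect_accounts = set()
for page in tweepy.Paginator(
    client.search_all_tweets,           # full-archive search (academic access)
    query=query,
    start_time="2022-10-01T00:00:00Z",  # roughly the study's six-month window
    end_time="2023-04-30T23:59:59Z",
    tweet_fields=["author_id", "created_at"],
    max_results=500,
):
    for tweet in page.data or []:
        suspect_accounts.add(tweet.author_id)

print(f"Accounts that posted the tell-tale phrase: {len(suspect_accounts)}")
```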
https://twitter.com/globalsecnexus/status/1789376602679185834
Once these accounts were identified, the researchers analysed their relationships and noted that the fox8 network appeared to promote three “news” websites, all likely controlled by the same anonymous owner. These sites and the associated tweets were primarily focused on crypto, blockchain, and NFTs, suggesting a coordinated effort to influence opinions in these areas.
To further validate their findings, the researchers applied tools designed to detect language generated by large language models to the corpus of tweets from the fox8 botnet. Unfortunately, these tools, including OpenAI’s own detector and GPTZero, struggled to accurately classify the tweets as machine-generated. This indicates that current detection methods are not yet sophisticated enough to reliably identify AI-generated content in the wild, underscoring the difficulty of combating this new wave of digital manipulation.
https://twitter.com/arctictactician/status/1789668462136078724
Despite these challenges, the researchers identified that the behaviour of the bot accounts, likely due to their automated nature, followed a “single probabilistic model” that determined their activity types and frequencies. This means that the pattern of when these bots post new tweets, reply, or share content is predictable to some extent. Recognising such patterns could be crucial in developing more effective detection methods for automated accounts.
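To make that idea concrete, here is a toy sketch of what a shared “single probabilistic model” of bot activity could look like. The action types and probabilities are invented for illustration and are not taken from the paper.

```python
import random

# Hypothetical shared activity model: every bot in the network draws its
# next action from the same fixed distribution. These numbers are made
# up for illustration, not taken from the study.
ACTIONS = ["tweet", "reply", "retweet"]
WEIGHTS = [0.2, 0.5, 0.3]

def next_action(rng: random.Random) -> str:
    """Sample a bot's next activity type from the shared model."""
    return rng.choices(ACTIONS, weights=WEIGHTS, k=1)[0]

rng = random.Random(42)
sample = [next_action(rng) for _ in range(10_000)]

# Because every account shares one model, aggregate behaviour converges
# on the same frequencies -- a statistical fingerprint detectors can use.
for action in ACTIONS:
    print(f"{action}: {sample.count(action) / len(sample):.3f}")
```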
Detection Challenges
Despite the bots’ sophistication, detecting them remains a significant challenge. Tools like OpenAI’s detector and GPTZero have had limited success distinguishing bot-generated content from human tweets. This difficulty arises because bots’ responses are highly contextual and human-like, rendering traditional detection methods less effective.
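As a rough illustration of what such a check looks like in practice, the sketch below runs a publicly available machine-text classifier over the bot replies quoted earlier. The model named here is a GPT-2-era detector on Hugging Face, chosen purely as an example; as the researchers found, short, contextual tweets give detectors very little to work with.

```python
from transformers import pipeline

# Publicly available GPT-2-era detector, used here only as an example;
# it predates ChatGPT and, as the article notes, tools of this kind
# proved unreliable on the fox8 tweets.
detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

tweets = [
    "UK enhances MLRS, adding strategic depth to Europe's defense.",
    "Royal Navy adopts VR for training, enhancing tactics for modern warfare.",
]

for tweet, result in zip(tweets, detector(tweets)):
    # Short, contextual replies carry little stylistic signal, which is
    # one reason scores like these are unreliable in the wild.
    print(f"{result['label']} ({result['score']:.2f}): {tweet}")
```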
https://twitter.com/deftechhenry/status/1788910109717913816
The researchers warn that fox8 is likely just the tip of the iceberg. More sophisticated botnets may already be operating, using advanced AI capabilities to autonomously process information, make decisions, and interact with APIs and search engines.
This potential for autonomous operation poses a serious threat to public discourse.
A cybersecurity expert who wished to remain unnamed expressed serious concerns about the implications of this trend. “The use of ChatGPT by these accounts and their ability to mimic human interactions with such accuracy makes them particularly dangerous, nefarious even”, I was told.
Implications and Concerns
OpenAI CEO Sam Altman previously expressed concerns about the risk of AI systems being used to influence elections, calling it one of his “areas of greatest concern.” With elections looming in multiple democracies in 2024, the urgency to develop policies and tools to counteract these threats cannot be overstated.
This trend is likely to grow.
I wonder if this is related to the highly successful pig butchering scam, which first spends long periods building victims’ trust before encouraging them to invest in cryptocurrencies, ultimately milking them of tens if not hundreds of thousands of dollars over many months. This scam market is reported to be worth billions of dollars just among Americans who reported it to the FBI. The background set-up is supposed to be highly detailed and spread out, so any validity checking a victim does will normally appear to succeed.
These scams were run out of China starting in the Covid years. Might China have found another use for the principle?
I remember reading several articles in recent years about our nuclear deterrent, aircraft carriers, tanks, etc., and pretty much all of the comments were saying things like “why do we need a defence budget”, “stop spending money on bombs”, “we have enough soldiers”, and so on.
While I’m fully aware that people are entitled to their opinions, and there are certainly a large number of people who think defence is a waste of money or are pacifists, there was something very suspicious about the comments and “who was writing them”.
It was the sheer volume of people all posting negative comments about the armed forces under very generic white, English-sounding names. Almost every comment was from a “Mark, John, Gary, Henry”, etc. … almost as if a bot had put together a list of what a foreigner might consider “very English names” to make it sound more convincing to the real Brits reading the comments.
I honestly wish I’d saved some of the links because it all looked like… “Propaganda” to me. Perhaps to make it appear like we were discrediting our own military from the inside and in huge numbers… to make it seem like the popular opinion is to get rid of the armed forces… or make it seem like an unpopular thing to discuss.
I dunno… it could have been legitimate I suppose… but it just seemed a little odd. A little fake. A little bit of foreign propaganda. Not sure if anyone else has noticed this.
Cheers
M@
It’s all about creating volume so it looks like it’s the normal opinion of the masses. This creates a new reality bubble where people start to think something is true without checking whether what they hear is factual, just because there’s a lot of volume. The over-amplification of certain narratives to drown out the opinions of others is becoming very frequent.
This set of comments is so quiet, you’d almost welcome a bot to get the party started.
Yes. That should do it! Cyber doesn’t need a troll as that’s what the article is actually about. Welcome to Friday afternoon. 😄