Digital Threat Digest - 13 April 2022
PGI’s Digital Investigations Team brings you the Digital Threat Digest, daily SOCMINT and OSINT insights into disinformation, misinformation, and online harms.
We’re hitting the SEO hard this morning and cracking into bots, NFTs, and Web3. Come at us, cryptobros.
Bots all the way down
I’ve never been to Los Angeles, bought a copy of the LA Times, or met Russ Mitchell, but yesterday he wrote an article in the paper that merits dismantling, so here we go. In 1,000 words the article seeks to explain, contextualise, and attribute a series of Twitter bots that have supported Elon Musk and Tesla over the years, specifically between 2010 and 2020. The bots have been investigated by two researchers for the past couple of years, and the article is essentially a summary of their research. Fair enough: Elon Musk certainly has a cult of personality on the go, and his behaviour online, in particular on Twitter, merits attention. But then it all starts to go wrong.
The first of the three major problems in this article concerns definitions. Paragraph two introduces the idea of automated Twitter posting, then paragraph three classifies accounts as bots. Further down we get a tenuous definition of a Twitter bot as an inauthentic crawler auto-responding to keywords, a definition written by someone who has only just discovered that bots are a thing, having sent 170 tweets since they joined the platform eight years ago. But surely the researchers behind the piece had a better definition, right?
Paragraph something-in-the-middle tells us that sadly, they did not: “Using a software program called Botometer that social media researchers use to distinguish bot accounts from human accounts”. Botometer, formerly BotOrNot, is a useful tool provided you bear its limitations in mind: it can tick the behaviour box of inauthenticity, but it forgets content, doesn’t concern itself with infrastructure, and does nothing for context. It’s a binary interpretation of behaviour – you’re either authentic or inauthentic. This is really problematic, because inauthenticity isn’t black or white. It was certainly more binary in 2010, before the full spectrum of inauthenticity was understood. And I can bring examples: if you want to call an auto-keyword responder a bot then fine, but what about the networks of seemingly inauthentic accounts, run manually by human operators, that one French firm insists on calling ‘cyborgs’? What about pseudonymous accounts which are not the representation of a ‘real’ person, but behave organically? What about organic users who are paid to push certain narratives? The image at the bottom of this entry shows two accounts that I ran through Botometer. One is my personal Twitter account – it uses a picture of me and my real name. Botometer thinks it’s a bot. The other is a pseudonymous account I created in 2018 to post about Algerian militants. Botometer reckons it’s authentic. Neither has any form of automation, but automation shouldn’t be the bar for assessing inauthenticity.
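For context on how Botometer actually gets used, the tool exposes an API that researchers query programmatically. Below is a minimal sketch of a single-account lookup, assuming the public botometer Python client and placeholder credentials (the RapidAPI key, Twitter app keys, and @some_account handle are all hypothetical):

```python
# Minimal sketch of a Botometer lookup, assuming the public `botometer`
# Python client (pip install botometer). All credentials and the account
# handle below are hypothetical placeholders.
import botometer

rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key=rapidapi_key,
    **twitter_app_auth,
)

# Score a single account by screen name. The response is a dict of
# bot-likelihood scores derived from the account's observable behaviour
# and profile features - nothing about content, infrastructure, or context.
result = bom.check_account("@some_account")
print(result)
```

The point is that everything in that response is a behavioural read on a single account; nothing in it distinguishes a scheduled auto-poster from a paid human operator, a ‘cyborg’ network, or just an account with unusual posting habits, which is exactly where the binary bot/not-bot framing falls down.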
The second issue is that the article seems to be a puff piece plugging the book that one of the researchers published in 2019, and that the research itself seems to have been focused on self-legitimisation. To exemplify the former, paragraph four has an embedded Amazon affiliate link for a book published by one of the two researchers. It’s a sales pitch wrapped in an article about Musk. To exemplify the latter: “While any direct link between bot tweets and stock prices has yet to be determined, the researchers found enough “smoke” to keep their project going”. Mmhmm.
And finally, the article exemplifies a wider problem here, which is mainstream media coverage of emergent threats. This is going to sound like I’m gatekeeping again, but if something’s worth doing, then it’s worth doing right. There’s a reason that media entities have begun setting up dedicated disinformation/OSINT/digital threat desks: this is an area that requires a specific skillset and specific contextual knowledge. If I tried to write an article about the electric car industry, it would be rubbish. And this article shows that the inverse holds true. Emergent digital threats require specific contextualisation, otherwise you’re just contributing to the noise and confusion rattling around the information environment and labelling things randomly as bots.
Twitter bots helped build the cult of Elon Musk and Tesla. But who’s creating them? | LA Times
CryptoKids
Hey kids, are you ready to bring the power of Web3 into your home? While your friends spend the summer holidays doomscrolling TikTok, you’ll be at Crypto Kids Camp learning about blockchain. Battle the evils of fiat money; marvel at the wild fluctuations of a currency you invested all your pocket money in; and gain enough Twitter clout to speculate wildly about the Paw Patrol coin you’re about to launch! Ignore the shaky technology and go with it, because it’s definitely not a bubble.
It feels like we’ve been discussing crypto and blockchain forever at this point, and the recent rebranding of the concepts linked to them as Web3 appears to be a conscious effort by those optimistic about the technology to move the conversation away from bored apes and Elon Musk buying Dogecoin. In the US there seems to be a growing movement to educate children about Web3, with Vox reporting that summer camps are popping up across the country to teach children about blockchain, mining, online gaming, and virtual reality. While I’m sure many of the organisers of these camps have the best of intentions, to me there’s something quite dark about lining up the next generation to buy wholeheartedly into these concepts when a cryptocurrency exchange just spent a fortune on a Super Bowl advert essentially saying, “don’t worry about understanding how it works, just give us your money because it’s the future”. Well, at least there’s no history of corporate interests socialising children to buy into risky or dangerous products… There’s many an argument for kids learning a range of digital skills early, and it’s almost certain that elements of Web3 will be part of the future of the internet, but as we’ve discussed previously in the Digest, a bit of critical thinking can go a long way.
The growing, lightly controversial industry teaching kids crypto | Vox
More about Protection Group International's Digital Investigations
PGI’s Social Media Intelligence Analysts combine modern exploitative technology with deep human analytical expertise covering both the social media platforms themselves and the behaviours and intents of those who use them. Our experienced analyst team have a deep understanding of how various threat groups use social media and follow a three-pronged approach focused on content, behaviour, and infrastructure to assess and substantiate threat landscapes.
Disclaimer: Protection Group International does not endorse any of the linked content.