Digital Threat Digest - 28 February 2022
PGI’s Digital Investigations Team brings you the Digital Threat Digest, daily insights into disinformation, misinformation, and online harms.
A tumultuous night’s sleep results in us grumpily defending the platforms and attacking shoddy press coverage of disinformation.
Ghostwriter
I’ve written before about the evolving methodologies of influence, and the colliding cyber and influence worlds. My go-to example for this activity is always hack-and-leak, using the case of Ghostwriter, a Belarusian group who have previously run multiple such campaigns in which they compromise nodes with significant legitimacy and use them to subsequently disseminate disinformation. This both bypasses the requirement to build your own inauthentic dissemination point and—assuming the account remains compromised—locks the actual owner out and prevents them from denying the content posted.
Having read on Friday that UNC1151 (also known as Ghostwriter) had begun stepping up activity targeting Ukraine, there were a few times over the weekend when I wondered what Monday would bring in terms of investigation. What I didn’t expect was that Meta would already have detected Ghostwriter’s activities and made efforts to mitigate their impact. A full statement from the company is still pending, but initial details show the campaign was classic hack-and-leak style, with efforts to compromise Facebook profiles and share links to off-platform videos showing a mix of propaganda and disinformation alleging weakness in the Ukrainian military. A cross-platform campaign using each platform to its strengths, the Facebook nodes as the dissemination point and the video host as the content repository – nice. A platform acting proactively to mitigate novel threats in a complex threat environment – also nice.
Ukraine says its military is being targeted by Belarusian hackers | Reuters
Bots & Cats
I understand that media has to resonate, and that explainers are important, and that this entry is going to read like I’m gatekeeping digital investigations, but if I have to read one more article that says, “Oh my goodness, there’s Russian propaganda on the cat video platform” then my head will explode.
If I were making a bingo card of tired tropes in the disinfo sphere, it would contain a selection of: bot armies; TikTok is for kids; deepfakes as a threat; poor attribution saying something might be Russian because it looks Russian; inexplicable quant statistics applied to quali analysis; a tech startup headquartered out of Israel promising algorithmic detection, with a website with a dynamic network graph that twinkles when you scroll your mouse. Bingo, Associated Press. What’s that Lassie? There’s been an 11,000% increase in ‘anti-Ukrainian Twitter posts’ on Valentine’s Day? Quick, buy access to a data lake and we’ll fix it.
I feel quite strongly about articles like this, because they essentially consist of pointing out ‘basic bad’ and claiming it as analysis. This is a wider problem across OSINT, and one that has really come to the fore following Russia’s invasion of Ukraine. Yes, there is a lot of value to be found in harnessing crowdsourced OSINT to hold threat actors accountable for their actions. However, there is no value to be gained from finding a singular piece of ‘basic bad’ online and amplifying it in order to boost your ego with engagement-triggered endorphins. If you’re going to analyse strategic Russian narratives, great, analyse them; don’t just repeat talking points for the sake of looking engaged with a topic. If you’re going to analyse Russian-sponsored content on TikTok, don’t write an article around one video which you neither attribute nor contextualise.
War via TikTok: Russia’s new tool for propaganda machine | Associated Press
More about Protection Group International's Digital Investigations
PGI’s Social Media Intelligence Analysts combine modern exploitative technology with deep human analytical expertise covering the social media platforms themselves and the behaviours and intents of those who use them. Our experienced analyst team have a deep understanding of how various threat groups use social media and follow a three-pronged approach focused on content, behaviour and infrastructure to assess and substantiate threat landscapes.
Disclaimer: Protection Group International does not endorse any of the linked content.