Digital Threat Digest - 25 August 2022
PGI’s Digital Investigations Team brings you the Digital Threat Digest, SOCMINT and OSINT insights into disinformation, influence operations, and online harms.
Disinformation before the storm
Last week, I spoke a bit about APTs, from APT1 to RedAlpha, and how these clandestine cyber ops play into China's TTPs (Tactics, Techniques and Procedures) to unsettle its enemies and source actionable intelligence for the People's Liberation Army (PLA). While APTs represent one of the top rungs on the ladder of sophistication, they're by no means the favourite because a) they require quite a bit of effort and money and b) cybersecurity teams around the world are now extremely good at detecting and countering them. So, when we look at China's desire to re-unify Taiwan with the mainland, we shouldn't necessarily be looking at an increase in traditional cyber-attacks as a threat indicator for a possible invasion (though that doesn't mean they shouldn't be considered). Instead, we need to be looking at Taiwanese domestic social media and the narratives forming there. See, over the past decade, the Chinese Communist Party (CCP) has focused its efforts on understanding the significance of information operations and the deployment of disinformation to manipulate foreign powers into aligning with CCP objectives. Whether they've taken notes from the Soviet IO handbook or just deployed the likes of the 50-cent army in a game of trial and error until they figured out exactly how to conceive a solid interference campaign, they're now pretty damn good at it.
In terms of Taiwan, the CCP will 'attack' using a three-pronged methodology. They'll create pro-CCP, pro-unification content and seed it on Taiwanese domestic social media sites using inauthentic accounts. This will then generate some form of organic discussion in these spaces, which CCP assets will amplify above and beyond its natural reach and longevity, ensuring their narratives remain popularised and continuously circulating. Alongside this, they'll likely have already set up content farms and teams within Taiwan that link back to the CCP, and they'll use these to attack pro-democracy politicians and activists in Taiwan. This will then be amplified by domestic media outlets with financial ties back to the mainland, and what started as fake accounts saying fake things will quickly become real people believing the fake narratives they are spreading are real. It's important to note here that these campaign narratives aren't just conceived out of nowhere: the best conspiracy is the one that starts with a grain of truth. The same is true for the best disinformation campaign. The CCP will play off existing tensions and societal divisions to divide and demoralise. We see this with their anti-US campaigns, where they leverage real stories, such as police brutality rates and school shootings, but frame them in a misleading way that perpetuates the notion of deep state corruption and ultimately sows distrust and chaos.
For Taiwan, this will take the shape of disinformation campaigns highlighting perceived failures by the DPP government, from the mishandling of natural disasters in recent years, to COVID, to creating concern and distrust around President Tsai's pension and economic reform policies. The overall goal here is to 'naturally' reduce Tsai's popularity so that, electorally, the chances of the Democratic Progressive Party (DPP) maintaining power diminish and it becomes much easier to insert a pro-unification, pro-CCP figurehead. Then, just as is the case with Hong Kong, it becomes a matter of time before the word democracy needs quotation marks whenever it is used in relation to Taiwan.
But the good thing is that we know this is more or less what is going to happen, and Taiwan knows too. The DPP has pumped money into funding media literacy training, not only for schools but for general corporate environments too. They've also got several anti-disinformation think tanks and organisations, and according to 2021 RAND research these measures are working: a large proportion of Taiwan's population can confidently identify narratives that originate off the island. The DPP has also made it a prosecutable offence to spread disinformation, which will largely act as a deterrent to any CCP-sympathetic Taiwanese citizens contemplating joining a pro-CCP content farm. Perhaps due to all this, or perhaps entirely unconnected, pro-independence sentiment is extremely high and has been growing consistently over the past few years. Whether that continues as tensions increase and the CCP moves out of passive operations and into more aggressive territory remains to be seen, but Taiwan is no sitting duck when it comes to disinformation. If we in the West help them point out shady narratives, work on our own media literacy, and continue bringing China's clandestine operations into the light, democracy and independence may yet prevail.
Ethical swatting
In a failed attempt to limit my exposure to work-related topics in my personal life, I watched Netflix's 'Web of Make Believe', which included an episode on swatting. Swatting involves placing a fake call to the emergency services to get a SWAT team deployed to a specific address. This practice has been weaponised in the past few years and feeds into a myriad of online harms such as doxxing, harassment and various other threats, and has even resulted in the death of an innocent man. When talking about this with friends, one said they found it funny to watch a gamer being utterly shocked when an armed SWAT team enters their bedroom unannounced. I found this quite strange; surely this would be a terrifying and dangerous experience that could end terribly?
Swatting normally goes hand in hand with doxxing, the publishing of personally identifiable information without that person's consent. There are thousands of spaces across the internet where person A can request that the community doxx person B. Within this process, the 'internet decorum' involves publishing a reason why this person's information is being placed on the internet forever. These reasons can range from "Well, he called me an idiot on Xbox Live" to "he is part of antifa". This takes a threat normally isolated within one specific part of the online community and pushes it into the wider, very real world.
Marjorie Taylor Greene was the latest victim of swatting yesterday at her home in Georgia. Police reported that a computer-generated voice called in to take responsibility, citing Greene's stance on transgender youth rights as the reason.
The article states that the police rang the doorbell to Greene's home, as opposed to barging in armed as the caller would have wanted. I initially found all of this quite humorous but, after a few hours of stewing, I wondered why. Just two days ago I was saying how terrifying the experience of swatting must be. So why did I find this instance particularly funny? Because I don't agree with Greene politically?
This invites our old friend ethics into the argument. Why is it that when I think of an anonymous figure being swatted I can muster empathy, but when it comes to someone I disagree with politically my first instinct is to respond with a mocking gif? This is very much a 'check myself' moment, where I need to actively apply my ethics equally across the board. I realised that by saying "swatting is bad and Greene should not be harmed", I am not also agreeing with her politics.
Efforts to counter online harms should be applied across the board, regardless of personal feelings towards a specific individual. No one can be the arbiter of who deserves to be swatted and who doesn't, because no one should be subject to that kind of behaviour. So the next time I find myself sending funny gifs about something that has happened to a politician I disagree with, I will remind myself that my ethics should be at the forefront of my considerations, and I would invite you to do the same.
More about Protection Group International's Digital Investigations
PGI’s Social Media Intelligence Analysts combine modern exploitative technology with deep human analytical expertise that covers the social media platforms themselves and the behaviours and the intents of those who use them. Our experienced analyst team have a deep understanding of how various threat groups use social media and follow a three-pronged approach focused on content, behaviour and infrastructure to assess and substantiate threat landscapes.
Disclaimer: Protection Group International does not endorse any of the linked content.