The Digital Climate Change
Holding our generation accountable for the decline of data privacy.
The last few months have felt rather absurd to me, with terms like “data privacy” and “psychographics” plastered all over the headlines. The world is suddenly freaking out about the slow-motion, apocalyptic information-privacy crisis that’s been over a decade in the making. It felt like an uncanny throwback to 2018, when that surfer dude on the bus and your mum started telling you their opinions on “the cryptocurrencies”.
And while this content has been flooding the media streams for well over a year now, the fact remains that in a post-Netflix era, not one like or share is given unless there’s a special, four-part mini-series. Well, between Netflix and the UK’s Channel 4 we’ve gotten The Great Hack and Brexit: The Uncivil War, and boy have they raised eyebrows. As a result, I got a call from a friend some weeks back telling me that he’d deleted his entire online presence, from Facebook to Snapchat, horrified at how much personal information could be inferred from just a few simple pieces of data. And he’s not alone. I’m beginning to see a multitude of students around me starting to take notice and delete, or at least more carefully manage, their social media presence.
It’s an issue that heavily affects the younger generations, who have been lovingly curating their online profiles for most of their lives. And yet it hasn’t appeared as a student issue alongside the likes of climate change and property prices. Is it a lack of understanding, or a hesitation to give up the addictions of Facebook and Twitter? I’d be hard-pressed to find someone my age who isn’t alarmed by the prospect of companies manipulating their behaviour.
When a company has access to data on the things you like, your friends, how you interact with apps, and even your messages, it can put together a pretty comprehensive profile of who you are. With intimate details gathered from a small sample group of a few thousand people, it can then compare you against a set of personality traits, much like one of those psychometric personality tests.
Now, we’ve known for a while that this technology has been used to sell us things through those creepy targeted advertisements – the ones that seem to know about that one time our friend mentioned their new GoPro, and now Facebook just won’t quit. But that’s small game compared to what can really be done with this technology: the ability to change people’s minds, behaviour, even personality. In essence, it’s a PSYOP (psychological operations) weapon. A weapon that can be used to build personalised realities for each individual, feeding them hate and fear to create social division. The less we trust our fellow human beings, the more we rely on our social media scroll for our worldview. So instead of that GoPro, what if this method were being used to tell you who to vote for? Or, easier still, who not to vote for.
“Well, marketing doesn’t work on me” is a common response to this, though a terribly arrogant and deluded one. This isn’t like any typical ad you might think of anyway. It’s a high-pressure stream of personalised content, specifically constructed to push your buttons. You’re bombarded with news items, notifications, and friends’ posts designed to elicit an emotional response – none of which will look out of the ordinary.
For years, technologies like this were used (read: tested) in developing countries all over the world to fan tensions or generate apathy amongst voters in order to swing elections – in Kenya, Trinidad & Tobago, the Philippines, and many more. This all came to a head once the data-driven behaviour-adjustment firm Cambridge Analytica and its sibling company AggregateIQ got involved with two key democratic events: the 2016 U.S. presidential election and the Brexit referendum. Their contributions to the Trump and Vote Leave victories lay primarily in winning over non-voters and centrists. By sending out highly personalised, targeted messaging that preyed on individuals’ hopes, fears, dreams, and biases, they were able not just to convince people, but to turn them into vehicles for spreading campaign messaging, which often included “fake news” items. However, because the messaging methods were hyper-personalised and anonymous, those pieces of advertising could not be traced back to the campaigns, keeping their hands clean at the time.
When Cambridge Analytica’s elaborate and precarious house of cards came toppling down spectacularly, starting with its involvement in the Ted Cruz campaign, details emerged that over 87 million user profiles had been collected by the company. Questions were asked, companies folded, and arrests were made. The world was starting to unravel a conspiracy that undermined the democratic process and individual liberties. Unfortunately, many used this as an opportunity to delegitimise outcomes, when in reality these incidents were omens of a far darker, dystopian political landscape.
One might argue that it’s “just advertising”. But does advertising send you personalised messages, learn how you respond, and keep trying until you change your mind? I daresay that sounds more like brainwashing than presenting options to the voter. Even ex-Cambridge Analytica employees have labelled the practice as “weapons grade communication tactics” designed to be used on enemy combatants. So is it ethical to apply such techniques to our own citizens? And should we really be entrusting political power to whoever acquires the most effective technology? The unfortunate reality is that we’ve now opened the door to this abuse of democracy, and until legislation catches up with technology, we will be at the mercy of psychographic targeting by the data brokers, political forces, and corporations of this world.
So what can you do? In truth, not that much by yourself. In general, be especially careful about who you entrust your data to. Every time an app asks for something, even something small, consider whether you actually believe they’d use your data responsibly – or indeed keep it to themselves. Additionally, be very wary of relying on any media platform that offers a tailored feed of information, like your Facebook feed or news apps. I hope we will start seeing privacy-conscious alternatives move to replace the data-siphoning platforms we currently rely on – the Facebooks, Googles, Amazons, Twitters, and Instagrams of this world.
Political factions and big business are keenly watching this space, now aware that democracy can be undermined with such ease and ruthless efficiency. The technology is so effective that not using it would amount to negligence in the face of an impending psychographic arms race. As such, this issue isn’t about who wins, but about making sure that whoever wins does so without cheating. And the only way to restore trust, fairness, and legitimacy in our democratic processes is to start passing legislation to stop it. I see fellow students speaking up every day about the mistakes of older generations; this one, however, will be on us. So in the sustainability discussion, I hope we can add digital sustainability, because of all the issues we’re facing, this one is our responsibility.
Jonathan is a research student in video games at UTS, who aspires to build interactive experiences that apply emergent and procedural design to digital storytelling.