The Scariest Thing About the Misinformation Epidemic? No One Is Immune to It.
Yes, that includes you and me as well
The Noösphere is an entirely reader-supported publication that brings social science research to bear on frequently overlooked topics. If you read it every week and value the labour that goes into it, consider sharing and liking this essay or becoming a paid subscriber! You can also buy me a coffee instead.
I’m not sure why the Algorithm Gods of social media decided to place an AI-generated video of a dancing toddler dressed as a cabbage on my homepage a few days ago, but they did.
And that sent me down a rabbit hole of similarly bizarre — yet insanely popular — content featuring babies dressed as all sorts of other vegetables and fruits, happily bopping to tunes you’d expect to hear at a disco for heavily caffeinated badgers. What’s slightly worrying, though, is that quite a few of the comments on these videos are along the lines of, ‘the mother did a great job filming this’ and ‘what a talented baby.’
Our ability to discern what’s AI-generated from what’s real is clearly eroding — or perhaps it was never that sharp to begin with — but so is our ability to tell what information is true and what isn’t. And that’s scary. Because for every relatively harmless dancing cabbage baby (although I think I lost a few brain cells watching it), there are countless posts out there that aren’t so innocent.
We’re swimming in a digital ocean increasingly polluted with low-quality information, spam, deepfakes, half-truths, mistruths and blatant disinformation, and the rise of generative AI tools is only making it worse.
Yet we still don’t talk about it — or its consequences — as often as we should.
When tragic events strike, so do viral gut-wrenching images and videos. Only today, not all of them are real.
You might have already seen the AI-generated series of images depicting a distressed little girl holding a puppy in the aftermath of Hurricane Helene, the massive Category 4 storm that hit Florida in late September. In one photo, the girl even clearly has an extra, misplaced finger. Still, these images were widely shared — including by Senator Mike Lee of Utah — and used to criticise the Biden-Harris administration’s response to the disaster.
Just weeks earlier, social media was flooded with AI images and videos of pets holding political signs and rifles or being chased by Black people, prompted by a false story about Haitian immigrants in Ohio eating pets. Some of these images even showed the Republican presidential nominee, Donald Trump, hugging and kissing the pets or rescuing them from the supposed ‘pet-eating’ immigrants. Earlier this year, Trump supporters — and Trump himself — also shared a series of ‘Swifties for Trump’ deepfakes, including one with the message, ‘Taylor wants you to vote for Donald Trump.’ Ironically, this only prompted Swift to publicly voice her support for Kamala Harris.
Now, the problem isn’t just that people who come across these fake images or videos might not realise they’re fake. The problem is that even if you know something is AI-generated, you might believe the narrative it’s pushing anyway. These deepfake campaigns can essentially fuel broad national or even global sentiments that can be exploited for political gain — whether against immigrants, political opponents, or any other group of people. And the 2024 US Presidential Election is just one example.
Ever since AI tools became easily accessible and affordable — or even free — allowing anyone with an internet connection and an electronic device to produce an almost unlimited deluge of text and visuals, there’s been a noticeable surge in all kinds of mis- and disinformation. What’s even more worrying is that generative AI has made it easier to automate the creation and spread of falsehoods. Filippo Menczer, a professor at Indiana University who studies misinformation campaigns, recently discussed this trend in The Conversation:
We have uncovered many examples of coordinated inauthentic behaviour. For example, we found accounts that flood the network with tens or hundreds of thousands of posts in a single day. The same campaign can post a message with one account and then have other accounts that its organisers also control ‘like’ and ‘unlike’ it hundreds of times in a short time span. (…) Using these tricks, foreign governments and their agents can manipulate social media algorithms that determine what is trending and what is engaging to decide what users see in their feeds.
As Menczer points out, these large-scale efforts can then effectively ‘shift public opinion, push false narratives or change behaviours among a target population.’
A recent Guardian investigation, for instance, found that videos made with an AI tool developed by Synthesia, a London-based startup, and featuring real people’s likenesses have been used to push fake news supporting dictatorships in Venezuela and Burkina Faso on platforms like X (formerly Twitter) and Telegram. One of the most bizarre misinformation campaigns I’ve seen this year, though, is the ‘North Korea is great, actually’ one. The videos, posted primarily on TikTok, combine AI-generated and real images of North Korea with captions that criticise mainstream narratives about the country. But in addition to spreading propaganda, they also double as… anti-ageing supplement advertisements.
I imagine many people — including me and you, dear reader — assume we’re savvy enough not to fall for such blatantly false stories or AI-generated content. Are we really, though? Are any of us truly immune to the onslaught of misinformation around us?
There’s been a lot of research in recent years exploring not only why people believe the misinformation they encounter online but also which demographic cohorts are most vulnerable to it. And, perhaps unsurprisingly, one of the most commonly studied is older adults.
A 2019 Science Advances study analysing fake news shared during the 2016 US election revealed that older Americans — especially those over 65 — were significantly more likely to share these stories than younger people. On average, they shared nearly seven times as many fake news articles as the youngest group. But the study also found that conservatives were more likely to spread fake news than liberals or moderates.
This latter pattern was confirmed in a paper published in Nature earlier this month as well. The researchers examined content shared by politically active Twitter users with the help of both professional fact-checkers and politically balanced groups of laypeople and found, with both methods, that conservative users shared four times more links to low-quality news outlets than their liberal counterparts. Analysis of similar data from 16 different countries, spanning 2016 to 2023, showed a clear association between conservatism and the sharing of low-quality and fake news, too.
However, studies also show that misinformation can influence users across the political spectrum, particularly those with more extreme beliefs.
Still, age and politics aren’t the only predictors of who believes in and shares fake stories online. There’s evidence that heavy mobile phone users tend to be less vigilant about the information they consume and, consequently, more likely to believe the falsehoods they encounter on their little screens. The same pattern applies to heavy internet users — those who spend nine or more hours of recreational time online daily — and to those who use social media as their primary news source. Young people, who are both more likely to be heavy internet users and to rely on social media for news, are also at greater risk.
But even if you aren’t glued to your devices, consume news from a variety of sources, and stay cautious while scrolling through the social media abyss, misinformation could still get to you over time.
A recent study published in PLOS ONE explored whether people who believe in human-caused climate change would start to doubt their views after being repeatedly exposed to climate-sceptical claims. The results were striking: even after just a single repetition, participants rated climate-sceptical statements as more ‘true.’ This effect persisted even among the staunchest climate science supporters, who described themselves as ‘alarmed’ by climate change.
On the one hand, we’re all more likely to find truth in statements that mirror our own beliefs — this is known as confirmation bias. On the other hand, the more we’re exposed to an idea — regardless of our original convictions — the more likely we are to accept it as true. This phenomenon, dubbed the illusory truth effect, has long been exploited by propaganda experts. As the line often attributed to Nazi propaganda minister Joseph Goebbels goes, ‘repeat a lie often enough, and it becomes the truth.’
We might pride ourselves on being rational, free-thinking apes with big heads and brains and all that. But in reality, we’re not well-equipped to handle the constant flood of information we now face daily. And I’m afraid no demographic is entirely immune to misinformation — we’re all susceptible.
What’s dangerous, though, is assuming that you, personally, aren’t.
It’s hardly an exaggeration to say that the chance of encountering misinformation today is close to 100%, particularly on social media. Yet a recent Ofcom survey revealed that nearly one in three internet users (30%) are unaware that online content might be false or biased, and one in twenty (6%) believe everything they see online.
But while some people don’t realise this is an issue, others actively deny it. Unsurprisingly, Elon Musk, the current owner of arguably the most misinformation-riddled social media platform, falls into the latter camp. Figures like Musk, who position themselves as ‘free-speech absolutists’ and ‘heroic rebels,’ also often claim that any intervention to prevent the internet from turning into one big sewer of brain-rotting content and lies is a form of ‘censorship.’
If anything, what could be seen as censorship is allowing falsehoods to spread unchecked. The more our digital landscape is polluted with unambiguously false, misleading and harmful information, and the less we do about it, the more it suppresses authentic, valuable voices. That’s especially concerning given that our Stone Age brains have an upper limit on how much information we can process in a given time. The sheer quantity of information — and how frequently we’re bombarded by it — is clearly troubling enough on its own. So, what happens when an increasing share of it isn’t true? What if, one day, there are more mistruths than truths? Will we simply give up trying to discern what’s real from what’s not?
Or will we stop trusting anything altogether?
History has shown us, repeatedly, that the erosion of trust — in democratic institutions, electoral systems, public discourse and so on — fuelled by the unchecked spread of disinformation creates ideal conditions for fascist politics to thrive and for authoritarian leaders to rise to power. And, needless to say, that never ends well.
But the good news is that we already know how to fight this epidemic. Misinformation experts have identified many ways for technology platforms to help users make informed judgments, such as accuracy prompts that nudge people to question the veracity of information, friction elements that make content sharing more deliberate, and fact-checking labels or community notes. Another, more obvious solution is to implement educational interventions — countering specific misleading messages with evidence-based campaigns and teaching people how vulnerable they are to deceptive, low-quality content and how to spot it. Many countries already do that; some have even added spotting misinformation to their school curricula.
Unfortunately, it’s unrealistic to assume we can completely sanitise our online environment — whether for ourselves or our kids — even if Big Tech were fully committed to tackling misinformation (which, let’s be honest, it doesn’t seem to be). Individually, the best thing we can do is equip ourselves with tools to assess the information we encounter critically and not be afraid to flex our scepticism muscles.
The vaccine against the misinformation epidemic lies in critical thinking and education but also in acknowledging that none of us is immune to it in the first place.
One day, and likely much sooner than we think, that AI-generated video of a dancing baby in a cabbage outfit will become almost indistinguishable from reality. We really shouldn’t wait until we reach this point to take action against the unchecked spread of misinformation.
For better or worse, we’re all publishers or amplifiers of information today.
But we must also be its responsible curators.