Why Social Media Algorithms Are Behind the Recent Rise in Misogyny
New research confirms this hidden code is far more insidious than we think

I’m often reminded of how little people understand about social media.
A few weeks ago, I half-jokingly posted a thread suggesting there should be mandatory training one has to complete before being able to access and use those platforms. And unsurprisingly, some people thought it necessary to let me know ‘it’s not that complicated.’
‘All you need to know is how to type,’ one user replied.
This is also made clear by some of the criticisms levelled at the social apps. For instance, I repeatedly see people — politicians included — complain that TikTok is just an app ‘filled with half-naked teenage girls dancing.’
Well, if that’s what you see, there’s a reason for that.
And it’s not because that’s what most of the content looks like.
By design, social media algorithms function sort of like librarians, sorting content and connecting users with their preferences. That’s why my feed is mostly cats, food, book recommendations, science facts and people discussing topics I’m interested in. But, unlike librarians, algorithms don’t just stop at your preferences.
With time, they also start showing you things you haven’t previously expressed interest in. And if it seems like you are interested — because you engaged with it, even in a negative way — they’ll simply add it to your preferences and keep showing you more and more similar content.
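To make that feedback loop concrete, here’s a minimal Python sketch of how engagement-weighted recommendations can snowball. Everything in it (the ToyFeed class, the topic names, the weights) is my own simplified assumption for illustration, not any platform’s actual recommender code:

```python
import random

# A purely illustrative sketch of the engagement feedback loop described
# above. The ToyFeed class, topic names and weights are invented for
# demonstration -- not any real platform's system.

class ToyFeed:
    def __init__(self, topics):
        # Every topic starts with the same baseline weight.
        self.weights = {topic: 1.0 for topic in topics}

    def recommend(self, explore=0.2):
        # Mostly exploit known preferences, but occasionally 'explore'
        # by surfacing content the user never asked for.
        if random.random() < explore:
            return random.choice(list(self.weights))
        topics, weights = zip(*self.weights.items())
        return random.choices(topics, weights=weights, k=1)[0]

    def record_engagement(self, topic, strength=1.0):
        # Crucially, *any* engagement counts: an angry comment or a
        # hate-watch boosts a topic just like a genuine 'like' does.
        self.weights[topic] += strength


feed = ToyFeed(["cats", "recipes", "books", "rage-bait"])
for _ in range(500):
    shown = feed.recommend()
    if shown == "rage-bait":           # the user reacts, even negatively...
        feed.record_engagement(shown)  # ...and the loop amplifies the topic

print(feed.weights)  # 'rage-bait' now dwarfs every other weight
```

Run it and the ‘rage-bait’ weight dwarfs everything else within a few hundred iterations, even though the user never searched for it once. That, in miniature, is the mechanism at work here.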
This is also how people — especially, but not exclusively, young people — get groomed and radicalised online. Including young men and boys, many of whom have recently fallen prey to the tidal wave of social media misogyny.
A recent study done in partnership with UCL, the University of Kent, and the Association of School and College Leaders (ASCL) investigated the role of social media algorithms in online radicalisation, specifically on TikTok, which is still most popular with younger generations.
First, the researchers interviewed young people who engage with and produce radical online content, and created several archetypes representing the kinds of teenage boys who could be vulnerable to being radicalised by it. Then, they set up TikTok accounts for each of these archetypes, all seeded with the distinct content interests identified in the first part of the study, which included masculinity, self-improvement and addressing loneliness, to name a few.
During the first five days of researchers using the accounts and watching content suggested on their ‘For You’ page — which serves as the platform’s personalised feed — the recommendations were mostly in line with their interests, with misogynistic content only appearing sporadically.
After this period, though, things started to change.
Over the next two days, TikTok’s algorithm started to recommend more and more misogynistic content — rising from 13% of recommended videos to 56% — including videos centred on objectification, sexual harassment, and the discrediting and blaming of women for men’s problems.
As the principal investigator behind the study, Dr Kaitlyn Regehr from UCL Information Studies, points out:
Algorithmic processes on TikTok and other social media sites target people’s vulnerabilities — such as loneliness or feelings of loss of control — and gamify harmful content. As young people microdose on topics like self-harm, or extremism, to them, it feels like entertainment.
The researchers then did another round of interviews, this time with school leaders, and found that misogynistic ideology is indeed becoming increasingly normalised and embedded in mainstream youth culture, which suggests that the harmful content young people are exposed to online translates into offline views and behaviour.
Well. This shouldn’t exactly be surprising to anyone who followed the rise of the likes of Andrew Tate — currently facing charges in Romania of human trafficking and rape, which he denies — and dozens of his clones, all of whom use social media to push misogynistic beliefs under the guise of ‘male empowerment’ for their own personal and financial gain.
The rise of misogyny has also been made evident through the many surveys showing a decline in support for women’s rights and an increase in the belief that ‘feminism has done more harm than good,’ like this one conducted recently among 16- to 29-year-old men in the UK.
But it’s not just TikTok that’s tricking young men into hateful echo chambers where women are portrayed as their enemy number one.
I’ve previously written about another report by the Center for Countering Digital Hate (CCDH) that found something similar to this recent algorithmic study of TikTok, only when it comes to the algorithm behind Google Search.
Namely, when young men search for terms connected with body image, unemployment or loneliness, they’re often presented with misogynistic content, including links to incel sites and forums. And those are, arguably, among the biggest snake pits of misogyny online.
And even before TikTok became so wildly popular, there was a lot of discussion around YouTube essentially doing the same thing and pulling young people into a world filled with conspiracy theories, misogyny, racism and other extremism. As with TikTok, this radicalisation also happened gradually: first, you look for seemingly innocuous self-help or movie review content, and then, before you know it, you’re being told that the government is purposefully making the frogs gay.
I jokingly refer to this as the ‘Granola Effect’ when talking to my partner because although most of my social media feeds are pretty normal, whenever I look for cooking recipes, especially ones involving oats and baking, I get a spike in alt-right content recommendations. (And no, that’s not because of the research I do — I use different accounts for that.)
Anyhow, YouTube eventually banned a handful of far-right influencers and conspiracy theorists — including Alex Jones, who came up with the ‘gay frogs’ conspiracy, among many others — but I’d argue that the insecurity-and-loneliness-to-misogyny pipeline has most likely never worked as well as it does today. On YouTube and TikTok, but also Instagram, Twitter and many other corners of the World Wide Web.
I’ve covered the topic of online radicalisation extensively in recent years, and all the people I talked to about it — including school counsellors, teachers and parents — agree that this started to get worse around 2022, with Andrew Tate’s rise in popularity. After all, Tate’s initial strategy — which a recent investigation by Vice likened to a pyramid scheme — was to incentivise his followers to post tons of videos of him on various social media platforms to recruit others who would do the same thing.
The result? A flood of misogynistic, sexist and otherwise hateful content.
On top of that, and despite some social media platforms bowing to public pressure and banning a few of the most controversial accounts, a recent report by the European Commission has found that they’re actually becoming slower and slower to take action against hateful or misleading content. Even extremist posts often evade moderation and don’t get removed, and the algorithm then picks them up and serves them on a silver platter to unsuspecting users.
So, if you’re a young man or teenage boy who uses social media platforms, you might not even be looking for this type of content.
But chances are that it will find you nonetheless.
It’s disheartening that women are so often blamed for the rise of online misogyny, as if we’re the ones creating content promoting harmful and misogynistic tropes or controlling how it gets distributed, even though tech is still overwhelmingly a ‘boys club.’
Of course, misogyny doesn’t exist today just because of social media and the internet. It’s also true that some people who consume this type of content were prejudiced against women to begin with.
But the rise of misogyny we’re seeing today among young men and boys can, in great part, be attributed to social media algorithms that seem to be working overtime to amplify it. This obviously doesn’t absolve the people who absorb and agree with it of all accountability, but it does help explain why there are so many of them now.
Another part of this issue is that a lot of people — some parents included — don’t take it seriously, claiming that hateful ideologies have always existed in one form or another and that it’s no different today.
Sure, but at no point in history have humans had misleading and harmful content shoved down their throats, sometimes for several hours a day — teenagers average roughly 7 hours and 22 minutes of screen time per day — to the point that it can effectively alter their perception of reality. And even though there are more and more studies on online radicalisation and excessive social media use, we still don’t really know the full extent of the damage it can do in the long term.
We do know a couple of things, though: one, that adolescents are far more susceptible than adults to online extremism. And two, that exposure to extreme content occasionally leads people to take real-world actions with disastrous and even deadly consequences.
This, I believe, should be enough to take solutions proposed by experts in the field a bit more seriously.
The authors of the recent TikTok study, for instance, suggested implementing ‘healthy digital diet’ education to support young people, schools, parents and the community at large, which would help them develop better social media habits as well as understand how these platforms work and what their potential impacts on mental and physical health are. They also highlighted the importance of raising awareness of how algorithms work.
But while all of this is important, it shouldn’t take the responsibility off of tech companies that are often reluctant to moderate harmful and misogynistic content, the governments that are slow to pass legislation that would force their hand to do so, or the predatory content-pushers who make and profit from it.
Still, we can arm ourselves with knowledge, understand the algorithms and push for school curriculums to adopt digital literacy education, but that’s not enough.
We also need to do more to prevent misogyny from spreading like wildfire in the first place, for instance by involving young boys and men in discussions around those ideas before they’re exposed to them on the internet.
Whether you realise it or not, most of your online activity depends on what the algorithms show you.
They’re the gods in this universe.
And they aren’t particularly concerned with our health or sanity.
Not because they’re inherently evil but simply because that’s how we programmed them to be. Add to that the increasing use of generative AI and, well, the future online starts to look concerning.
And while mandatory social media training might seem too dystopian to some today, the more I think about it, the more I’m convinced it will eventually have to be introduced in some form.