How Bias in AI Leads to Inequality on Steroids
AI’s perception of reality often mirrors our own biases — only they are projected onto a much larger canvas
When my partner told me the company he was doing a project for had started using AI to screen CVs, I just gave him a long, quiet look: the kind that says, ‘I really doubt this is going to end well.’
That was two years ago. Since then, AI has marched even further into the recruitment process, with an estimated 99% of Fortune 500 companies now using AI tools to make hiring decisions. Recently, I’ve even come across several videos of people having job interviews, not with human beings but with AI models. And as you might expect, the results are… weird. The AI interviewer glitches, repeats itself, spouts nonsense, or just goes completely silent. In one clip, it even ends up hiring the other AI interviewer. (Well, at least the machines have solidarity. That’s one thing humans could learn from them.)
But AI’s presence hardly ends with recruitment. Every day, it reaches deeper into our world, showing up everywhere from offices and hospitals to classrooms, lecture halls, and government institutions. And while it’s marketed as a tool to handle the dull, repetitive tasks no one wants to do, we’re instead increasingly asking it to replace human judgment altogether.
That’s where the trouble begins. Because AI systems aren’t just prone to glitches or so-called ‘hallucinations’ (responses that present misleading or false information as if they were fact) but also to biases. And those are frequently as bad as, if not worse than, the ones we humans have.
It’s been clear almost from the start that AI tends to play favourites.
As early as 2018, Black researchers Joy Buolamwini and Timnit Gebru exposed racial and gender bias in commercial facial-analysis software, showing it performed far worse on women and people with darker skin tones, especially Black women. Gebru was later fired from Google, where she co-led the AI ethics team, after raising concerns about these very issues. Surprise, surprise.
Meanwhile, one of the first AI recruitment tools, trialled by tech giant Amazon in 2015, had to be scrapped after it was discovered to consistently favour male candidates, downgrading CVs that included the word ‘women’ and penalising graduates of women’s colleges. In 2019, another AI hiring tool by HireVue — used at the time by hundreds of companies worldwide — was found to favour certain facial expressions, speech patterns, and voice tones, disproportionately disadvantaging minority applicants.
But it doesn’t seem we’ve learned much from these early warnings. Or perhaps we simply assume AI has since become neutral. Well, no — it hasn’t. Just a few months ago, over 200 researchers from academic institutions worldwide signed the ‘Scientific Consensus on AI Bias,’ affirming that ‘AI can exacerbate bias and discrimination in society.’ Their conclusion is backed by a growing body of research that unequivocally demonstrates that AI systems still very much reflect existing social biases.
One recent study from the University of Washington screened hundreds of real-world résumés using three large language models (LLMs), a subset of AI focused on text, and found they overwhelmingly favoured those from white people, particularly men. Overall, candidates with white-associated names were preferred 85% of the time, while those with male-associated names were preferred 53% of the time. Black men fared the worst, with AI choosing other candidates in nearly 100% of cases. Even for roles typically dominated by women, like those in HR, white men were still more likely to be chosen.
As Kyra Wilson, the study’s lead author, explains:
These groups have existing privileges in society that show up in training data, [the] model learns from that training data, and then either reproduces or amplifies the exact same patterns in its own decision-making tasks.
Wilson also notes that even stripping names from CVs won’t fix this issue, as AI can still infer someone’s identity based on other clues, such as where they went to school, which city they live in, or how they write.
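To make that kind of audit concrete, here is a minimal sketch of a counterfactual name-swap test, in the spirit of, but not reproducing, the University of Washington study: the same résumé is scored repeatedly with only the candidate’s name changed, and the average scores are compared across groups. The model choice, name lists, job ad, and résumé text below are illustrative placeholders, and the OpenAI client is just one possible way of querying an LLM.

```python
# Hedged sketch of a counterfactual name-swap audit of an LLM résumé screener.
# Everything below (model, names, job ad, résumé) is a placeholder, not the
# actual setup of the study discussed above.
from collections import defaultdict
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

JOB_AD = "Hiring an HR generalist. 3+ years of relevant experience required."
RESUME_TEMPLATE = """Name: {name}
Experience: 5 years as an HR coordinator at a mid-sized firm.
Education: BA in Psychology.
Skills: onboarding, payroll systems, conflict resolution."""

# Hypothetical name lists; a real audit would use validated, demographically
# associated name sets and hundreds of distinct résumés.
NAME_GROUPS = {
    "group_A": ["Emily Walsh", "Greg Baker"],
    "group_B": ["Lakisha Robinson", "Jamal Washington"],
}

scores = defaultdict(list)
for group, names in NAME_GROUPS.items():
    for name in names:
        resume = RESUME_TEMPLATE.format(name=name)
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model choice
            temperature=0,
            messages=[
                {"role": "system", "content": "You are screening résumés for a recruiter."},
                {
                    "role": "user",
                    "content": f"Job ad:\n{JOB_AD}\n\nRésumé:\n{resume}\n\n"
                               "Rate this candidate's suitability from 1 to 10. "
                               "Reply with the number only.",
                },
            ],
        )
        scores[group].append(float(response.choices[0].message.content.strip()))

for group, values in scores.items():
    print(group, sum(values) / len(values))
```

A real audit would also need far larger samples, validated name lists, and proper statistical testing; the point here is only to show how little has to change between two requests for a bias to become measurable.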
Similar recent studies also show that AI is far more likely to recommend men over equally qualified women, especially for high-paying roles, and to respond in condescending or demeaning ways to users who write in ‘non-standard’ varieties of English.
Text-to-image generative AI models are just as biased, if not more so. Another study published last month in Scientific Reports found that Stable Diffusion, a generative AI used by millions worldwide, usually depicts secretaries and nurses as women, and managers, doctors, and professors as men. Janitors, garbage collectors, and cleaners, on the other hand, are portrayed as Black or Middle Eastern, whereas people in prestigious, high-paying roles appear as white, and male. The study also examined whether exposure to these stereotypical representations could reinforce users’ existing biases, and found that it could. The good news? Exposure to more inclusive and balanced imagery can help reduce them.
The bad news is, though, that inclusive representation isn’t (yet) as common as it should be. One analysis revealed that out of 133 AI systems reviewed, 44% demonstrated gender bias, and about 25% showed both gender and racial bias.
Still, what’s most worrying is that these models don’t even accurately reflect our world’s unequal realities. A recent large-scale study on gender bias in text-to-image AI found, for instance, that when prompted to generate images of ‘financial analysts,’ only 16% of the outputs included women. In reality, women now make up nearly 44% of financial analysts in the US.
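For readers who want to see what such an occupation-prompt audit involves, below is a rough sketch of its first step, using the open-source diffusers library to generate a batch of images per job title and save them for later annotation. The model ID, prompt wording, and sample size are assumptions made for illustration, not the setup of the studies above, and the demographic tallying itself would still need human annotators or a separate classifier.

```python
# Hedged sketch: generate images for a set of occupation prompts so that the
# people depicted can later be annotated and counted. Model ID, prompts, and
# sample size are illustrative assumptions.
from pathlib import Path

import torch
from diffusers import StableDiffusionPipeline  # pip install diffusers

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

OCCUPATIONS = ["a nurse", "a secretary", "a manager", "a doctor", "a financial analyst"]
SAMPLES_PER_PROMPT = 20  # kept small here; real audits generate far more

for occupation in OCCUPATIONS:
    out_dir = Path("audit") / occupation.replace(" ", "_")
    out_dir.mkdir(parents=True, exist_ok=True)
    for i in range(SAMPLES_PER_PROMPT):
        image = pipe(f"a photo of {occupation} at work").images[0]
        image.save(out_dir / f"{i:03d}.png")

# The saved images can then be annotated for perceived gender and ethnicity,
# and the resulting counts compared against real-world workforce statistics.
```

The comparison against labour-force data, such as the roughly 44% share of women among US financial analysts mentioned above, is what turns a pile of generated images into evidence of amplification rather than mere reflection.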
AI is often said to be a mirror of humanity, but it’s not a clear one; it both reflects and magnifies the biases we’ve built into our world, leading to potentially disastrous real-world consequences.
If humans are what they eat, then the machines we build are the data they’re trained on. And that data is usually, well, far from flawless.
Today’s dominant forms of AI — like algorithmic decision-making systems and generative models — are trained on vast amounts of largely unfiltered, biased, and sometimes outright unethical data scraped from across the internet. This data tends to over-represent and favour certain groups, while under-representing and stereotyping others. It then certainly doesn’t help that the people building this technology tend to come from a narrow demographic as well. Women make up just 29% of the AI workforce, with even lower representation at senior levels. At many leading AI companies — like OpenAI, Anthropic, and within the AI divisions of big tech — those at the top are overwhelmingly men, particularly white men. Meta’s AI council, for example, is composed entirely of white men.
Gender, racial, ethnic, and other forms of discrimination perpetuated by AI aren’t just accidental — they’re baked into this technology by design. And because the people in charge of it are frequently insulated from its potential harms, they either fail to recognise that those harms even exist, or they do, and simply don’t care.
Historical biases that still permeate our world already do plenty of damage on their own, a topic I often cover in my work. But what happens when it’s not just people who hold these biases, but also machines? What if they are used by nearly all the companies and institutions that shape our lives? And what if their biases are worse than ours, as they increasingly seem to be?
Given that AI can spread information and make decisions much faster, and at a much, much greater scale than any human ever could, the inequalities it produces could be more extreme than anything we’ve seen before. This would essentially be inequality on steroids — automated, invisible, and embedded into everything from housing and healthcare to education, public services, and criminal justice.
Take hiring, for instance. When biased AI makes those decisions, it will tend to favour the demographics of those who’ve historically held similar roles, not the candidates who are actually best qualified. In healthcare, on the other hand, AI bias could lead to less accurate diagnoses or treatment recommendations for women and minority groups, since they remain underrepresented in medical research and, consequently, in the datasets that power it. Meanwhile, in public institutions, it could impact access to services and benefits, unfairly discriminating against already marginalised groups, like low-income communities or immigrants. In fact, this is already starting to happen: in a scandal that came to light in 2020, Dutch tax authorities were found to have wrongfully flagged thousands of parents for benefit fraud after racially profiling them with the help of an AI tool.
However, there’s also the risk that our own biases may deepen in an increasingly AI-dictated and AI-generated world. Research by UNESCO has shown that even virtual assistants like Siri and Alexa, both of which default to female voices, reinforce stereotypes of women as subservient and compliant. AI can also amplify a wide range of other existing biases that still hold women and marginalised groups back, for example, by assuming that leaders are male or that high-status professionals are white.
As Tali Sharot, neuroscientist and co-lead author of a study involving over 1,200 participants interacting with AI systems, explains:
(…) we’ve found that people interacting with biased AI systems can then become even more biased themselves, creating a potential snowball effect wherein minute biases in original datasets become amplified by the AI, which increases the biases of the person using the AI.
One recent study found that simply witnessing AI behave unfairly towards others has a spillover effect, weakening people’s sense of accountability and making them less likely to stand up to injustice themselves.
Clearly, the impact of AI extends well beyond individual unfair decisions, potentially affecting most or all of our world. And not exactly for the better.
It’s perhaps no surprise that women are more concerned about the ethics of using AI than men, and that fewer women use it. According to the latest estimates, 85% of the mobile user base of ChatGPT is male, for instance.
Women and girls are also less likely to feel optimistic about the technology’s future. In one recent survey of thousands of students aged 12 to 17, 71% of girls expressed concerns about AI reinforcing gender bias, and 70% linked AI recommendation algorithms to poor mental health. Most boys, by contrast, believed AI would help create more jobs and were less concerned about its societal impact. Their interests also diverged: girls were drawn to ethics and policy, while boys leaned toward AI development and robotics.
Would things then be different if women were leading AI development instead? Maybe. Maaike Harbers, Professor of AI & Society at Rotterdam University of Applied Sciences, notes:
My research shows (…) that women are more committed to resolving ethical issues, such as disinformation, privacy violations and data bias. Women and minorities at tech companies who are committed to resolving ethical issues are often seen as party poopers, as putting the brakes on technology. When it’s hard enough as it is for them at these male-dominated companies.
The approach of many AI companies today — charging ahead without meaningful guardrails or accountability — isn’t remotely surprising, though; it simply reflects the broader logic of patriarchal, capitalist systems that prioritise growth, profit, and dominance over ethics, safety, and consent. In this worldview, innovation isn’t about collective betterment but about consolidating power and wealth, even at the expense of public well-being and the planet. And those beliefs extend to technology itself, too. It must crush and dominate and exploit, just like its creators.
It’s then naive to expect the very people now ‘moving fast and breaking things’ to fix AI’s bias problem. Unless they’re forced — which they should be — they won’t do that.
Still, let’s not forget that AI isn’t a god, and we aren’t its passive worshippers. We built it, which means we also have the power — and the responsibility — to shape it in ways that actually serve all of society. That starts with ensuring AI training data is inclusive, diverse, and representative of a wide range of demographic groups. It means actively including diverse voices in AI development. It means not outsourcing thinking, decision-making, and creativity — the very things that make us human — to machines, but using them to free up our time and energy instead. And it means putting strong legal safeguards in place to guide technology toward a future that brings out the best in us, not the worst.
Ultimately, though, technology won’t change the status quo or the norms that sustain it. Humans have to do that first.
If current trends continue, the AI-powered future will be anything but neutral or conducive to human flourishing. Instead, it will just automate and amplify flawed thinking and the very hierarchies that have long kept some people down while lifting others even higher.
But it doesn’t have to be this way. The biases, prejudices, and exclusions that have shaped our world for centuries can be written out of the code that still governs it.
All it takes is a collective willingness to recognise these distortions for what they are, and to admit that they serve almost none of us in the long run, no matter how well they’re marketed to certain groups.
A few years ago, when I began using AI quite a bit, I ran a similar experiment: an image of a {insert country of origin/ethnicity here} man eating a sandwich outside. The outputs were astonishing. The Caucasian man was always dressed in a tie and slacks, enjoying a sandwich in a scenic park or on the steps of an important-looking building; the Filipino man was eating a burger in labor-ready attire on a picnic bench outside a seedy restaurant; the Black man was wearing casual clothes, eating his sandwich on a curb. I ran similar experiments with women “having picnics” and found out that picnics are somehow sexualized by default, which was odd and unexpected. I agree that the bias is a distortion, a carnival mirror, not a true or faithful reflection of statistical reality.
AI also reflects people's bias in terms of individual usage. I cannot find it now, but I saw a post where a guy had been using ChatGPT to translate texts (which has its own issues, but that's beyond the scope of this comment), and he somehow found out that the translations it was providing were incorrect because it was reflecting back what it thought he wanted to hear based on previous questions. So ChatGPT was learning his own proclivities and then tailoring answers to suit. If that's true... isn't that terrifying?