There's a Reason Women Aren't Swooning Over AI Like Men Are
Or rather, a great many reasons
The Noösphere is an entirely reader-supported publication that applies recent social science research to the cultural, political, and technological issues shaping our world today. If you read it every week and value the labour that goes into it, consider supporting it by liking or sharing this essay, buying me a coffee, or becoming a paid subscriber. Thank you for your support!
The gender gap in AI usage has been making headlines lately.
Women, it turns out, are more likely than men to avoid the various AI tools increasingly populating (or polluting) our world.
According to a recent Harvard Business School meta-analysis of 18 studies, women have 22% lower odds of using generative AI websites and apps than men, both at work and in everyday life. And this pattern holds across countries, sectors, occupations, and tools. Between 2022 and 2024, women accounted for roughly 42% of global users of ChatGPT and Perplexity websites, and just 31% of Anthropic’s. On smartphones, the gap widens even further: only 27% of ChatGPT app downloads come from women.
The oft-proposed explanation is that women understand this new technology less, largely because they work in roles with lower exposure to it. Women are, after all, still outnumbered in STEM degrees and careers, including in AI-specific roles. The same is true in AI leadership — women hold fewer than 14% of senior executive positions in the industry. But Harvard’s study also found that the usage gap remains even when women are explicitly given opportunities to learn and use AI tools.
The gap’s root causes just aren’t as simple as women being ‘less into technology’ or lacking exposure or training.
But they shouldn’t be that hard to identify, either.
Like, I imagine, quite a few other women, I first encountered any mention of AI (beyond science-fiction media) through stories about non-consensual, sexually explicit AI-generated images and videos, dubbed ‘deepfakes.’ That was circa 2018 or 2019, by which point they’d already been around for a year or two.
And, to the surprise of absolutely no woman with even just an ounce of awareness of the world we live in, these AI deepfakes overwhelmingly targeted women and girls.
By the end of 2020, more than 85,000 deepfake videos were circulating online — 95% of them sexual in nature, and 90% featuring women. By late 2022, there were thousands of easily accessible deepfake-creation tools, downloaded by millions of people worldwide. Today, even some social media platforms — like Elon Musk’s zombie Twitter, also known as X — can be used to produce non-consensual intimate imagery of female celebrities and influencers. Politicians, lawmakers, journalists and other women in the public eye are routine targets too; in the US, roughly 1 in 6 congresswomen had already been victimised.
But it’s not only public figures who are targeted. Various AI tools are also weaponised against ordinary women and children. Some are explicitly built to ‘undress’ women, or, as their creators shamelessly advertise, to ‘nudify’ them. Others are used to identify vulnerable or ‘controversial’ content in women’s social media posts — such as speaking about sexual harassment or calling out misogyny — and then to automate harassment campaigns.
And if the AI industry’s misogynistic offerings weren’t enough, there’s also the issue of gender bias baked right in, courtesy of the biased, unfiltered, and often unethical datasets these models are trained on.
AI tools used in recruitment, for example, are more likely to recommend men over women, especially for higher-paying jobs, and even when qualifications are identical. Meanwhile, AI chatbots like ChatGPT have been found to advise women to ask for significantly lower salaries than men with the same CVs. In healthcare and public care sectors, AI systems tend to downplay women’s needs and are more likely to miss illnesses in women than in men. Even in criminal justice, tools like the American COMPAS have shown bias against women, overpredicting their risk of reoffending. Racial bias frequently creeps into these same tools as well.
Publicly available generative AI tools are — surprise, surprise — no better. When asked to depict a secretary or a nurse, they usually generate women, but when asked to depict a manager, doctor, or professor, they usually generate men. Women are also routinely portrayed as much younger than men across occupations and social roles. In some cases, AI’s output doesn’t even accurately reflect real-world disparities — it amplifies them.
All of this can, in turn, reinforce the discriminatory patterns already present in our society, leading to more inequality, not less.
As Laura Bates, writer and founder of the Everyday Sexism Project, puts it:
This is affecting women and other marginalised groups in very real ways, from whether you will be accepted for a bank loan, to being put forward for a job or even getting the right diagnosis for a serious health problem. And unless we demand accountability now, there is a risk that AI will drag us backwards, as today’s inequality will be written into the building blocks of a future world.
Tech companies love to promise a Glittering New World in which AI improves just about every aspect of our lives, but women are practically forced to see the cracks running straight through that techno-optimist fantasy. And we’re not in the habit of ignoring them once we do.
Women are often branded as the ‘overly cautious’ gender, especially in business or investing. But apart from the fact that research consistently finds women outperform men in investing — so whatever we’re doing, or not doing, is hardly ‘too’ cautious but actually just right — it also shows women aren’t necessarily more risk-averse at all. What is true, though, is that women tend to make decisions differently.
In investing, for example, women are more long-term oriented, less impulsive, and more focused on steady growth. They’re also more likely to align their choices with their values, prioritising sustainability and the common good. And we see this in other domains, too, including in workplaces and politics.
When it comes to AI, women’s approach seems no different. Women report more negative attitudes toward AI than men do, mostly because they anticipate that its harms might eclipse the benefits. Concerns about transparency, safety, data privacy, fairness, inclusivity, sustainability, and collective well-being all weigh more heavily on women’s minds, including those working in tech. Unsurprisingly, it was two Black women, Joy Buolamwini and Timnit Gebru, who first sounded the alarm about racial and gender bias in AI facial analysis software.
Among adolescents, this gap exists as well. In one recent survey, 71% of teenage girls expressed concerns about AI reinforcing gender bias, and 70% linked AI recommendation algorithms to poor mental health. Most boys, by contrast, believed AI would help create more jobs and were far less concerned about its societal impact. Their interests also diverged: girls were drawn to ethics and policy; boys toward AI development and robotics.
And we know that women’s anxieties about AI are well-founded, as the first part of this piece should make depressingly clear. Even when women do use AI, they may end up paying a higher penalty for it than men.
A recent study by Hong Kong Polytechnic University and Peking University asked more than 1,000 software engineers at a global tech company to evaluate the same Python code snippet and rate both its quality and the author’s competence. The only variables were the author’s supposed gender (male or female) and whether they used AI assistance. As expected, engineers using AI were judged as less competent overall. But this competence penalty hit women twice as hard — female engineers received a 13% drop in perceived competence for identical work, compared with 6% for male engineers. As the researchers commented:
The AI assistance is framed as a ‘proof’ of their [women’s] inadequacy rather than evidence of their strategic tool use.
In a follow-up study, they found that women at the same company are very much aware of this gender-biased competence penalty, which, in turn, contributes to their lower use of AI.
This is, yet again, hardly shocking. Women have long been held to different standards at work and punished more harshly for failure or incompetence (real or imagined), especially in fields that remain heavily male-dominated. Thanks to the usual cocktail of gender stereotypes and biases behind it all, the AI-usage double bind is all too predictable: if a woman uses AI, she must be struggling. But if she doesn’t use it, she’s judged, too.
Whether in business, investing, AI, or any other area, women aren’t necessarily risk-averse, but risk-smart, alert to what’s at stake and understandably uninterested in gambles where they know they’re likely to lose far more than men will.
Women have been less optimistic about new technologies before as well. We were more sceptical in the early days of the Internet, more cautious about online shopping, and more concerned about digital privacy. Today, we still look at modern tech through less rose-tinted glasses than men.
Perhaps the reason women so often do more of the worrying and the damage-anticipating is that we’re more attuned to the fact that our society, and the people running it, can’t be bothered to think past the next quarterly report, let alone consider the perspectives, needs, and concerns of ‘only’ half the human population. The patriarchal-capitalist modus operandi is, after all, to dive headfirst and worry later, or not at all. Profit and market pressures repeatedly override safety, ethics, and sustainability, just as some men’s ambitions repeatedly override women’s well-being.
It’s hardly surprising that women aren’t racing to embrace AI tools that were largely designed without them — although they wouldn’t exist without women’s contributions to computing — and are even used against them. If anything, what’s surprising is that more people don’t feel the same way.
With little more than a patchwork of regulations in place, AI’s ballooning energy consumption, its environmental toll, and its effects on our critical thinking skills, creativity, sociality, and shared reality will affect us all. In fact, they already do, to an extent.
We should all worry now, then, not later, instead of hoping that the same companies that have systematically stolen millions of copyrighted works from authors, artists, journalists, and other creatives will, one day, decide to slow down and act responsibly of their own accord. At some point, perhaps sooner than we think, opting out of AI might not even be possible. Then what? What else might technology built with no brakes and little regard for human and planetary well-being result in? What fresh horrors are we willing to find out the hard way?
The solution here is not to encourage women to ‘lean in’ (yet again) and use just as much AI in their day-to-day lives and work as men do. The solution is for the rest of the world to take women’s caution and the concerns of other marginalised groups seriously.
Of course, women’s underrepresentation in AI development and leadership is part of the problem, too. You can’t claim to be building a ‘revolutionary’ and ‘society-equalising’ technology when the people designing and governing it represent only a sliver of it. What you get instead is a technology destined to reproduce the very same inequalities and hierarchies that led to this situation in the first place.
Perhaps on another, far more egalitarian planet, a technology like AI could truly be the ‘great equaliser’ some imagine it to be. Perhaps there, it could actually lead to better outcomes for everyone. But sadly, we don’t live there; we live here.
And any technology we come up with is automatically plugged into the social machinery of this world and this reality. The two evolve together and feed into each other, mirroring all that’s good and all that’s ugly.
Women’s scepticism about AI, and our reluctance to use it, may ultimately be just scepticism about the state of everything else around us.


