What if AI Could Bring Out the Best in Us, Not the Worst?
We shouldn’t just focus on what we want AI to be (or not), but who we want to be thanks to it
The Noösphere is an entirely reader-supported publication that brings the latest social sciences research into frequently overlooked topics. If you read it every week and value the labour that goes into it, consider becoming a paid subscriber! You can also buy me a coffee instead.
AI is often called a mirror of humanity.
And not in a good sense, exactly.
After all, the large language models (LLMs) that underpin the most popular AI tools today — like ChatGPT — are trained on vast amounts of unfiltered, flawed, and sometimes even unethical data scraped from all around the Internet. And since the Internet itself, particularly social media, is full of bias, AI is bound to absorb, perpetuate, and even amplify it.
A growing number of studies show that generative AIs, such as Midjourney and DALL·E, often perpetuate regressive gender, racial, and homophobic stereotypes. Tools and platforms powered by AI are also used to create nonconsensual sexually explicit content (known as deepfakes), to abuse, harass, and intimidate women and girls, and to spread misinformation, among other harms.
And as more and more people use AI in their work, studies, and homes and spend time in online environments where AI plays an increasingly important role, it’s apparent that we cannot underestimate the impact its embedded bias can have on us.
Or all the harms AI’s misuse can do — and it’s already doing.
Still, we shouldn’t overlook how AI could help us overcome those issues and, ultimately, bring out the best in us, not the worst.
Yes, that’s possible, too.
When ChatGPT, a chatbot developed by OpenAI, was first released in November 2022, it quickly went viral as people shared examples of what it could do. That included everything from writing children’s stories to planning trips and recommending recipes based on the ingredients already in your fridge.
But what if chatbots like ChatGPT could also… change people’s minds?
In particular, when it comes to attitudes towards crucial science and social justice issues?
A recent study by researchers at the University of Wisconsin–Madison, published in the journal Scientific Reports, tried to test whether even a short conversation with an AI-powered chatbot could have that big of an impact, or at least help expand people’s understanding. The researchers asked over 3,000 participants, differing in gender, race, education, and opinions, to have real-time conversations with GPT-3 — a precursor to the model that powers ChatGPT — about two divisive topics: climate change and the Black Lives Matter (BLM) movement.
After analysing 20,000 dialogues, the researchers found that the roughly 25% of participants who were least supportive of the scientific facts of climate change and its human-driven causes, or of BLM, reported being far more dissatisfied with their interactions than everyone else. Yet despite being disappointed with the experience, the chat left them more informed and even positively shifted their thinking on both topics. The hundreds of people who reported the lowest levels of agreement with the scientific consensus on climate change, for instance, moved a combined 6% closer to the supportive end of the scale.
As the study points out, this could be due to cognitive dissonance — the mental discomfort that occurs when our beliefs are contradicted by new information — which can sometimes actually motivate people to update their opinions.
Now, keep in mind that the study was conducted using merely a precursor to ChatGPT. Could a more advanced chatbot, and hence a more skilled communicator, have an even more significant impact? Perhaps.
Another recent study, by Carey Morewedge, a Boston University professor of marketing, published in PNAS, found something equally interesting. Morewedge set out to discover whether seeing social biases — including racist, sexist, and ageist ones — in algorithms’ decisions could help us recognise our own. To this end, together with his collaborators, he devised a series of experiments around a set of fictional Airbnb listings, each with a few pieces of information about the host, and invited over 6,000 participants to rate how likely they were to rent each one.
The participants were then told about a research finding explaining how a host’s characteristics — like race, gender, attractiveness, or age — might bias the ratings, and were asked to spot that bias either in the ratings of real algorithms or in ratings attributed to algorithms that were actually the participants’ own choices in disguise. Across the board, participants were more likely to see bias in decisions they thought came from algorithms than in their own decisions — even when those decisions were identical.
Commenting on the research, Morewedge said:
Algorithms are a double-edged sword. They can be a tool that amplifies our worst tendencies. And algorithms can be a tool that can help us better ourselves.
But this is just scratching the surface of everything that AI could help us do. And become.
The battle for our attention in today’s digital ecosystem is frequently won by the loudest, flashiest, cheapest, most biased, polarised, and enraging content, products, and people. For now, AI — specifically, the generative sort — tends to add further fuel to that online dumpster fire rather than try to extinguish it. (Some experts even predict that by 2026, 90% of all online content may be AI-generated.)
However, it’s also increasingly used to dictate what we watch, read, buy, consume, write, etc. Search engines, social media platforms and common consumer products — like Gmail, for instance — often already rely on AI, sometimes even heavily, to function.
But what if instead of continuing the tradition of non-AI algorithms that feed us content regardless of whether it’s helpful, true, thoughtful or enriching, AI models did the opposite? What if instead of prioritising whatever or whoever attracts the most eyeballs and clicks or pays the most money, it prioritised our individual and collective well-being?
What if it gave us better recommendations, connected us with more like-minded people and even nudged us to adopt healthier behaviours and make wiser choices?
This is the goal of the Meaning Alignment Institute, a non-profit AI organisation that advocates for ‘Wise AI’, which it defines as ‘systems that are not just intelligent but morally astute.’ The Institute is currently developing a model, dubbed ‘Democratic Fine-Tuning’, which could help create such Wise AI thanks to a moral graph of values crowdsourced from people everywhere.
As Joe Edelman and Oliver Klingefjord, the Institute’s co-founders, wrote in a post introducing the model:
LLMs, unlike recommenders and other ML systems that precede them, have the potential to deeply understand our values and desires, and thereby orient social and financial systems around human flourishing.
In practice, this would mean that our search engines, social media platforms, and the Internet at large would be organised by AI guided by the values we collectively decided were the most important in specific contexts. And, as Edelman and Klingefjord note, it wouldn’t just ‘answer and obey.’
If you were to purposefully look for harmful information, for instance, instead of getting exactly what you asked for, you’d be prompted to elaborate on your reasoning. You might even be encouraged to reflect on or change your perspective — which, as the chatbot study’s findings suggest, is possible — or be redirected somewhere that could support you in whatever you’re going through.
Writer Elle Griffin has recently written about the Meaning Alignment Institute’s project in her newsletter, The Elysian, and she points out that if this Wise AI guided the Internet and gave us things that align with our best values, it would also create an incentive for companies to do better. Consumer goods brands would, for example, focus on producing more ethical goods, and news outlets would publish more think pieces in line with these values in order to rank on search engines’ front pages.
Additionally, by learning what inspires us, motivates us, and what we truly need to be our best selves — building on all the advancements in behavioural sciences — AI could nudge us to adopt healthier, more mindful, sustainable and pro-social behaviours.
The potential for this technology to help us be better, wiser, healthier, and even to deepen our humanity is endless.
The only problem is understanding how to make it happen.
Douglas Engelbart, an American engineer and early computer pioneer, argued that the purpose of computers is to provide ‘power-steering for the mind.’ In other words, to augment humans, not exploit them.
Today’s AI-powered tools and systems — which are becoming an inseparable part of our daily lives and taking over a variety of social functions, from news editing and matchmaking to advertising placement — do indeed make our lives better in some ways. But that’s hardly their only objective, is it?
Still, this is just the beginning of AI’s proliferation.
A lot of high-stakes decision-making could be delegated to AI at some point, too — including in healthcare, politics, justice systems, finance, or even the military. What if that happens before we ensure it always complements and augments human initiative rather than exploits it?
The way I see it, we have two choices. We can continue down the laissez-faire path — so beloved by techno-optimists, capitalists, and other similar creatures — and end up with technology that doesn’t choose what’s best for the common good but simply whatever is best for its corporate, political, or financial bosses.
We can let it be used to make us angrier, more hateful, more biased, and more polarised than possibly ever before, at the expense of the marginalised and underrepresented groups who are already being affected by it today.
We can allow it to fabricate even more stories that stoke anger and hate, manipulate facts and scientific information, and drown out the voices of the people who try to fight back.
If the objective is to mine our attention and maximise profits by any means necessary, then, well, whatever happens, happens, even if it deepens existing divisions and inequalities and intensifies the spread of social biases. What’s particularly scary about this scenario is that it could lead to what Scottish philosopher William MacAskill identified as a ‘value lock-in’ state — a situation in which a single ideology gets permanently ‘locked in’ for centuries to come.
But — we can also try not to let that happen.
Instead of letting AI be exploited — and, in turn, exploit us — we can collectively agree on the values that should be central to its operation. Values that can open dialogue between people, empower the weak, challenge discrimination, and remind us how to take care of ourselves and our environments along the way.
And ultimately, that brings out the best in us, not the worst.
I’ve covered many of the dangers of AI over the last few years, in particular when it comes to all the ways it’s misused to intimidate, harass, bully and abuse women.
But while we definitely need to keep being aware of its risks and how they might evolve or worsen over time, we also need to imagine a better AI, as well as a better version of ourselves that we could become thanks to it.
If not, someone else will do all the imagining for us.
But without keeping humanity in mind.
Video killed the radio star
Internet killed the video star
AI killed the internet star
And will ultimately kill us all
(If we're not careful about it, that is.)
To some extent, all new technology is no better or worse than any other tool. The user is the issue, as users have always been throughout the span of human existence.
AI is not artificial intelligence. It is a statistically driven word-salad generator. If a babe in its mother’s arms spewed out the same, there would be shouts of amazement and calls for the trickery to be shared. Yet this AI shows up, not in its mother’s arms but in the hands of people enamoured with the latest shiny new thing they created.
Every word these models output is simply the word statistically likely to sound correct, given the previous word(s). They call the failures ‘hallucinating’. It is not. It is the system operating exactly as designed. These models are a set of numbers in a massive spreadsheet column; that’s all they are. If one uses them as a tool, or perhaps even better, if they are placed in the hands of artists, then there is some value to be had.
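The “statistically likely next word” idea can be made concrete with a deliberately crude sketch: a toy bigram model that counts word pairs in a tiny made-up corpus and always emits the most frequent follower. (Real LLMs learn probability distributions over tokens with neural networks trained on billions of examples; the corpus, function names, and lookup-table approach here are illustrative assumptions only, not how any actual model works.)

```python
from collections import Counter, defaultdict

# Tiny invented corpus, purely for illustration.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word (bigram counts).
pairs = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    pairs[prev][nxt] += 1

def generate(start, length):
    """Greedily emit the statistically most frequent next word."""
    words = [start]
    for _ in range(length):
        followers = pairs.get(words[-1])
        if not followers:
            break  # dead end: this word never appeared mid-corpus
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

print(generate("the", 4))
```

The output is fluent-sounding but has no notion of truth or meaning — it only reflects the frequencies in its training data, which is the commenter’s point writ small.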
Too many business leaders will use AI without realising what these systems are or what their output is. Replacing reporters with AI (been done), being called out on it (also done), then having to add human editors and fact-checkers back in: what exactly does that save?
Time will tell. Of course, time cares naught for us, merely the ticking of the universe. It is best we remember that, because these AIs will not care; indeed, they cannot. They are artificial. They are not intelligent.