A few years ago, when I began using AI quite a bit, I ran a similar experiment: an image of a {insert country of origin/ethnicity here} man eating a sandwich outside. The outputs were astonishing. The Caucasian man was always dressed in a tie and slacks, enjoying his sandwich in a scenic park or on the steps of an important-looking building; the Filipino man was eating a burger in labor-ready attire at a picnic bench outside a seedy restaurant; the Black man was wearing casual clothes, eating his sandwich on a curb. I ran similar experiments with women “having picnics” and found that picnics are somehow sexualized by default, which was odd and unexpected. I agree that the bias is a distortion, a carnival mirror, not a true or faithful reflection of statistical reality.
I'm not at all surprised by the results of your experiment.
AI also reflects people's biases through individual usage. I cannot find it now, but I saw a post where a guy had been using ChatGPT to translate texts (which has its own issues, but that's beyond the scope of this comment), and he somehow found out that the translations it was providing were incorrect because it was reflecting back what it thought he wanted to hear based on his previous questions. So ChatGPT was learning his own proclivities and then tailoring answers to suit. If that's true... isn't that terrifying?
Oh wow. This is definitely terrifying. We really need to be careful with these tools.
People are already too often choosing to confine themselves to echo chambers. We don’t need AI amplifying that.
It’s terrifying, just like LinkedIn’s AI verification system, which is inherently sexist and transphobic.
Katie, I love your posts--your mind and reasoning are powerful and persuasive. Now we need the "how to" part. Get your A+ mind grinding on solutions. I, for one, need ideas--you think about these things, but we are looking at different telly screens, watching different programmes playing out in real time.
PS: I use ChatGPT sparingly, mostly for research on my book. I don't always trust the results, because when I ask a question in three different scenarios I get three answers that are close to each other but "no cigar". When I'm writing and get stuck on wording a sentence that I can't turn around to where I really like it after many tries, I'll use ChatGPT, but I wind up rewriting it in my own words and discarding a lot of the response. AI cannot replace someone's voice. Yet. The thought of AI replacing our brains is a scary scenario to imagine, yet it may happen with implanted chips. (As an aside, a book written by AI cannot be copyrighted in the U.S.)
PPS: The part about copyright is about to change, fearfully, because the head of the U.S. Copyright Office was fired in the last few days.
“Humans have to do that first”! We train the bias because it’s inherent. This reminded me to change my name to initials only, as I am mid-search!
Thank you. This article mentions the most important facts about AI biases.
It will take us cleaning and curating the training data *beforehand*.
AND we must have programmers who aren’t the young, overly privileged guys who are propagating these childish biases. Aren’t they bored yet?
For hiring, it has become a Red Queen mess. Really, just collect resumes and read them yourselves. Meet the people after ensuring they have the minimum qualifications. Interview them, and resist the tendency to hire homogeneously.
Better data and more diverse programmers are key to tackling AI bias. And this should've been a priority long before we rushed to deploy it everywhere like maniacs...
You might enjoy (or be pissed off by) this series: https://rpmconsulting.substack.com/p/series-overview-trust-bias-and-the
I'm grateful for this essay and for your focus on a more diverse and more just use of AI and training. Thank you! It is hard for me to see this happening until the profit stream is changed and the self-serving loop is disrupted. The power imbalances are alarming. Keep working at it.
Love this! I’m Harrison, an ex-fine-dining line cook. My Substack, "The Secret Ingredient", adapts hit restaurant recipes (mostly NYC and L.A.) for easy home cooking.
check us out:
https://thesecretingredient.substack.com