AI models often promote stereotypes. Now there is a program that exposes biases

[A] new dataset [Margaret Mitchell] helped create [tests] how AI models continue perpetuating stereotypes. Unlike most bias-mitigation efforts that prioritize English, this dataset is malleable, with human translations for testing a wider breadth of languages and cultures. You probably already know that AI often presents a flattened view of humans, but you might not realize how these issues can be made even more extreme when the outputs are no longer generated in English.

[W]hy are these kinds of extreme biases still prevalent? It’s an issue that seems under-addressed.

That’s a pretty big question. There are a few different kinds of answers. One is cultural. Within a lot of tech companies, I think it’s believed that this isn’t really that big of a problem, or, if it is, that it’s a pretty simple fix. What gets prioritized, if anything is prioritized, are these simple approaches that can go wrong.

It ends up being both a cultural issue and a technical issue: figuring out how to get at deeply ingrained biases that aren’t expressed in very clear language.

This is an excerpt. Read the original post here.
