[A] new dataset [Margaret Mitchell] helped create [tests] how AI models continue to perpetuate stereotypes. Unlike most bias-mitigation efforts, which prioritize English, this dataset is malleable, with human translations for testing a broader range of languages and cultures. You probably already know that AI often presents a flattened view of humans, but you might not realize how these issues can become even more extreme when the outputs are no longer generated in English.
[W]hy are these kinds of extreme biases still prevalent? It’s an issue that seems under-addressed.
That’s a pretty big question. There are a few different kinds of answers. One is cultural. I think within a lot of tech companies it’s believed that it’s not really that big of a problem. Or, if it is, that it’s a pretty simple fix. So what gets prioritized, if anything is prioritized, are these simple approaches that can go wrong.
…
It ends up being both a cultural issue and a technical issue of figuring out how to get at deeply ingrained biases that aren’t expressed in very clear language.