AI systems have repeatedly been shown to cause problems that disproportionately affect marginalized groups while benefiting a privileged few. The global AI ethics efforts under way today—of which there are dozens—aim to help everyone benefit from this technology, and to prevent it from causing harm.
The AI community should indeed agree on a set of international definitions and concepts for ethical AI. But without broader geographic representation, these efforts will produce a global vision for AI ethics that reflects the perspectives of people in only a few regions of the world, particularly North America and northwestern Europe.
If organizations working on global AI ethics fail to acknowledge this, they risk developing standards that are, at best, meaningless and ineffective across all the world’s regions. At worst, these flawed standards will lead to more AI systems and tools that perpetuate existing biases and are insensitive to local cultures.
In 2018, for example, Facebook was slow to act on misinformation spreading in Myanmar that ultimately led to human rights abuses. An assessment commissioned by the company found that this failure was due in part to Facebook’s community guidelines and content moderation policies, which did not account for the country’s political and social realities.
To prevent such abuses, companies working on ethical guidelines for AI-powered systems and tools need to engage users from around the world.