Over the past few years, the AI ethics discourse has revolved around two questions we have touched on already. How will AI change what it means to be human? And how can we manage the tradeoffs between the ways in which AI improves and worsens the human condition?
These questions are tangled up with a third question that does not receive adequate attention: Who wins and who loses when systems built around AI are deployed in every sector of the economy?
We need to begin thinking about AI comprehensively, not in a piecemeal manner. How can we ensure that the technologies currently being developed are used for the common good, rather than for the benefit of a select few? How can we incentivize businesses to deploy generative AI models in ways that equip employees with deeper, more valuable skills, instead of making them superfluous? What kind of tax structure would disincentivize replacing human workers? What public policies and indexes are needed to calculate and redistribute goods and services to those adversely affected by technological unemployment and environmental harm? What limits should be placed on the use of AI to manipulate human behavior?
Addressing questions like these requires breaking what the sociologist Pierre Bourdieu and the anthropologist and journalist Gillian Tett call "social silences": instances in which a topic is widely treated as taboo, or simply not openly discussed.