Can artificial intelligence (AI) feel distress? Do lobsters suffer in a pot as it reaches a boil? Can a 12-week-old fetus feel pain? Ignore these questions, and we risk sanctioning a quiet, slow-moving catastrophe. Answer them in the affirmative too hastily, and people's freedoms will shrink needlessly. What should we do?
Philosopher Jonathan Birch at the London School of Economics and Political Science might have an answer. In The Edge of Sentience, he develops a framework for protecting entities that might possess sentience, that is, a capacity for feeling good or bad. Moral philosophers and religions might disagree on why sentience matters, or how much it does. But in Birch's determinedly pluralistic account, all perspectives converge on a duty to avoid gratuitous suffering. Most obviously, this duty is owed to fellow human beings. But there is no reason to think that it should not extend to other beings, provided that we can establish their sentience, be they farm animals, collections of cells, insects, or robots.

Birch is careful to distinguish sentience from intelligence. In his account, the former is the wellspring of duties, not the latter. But might beings that are both sentient and intelligent warrant stronger precautions than beings that are sentient but unintelligent?















