AI refers to techniques that allow computers to learn, reason, infer, communicate and make decisions that in the past were the sole province of humans.
Yet as AI technology spreads, so do concerns about its accuracy and fairness. Experts say it can have built-in racial, gender and age biases that could, for instance, screen out qualified candidates for jobs, or force creditworthy borrowers to pay higher rates than they otherwise would. This has prompted calls for regulation, or at least for greater transparency about how the systems work and what their shortcomings are.
What follows are edited excerpts of the discussion, which took place over email.
WSJ: Do we need greater regulation of artificial intelligence? Or just government guidance that would lead the industry to regulate itself?
[Deputy Director of the ACLU’s Speech, Privacy and Technology Project Esha] Bhandari: Government guidance has a role to play, but where business use of AI has the potential to harm individuals or communities, those harms have to be addressed through regulation. It’s the same principle that applies to other consumer products that have the potential to harm people—businesses that stand to make money from those products are not left to simply self-regulate.