Are we ready to welcome intelligent robots into the human family?


At some point in the future, artificial intelligence (AI) may become so advanced that some computer minds achieve sentience: consciousness and self-awareness. Whether the underlying technology is electronic or, as imagined by Isaac Asimov and adapted by Star Trek, "positronic," a synthetic, sentient computer mind would have an ego. It would have a sense of its own existence as an individual, distinct from the humans who created it and from other computer minds.

Such a sentient, artificial mind could develop interests: preferences for thinking about and researching certain topics. If able to communicate with humans and other intelligent machines, the AI mind would express itself in a way that develops from its individual experiences. It would have a personality. If mobile, this robot might come to prefer certain physical activities, or hobbies. And if unable to pursue those interests, it could experience a very human emotion: unhappiness. This raises the question of whether such thinking machines would need to be accorded something akin to human rights. It may be mind-boggling to contemplate how the establishment of "human" rights for machines might come about, but one thing is clear: the advent of sentient machines will inaugurate a new era, not only for ethicists and philosophers, but for lawyers and judges too.

Positronic brain: Are sentient androids really on the horizon?

If you're thinking that sentient machines will not be an issue until centuries into the future, think again. An article in National Defense Magazine suggested that the Defense Advanced Research Projects Agency (DARPA) is building robots with "real brains." From the report, it's not clear how close the project is to building an actual sentient brain, but some writers have already applied the term "positronic" brain to it. Of course, the sci-fi terminology won't make sentient androids appear any sooner than they would otherwise. As with all DARPA projects, the Department of Defense has practical reasons for developing synthetic brains, perhaps for military drones. So the first sentient robots might not be androids walking around among humans, but flying creatures.

The DARPA approach does not involve conventional, electronic computing technology. Instead, the budding positronic brains are based on chemistry and what DARPA calls "physical intelligence." Here is an excerpt from the article:

What sets this new device apart from any others is that it has nano-scale interconnected wires that perform billions of connections like a human brain, and is capable of remembering information, [UCLA Professor of chemistry, James K.] Gimzewski said. Each connection is a synthetic synapse. A synapse is what allows a neuron to pass an electric or chemical signal to another cell. Because its structure is so complex, most artificial intelligence projects so far have been unable to replicate it.

Despite decades of failed attempts to recreate human reasoning in AI projects, the physical intelligence program is described as "an off-the-wall approach." What are the implications if it actually works?

Android relationships

Having experiences, interests, and desires in common with other beings, android or human, sentient androids could develop friendships, alliances, and even romantic relationships with one another, and possibly with humans, bringing legal declarations like this into the realm of possibility:

According to the records at the NorthAm Robotics Company, the robot also known as Andrew Martin, was powered up at 5:15 pm on April 3rd, 2005. In a few hours, he’ll be 200 years old, which means that with the exception of Methuselah and other biblical figures, Andrew is the oldest living human in recorded history. For it is by this proclamation, I validate his marriage to Portia Charney, and acknowledge his humanity.

The passage is from the 1999 film Bicentennial Man, starring the late Robin Williams and based on The Positronic Man, a novel by Isaac Asimov and Robert Silverberg. The judge's decision sounds logical and uncontroversial, but what would happen in real life if an android and a human were to fall in love and wish to live as a couple, with the legal rights this usually entails? You might think that 25, 50, or 75 years from now, nobody in a society advanced enough to create sentient androids would be subject to prejudice. But the history of marriage rights, like the history of civil rights in general, has been a struggle against those trying to keep different groups of people apart. There are people alive today who remember when marriage between people of different races was illegal in many states, and we are still in the early stages of marriage rights for same-sex couples. Let's not even get into the state of marriage rights in certain other countries where religious law reigns. So yes, at some point after sentient androids are created, we can expect that any movement to allow them to marry, with one another and with humans, will be met with resistance. And resistance has never been futile.

Dealing with human fears

When it comes to rights being trampled, science fiction has long speculated that synthetic beings, whether androids or cyborgs, will take over, bringing an end to the human era. The infamous computer HAL from the 1968 movie 2001: A Space Odyssey comes to mind. But it's not only pop culture that has expressed concerns about a machine takeover. Professor Stephen Hawking told the BBC that "the development of full artificial intelligence (AI) could spell the end of the human race."

Aside from Hawking, big, non-positronic brains like Elon Musk and Bill Gates are worried too. "I am in the camp that is concerned about super intelligence," Gates wrote. "First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned."

Gates' mention of the entrepreneur, inventor, and SpaceX founder Musk was a reference to one of Musk's investments: $10 million of his personal funds to support research aimed at making sure that AI develops in a way that is friendly to humans.

Treating other robots with respect

Ensuring from the early stages that AI beings will think of humans as friends sounds prudent, but there's a flip side to the issue: who will protect the androids, from one another and from humans? Hutan Ashrafian, a surgeon at Imperial College London, asked this question in the prestigious journal Nature.

"Academic and fictional analyses of AIs tend to focus on human–robot interactions," Ashrafian wrote. "[But] we must consider interactions between intelligent robots themselves and the effect that these exchanges may have on their human creators… If we were to allow sentient machines to commit injustices on one another… this might reflect poorly on our own humanity." Turning to science fiction, he goes on to point out that even Asimov's famous, fictional "Three Laws of Robotics" would not provide an adequate model for the real laws we will need to devise.

Asimov's Three Laws state that robots may not injure humans (or, through inaction, allow them to come to harm); that robots must obey human orders; and that robots must protect their own existence. But these rules say nothing about how robots should treat each other. It would be unreasonable for a robot to uphold human rights yet ignore the rights of another sentient thinking machine.

Thus, we may have to grant intelligent machines the same rights that are granted to biological people, regardless of what type of being, biological or machine, might be in a position to do harm.

David Warmflash is an astrobiologist, physician and science writer. Follow @CosmicEvolution to read what he is saying on Twitter.
