“Alexa, do the dishes for me.”

An illustration of robots in the Technical & Industrial building. Courtesy of the Washtenaw Voice

By Claire Convis
Deputy Editor

Artificial intelligence has been a complex and controversial topic for years now. Weak AI is a well-designed computer program meant to mimic the human mind; ask weak AI a question it has been programmed to answer and it will respond. In theory, a strong form of AI would have a mental state of its own, rather than merely following a program like its cousin, weak AI. Strong AI could reason, remember, feel emotions, and make its own decisions. We don't know whether strong AI is possible outside of science fiction films, because we haven't created it yet.

But it's not all fun and games and robots; we need to think about the consequences of creating strong AI. Would these forms of AI always be peaceful and helpful? Should we even expect that of them?

There are moral implications that come along with tampering with AI: if we create strong AI, we can't just unplug it, throw it away when it breaks, and buy a new program. Strong AI would be a living being that thinks and is aware. Siri and Alexa are basically modern-day servants; we ask them "What's the weather today?" and tell them "Pause the music," "Call mom," or "Set a timer for 12 minutes." This is all fine with weak AI, but what happens if we create strong AI? Are we still going to treat these living, thinking beings as servants? Are we going to implant AI into a robot shell and have a bunch of robots waiting on us hand and foot?

Scientists and computer programmers have moral and ethical responsibilities, just as those who created the atomic bomb are responsible, in part, for its consequences. If you create strong AI, you are morally accountable for that technology.

According to a 2019 TED Talk by robot ethicist Kate Darling called "Why we have an emotional connection to robots," humans can develop an emotional connection to even the simplest of robots (ever felt sorry for a Roomba that got stuck under the couch?). Darling tells the story of a military officer who was heading a program that used a robot with multiple legs, resembling a spider or a crab. The robot's purpose was to walk over a minefield, setting off the bombs and losing its legs one by one as it hobbled along. The officer ended up calling off the mission because it was too "inhumane" to watch the robot limp around and have its body parts blown off while the military looked on.

So why do we humans find ourselves attached to robots with AI? Is it because they are designed to mimic human actions and emotions? Maybe someday we will be able to transfer an actual human mind into a robot or a computer, giving that machine the consciousness of a person. Even though strong AI would not be human, we might have to change our definition of a person. Would we become even more attached to our robots if we were to create strong AI someday? Maybe that sounds like a paperback sci-fi novel being sold at Barnes & Noble for $7.99, but these are real questions to think about if we humans ever develop strong AI. I guess we will have to wait and see.
