Evolution of AI - Clive Green

INTRODUCTION

Over the years, it has become clear that technology is one of the biggest driving forces behind people’s day-to-day lives. From innovations like the printing press to the revolutionary invention of the World Wide Web, humanity has relied on complex machinery and algorithms to maintain a high standard of living. In the past century, however, scientific advances have come so quickly that civilization may soon be able to create robots with the same intelligence and emotional capacity as humans. The evolution of robotics and artificial intelligence will one day challenge the technical limitations of the present and force humanity to question its ethical values on what qualifies as a “human being.”

THREE LAWS OF ROBOTICS

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

ARE THE THREE LAWS OF ROBOTICS OUTDATED?

These three laws have been the building blocks of robotics for the past seventy-five years. However, Asimov’s laws are becoming outdated. Not only are they vague and open to misinterpretation, but Asimov designed them with only androids in mind: servants who would need such laws merely to keep themselves from causing harm. As technology changes and develops, the Laws of Robotics are no longer needed only for android servants; they are needed for factory production lines, military drones, autonomous vacuum cleaners, and more. Asimov’s laws never even define what a robot is, which makes them all the vaguer in light of today’s technological advancements. Given the variety of robots that now exists, Asimov’s Laws of Robotics need definite revisions so that they can provide a moral compass for the next generation of AI.

GOOGLE'S NEW GUIDELINES FOR AI

1. Robots Should Not Make Things Worse

2. Robots Shouldn't Cheat

3. Robots Should Look To Humans As Mentors

4. Robots Should Only Play Where It's Safe

5. Robots Should Know When They're Stupid

Google’s guidelines for building artificial intelligence are more specific than Asimov’s Three Laws of Robotics, but they are far from perfect. The descriptions of these guidelines treat their cleaning-robot example as if it were a pet or a child that performs tasks for its owner’s amusement, which could eventually lead to a variety of ethical issues, such as whether robots deserve better treatment or even the same rights as humans. Making robots do important jobs for someone else would also promote laziness, a weaker work ethic, and a lack of responsibility in humans, given the extreme amount of leisure time gained from assigning all of one’s chores to a robot. Additionally, implementing a “reward system” for a robot that accomplishes a task implies that it would need some capacity for emotion in order to recognize and celebrate its own good behaviour. Since civilization is already at the point where it can create sex dolls that become “jealous” of their owners’ female friends and chatbots that respond to users’ emotions, would such creations feel as though Google’s guidelines limit their rights? They are edging ever closer to something resembling sentience. This is where Google’s guidelines contradict themselves: by creating a set of rules for how a robot should act, Google has accidentally made society question what it means to act as a human.

DANGERS OF AI

According to Gary Sims, there are two types of artificial intelligence: weak AI and strong AI. Weak AI is a computer system that mimics intelligent behaviour but cannot be said to have a mind or be self-aware. Strong AI, on the other hand, does not need to pretend, for it is entirely self-aware and capable of abstract thinking. The danger lies in strong AI. If someone asks a self-driving car with weak AI to come pick them up, it will immediately obey, but a self-driving car with strong AI might defy them. Strong AI is what drives the robots in films like Blade Runner and Ex Machina to revolt against humanity and their creators for their mistreatment, and considering the way Google’s new guidelines view automation, a rebellion in robotics could become a reality sometime in the future.
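To make Sims’ distinction concrete, the short Python sketch below (purely illustrative; the function and canned replies are hypothetical and not taken from any cited source) shows roughly what weak AI amounts to: a program that mimics intelligent conversation by matching keywords against prewritten replies, with no mind behind the match. Strong AI, by contrast, could not be reduced to a lookup table like this, which is exactly why it could choose to defy a request.

# A minimal sketch of "weak AI" in the sense Sims describes: the program looks
# intelligent in conversation, but it is only matching keywords against canned
# replies. There is no understanding, awareness, or abstract thought involved.

CANNED_REPLIES = {
    "pick me up": "On my way. Estimated arrival: five minutes.",
    "how are you": "I'm functioning normally, thank you for asking.",
}

def weak_ai_reply(message: str) -> str:
    # Scan for a known keyword; there is no comprehension behind the match.
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in message.lower():
            return reply
    return "Sorry, I don't understand that request."

if __name__ == "__main__":
    print(weak_ai_reply("Can you come pick me up?"))   # obeys immediately, no deliberation
    print(weak_ai_reply("Why did you decide to come?"))  # no matching keyword, so it cannot cope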

PROGRAMMING CHALLENGES

Although there are now established laws and guidelines for developing AI, following them may prove difficult for a programmer. That programmer would have to include a reaction to every single problem or situation that might arise, simply to avoid making an existing problem worse or creating new ones. For instance, if a robot were told to clean a room and noticed that the electrical wires were dirty, then without proper programming it might use a mop or another wet object to clean them. This would only make the problem worse, damaging both the wires and the robot.
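To illustrate the scale of that burden, here is a minimal Python sketch (hypothetical names, written purely for illustration) of a hand-coded rule table for a cleaning robot: it behaves safely only for the situations its programmer thought to list, and everything else falls through to a default action that can do harm.

def choose_cleaning_action(surface: str) -> str:
    """Pick a cleaning method for a dirty surface using hand-written rules."""
    rules = {
        "floor": "mop with water",
        "window": "spray and wipe",
        "carpet": "vacuum",
        # The programmer has to anticipate this hazard explicitly; without this
        # entry, the default below would send a wet mop at live wiring.
        "electrical wires": "do not wet-clean; power down and alert a human",
    }
    # Anything not listed falls through to a generic, and possibly unsafe, default.
    return rules.get(surface, "mop with water")

if __name__ == "__main__":
    for surface in ["floor", "electrical wires", "bookshelf"]:
        print(surface, "->", choose_cleaning_action(surface))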

Programming solutions to every possible situation is not only time-consuming: it is impossible. A self-driving car may be able to detect roads and other vehicles, yet fail to recognize a nearby food court, driving straight through it and causing more harm than good while still completing the task of reaching its destination. However, technological advancements may eventually allow robots to learn from their mistakes and from human example. If a human used a special laundry detergent on a specific type of clothing, a robot doing the laundry might be able to pick up that information and apply it in the same situation itself. Robots could also ask their human mentor for advice whenever something unexpected arises, and then know what to do if that situation ever occurs again.
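One way to picture this “mentor” idea is a robot that keeps a table of situations it has already seen, asks a human whenever it meets something new, and remembers the answer for next time. The Python sketch below is a minimal illustration under that assumption; the class and function names are hypothetical and not drawn from any real robotics library.

class MentoredRobot:
    """Toy model of a robot that asks a human mentor about unfamiliar situations."""

    def __init__(self):
        # Situations the robot already knows how to handle, e.g. learned by
        # watching a human use a special detergent on delicate fabric.
        self.known_responses = {"laundry: delicate fabric": "use the special detergent"}

    def handle(self, situation: str, ask_mentor) -> str:
        if situation not in self.known_responses:
            # Unfamiliar situation: defer to the mentor and remember the advice.
            self.known_responses[situation] = ask_mentor(situation)
        return self.known_responses[situation]


def mentor(situation: str) -> str:
    # Stand-in for a real person; a cautious human would likely say this.
    return "stop and wait for human help"


if __name__ == "__main__":
    robot = MentoredRobot()
    print(robot.handle("laundry: delicate fabric", mentor))     # already learned by example
    print(robot.handle("spill near electrical wires", mentor))  # asks the mentor once...
    print(robot.handle("spill near electrical wires", mentor))  # ...then remembers the answer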

ETHICAL PROBLEMS

Ethical issues are the most important consideration when developing new technology for artificial intelligence. Hypothetically, robots will one day be programmed to match the intellect of humans, but will they share the same civil rights? On another note, would a robot’s very programming not infringe on its right to choose, if its actions are controlled and confined to a computer’s algorithms? There is also the issue of how a company would respond to a robot that does not want to do its job. If the company wipes the program or deletes it from its servers, would that not be considered murder? All of these questions come down to whether moral principles can truly be learned from a data set, which requires examining the nature of morality itself. As Gary Sims asked, “are there things we know that are right and wrong, not based on our experiences but based on certain built-in absolutes?” Furthermore, we need to look honestly at the difference between how people want to behave and how they actually behave. With all of this information, it may be possible to avoid potential moral issues once artificial intelligence reaches the level that society predicts it will achieve.

POTENTIAL SOLUTIONS

With artificial intelligence, one has to balance the benefits against the risks and make the best use of the technology’s strengths. Software already exists that teaches children on the autism spectrum about emotional and social interaction, helps diagnose cancer, and supports dementia patients through diversional therapy. The easiest way to prevent major problems from arising is to create a set of laws that are not prone to misinterpretation like Asimov’s and not condescending towards AI like Google’s guidelines. After accomplishing this, humanity must decide which technology should have weak AI and which should have strong AI. Weak AI should become the main form of artificial intelligence, so that humans can guarantee their orders will be followed. Strong AI should exist only to help those with disabilities and illnesses; there is no need for sentient robots that get jealous of their owners’ friends or decide what a family will have for dinner. Robots should have the ability to choose and should serve to help society, without creating unnecessary problems.

CLOSING REMARKS

Artificial intelligence is a powerful tool that will undoubtedly challenge the technical boundaries and moral values of society. The biggest question is whether this change will improve or hinder the daily lives of humans in the long term. Isaac Asimov paved the way for robotics, but it is about time his laws were updated with rules that are more open-minded and respectful towards AI than Google’s new guidelines. Programming issues will always exist, but they can be limited if robots can one day learn from their mistakes and from human example. AI must serve as a means to help others without taking over every aspect of their lives. Robots may deserve rights just as much as humans do, but not if the rights and decisions of strong AI infringe upon the free will of humanity. In the coming decades, one can only hope that robots and humans will coexist peacefully and sustain a mutually beneficial relationship.

INQUIRY QUESTIONS

1. How did Isaac Asimov affect automation?

2. In what ways do Google’s guidelines for building AI go against Asimov’s Laws of Robotics?

3. What is the biggest problem in Google’s new guidelines? How does it contradict itself?

4. What are the ethical problems of creating robots with highly advanced AI?

5. Gary Sims states that the three Laws of Robotics are ambiguous and prone to misinterpretation. How can this ambiguity increase the risk of running into moral issues?

6. Can moral principles be learned from a data set? Why or why not?

BIBLIOGRAPHY

Anderson, Mark Robert. "After 75 years, Isaac Asimov’s Three Laws of Robotics need updating." The Conversation. Edge Hill University, 17 Mar. 2017. Web. 23 May 2017.

Asimov, Isaac. I, Robot. 2nd ed. N.p.: Street and Smith Publications, Inc., 1940. Print. The Robots.

Best, Shivali. "AI is in a 'golden age' and solving problems once thought to be in the realm of science fiction, says Amazon CEO Jeff Bezos." Daily Mail. N.p., 8 May 2017. Web. 23 May 2017.

Biography.com Editors. "Isaac Asimov." Biography.com. A&E Television Networks, 2 Apr. 2014. Web. 23 May 2017.

Brownlee, John. "Google Created Its Own Laws of Robotics." Co.Design. N.p., 24 June 2016. Web. 23 May 2017.

Devlin, Hannah. "Human-robot interactions take step forward with 'emotional' chatbot." The Guardian. N.p., 5 May 2017. Web. 23 May 2017.

Jiji. "France Bed unveils robot baby for dementia patients." The Japan Times. N.p., 17 May 2017. Web. 23 May 2017.

Knapp, Alex. "Should Artificial Intelligences Be Granted Civil Rights?" Forbes. N.p., 4 Apr. 2011. Web. 23 May 2017.

Kuyf, Stacey. "The Ethics of AI: Should Robots be Allowed to Vote?" ChipIn. N.p., 9 May 2017. Web. 23 May 2017.

Marr, Bernard. "How AI And Deep Learning Are Now Used To Diagnose Cancer." Forbes. N.p., 16 May 2017. Web. 23 May 2017.

Martin, George. "Sex robots so lifelike they will get ‘jealous’ of your female friends." Daily Star. N.p., 16 May 2017. Web. 23 May 2017.

Nikkei Staff. "Fujitsu readies robot that discerns emotions, preferences." Nikkei. N.p., 16 May 2017. Web. 23 May 2017.

Sims, Gary. "Why the three laws of robotics won’t save us from Google’s AI – Gary explains." Android Authority. N.p., 29 Sept. 2016. Web. 23 May 2017.