GOOGLE'S NEW GUIDELINES FOR AI
1. Robots Should Not Make Things Worse
2. Robots Shouldn't Cheat
3. Robots Should Look To Humans As Mentors
4. Robots Should Only Play Where It's Safe
5. Robots Should Know When They're Stupid
Google’s guidelines for building artificial intelligence are more specific than Asimov’s Three Laws of Robotics, but they are far from perfect. The descriptions of these guidelines treat their cleaning-robot example as if it were a pet or a child performing tasks for its owner’s amusement, which raises a variety of ethical questions, such as whether robots deserve better treatment, or even the same rights as humans. Having robots perform important jobs on a person’s behalf could also promote laziness, a weaker work ethic and a diminished sense of responsibility in humans, given the enormous amount of leisure time gained by assigning every chore to a machine. Additionally, implementing a “reward system” for a robot that accomplishes a task implies some capacity for emotion, since the robot would need to recognize and, in some sense, celebrate its own good behaviour. Since civilization is already at the point where it can create sex dolls programmed to act jealous of other women and chatbots that appear to understand users’ emotions, would such creations feel that Google’s guidelines limit their rights? Some of these systems now seem almost as sentient as human beings. This is where Google’s guidelines contradict themselves: by creating a set of rules for how a robot should act, the company has inadvertently pushed society to question what it means to act like a human.