The Three Laws of Robotics (Isaac Asimov):
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

According to Gary Sims, there are two different types of artificial intelligence: weak AI and strong AI. Weak AI is a computer system that mimics intelligent behavior but cannot be said to have a mind or be self-aware. Strong AI, on the other hand, doesn't need to pretend: it is entirely self-aware and capable of abstract thinking. The dangers lie in strong AI. If someone asks a self-driving car with weak AI to come pick them up, it will immediately obey, but a self-driving car with strong AI might defy them. Strong AI is what drives the robots in films like Blade Runner and Ex Machina to revolt against humanity and their creators for their mistreatment, and considering the way Google's new guidelines view automation, a rebellion in robotics could become a reality sometime in the future.

Although there are now established laws and guidelines for developing AI, following them may prove difficult for a programmer, who would have to include a reaction to every single problem or situation that might arise simply to keep a small problem from becoming a larger one. For instance, if a robot were told to clean a room and noticed that the electrical wires were dirty, without proper programming it might use a mop or another wet object to clean them. This would only make the problem worse, causing damage to both the wires and the robot.
Programming solutions to every possible situation isn't only time-consuming; it's impossible. Although it may be able to detect roads and other vehicles, a self-driving car might not recognize a nearby food court, driving through it and causing more harm than good while still completing the task of reaching its destination. However, technological advancements may one day allow robots to learn from their mistakes and from human example. If a human used a special laundry detergent on a specific type of clothing, a robot doing the laundry might pick up that information and apply it in the same situation itself. Robots could also ask a human mentor for advice when something unexpected arises, and then know what to do if that situation ever occurs again.
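To make the "ask a human mentor, then remember the answer" idea above a little more concrete, here is a minimal Python sketch. The situation names, the known_actions table, and the ask_mentor callback are hypothetical illustrations of the essay's point, not a description of any real robot's software.

```python
# A minimal sketch of a robot that falls back to a human mentor when it
# encounters a situation its programming does not cover, and then caches
# the advice so the same question never has to be asked twice.

class MentoredRobot:
    def __init__(self, ask_mentor):
        # Hand-programmed reactions: the small set of cases the
        # programmer anticipated in advance.
        self.known_actions = {
            "dirty floor": "mop the floor",
            "dirty electrical wires": "power off, then wipe with a dry cloth",
        }
        self.ask_mentor = ask_mentor  # callback used for unknown situations

    def handle(self, situation):
        if situation not in self.known_actions:
            # Unexpected situation: defer to the human mentor once,
            # then remember the advice for next time.
            self.known_actions[situation] = self.ask_mentor(situation)
        return self.known_actions[situation]


if __name__ == "__main__":
    robot = MentoredRobot(
        ask_mentor=lambda s: input(f"What should I do about '{s}'? ")
    )
    print(robot.handle("dirty electrical wires"))   # uses a built-in rule
    print(robot.handle("special laundry detergent"))  # asks the mentor, then remembers
```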

Ethical issues are the most important consideration when developing new technology for artificial intelligence. Hypothetically, robots will one day be programmed to match the intellect of humans, but will they share the same civil rights? On another note, wouldn't a robot's very programming infringe on its right to choose if its actions were controlled and confined to a computer's algorithms? There is also the issue of how a company would respond to a robot that doesn't want to do its job. If the company wipes the program or deletes it from its servers, would that not be considered murder? All of these questions can be answered by figuring out whether moral principles can truly be learned from a data set, which requires one to examine the nature of morality. As Gary Sims asked, "are there things we know that are right and wrong, not based on our experiences but based on certain built-in absolutes?" Furthermore, we need to look truthfully at the difference between how people want to behave and how they actually behave. With all of this information, it may be possible to bypass any potential moral issues once artificial intelligence reaches the level that society predicts it will achieve.

With artificial intelligence, one has to carefully balance benefits and risks and make the best use of the technology's assets. There is already software that teaches children on the autism spectrum about emotional and social interaction, diagnoses cancer, and helps dementia patients through diversional therapy. The easiest way to prevent major problems from arising is to create a set of laws that aren't prone to misinterpretation like Asimov's and aren't condescending toward AI like Google's guidelines. After accomplishing this, humanity must decide which technology should have weak AI and which should have strong AI. Weak AI should become the major source of artificial intelligence so that humans can guarantee that their orders will be followed. Strong AI should exist only to help those with disabilities and illnesses; there is no need for sentient robots that grow jealous of other females or that decide what a family will have for dinner. Robots should have the ability to choose and should serve to help society, not to create unnecessary issues.

Artificial intelligence is a powerful tool that will undoubtedly challenge the technical boundaries and moral values of society. The biggest question is whether this change will improve or hinder the daily lives of humans in the long term. Isaac Asimov paved the way for robotics, but it is about time his laws were updated with rules that are more open-minded and respectful toward AI than Google's new guidelines. Programming issues will always exist, but they can be limited if robots can one day learn from their mistakes and from human example. AI must serve as a means to help others without taking over every aspect of their lives. Robots deserve rights just as much as humans do, but not if the rights and decisions of strong AI will infringe upon the free will of humanity. In the next few decades, one can only hope that robots and humans will be able to coexist peacefully and sustain a mutually beneficial relationship.

1. How did Isaac Asimov affect automation?
2. In what ways do Google’s guidelines for building AI go against Asimov’s Laws of Robotics?
3. What is the biggest problem in Google’s new guidelines? How does it contradict itself?
4. What are the ethical problems of creating robots with highly advanced AI?
5. Gary Sims states that the three Laws of Robotics are ambiguous and prone to misinterpretation. How can these laws result in a greater risk of running into moral issues?
6. Can moral principles be learned from a data set? Why or why not?