Wednesday, September 30, 2009

Robotics

The first session I attended was "Redeeming Robots" with Derek Schuurman. He explained a great deal about the technology of robotics and then turned to the controversy surrounding artificial intelligence. Now, for robotics - and technology in general - one of the hardest body parts to recreate is the eye. Biological vision systems are exceptionally complex, and so far no one has managed to mimic them in technology. You can tell a robot that the letter ‘r’ looks a certain way, but as soon as it is capitalized, italicized, or handwritten, the robot no longer identifies it as the letter ‘r’. This shows that we are nowhere near creating the robots shown in the movie “I, Robot”, Data from “Star Trek”, or C-3PO from “Star Wars”.
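
To see why that kind of rule is so brittle, here is a minimal sketch of my own (not something from the session): the tiny 3x3 "bitmap" of the letter ‘r’ is entirely hypothetical, but it shows how a rule that says "the letter ‘r’ looks like this" fails the moment the glyph varies at all.

```python
# A toy illustration of why rule-based character recognition is brittle.
# The 3x3 template below is a made-up stand-in for the rule "the letter
# 'r' looks a certain way"; any variation breaks the exact match.

TEMPLATE_R = (
    "X..",
    "XX.",
    "X..",
)

def looks_like_r(glyph):
    """Exact, pixel-for-pixel comparison against the single template."""
    return tuple(glyph) == TEMPLATE_R

print(looks_like_r(("X..", "XX.", "X..")))  # True: the one form it was taught
print(looks_like_r(("XX.", "X..", "X..")))  # False: a slightly slanted variant
print(looks_like_r(("XX.", "X.X", "XX.")))  # False: an uppercase-style form
```

A biological eye shrugs off all three variations; the exact-match rule recognizes only the single form it was given, which is essentially the problem facing machine vision.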

However, artificial intelligence is becoming more prominent, as are the questions surrounding it. We use robots for humanitarian de-mining, dangerous search and rescue missions, hazardous waste clean-up, space and undersea exploration, and even surgery, and we applaud the human ingenuity that created these machines. But once artificial intelligence comes into play and the robot is given the power to choose, disputes emerge. There seems to be an innate fear that artificial intelligence will make life worse. This is not entirely unfounded: computers can get viruses and be hacked, so why not robots, or other potential forms of artificial intelligence? Another question, raised in many science fiction movies, asks: once robots are able to think for themselves, what will keep them subordinate to the human race?

Isaac Asimov came up with three laws of robotics to ensure the superiority of humans: the first is that a robot may not injure a human being or, through inaction, allow a human to come to harm; the second states that a robot must obey human orders unless they conflict with the first law; and the third says that a robot must protect its own existence unless doing so conflicts with the first or second law. Now, while these laws are helpful, who defines human injury? Does the first law cover only physical harm, or does a human’s mental well-being also come into play? How can one define a human’s psychological health? If it cannot be defined, then robots could restrict humans from doing anything remotely dangerous in the name of the first law, and then refuse human orders to lift those restrictions by claiming that obeying would conflict with it. Is there a way to create artificial intelligence without these problems?
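
The precedence ordering baked into the laws is easy to express as code, and doing so makes the loophole concrete. This is a minimal sketch of my own, not anything presented at the session; the predicates harms_human, ordered_by_human, and endangers_self are hypothetical placeholders, which is exactly the point: the laws never say how harms_human should be defined.

```python
# A minimal sketch of Asimov's three laws as a precedence check.
# The three predicates are hypothetical placeholders; the laws
# themselves never define what counts as "harm".

def robot_may_act(action, harms_human, ordered_by_human, endangers_self):
    if harms_human(action):            # First Law: overrides everything
        return False
    if ordered_by_human(action):       # Second Law: obey, Law 1 permitting
        return True
    return not endangers_self(action)  # Third Law: self-preservation last

# If "harm" is defined so broadly that any risk counts, the robot refuses
# even a direct order -- the loophole described above.
harms = lambda action: "risky" in action
ordered = lambda action: True           # a human explicitly ordered this
safe_for_robot = lambda action: False   # no danger to the robot itself

print(robot_may_act("let the human go climbing (risky)", harms, ordered, safe_for_robot))
# False: the First Law trumps the human's order.
```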

I’m sure the vast majority of people would agree that robotics is useful and necessary - particularly in the form of service robots - but how does artificial intelligence fit into robotics? There is a fine line between helpful and hurtful, and we need to stay aware of that line in every technology we create.

Monday, September 28, 2009

My Purpose

I am writing this blog in response to the Identity & Technology Conference held at my school. My assignment is to comment on my experiences during the conference. I intend to write three more posts: two in response to the breakout sessions I attended, and a final post responding to any comments and summarizing the difference this blog made to my understanding and appreciation of what I learned at the conference.