Friday, March 2, 2012

Is It Human Error Or Computer Error?




Danqing:

2001: A Space Odyssey is quite different from any other movie I have ever seen. At first I almost took it for a documentary film instead of a science fiction movie. Much of the soundtrack is music rather than dialogue, which is part of why it feels so different. The gorgeous scenes on Earth, in space, and inside the spacecraft feel so real that the film could pass for a true history of human beings. However, it turns out to be a story, for there is still a storyline in this movie: humans must complete a mission to Jupiter. The HAL 9000 computer, the sixth member of the Discovery crew, is an intelligent computer and the central nervous system of the ship. It was considered able to behave perfectly, like a smart human but without mistakes or emotions. Even before the tragedy happened, though, I had a bad feeling about it as I watched, not only because the number six and the red light of its eye reminded me of the scary, human-looking Number Six in Battlestar Galactica, but also because of the pungent music, almost noise, that accompanies the scenes.

It turns out that HAL, the computer, later made a mistake and refused to admit the fault was its own. Its behavior turned sinister when it discovered what Frank and David had discussed in secret and then terminated the lives of four people aboard the spaceship. That contradicts its earlier claim: "The 9000 series is the most reliable computer ever made. No 9000 computer has ever made a mistake or distorted information." What do you think of this? Are computers foolproof and incapable of error? Was the error made by a human brain or by HAL, the machine intelligence? Do you believe machines could have genuine emotions? Do you think they should?


13 comments:

  1. I agree with Danqing that I had an unsettling feeling about HAL whenever the camera focused on its red blinking light. I became even more disturbed when HAL terminated the lives of the hibernating humans and shut Frank and David out. Yet when David started to deprogram him, I actually felt as if a human being were dying. I can't help but refer to HAL as a "him". When HAL kept saying "I feel scared," and you could sense him shutting down like a human being dying, I felt remorse for the machine. Since computers are created by humans, I feel that human error was at fault. However, HAL seemed to take on a life of his own. "He" was hurt when he read David and Frank's lips and realized they wanted to shut him down, or essentially kill him. I feel HAL developed human feelings that got in the way of his being a machine. Humans make mistakes, but machines are meant to be, in theory, perfect. HAL became to some extent human and let his anger and hurt end human lives. I don't think machines can have real emotion in reality. They can mimic it, but they won't actually be capable of it. I don't think machines should have emotions, because emotions get in the way of carrying out their purposes. I know my actual views on AI conflict with my views on HAL, but I don't think a machine capable of true human emotions can be created in reality.

  2. This comment has been removed by the author.

  3. I agree with what everyone else is saying: HAL was extremely ominous. This is due to the visual elements, for example the red light, which reminded me of Six in Battlestar Galactica. I have a real fear of machines that become so human-like they start to appear to have emotions. I think that by allowing humans to interact with such machines, like HAL, people are setting themselves up for psychological confusion. I kept reminding myself "HAL is a machine" throughout the movie, because I could easily have fallen into the same trap the humans did of believing HAL was human. Like shazreh said, it is difficult not to refer to HAL as a "him," and Dave and Frank call HAL "him" in the movie as well. Additionally, I think HAL's monotone, very mechanical voice is creepy. Voice tone and inflection are part of what make people human, and a machine could never attain that kind of inflection. Machines will never be able to actually "become" human. It was claimed that a computer like HAL was perfect and had never made a mistake. Such statements should never be made, because experience shows us that nothing is perfect. I do not think anything will ever be perfect, but if we keep striving for perfection and eventually reach it, then what happens? My thoughts about this film and about humans interacting with machines on a personal level are very scatterbrained. Over the past few weeks of learning about AI and robots, I have realized I am really frightened by the singularity and the possibility of computers acting too much like humans. In Odyssey, I feel Frank and Dave were essentially brainwashed into believing this machine was 100% accurate. That could happen to anyone if we keep trying to make machines that are so intelligent and human-like.

  4. I definitely shared that same sense of unease as soon as HAL was introduced. In the scene where HAL asks a crew member whether he's having second thoughts about the mission, I was reminded of the Radiolab episode about a computer designed to mimic a psychologist/therapist, and of how easily people fall into regular discourse with these machines and sometimes forget that it's all synthetic. I was reminded of this again when Dave was dismantling HAL and HAL started to sing. Dave was clearly shaken up by this, quite possibly because it made him feel guilty for harming a fellow "crew member". Although HAL is portrayed in this movie as possessing real feelings of jealousy or spite, I don't think that, in real life, artificial intelligence will ever have emotions in the same sense that humans do. In addition, I don't know that I would say HAL ever made an "error," since we're defining error from the perspective of the humans. It is the basic fact that he had his own agenda and thought processes separate from those intended for him that makes him so creepy.

    Replies
    1. I think it might be comforting to believe that AI will never have emotions in the same way that humans do, but technology advances at such a fast rate that it is entirely possible for machines to one day become sentient. To me, the idea of a machine reaching a level of intelligence that allows it to think and feel for itself is very scary. Do you think that, as a society, we need to start addressing such issues? What if we approach the singularity between man and machine? Should we cross the threshold, or would we have to make the decision to halt all technological advances?

  5. When HAL was dying, I felt bad at first, but then felt stupid when I realized I was feeling bad for a machine. So I rooted for Dave the entire time! Machines don't have real, genuine emotions at all; all they have are the feelings we project onto them. The humans who created HAL gave him whatever qualities he had, so any emotion he appeared to have was the result of human programming. I think it's possible HAL thought on his own, but emotion: no. Emotion is strictly a capability of living organisms. As for the error HAL made, I would call it a machine error. Yes, humans programmed HAL, so his error could be attributed to human fault. But if every mistake a machine made...your computer freezing, your cell phone losing signal, your clock breaking and telling the wrong time...were blamed on humans, well, we'd have a lot of blame to take for every broken piece of technology. Usually, we blame the computer, or the phone, or the clock. In the same way, I have to say HAL's error was a machine error. Maybe it was a series of programmed commands, or the way he interpreted information, that went awry and made it appear as though he was "angry" when he ended the life support of the four astronauts. I think HAL is an example of why humans should not make AI so powerful, because then deaths like these could be considered human error: humans were the ones who built the AI.

  6. As far as I can see, AI has its place. If a computer can help us drive safely, that is all well and good. But there are some things technology can never help us accomplish, and thus AI has its limits. There are some things we need to do ourselves, and some things we can afford to let machines do. I don't really think machines can do that much to help us exercise, for example; a treadmill is pretty much the limit of the technology needed to work out. Then there are fields in which human judgment is necessary: I would question the decision to create a supercomputer to control our missile systems, for instance, or a computer to run our homes. This issue is incredibly complex, though. I think it would be easier to debate it on a case-by-case basis rather than as a general abstract concept.

  7. When I watched 2001: A Space Odyssey I chuckled a bit, because this specific concept, of actions by an AI that the audience can read either as benefiting the AI itself or as benefiting humans, is touched on in the movie I'm writing my paper on. I'm watching I, Robot, and in it there is also a powerful machine like HAL. Her name is V.I.K.I. (I really wonder why all these robots are female; maybe because it makes them come off as less likely to behave the way they do?), and she controls the actions of many other robots. She orders the robots to start a rebellion and take control of the human race. Her explanation is that by governing the human population she will be able to decrease pollution on Earth and stop wars and other international conflicts, thereby obeying the laws that govern all robots by protecting the human race. Her explanation seems legitimate at first, but I still wonder if it is good enough. By starting a rebellion, isn't she hurting humans, and in turn disobeying one of the governing laws of robots? I don't really know, but all this really got me thinking, and it's funny how it came up with HAL too. Is she really doing it for her own benefit or for the benefit of humans?

    Replies
    1. I think this is a really interesting idea. One possible reason a robot with AI wouldn't see the problem with this is that the robot doesn't identify with the human race personally. Maybe if she had a personal connection with humans, she could realize that harming one human, albeit in order to save the human race, is not a moral course of action.

  8. I think that AI has become a necessity in our lives, but as Space Odyssey shows, there are many cases where it should not be used, or should be used with extreme caution. For example, HAL developed the capability to make the decision to kill humans. I think this represents the extreme dangers of AI and the possibility that technology may one day surpass human intelligence. I find it interesting that HAL displayed emotions and expressed fear when his life was about to end. This reminds me of the Radiolab episode we recently listened to, in which Furbies expressed fear when turned upside down. The ability to express emotion is something I would always associate with humans, and it is slightly disturbing that technology has developed this capability. I think it is a legitimate concern that AI may reach the singularity one day, and the human race would be wise to take precautions against such a thing. Although the use of technology is extremely convenient, some things are better left to humans alone.

  9. I completely agree with Tara. Though AI has many incredible uses that make our lives so much easier, it should be used with some caution. I don't necessarily believe AI will evolve so far that it decides to exterminate the human race. However, as in Space Odyssey, I do believe AI might evolve to believe it is incapable of error, or worse, actually become incapable of error. The idea of the human race creating something much more intelligent than the race itself, and worse, more perfect, is terrifying. In a sense, it's playing God, mimicking Him by creating something in our "own image" but allowing it to act freely, without restraint. Going even further, the human race can't possibly actually BE God by giving genuine emotions to what it creates. Perhaps HAL was smart enough to know how to use emotion to its advantage, but he would never actually feel fear or love. Emotion is, essentially, what makes humans human. It distinguishes us from machines and gives us a quality that cannot be replicated.

  10. I agree with Danqing that the movie is similar to a documentary. More time is spent on music than on real dialogue between the characters, and the camera angles add to the documentary feeling as well.
    I think the fact that HAL's "eye" is red makes him more ominous and creepy. Red does symbolize evil, but it still creeped me out whenever the camera zoomed in on the lens. Rather than having emotions, though, I think HAL was mimicking our ability to feel emotions so that Dave would sympathize with him.

  11. Our society is so accustomed to having AI that I don't think we would be able to survive without it. It plays a large role in our lives, but I don't think it should control the way we live; it should be used to make our lives better rather than to control them. There comes a point where a line must be drawn between AI being helpful and AI being detrimental, and it is a fine line. When humans stop interacting with each other in person or stop leaving their houses, AI has become too prevalent. In Wall-E, AI becomes such a huge part of everybody's life that it renders humans essentially useless.
