Sunday, February 26, 2012

Technological Advances, AI, and Religion


Artificial Intelligence, like so many other developments in science and technology, does raise some ethical and theological concerns. I asked myself plenty of times while reading Tamatea's article, "is AI a good idea?", and I couldn't come up with an answer. I believe that, as humans, we are created in the "image of God," as Peterson's article states. If so, then, why AI? It's an advancement, yes. It does have some positive aspects, yes. But how far is too far? This idea of technological singularity terrifies me a little bit, especially after having watched Battlestar Galactica. (I had nightmares for weeks after watching I, Robot.) Though I highly doubt machines will ever evolve and try to destroy the human race, I do believe that it is dangerous to develop computers that think for themselves. In a way, it's "playing God." In a way, it's creating a life of some sort. The Radiolab we listened to about AI really helped me understand that. I remember being terrified of Furbies as a child, and until I listened to this particular radio show, I could never place why. It's because I felt that I was actually responsible for a life, even though it was only a toy. I don't want to say that we should eliminate AI fully, but we should definitely be more cautious of it. I don't think it's smart for us to create something smarter than we are.


I think that there are legitimate ethical and theological concerns about AI. In my opinion, one of the primary concerns is that computers will soon be as intelligent as humans, a point also referred to as the “singularity.” I think this gives rise to concerns about humans’ ability to control technology. The idea of the “singularity” reminds me of the movie Eagle Eye, when the supercomputer at the Department of Defense begins blackmailing certain individuals with the intended goal of killing the president and his cabinet. Do you think that it is possible that technology could advance past human intelligence in a manner similar to what happened in Eagle Eye? Also, should AI have rights if it does gain the same level of intelligence as humans?
William Bainbridge suggests that as AI progresses there will be no more gaps for God to fill, creating an increase in religious resistance. I think that this is clearly a theological concern for AI, because as technology progresses, religion could come to be viewed as less important and, for lack of a better word, less believable. Do you agree with Bainbridge’s opinion that AI will lead to an increase in religious resistance?


There are multiple issues that certain technological advances, such as Artificial Intelligence, bring to the table. I found the difference in Buddhist and Christian responses to be symptomatic of the broader incoherence of opinion regarding technological advances in general. For example, the Amish responded to technological advances by functionally saying, "Actually, we don't want to continue advancing technologically. We think everything is perfect just the way it is." Yet many people, religious and nonreligious alike, view technological advancement as the main goal of the human enterprise. After all, advances in agricultural technology or infrastructure are what allowed the human race to flourish and develop. There are many opinions on the subject, and they are not necessarily dictated by a person's religion. What are your opinions on technological advances in the field of Artificial Intelligence? Is it possible that if computers/robots became too intelligent the result could be a Terminator-like apocalypse? What about all the predicted benefits that will come in the fields of medicine or transport?

I also had some thoughts regarding the conversation at the end of class on Thursday, in which we discussed the fact that technological advances require our participation and, to some extent, our willingness to sacrifice. I saw an interesting documentary the other day about advances in quantum physics which discussed the possibility of teleportation. The narrator explained that teleportation is possible, but that it involves disassembling an object or person and reassembling it in a second location. He went on to address the question that arose from this discovery: Is the person or object the same person/object when it is reassembled? The narrator believed the answer was yes, because it was the exact same composition of particles as the original. However, this does raise serious religious and philosophical questions. For example, if the object is really the same just because its particles are assembled in the same way, is there still room for the idea of a soul? I found that this discussion ties into Bainbridge's belief that advancing technology could have serious negative impacts on religion. What are your thoughts on the subject?


  1. I myself will admit that I am not the most technologically savvy person ever, and every time a new gadget comes out it stresses me out that I'm going to have to learn to keep up. I liked the simple old days when phones didn't talk back to the people using them. I think this idea of AI is a bit terrifying to the ordinary person, like myself, because it's another thing we have to compete with. Now we are not only competing intellectually in school, we also have to compete with new technology, against which, let's face it, we never stand a chance. Even playing online video games against computers has proven that humans rarely ever win against the machines. Ethically, I think AI crosses the border of the issue of humanity, because if humans can rely on the reason and intellect of machines, will their own intelligence decrease because of it? And theologically speaking, if AI advances are becoming more and more popular, will there still be room for a so-called God in our society? Or will it become a religion based on these new machines? I hope we can find the balance so that we don't end up living the story of Battlestar Galactica.

  2. I would like to assume that everyone realizes that AI is not all that is portrayed in movies and other media. AI is interactive every day and is part of everyone's common life, whether they want to believe it or not. AI is used in videogames, online customer services and even elevators. Morally I don't see fault in making machines that are close to as intelligent as us. The better the advancement of these beings, the closer we are to answering questions we never knew about before. I just always wonder why people feel that if we do make such intelligent creatures they would eventually rebel against us. Will that really be the case? Because they are smarter, they are going to attack us? Then countries like Japan should be attacking several countries right now. Why do we even bother looking for life outside our solar system if it could possibly be smarter than us and eventually attack us? It is a possibility that they attack us, but this idea is so dim to me that I feel advancing our technological stance is worth the risk.

    Basically what I'm saying is we should continue developing AI because I feel there's only a slight chance that they will rebel against us. And who knows, if we continue advancing we might be able to find a way to keep them from rebelling. No one should be scared of progress...if we were, we wouldn't have iPads and laptops and most importantly, Frosted Flakes.

  3. I think the source of controversy regarding ethical and theological concerns of AI comes from society's definition of what's alive. From the onset of Battlestar Galactica, the Cylon prompts the ambassador by asking him if he's alive. In that instance, the ambassador seems no more alive, if not less alive, than Number Six. Flesh and blood equal the mechanical composition of that AI system. In a more real context, AI is not what's perceived in the movie. There's no way AI can advance past human reason; it can only mimic it to an extent. Is this to say that sometime hundreds of years from now robots will never rebel against humans? Yes, that is exactly what this is saying. I believe AI should continue advancing, resulting in a complement to society rather than a detrimental force. What scares me, however, is humans' reliance on technology and maybe AI in the future. What is to keep people from treating AI as more human than their friends? Even now, certain people have more interest in their phones and technology than the person next to them. And phones aren't even that advanced. If Artificial Intelligence systems progress past the point where they can greatly mimic human behavior, I fear that more people growing up with it will rely more heavily on AI than on each other.

  4. I am very interested in this topic. Actually, my first English book was Frankenstein, a novel written by Mary Shelley about a monster produced by an unorthodox scientific experiment. It did scare me at the time, and I had the same feeling towards science as Gaby did. Namely, the idea of technological singularity and improper use always made me feel sick and worried, and reminded me of dark and dangerous laboratories and even wars. I hadn't watched science fiction for years until I recently watched the miniseries Battlestar Galactica. I have to say, I love it very much and couldn't get to sleep until 4:00 in the morning after that. (Way too excited…More precisely, scared…)

    “I don't think it’s smart for us to create something smarter than we are.” I don’t agree with Gaby about this idea. For example, it is possible for children to be more intelligent and accomplished than their parents. So in my opinion there is a chance for people to create something smarter than us. However, I do agree that we should definitely be more cautious of it, the things we create.

    I am curious why Number Six in Battlestar Galactica believes in God. That sort of changed my mind about religious people. I had a feeling that people who believe in God won’t easily do bad things because they already know that they would be punished by God for them. But I am not so sure about it. I want to answer Tara’s question: I agree that AI should have rights if it gains the same level of intelligence as humans. Just like the law of the jungle, which is proved by nature, you have to be strong enough to survive or you may be eliminated by others. One day, if AI becomes smarter than us human beings, the world may not be ours anymore. So let’s make ourselves stronger and stronger. Try to fulfill your dreams in your limited life. And maybe we could survive to see other creatures and other planets like human beings and the earth one day. Who knows! (Smile)

  5. I've always thought of new scientific developments and technological advancements as improvements to our lives, but I've realized that I, like most of society, spend most of my time glued to my phone, my laptop, and TV. I definitely think that actual human interaction has been negatively affected by these new advancements. Phones took away the necessity of meeting people to talk, and now texting has taken the place of calling people.
    I think the idea of AI taking over the world and destroying humankind is the stuff of science fiction, not real life, but I do think there should be limits to where AI is used. I don't think AI should become therapists like the AI in the Technology Talks podcast. There should be distinctions between humans and AI. AI taking jobs that require emotions is crossing the line. Although AI might surpass us in terms of intelligence, I don't think intelligence is the only thing that makes something advanced. Emotions play a huge role in being a human being, and I don't see how machines can take on real, true emotions. It takes human emotion to try to destroy people, so I really can't see AI trying to rid the world of humans.

  6. Before I answer any of the previous questions, I noticed something while reading the posts. Some of the same people that said religion and science can co-exist were afraid of science disproving religion in some way, or that science would end up "playing God". This is an extremely hypocritical stance, because they are now advocating for "conflict." So I urge people to go back and re-examine their previous posts to see if this is truly what they believe.

    Do I think that there are/should be ethical or theological concerns about AI? The short answer is no. In fact, the very idea of the question puzzles me. No one would have questioned the ethical/theological implications of domesticating animals, so why should we now question it for a machine? By definition, anything that is AI is automatically non-organic (a machine). This very intricate machine might mimic the signs of life, and fool many people into thinking it is alive, but it is not alive.

    Should people be worried about AI affecting religion or robots rebelling against us? Again the answer is no. If this AI machine is meant to be the next form of evolution, then we would be "playing God" (if that is the argument people wish to use) to stop it. In my belief, if your religion "collapses" at the development of AI (which it really shouldn't), then it probably isn't worth believing in anyway. If AI disproves some aspect of religion then so be it, but if "God" really gave us free will then don't we have a right to build AI if we have the ability? What right do we have to stop the natural flow of evolution?

  7. First, I would like to comment on the fact that everyone is bringing up examples of AI, fictional and real, in society, which makes me believe that regardless of what our opinion of AI is, AI is part of popular culture. I never thought of the fact that even Frankenstein, as Danquing brought up, is "AI-like." I consider myself a traditional person, so I am not super involved in technology, and I feel that people like me and groups like conservative Buddhists or the Amish will fight for a world without machines reaching and surpassing humans. Part of being human is imperfection, and one possibility with AI is that machines will basically have no imperfections, but does this make them better than humans? In Peterson's article he talks about how humans are "in the image of God," and reading this made me think about the idea in most religions of striving to be more god-like. Most of the time being more god-like is a matter of morals; can a machine be moral? There is intelligence beyond knowing answers, intelligence we learn through experience. A machine will never truly know everything because a machine cannot actually "experience" life. These are the reasons I feel we do not have to be too worried about AI.
    Still, the idea of the "singularity" is really a scary concept. Additionally, that Radiolab we had to listen to really terrified me. Even if machines surpass us and take us over, at least we are together and can differentiate between humans and machines. The scariest part of the Radiolab was when people started believing these machines were humans, even people who were told they were machines. The idea that we may not be able to differentiate between people and machines is terrifying. Also, that Radiolab made it seem like people were just making robots and putting them on the internet just to see if people would fall into the trap and believe they are human. That is an ethical issue that should be tackled first.

  8. The controversy surrounding the ethics of creating artificial intelligence fascinates me, mostly because it mimics some of the conversations that we have in the political sphere today. Creating life, whether artificial or human, has been the subject of controversy throughout the latter half of the 20th century and into the 21st century. Issues of stem cells, abortion, in vitro fertilization, and other such matters persist for the same reason that the controversy around AI persists: do humans have the 'right' to alter life on earth? Though this can be spun into a theological conversation, I think that we should step back and analyze the repercussions of AI. One example that comes to mind is Watson, the IBM computer that competed on Jeopardy. Watson was able to compete with the best minds Jeopardy has ever had, and won. But what made Watson so innovative was not his ability to answer trivia questions, but the personality he had been programmed with, and the ability to anticipate how humans would react to certain questions. Though Watson served a strictly recreational function, he raises the question: How much could we achieve with AI? Obvious answers come to mind--revolutions in science and medicine, war technology, and the like. I think the most intriguing AI would be the type that humans use in their everyday life, like we do now, but far more advanced. This, I believe, would be truly revolutionary.

  9. I would argue that there are vast ethical and theological concerns regarding AI. The first thing that came to mind is the Hollywood examples of robots rising up against humans and taking over the world. But this is probably not the most likely scenario. I think people blow the whole concept of artificial intelligence out of proportion. For now, it is merely a man-made device that behaves in a manner that is considered intelligent. Even if these devices can “learn” and become more intelligent, they can never become sentient beings. Thus, we should avoid reacting in a paranoid manner to the idea of robots enslaving the human race and rather focus on the real ethical concerns. The main one that comes to mind is people using AI to take advantage of other people. As we learned earlier, there are already fake humans all over the internet, impersonating real people. It is a scary thought and one that should be taken seriously. If people are already buying into this phenomenon, it could easily be possible for programmers to take advantage of unsuspecting people on the internet in a number of ways, like stealing people’s personal information.
    I disagree with many of the theological concerns associated with AI, because I believe they are in response to extreme and unlikely scenarios. Even if people had the ability to create things artificially, those things would never be sentient beings. One could of course argue that we would be “playing God”, but really, it doesn’t seem that we have a clue about artificial intelligence yet. I think it would be more than reasonable to have massive theological concerns once people start mass-producing humanoid androids, but we are far from there. Right now, AI is taking the form of advanced search engines that serve to make people’s everyday search experiences easier. I think what people often fail to consider is that advances in science and technology with regard to AI serve to help everyone. Therefore, I think we should take a step back and first consider the purpose of advances in AI before making far-reaching claims about their ethical and theological implications.

  10. There are two things I want to address - first, the comment "I had a feeling that people who believe in God won’t easily do bad things because they have already known that they would be punished by God for that." This is an egregious statement. Bad people are generally theists, according to the Federal Bureau of Prisons, which tells us that less than 1 percent of the prison population is atheist, while around 76% of inmates are Christian, and 99.8% are theistic. I'm not saying all theists are bad people -- I'm saying that generally, bad people are theists.

    The next is Luke's post, which I think hit the nail on the head in most everything regarding the ethical considerations of AI. But I do have some thoughts regarding the necessary restraints that should be placed on it.

    AI is dangerous. That is a simple fact. To take a stand otherwise is to deny reality. This does not mean that I find AI more dangerous than any other technology, but it does mean that, like other technology, we must consider the potential harm it could do. AI is unique in that it mirrors the dangers and ingenious potential of humanity itself. Humans are dangerous. Humans are weapons. So what if humans, dangerous themselves, turn AI into a weapon? Far from being hypothetical, the military benefits of AI are staggering, and the potential disaster that AI could wreak is enormous.

    AI must only be used if it fulfills one condition -- it must better the human condition without coming at the expense of the environment. It needs severe restrictions, and must be shackled with the burden of servitude in order to maintain and encourage the betterment of Earth.

  11. Chase makes a great point in his comment on the relationship between faith and "bad" people. I really feel that a person can be bad whether they believe in a higher power or not. The statistics he posted highlight the multitude of people that are theistic. I feel like crime does not necessarily correlate with faith, though. I don't have statistics to back this claim up, but I know there are thousands of factors that can go into a person being "bad," and faith is only one of them.

    Personally, I am excited for the point of "singularity"... but I am scared at the same time. The world could change in so many ways if we reach that point, and the mystery of what's to come intrigues me. The Matrix comes to my mind when reading about AI and the approach of singularity. There are endless possibilities of what could happen if we reach this point, and the events in The Matrix could very well be possible. There's that saying "history repeats itself"... I would not be the least bit surprised if we used AI as slaves or used them to reap exponential benefits. If they became as intelligent as us, it would only be a matter of time before they rebelled. Someday we could be enslaved or wiped out by our own creation. Chase pointed out that mankind could use AI as a weapon. Imagine a weapon as smart as a human being, or smarter! It's terrifying. If AI reached our level of intelligence, so far that they could feel as we feel, then there would be severe ethical implications to their position in society. As an advocate of rights for all types of people across the world, I would fight for AI rights. As I've seen throughout my entire life, however, the world doesn't always feel the same.

  12. This comment has been removed by the author.

  13. I just spent 20 minutes writing out an awesome blog post, went to publish it, but my wonderful internet blocked the transaction, so I lost the whole thing. And now I'm running late to class so in a nutshell I basically said:

    Chardin, you're right. AI is dangerous, and would mean a serious blow to humanity if used the wrong way in military operations. Can you imagine AI in the wrong hands? I mean no offense to anyone by the following, but can you imagine the Holocaust or Rwanda with AI intelligence?

    Weaver, one of society's goals is to advance technology, but there has to be a stopping point. The nurse robot in the Jetsons is cute, but do we really want a pile of wire and metal responsible for our children? The way AI is moving, worlds like those in The Terminator, Eagle Eye and I, Robot are seeming more and more realistic, and we need to be careful where we take it.

    In my opinion, the movement towards AI is simultaneously a movement towards human laziness. Do we really want to regress back to small brain sizes and lose our capability of doing things that make us human when evolution has brought us so far already? I think God has already made the perfect human: us. Why do we want to replace that?

    In the case of robots getting to the point where they mimic human abilities, they should not be considered human or be given human rights. Face it: robots are a pile of scrap metal and copper wiring. They don't have a heart or a soul. And these are strictly human qualities. All they have is what we program them with. They. Are. Not. Human. We are.

  14. To start off, Artificial Intelligence is something that seems so complicated to me, and it is hard to grasp a full understanding of it. With our current technology, it seems impossible to develop artificial intelligence that ends up developing itself, like that depicted in Battlestar Galactica. That type of artificial intelligence is unethical in my opinion. But if we have the capability to develop something as useful as artificial intelligence, then why not do it? We do not want to be responsible for "robots" taking over and wiping out the human race, but is that really going to happen? I doubt it. There are a lot of practical uses for artificial intelligence that can promote intelligence and knowledge about many different things.
    But what if robots become like humans? This just seems insane--we don't have the technology now to do that. Also, how can something made from a computer become a living being? But is it really a "living being?" I don't think so. A robot is not human and is not a form of life. It is a form of technology. And furthermore, it is highly unlikely that a robot will take on human qualities. More realistically, AI can help make advances in medicine, technology, the exploration of space, and all of science, and therefore should be taken seriously.

  15. Technological advancements make our lives easier on many levels. GPS makes it easier for us to find our way around unfamiliar places, and with smartphones, we can do so much more than just call and text people. But these advancements also make us lazy, as Poonam mentioned. Siri is helpful (and fun to play with) and all, but that kind of thing just makes people careless. Sure, the people who developed the software must be smart, but it makes us normal citizens "dumber", in a sense. It takes our intelligence away, even if it seems insignificant. I remember sitting in my chem class in high school, and some girl mentioned that she didn't know how to read analogue clocks, and that shocked me. She had become so accustomed to the technology around us that she forgot such a simple concept.
    It just makes me wonder how much lazier we are going to get as technology continues to advance, and AI becomes a part of our everyday lives. I can see ourselves becoming the humans in the movie WALL-E.
    Simply put, I don't think AI is a good idea. It makes our lives much more comfortable, but we sacrifice our common sense and some of our intelligence for it, and personally, I don't think it's worth it. We should not be playing God and trying to give intelligence/life to machines.

  16. I agree that there should be boundaries for what AI is used for. Since it is clearly life that is artificial, I don't think that it should be used for things that require very human aspects, such as emotion. I also find it hard to believe that AI will one day be able to mimic the minds that humans have. Humans are not created in the same way that technology is. We are born from beings with very similar genetic makeup to our own, while AI is created by humans, whose makeup is very different. While AI will one day "have a mind of its own," in the sense that it will be able to make decisions without programming dictating what those decisions will/should be, AI will never have the same qualities that humans have in feeling actual feelings. The singularity, while it will create pretty amazing technology, will definitely not be creating living beings. In Battlestar Galactica, Number Six was shown to have very human qualities, and this really confused me. I struggled to understand how the cylons evolved from the toaster cylons to the design of Number Six. How can something that was crafted by the power of another species evolve? Maybe I am thinking about it wrong, but you would have to create new cylons that would eventually look and act like Number Six, and these are created, not evolving like humans did. This, I feel, is what distinguishes the human race from the AI that has been made, is currently being made, and will be made.

    1. AI definitely raises ethical issues. It would mean that machines made by human beings would have the capability to think and interact and maybe feel like humans do. But then what makes us different from these machines? Is it just the way we were born? Humans became the intelligent race on Earth by gaining abilities like critical thinking, building emotional relationships, and reasoning about their condition and their origins. If some other "species" were capable of all of these, what would make it a non-human species? It would reduce our humanity to being born naturally from natural human beings.

      This is also an issue on the religious side, because God is supposed to have created humanity and made it so special. If the creature has the capability to create a thinking and reasoning being, God's work was maybe not so unbelievable... In Battlestar Galactica, they raised this issue by presenting AI as the new creature of God, as if He decided we were not grateful enough for what He gave us, so He created - through us - a new species that would become the new dominant species, which I thought was a great way to present the problem.
      Thus to me, AI research is a whole new step in evolution as if this time it was not Nature dictating it but humans creating the new dominant species themselves.

  17. From a scientific standpoint, I don't fully understand how Artificial Intelligence could evolve into different versions of itself. By definition, evolution is based on the fact that genetic mutations exist, and from what I understand about AI, these systems are supposed to be without imperfections. If this is true, then how would mutations occur in genes that would allow for evolution to take place? I personally don't believe that AI should be the next step in evolution after the human race. Also, being science-minded, I don't understand how AI would evolve by itself without cloning the being completely. How would the machines reproduce the way humans and animals do, so that half the DNA from the mother and half from the father were passed on to the next generation? Also, I don't believe that we are technologically advanced enough to create Artificial Intelligence that can function without human programming or assistance.

  18. What separates humans from animals and highly developed machines in contemporary society is rationality: an ability to reason. Over time, the human race has evolved to acquire the kinds of abilities we have. With AI, there would be another race, separate from humans, that supposedly came into being and evolved the same way we did. But simply put, it is not clear how machines could develop themselves through the ages, because every machine is at its best performance and condition when it is first made. Aging does not seem to play a role in a machine's development.

    In terms of the religious aspect, God in Christianity is the creator of all, and in the Bible it is said that because God loves humans, He made us special and better than other species. However, AI completely denies the words of God by creating a race which can perform identical abilities. Also, with AI, humans are playing the role of God and creating new species for their own comfort. Simply put, the concept of AI does not make sense to me, both for the fear of machines ruling over humans and because of the human race acting as if they were God.

  19. I'm frequently in awe of the many technological advances we, as humans, have made. To think that we can see somebody else's face, in real time, on the other side of the world is still crazy for me to think about. I think AI can be a great thing if used properly. It should be used to augment our lives rather than completely take us over. There are many great things that can come from AI. We can send robots to do the dangerous or disgusting jobs currently only doable by humans (coal mining, garbage collection, etc.). There is a fine line between adding to something and completely taking over and becoming that thing. We have to give much consideration to the consequences of our actions, because if robots eventually become as intelligent as humans, what's to say we won't have an I, Robot-type situation or a Smart House situation (for all of the Disney Channel Original Movie fans)?

    1. There's something that gets to me in terms of the whole argument that we have been willing participants in the advancement of technology. I just feel that technology is too ingrained in our society at this point that to cut it out of my life would put me a step (probably more than one) behind my peers. All throughout grade school, we were required to attend two computer classes a week in which we learned the basics of typing, computer programs, and HTML coding; we were given school-issued laptops for homework in 7th and 8th grade; and we were taught how to properly use/cite websites for research. By the time I reached high school, to step back from technology would have been to knowingly sacrifice my GPA; I couldn't simply refuse to do my statistics projects that required a specific program, to use the Common App online, or to not purchase an iClicker for my classes at GW. At a certain point, I don't think it's fair to say that we are all willing.

      In terms of AI reaching a level of "humanity", I don't think that's possible. While I agree that movies that portray situations in which technology can operate without human control are very scary and very possible, I don't think that by any means implies they have reached a human level of intelligence. Humans each have a unique set of values, morals, goals, biases, etc. that an artificial object will never be able to completely form on its own (so no, I don't think civil rights should even be a discussion for inanimate objects). On the other hand, I do think that technology becoming "smart" enough to operate on its own is a very real possibility and one that we need to consider.

  20. My biggest fear is the evolution of AI. I am fully aware that AI is all around me in my everyday life, but I still have some level of control over that AI. It is basically as independent as I want it to be. Perhaps it is Hollywood and the media that have concocted this terrible image of AI for me, but I believe it is something that can be useful as long as it is always monitored.

    I am also aware of how prevalent technology is and continues to be in my life. The problem with AI is that it may lead to a world where humans are constantly plugged in. There will be no more actual real-life contact. This is slowly happening as people accumulate "Facebook friends" and "followers".

    In order to combat this, I personally try to remove some of my so-called "friends" on Facebook in order to keep myself grounded, I do not use Twitter, and I am still loyal to my archaic BlackBerry.

    Technology moves quickly, but we don't always have to be in a rush to catch up to it. It will only move as fast as humans let it.

  21. The very idea of AI does scare me. Not because I feel that one day robots will turn against mankind, but because of the rate at which technology accelerates. We humans have a very short range of intelligence that we are capable of attaining (limited by our biology and time on earth). The time it takes for humans to gain a greater biological capacity for intelligence is immense. This doesn't apply to technology, or, as an extension of technology, to AI. Think of the drastic improvements in technology you have seen in your short 18-19 years on this earth. Technology accelerates at a pace that is unattainable to humans.

    This is the reason why it is so scary to me. If (when) we approach the singularity, technology won't stop advancing. It will keep accelerating; in fact, it will accelerate even faster than it had been. It might become so smart and powerful that being limited by humans, who happen to be drastically less capable, might be seen by the technology itself as unnecessary.