LIB-200

November 21, 2015

 

Final Project

 

                

For our final project, we're portraying both sides of artificial intelligence becoming part of our lives: the end of humankind as we know it, or happy coexistence with our robotic friends. Our thesis suggests that the A.I. singularity will only happen if we allow it to; as humans we aren't perfect, and neither are our creations. It is the care we take in creating these androids that will determine whether we can live side by side with them. To delve further into this process, our medium lets the reader choose their own path and destination, and the outcomes provide an example of how a simple choice or action can snowball into human extinction or happy coexistence.

 

        Our purpose is to show that A.I. beings can be as good or as bad as we make them. Humans will create them in our own image, so they will share the same dysfunctions as humanity. This shows that humans will be able to coexist with artificially intelligent, conscious beings as long as the guiding path we provide for the A.I.'s to learn from is a morally sound one. Otherwise, a broken program or a moral compass never instilled could lead to human extinction.

        First off, we the co-authors are at odds over how we feel about superintelligent artificial beings. On one hand, we are excited about what this may hold for us in the future, whether that means enhancing our own bodies with these advancements in technology and prolonging our lives; on the other, we are afraid that these superintelligent beings will take over the world. Ray Kurzweil is one proponent who argues for the advancement of artificial intelligence. He claims that we can't define it in black and white; there is some gray area in between. Technically, we can all be considered machines in the way we operate: without knowing it, we are subconsciously performing mechanical movements. Whether it's your eyes reading this or the hand you're using to scroll to the next line, it's all mechanical, and subconscious to an extent. Kurzweil states that,

"Furthermore, these nonbiological entities will be extremely intelligent, so they'll be able to convince other humans (biological, nonbiological, or somewhere in between) that they are conscious. They'll have all the delicate emotional cues that convince us today that humans are conscious. They will be able to make other humans laugh and cry. And they'll get mad if others don't accept their claims. But this is fundamentally a political and psychological prediction, not a philosophical argument" (378-379).

In the near future, when the technology to create a conscious artificial being does arrive, we will be faced with these choices and have to decide how to deal with them. Kurzweil says that we should see A.I. beings no differently than we see ourselves: with our pacemakers and prosthetic limbs, we are essentially no different from future artificial beings.

        On the other end of the spectrum is James Barrat, who argues that by creating these superintelligent beings we are essentially conjuring up our own demise. He is very analytical and finds faults in Kurzweil's theories. Barrat claims that if we create these beings without installing a compassion, empathy, or friendliness program in them, they might not want to keep us around. Sure, we might be more intelligent than most other animals on this planet, but our intelligence would be a drop of water next to an army of superintelligent beings that can outsmart us on a massive scale. Barrat argues that,

“And that brings us to the root of the problem of sharing the planet with an intelligence greater than our own. What if its drives are not compatible with human survival? Remember, we are talking about a machine that could be a thousand, a million, an uncountable number of times more intelligent than we are - it is hard to overestimate what it will be able to do, and impossible to know what it will think. It does not have to hate us before choosing to use our molecules for a purpose other than keeping us alive" (18-19).

To superintelligent beings, we would be parasites that leech the planet's resources and give nothing back. We would be their lab rats, and just as disposable, since we serve no real purpose other than to destroy things and take up space.

        This is why we are both conflicted. Throughout the semester, we have seen this see-saw argument between worst- and best-case scenarios. We want to be optimistic about the endless possibilities of having an android companion at our side, and at the same time we fear what may happen if they were indeed to revolt against their creators and decide to wipe humans off the face of the Earth. This is why we have decided on a middle-ground approach to this project, allowing our reader to choose their own destination.

 

        We are aiming to use Inklewriter as our medium, creating a choose-your-own-adventure story where your choices affect the outcome of humanity. This shows how the decisions we make on the way to creating artificial intelligence carry consequences for the future if we choose the wrong path. Your choices in everyday life will always have an effect or consequence. This story promotes the idea that sometimes your choices can spiral out of control, or lead to great achievements. Whichever path you take is yours alone, and with each ending, we hope you will question whether your last decision was the right one.

Sources

Kurzweil, Ray. The Singularity Is Near: When Humans Transcend Biology. New York: Penguin Books, 2005. Print.

Barrat, James. Our Final Invention: Artificial Intelligence and the End of the Human Era. New York: Thomas Dunne Books, 2013. Print.