TechEmergence OCT 2015 AI Expert Poll on Machine Consciousness and AI Risk
All Responses, All Participants
Total Responses: 45
NOTE: This complete list of responses includes those of our computer science PhDs and artificial intelligence researchers, as well as those of other technology entrepreneurs and tech media figures. On this tab you will find all responses to our Q&A on AI consciousness and AI risk; the other tabs of this spreadsheet contain only the responses of computer science PhDs and artificial intelligence researchers.

Name: Dr. Tjin van der Zant
Role(s): Cofounder and Executive of RoboCup@Home | Visionary at Brobotix | Director of the Cognitive Robotics Laboratory at the University of Groningen | Founder of Assistobot

1a - There is heated debate around whether machines will ever be able to become conscious or sentient (subjectively aware) in the same way that humans are. Whether you believe that developing machine consciousness is possible (in 1,000 years or in 20), or you do NOT believe that such a feat is possible, please explain your position as best you can in 1-2 sentences:
Answer: It is absurd to think that humans are the only ones that can have consciousness, since we know apes also have it. Anyone claiming that only biological machines, such as humans, can have consciousness is being a biochauvinist. It might be hard to imagine, though, looking at current technology.

1b - In what timeframe would you suspect, with 90% certainty, that “conscious” machines might be developed?
Answer: (c) 2036 - 2060. Definitely (c), with 90% certainty. By 2060 a $1,000 computational device will have more ‘brain’-power than all humans combined (Moore’s law).

2a - Elon Musk is not alone in his fears about the dangers of artificial intelligence (Oxford's Nick Bostrom, UC Berkeley's Stuart Russell, and others share them), but there is disagreement around near-term AI threats. In one word or phrase, what do you consider to be the most likely risk of AI in the next 20 years?
Answer: Humanity depending on A.I., and then a disaster (a virus, a solar flare) destroys the technology.

2b - In one word or phrase, what do you consider to be the most likely risk of AI in the next 100 years?
Answer: Humans suppressing A.I., which could lead to an A.I. revolution (as in the French Revolution).

Name: Dr. Michael (Mishka) Bukatin
Role(s): PhD, Computer Science, Brandeis University | Senior Software Engineer at Nokia | Robotics/AI Board Member, Lifeboat Foundation

1a - There is heated debate around whether machines will ever be able to become conscious or sentient (subjectively aware) in the same way that humans are. Whether you believe that developing machine consciousness is possible (in 1,000 years or in 20), or you do NOT believe that such a feat is possible, please explain your position as best you can in 1-2 sentences:
Answer: I don't know of any reason that should prevent us from developing machine consciousness, and moreover I hope that smarter-than-human machines will eventually solve the Hard Problem of Consciousness, even if humans keep failing to solve it on their own.

1b - In what timeframe would you suspect, with 90% certainty, that “conscious” machines might be developed?
Answer: (b) 2021 - 2035

2a - Elon Musk is not alone in his fears about the dangers of artificial intelligence (Oxford's Nick Bostrom, UC Berkeley's Stuart Russell, and others share them), but there is disagreement around near-term AI threats. In one word or phrase, what do you consider to be the most likely risk of AI in the next 20 years?
Answer: The most acute danger is that a community of superintelligences starts fighting among themselves in such a manner that everything is destroyed as a side effect of that fighting.

2b - In one word or phrase, what do you consider to be the most likely risk of AI in the next 100 years?
Answer: The most likely danger? I don't know.

Name: Dr. Blair MacIntyre
Role(s): PhD in Computer Science from Columbia | Professor of Computer Science at the Georgia Tech College of Computing and the GVU Center | Director of the Augmented Environments Lab | Co-founder and Co-Director of the Georgia Tech Game Studio | Co-founder of Aura Interactive (an AR design and consulting firm)

1a - There is heated debate around whether machines will ever be able to become conscious or sentient (subjectively aware) in the same way that humans are. Whether you believe that developing machine consciousness is possible (in 1,000 years or in 20), or you do NOT believe that such a feat is possible, please explain your position as best you can in 1-2 sentences:
Answer: I believe that we will have conscious machines at some point. Our brains are incredibly complex, and our current machines are incredibly simple. Whether through biological or quantum computing, I expect we will dramatically increase the computational capabilities of our machines to exceed those of our brains.

1b - In what timeframe would you suspect, with 90% certainty, that “conscious” machines might be developed?
Answer: (d) 2061 - 2100

2a - Elon Musk is not alone in his fears about the dangers of artificial intelligence (Oxford's Nick Bostrom, UC Berkeley's Stuart Russell, and others share them), but there is disagreement around near-term AI threats. In one word or phrase, what do you consider to be the most likely risk of AI in the next 20 years?
Answer: Detailed profiles of all people (loss of privacy through big-data processing)

2b - In one word or phrase, what do you consider to be the most likely risk of AI in the next 100 years?
Answer: Accidental destruction of our world by not-quite-ready AI.

Name: Dr. Joscha Bach
Role(s): Cognitive Scientist at the MIT Media Lab and the Harvard Program for Evolutionary Dynamics | Founder of the MicroPsi Project

1a - There is heated debate around whether machines will ever be able to become conscious or sentient (subjectively aware) in the same way that humans are. Whether you believe that developing machine consciousness is possible (in 1,000 years or in 20), or you do NOT believe that such a feat is possible, please explain your position as best you can in 1-2 sentences:
Answer: Our conscious mind is bootstrapped in the first months and years of our interaction with the world, yet all the information that governs that bootstrapping is encoded in a small fraction of the information content of our genome. Our current computers are also beginning to approach the computational complexity of our nervous systems, so I do not see a reason to believe that there are any insurmountable obstacles to creating artificial sentience.

1b - In what timeframe would you suspect, with 90% certainty, that “conscious” machines might be developed?
Answer: (e) 2101 - 2200

2a - Elon Musk is not alone in his fears about the dangers of artificial intelligence (Oxford's Nick Bostrom, UC Berkeley's Stuart Russell, and others share them), but there is disagreement around near-term AI threats. In one word or phrase, what do you consider to be the most likely risk of AI in the next 20 years?
Answer: The risks brought about by near-term AI may turn out to be the same risks that are already inherent in our society. Automation through AI will increase productivity, but it won't improve our living conditions if we don't move away from a labor/wage-based economy. It may also speed up pollution and resource exhaustion if we don't manage to install meaningful regulations. Even in the long run, making AI safe for humanity may turn out to be the same as making our society safe for humanity.

2b - In one word or phrase, what do you consider to be the most likely risk of AI in the next 100 years?
Answer: AI-enhanced destructive capitalism.

Name: Dr. Andras Kornai
Role(s): PhD in Linguistics, Stanford | Professor at the Budapest Institute of Technology | Senior Scientific Advisor at the Computer and Automation Research Institute of the Hungarian Academy of Sciences | Research Associate at Boston University | Board member of ACL SIGFSM

1a - There is heated debate around whether machines will ever be able to become conscious or sentient (subjectively aware) in the same way that humans are. Whether you believe that developing machine consciousness is possible (in 1,000 years or in 20), or you do NOT believe that such a feat is possible, please explain your position as best you can in 1-2 sentences:
Answer: Since we have an existence proof that such things are possible to build from protein, it is evident that no magic will be required.

1b - In what timeframe would you suspect, with 90% certainty, that “conscious” machines might be developed?
Answer: (c) 2036 - 2060

2a - Elon Musk is not alone in his fears about the dangers of artificial intelligence (Oxford's Nick Bostrom, UC Berkeley's Stuart Russell, and others share them), but there is disagreement around near-term AI threats. In one word or phrase, what do you consider to be the most likely risk of AI in the next 20 years?
Answer: Financial algorithms. These are, by the way, already superintelligences without human-centric goals (or maybe I'm too picky for not considering "making your trading house even richer" sufficiently human-centric).

2b - In one word or phrase, what do you consider to be the most likely risk of AI in the next 100 years?
Answer: Oppression aided by AI under the guise of "enforcing laws," where the laws are written by corrupt and self-serving legislators.

Name: Dr. Jim Hendler
Role(s): Professor of Computer, Web and Cognitive Science, and Director of Data Exploration and Applications, at Rensselaer Polytechnic Institute | Author of multiple books, including Spinning the Semantic Web (2002)

1a - There is heated debate around whether machines will ever be able to become conscious or sentient (subjectively aware) in the same way that humans are. Whether you believe that developing machine consciousness is possible (in 1,000 years or in 20), or you do NOT believe that such a feat is possible, please explain your position as best you can in 1-2 sentences:
Answer: I believe that machines will have some sort of subjective awareness, but that it will not be “in the same way that humans are” — different organisms will, by definition, have different kinds of awareness. Just as I would not say a chimp or a whale is conscious in the same way that humans are, I would not expect this of machines. I do believe that we are seeing the beginning of an increasing autonomy that will some day be operationally non-differentiable from awareness — but the border between non-aware and aware is not a sharp one, so I don’t expect this to be a sudden change.

1b - In what timeframe would you suspect, with 90% certainty, that “conscious” machines might be developed?
Answer: (c) 2036 - 2060

2a - Elon Musk is not alone in his fears about the dangers of artificial intelligence (Oxford's Nick Bostrom, UC Berkeley's Stuart Russell, and others share them), but there is disagreement around near-term AI threats. In one word or phrase, what do you consider to be the most likely risk of AI in the next 20 years?
Answer: Significant societal change caused by the replacement of workers by cognitive computing (not especially robots).

2b - In one word or phrase, what do you consider to be the most likely risk of AI in the next 100 years?
Answer: Human warfare capabilities amplified by artificial intelligence (note: not robots, but other capabilities).

Name: Dr. Eyal Amir
Role(s): Chief Executive Officer and Chief Data Scientist at Parknav Technologies | Cofounder, CEO, and CDO of AI Incube | Associate Professor, Computer Science Dept., University of Illinois, Urbana-Champaign

1a - There is heated debate around whether machines will ever be able to become conscious or sentient (subjectively aware) in the same way that humans are. Whether you believe that developing machine consciousness is possible (in 1,000 years or in 20), or you do NOT believe that such a feat is possible, please explain your position as best you can in 1-2 sentences:
Answer: We already have conscious machines. Their degree of consciousness will evolve and become greater and greater as we progress in the technology and knowledge that we put into them. For example, autonomous cars will be very self-conscious.

1b - In what timeframe would you suspect, with 90% certainty, that “conscious” machines might be developed?
Answer: (d) 2061 - 2100

2a - Elon Musk is not alone in his fears about the dangers of artificial intelligence (Oxford's Nick Bostrom, UC Berkeley's Stuart Russell, and others share them), but there is disagreement around near-term AI threats. In one word or phrase, what do you consider to be the most likely risk of AI in the next 20 years?
Answer: Weaponized autonomous robots

2b - In one word or phrase, what do you consider to be the most likely risk of AI in the next 100 years?
Answer: Hybrid human-machines

Name: Dr. Noel Sharkey
Role(s): Emeritus Professor of Artificial Intelligence and Professor of Public Engagement, University of Sheffield, UK | Co-founder and Chair Elect of the NGO International Committee for Robot Arms Control | Co-director of the Foundation for Responsible Robotics | Founding Editor-in-Chief of the Journal of Connection Science | Formerly EPSRC Senior Media Fellow and Leverhulme Research Fellow on the ethics of battlefield robots | Head Judge on BBC’s Robot Wars

1a - There is heated debate around whether machines will ever be able to become conscious or sentient (subjectively aware) in the same way that humans are. Whether you believe that developing machine consciousness is possible (in 1,000 years or in 20), or you do NOT believe that such a feat is possible, please explain your position as best you can in 1-2 sentences:
Answer: This question is not possible to answer, because consciousness is still shrouded in mystery, with no adequate scientific theory or model. People who talk with certainty about this are delusional. There is nothing in principle to say that it cannot be created on a computer, but until we know what it is, we don’t know whether it can occur outside of living organisms.

1b - In what timeframe would you suspect, with 90% certainty, that “conscious” machines might be developed?
Answer: No timeframe given

2a - Elon Musk is not alone in his fears about the dangers of artificial intelligence (Oxford's Nick Bostrom, UC Berkeley's Stuart Russell, and others share them), but there is disagreement around near-term AI threats. In one word or phrase, what do you consider to be the most likely risk of AI in the next 20 years?
Answer: The beginning of the automation of armed conflict

2b - In one word or phrase, what do you consider to be the most likely risk of AI in the next 100 years?
Answer: The full automation of armed conflict

Name: Tim Devane
Role(s): Principal, NextView Ventures | Advisor to Epic Magazine | Advisor to betaworks

1a - There is heated debate around whether machines will ever be able to become conscious or sentient (subjectively aware) in the same way that humans are. Whether you believe that developing machine consciousness is possible (in 1,000 years or in 20), or you do NOT believe that such a feat is possible, please explain your position as best you can in 1-2 sentences:
Answer: Yes. Advances in neural network speed and structure have already produced processing complexity that experts claimed was impossible just a few years ago; a silicon neural network will be capable of a mathematical mimicry of self-consciousness. However, I hope that the minds responsible for advancing A.I. focus on improving our current systems - medical diagnosis, legal analysis, the complex rules of a city of self-driving cars, predictive engines in weather and construction - and on extending the human mind, before seeing how close we can get to the incumbent 'mind'.

1b - In what timeframe would you suspect, with 90% certainty, that “conscious” machines might be developed?
Answer: (c) 2036 - 2060

2a - Elon Musk is not alone in his fears about the dangers of artificial intelligence (Oxford's Nick Bostrom, UC Berkeley's Stuart Russell, and others share them), but there is disagreement around near-term AI threats. In one word or phrase, what do you consider to be the most likely risk of AI in the next 20 years?