THIS SURVEY IS CLOSED. PLEASE DON'T TAKE IT.
Pre-Test
This survey is closed. Please don't take it.
How familiar are you with the arguments surrounding AI risk - i.e. whether a future superintelligent AI poses a threat to humanity?
Have you ever read Nick Bostrom's "Superintelligence"?
Have you ever read the Less Wrong Sequences?
Do you think AI risk is an important issue that deserves attention from the scientific community?
Now You Have To Read A Thing
I'm going to ask you to read one essay discussing AI risk. This essay may or may not be by me. It will be posted on Slate Star Codex in the style of a normal SSC post in order to eliminate any bias that might result from seeing the author or from the cosmetic aspects of the author's work; I apologize for any intellectual-property-related shadiness this might involve and will delete it after this survey is done.

Some of these essays might be very long. If yours is too long for you to want to read the whole thing, read however much you want and then stop. This experiment is supposed to determine how persuasive different things are, and something isn't persuasive if it's so long you don't read it. As long as you put in a minimal amount of effort, it's still good data.

You might recognize some of these essays, or you might have already read them. If so, reread them enough to refresh your memory about what they said, and then take the post-test as normal.

Some of these might be placebo essays, in which case just play along.

You will be randomly assigned to read an essay based on the last digit of your date of birth - e.g. if you were born September 20, the last digit is 0. Please read only your assigned essay before taking the post-test. After you submit your answers to the post-test, you can read whatever other essays you want. You should probably open the essay in another tab/window so it doesn't mess with this survey.

If your birth date ends in 0 or 1, please read ESSAY A - http://slatestarcodex.com/ai-persuasion-experiment-essay-a/

If it ends in 2 or 3, please read ESSAY B - http://slatestarcodex.com/ai-persuasion-experiment-essay-b/

If it ends in 4 or 5, please read ESSAY C - http://slatestarcodex.com/ai-persuasion-experiment-essay-c/

If it ends in 6 or 7, please read ESSAY D - http://slatestarcodex.com/ai-persuasion-experiment-essay-d/

If it ends in 8 or 9, please read ESSAY E - http://slatestarcodex.com/ai-persuasion-experiment-essay-e/

When you're done reading the essay, please come back here and take the post-test.
Post-Test
Which essay did you read?
Had you read the essay before?
If you hadn't read the essay before, were you able to guess who wrote it from stylistic or other cues?
How much of it did you finish?
Now, how much do you agree with the following statement: "AI risk is an important issue and we need to study strategies to minimize it"?
Strongly disagree
Strongly agree
How likely are we to invent human-level AI before 2100?
Unlikely
Very likely
If we invent human-level AI, how likely is it to become "superintelligent" - i.e. far more capable at nearly every intellectual task than any human - within 30 years of its invention?
Unlikely
Very likely
And if we invent human-level AI, how likely is it to become "superintelligent" - i.e. far more capable at nearly every intellectual task than any human - within one year of its invention?
Unlikely
Very likely
If an AI became superintelligent, how likely is it to become hostile to humanity or in conflict with human interests, given no particular efforts to avert this?
Unlikely
Very likely
If a superintelligent AI became hostile to humanity, with a short head start before humanity discovered this, how likely is it that it would be able to win a decisive victory and become more powerful than humans?
Unlikely
Very likely
Overall, how concerned are you about AI risk?
Not concerned
Very concerned
Any other comments?