This project aims to develop a futures modeling framework for advanced AI scenario development. The goal is to cover the full spectrum of AI development paths and identify interesting combinations, or ideally, entirely new AI scenarios.
**Note: The survey questions are not, strictly speaking, "questions"; they are better viewed as rankings or scorings. Each question is a dimension (e.g., AI paradigm) with three conditions (e.g., current, new, hybrid). The goal is to rank each condition by its degree of plausibility and impact in order to create classes for the model (similar to the four quadrants of a risk matrix -
https://tinyurl.com/riskmat). The extensive definitions below are provided only for reference (added based on feedback); if you understand each concept, such as takeoff, distribution, paradigm, and alignment, you're probably good to go.**
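As an illustration only (not part of the survey), the sketch below shows one possible way the plausibility/impact rankings could be binned into risk-matrix-style classes for the model. The score scales, threshold, condition scores, and class labels are assumptions chosen for demonstration.

```python
# Illustrative sketch only: one possible way to map plausibility/impact
# scores for each condition onto risk-matrix-style classes.
# The 1-5 scales, threshold of 3, example scores, and class labels are
# assumptions for demonstration, not part of the survey itself.

def classify(plausibility: int, impact: int, threshold: int = 3) -> str:
    """Map a (plausibility, impact) pair onto one of four quadrants."""
    high_p = plausibility >= threshold
    high_i = impact >= threshold
    if high_p and high_i:
        return "core scenario class"    # plausible and consequential
    if high_p:
        return "background condition"   # plausible, limited impact
    if high_i:
        return "wild card"              # less plausible, high impact
    return "deprioritized"              # neither plausible nor impactful

# Hypothetical responses for one dimension ("AI paradigm") and its conditions.
responses = {
    "current": {"plausibility": 5, "impact": 2},
    "new":     {"plausibility": 2, "impact": 5},
    "hybrid":  {"plausibility": 4, "impact": 4},
}

for condition, scores in responses.items():
    print(f"{condition}: {classify(scores['plausibility'], scores['impact'])}")
```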
1) Please rank each condition from the highest potential benefit to stability, safety, or security (greatly increase) to the highest downside risk (greatly decrease). For conditions (e.g., technologies) that you don't believe could cause an increase or a decrease, just choose the best/worst option or leave it as "no effect." Rank from best to worst, assuming the condition has occurred. For sets of conditions that are all bad or all good, rank them from least bad to worst, or from best to least good.
- This project is not about prediction, forecasting, or statistics, but about exploratory scenario development only. Your best assessment is enough for this purpose and is highly valued. Further iterations will refine specifics.
This form lists several potential paths to high-level machine intelligence (HLMI). Each question is a dimension (e.g., takeoff speed) with three or four conditions (e.g., fast) on the left and asks the participant to:
Frame of reference: Uncertain year between 2030 and 2100