Confidence Interval
An AI-supported reasoning scaffold
for college students (and others)
Vaughn Tan
Singapore Government Centre for Strategic Futures
Winding Road Pte Ltd
UCL School of Management
© Vaughn Tan 2025-2035
An AI-powered tool
(a reasoning scaffold)
that helps users go from a thinly articulated argument like this
to a robustly conceptualised argument like this
in 15 minutes.
Reasoning scaffolds help humans reason explicitly and therefore more effectively about subjective things, not just objective truths.
Failures of subjective reasoning are the most frequent cause of coordination problems,
yet subjective reasoning remains a theoretically and practically under-explored area in epistemics.
Learn more about reasoning scaffolds here
Teaching undergraduates through PhDs and executive MBAs in universities since 2008; strategy consultant for governments, multilaterals, corporations, and startups since 2012.
tl;dr:
The ability to do subjective reasoning is declining.
Decline observed in students and non-students.
Leads to long-term coordination problems.
Boeing’s series of 737 MAX disasters shows how coordination breaks down when different parts of an organisation unintentionally pursue conflicting subjective definitions of “good performance”: increasing output volume, ensuring compliance, or controlling production costs.
The decline is accelerating with the advent of AI tools.
I’ve been working on the problem for the last three years.
Testing mechanisms for enabling better human subjective reasoning since late 2024.
An early version of a reasoning scaffold.
Tested reasoning scaffolds with pen-and-paper mechanisms first, for faster iteration and the ability to modify scaffolds in real time.
Real-world teams, real-world subjective reasoning about strategy for public utilities and economic development.
Very strong initial feedback:
Teams pirated the test materials for their own use, and participants requested more workshops using the test mechanism.
The Future of Life Foundation supported final testing and prototyping of a self-service, AI-supported version designed for college students.
Design principles:
Learn more about meaningmaking here
Feedback from student + instructor testers:
Massive instructor time savings (~60 min/student).
Time savings for students (~45 min/student).
Significant output quality improvements.
Applies to many types of subjective arguments.
“The 15 minutes I spent on this was more helpful than an entire year-long module on critical thinking.”
44 testers so far from educational institutions.
Over 95% want their institution to provide access.
Also being tested by government strategy groups, public utility operations teams, development agency leadership teams, consultancies, and startups.
“This is already helpful for clarifying our thinking. If we had a custom version for [our specific context] we would buy access for our middle management layer.”
Across the board, it is useful for improving arguments that rest on subjective value judgments:
Justifying a business model
Deciding on an investment commitment
Choosing between product dev options
Arguing for a particular policy direction
Fully functioning prototype
ci1.vercel.app
Whitelist request form
forms.gle/Q8PYkEVFnoDaoggC6
Read the development note
vaughntan.org/aiux
Get in touch to learn more about the project.
I’d love your help with introductions to:
Institutions that want to test.
Funders who want to subsidise access.
People/teams who want to build it out with me.
web
Farcaster
X
Bluesky