RSVP: Output Supervision can Obfuscate Chain of Thought
Title: "Output Supervision can Obfuscate Chain of Thought"
Presenter: Alex Turner (Google DeepMind) (joining remotely via Zoom)
Location: 56-167
Date/time: 6-7pm, Wednesday, Oct. 8
Recording link: Will be emailed to those who fill out this form
Description: Recently, OpenAI showed that training against a chain-of-thought (CoT) monitor can cause obfuscated CoTs, which contain bad behavior the monitor cannot detect. They proposed to keep CoTs monitorable by training only against output monitors that do not have access to the CoT. We show that such training can still cause obfuscated CoTs via two mechanisms. First, when a model is trained to produce a safe-looking output, it may generalize to making its CoTs look safe as well. Second, since later tokens are conditioned on earlier ones, safe-looking CoTs may increase the likelihood of safe outputs, causing safe-looking CoTs to be reinforced. We introduce two mitigations to address these two issues, which achieve a Pareto improvement in monitorability and task performance compared to regular training. To our knowledge, we are the first to identify and mitigate these problems. Our work implies that preserving CoT monitorability is more difficult than previously thought; we suggest practical guidelines for AI developers to maintain monitorable CoTs.
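A toy illustration of the second mechanism (not from the talk or paper): under a REINFORCE-style update, a reward computed only from the output still assigns credit to the CoT tokens, so CoT styles that tend to precede safe-looking outputs get reinforced even though the monitor never reads them. All names and numbers below are assumptions chosen for the sketch.

```python
# Toy sketch: output-only reward still reinforces "safe-looking" CoT styles.
import torch

torch.manual_seed(0)

# Two CoT "styles": 0 = benign-looking, 1 = suspicious-looking. Only these logits
# are trained here; in a real model the output tokens are trained too, but the
# credit assignment to CoT tokens works the same way.
cot_logits = torch.zeros(2, requires_grad=True)

# Assumed fixed conditional: a safe-looking output is somewhat likelier after a
# benign-looking CoT ("later tokens are conditioned on earlier ones").
p_safe_given_cot = torch.tensor([0.8, 0.5])

optimizer = torch.optim.SGD([cot_logits], lr=0.05)

for step in range(3000):
    cot_dist = torch.distributions.Categorical(logits=cot_logits)
    cot = cot_dist.sample()

    # Sample the output conditioned on the CoT style.
    safe_output = torch.bernoulli(p_safe_given_cot[cot]).item() == 1.0

    # Output-only monitor: reward depends on the output, never on the CoT.
    reward = 1.0 if safe_output else 0.0

    # REINFORCE: a rewarded trajectory has its whole log-probability pushed up,
    # CoT tokens included.
    loss = -reward * cot_dist.log_prob(cot)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The benign-looking CoT style is reinforced purely by output-level reward.
print("P(benign-looking CoT) =", torch.softmax(cot_logits, dim=0)[0].item())
```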
Will you join in person for the talk from 6-7pm on Wednesday, October 8? *
If you are a member of the AI@MIT Reading Group (Fall 2025), enter your email here (the one you used to apply to the Reading Group). Due to limited space, Reading Group members will be prioritized.
Email for sending the Zoom recording link