Feedback on Actionable-Guidance Recommendations for NIST AI Risk Management Framework (AI RMF)
Regarding draft guidance in the working paper "Towards AI Standards Addressing AI Catastrophic Risks: Actionable-Guidance and Roadmap Recommendations for the NIST AI Risk Management Framework" by Anthony M. Barrett, Dan Hendrycks, Jessica Newman, and Brandie Nonnecke
1) Does the draft guidance seem clear and actionable enough to be usable in the context of the NIST AI RMF? *
2) Of the following, which would most improve the draft guidance?
3) What else would be most valuable for improving the draft guidance?
4) Does the draft guidance seem compatible with Enterprise Risk Management (ERM) frameworks typically used by businesses and agencies? *
5) Does the draft guidance seem compatible with relevant standards or regulations, e.g., from NIST, ISO/IEC, IEEE, or the EU AI Act? *
6) Does the draft guidance seem usable in, or compatible with, each stage of an AI lifecycle, e.g., design, development, test and evaluation, etc.? *
7) Does the draft guidance provide meaningful, actionable, and testable (i.e. "measurable") indicators of AI system trustworthiness, or at least enable documentability of risk management processes? *
8) Is there LOW downside risk from publishing the draft guidance? (Does the draft guidance seem UNLIKELY to be misinterpreted/misapplied by users or other stakeholders in ways that would be net-harmful? Does publishing this guidance have LOW information hazards? Is the draft guidance sufficiently future-proof to be applied to AI systems over the next 10 years?) *
9) Overall, does the draft guidance meet or exceed its stated objectives enough to be a "minimum viable product" as part of guidance for the NIST AI RMF? *
10) If the draft guidance lacks something critical, what does it need to fill that gap?
11) Please provide your email address (optional) for feedback tracking and follow-up discussions:
This form was created inside of UC Berkeley.