Publication Norms for Responsible AI
Help shape the Partnership on AI's work on responsible publication norms for the AI/ML community by providing your feedback in this form. All of the questions are optional.

Read more about this work here
Email address
Providing an email address allows us to follow up with you about your responses; however, feel free to leave this blank if you prefer to remain anonymous.
Role and affiliation
If you are happy to provide it, this gives us helpful context for interpreting your responses.
To help us understand your level of involvement in the AI/ML community, choose all that apply.
If there's anything else that would provide useful context for your answers, please specify in 'other'.
Prioritizing possible interventions and approaches
There are many different ways to approach the challenge of publication norms for responsible AI, and many opinions in the community about what could be helpful or harmful. Below, we list some example approaches and ask for your thoughts. Note: we do not necessarily endorse any one of these approaches; rather, the aim is to gather community feedback on a wide range of possibilities.
This would be actively harmful
This would be a waste of resources
This seems like a generally good idea
This seems important
This is something I would actively find useful
Research into what we can learn from history and other fields
Research into incentives and coordination challenges
A campaign to encourage more publishers to require a risks section in papers
An interactive tool to help researchers assess risks
A paper suggesting 'best practices' for responsible publication
Guidance on creating an internal review process for publication decisions
An ongoing series of community workshops on this topic
A steering committee of experts from across the AI/ML research ecosystem to guide publication norms
An online forum for community discussion of this topic
An online platform to allow peer review of the risks of research papers
Advocating for regulation of AI research publication
A compendium of case studies in AI related to publication practices
A taxonomy of harms and unintended consequences that might result from increasingly advanced AI research
Do you have anything to add to any of our key questions/challenges?
E.g., information that can help us answer these questions, or thoughts on other questions we should consider. The questions, along with more context, can be found on the PAI project page, but are listed here for convenience:
What can we learn from other fields dealing with high-stakes technology, and from history?
What are the potential pitfalls of changing the status quo of publication norms?
How can we encourage researchers to think about the risks of their work, as well as the benefits?
What tools or services need to exist to help people navigate publication decisions?
How can we design effective review processes?
How do we coordinate effectively as a community?
What resources would be helpful to you to navigate difficult decisions and trade-offs related to publication decisions?
Have you observed any publication practices that work well? Have you observed any that have had shortcomings?
E.g., does your organization have a review process, and if so how well does it work?
What conferences or events would you recommend PAI attend to host a discussion on publication norms?
Any additional or general thoughts and comments on publication norms for responsible AI?