We have a lot of work to do leading up to this.
We want to find people with diverse skills and backgrounds to work in or with the Taskforce, to catalytically advance AI safety this year with global impact. We're particularly interested in building out "safety infrastructure" and developing risk assessments that can inform policymakers and spur global coordination on AI safety. Relevant experience includes, for example, running evals for LLMs; model pretraining, finetuning, or RL; and technical research into the societal impacts of models. But we're also open to hearing what should be done beyond this.
If you think you can contribute to this effort, please reach out.
By submitting this form you are confirming that you are freely consenting to the Department for Science, Innovation & Technology holding and processing your personal data to be considered to contribute to the Frontier AI Taskforce (outlined in this Privacy Notice).