Cloud TPU Pod Alpha Access
Thank you for your interest in Cloud TPU Pods! Please answer the questions below to help us prioritize your request.
Please write "Individual" if not affiliated.
Are you interested in training an open-source reference model on a Cloud TPU Pod?
These tutorials feature some of our most popular reference models:
Additional reference models are available in multiple subdirectories here:
Yes, with modifications
No, I have a different model in mind
If your ML model is not similar to one of the reference models linked above but is close to other published work, please provide relevant arXiv links here:
What is the largest fraction of a Cloud TPU Pod you would like to use?
32 cores (1/16 of a pod)
128 cores (1/4 of a pod)
256 cores (1/2 of a pod)
512 cores (full pod)
More than a single full pod
Is the ML model you would like to train on Cloud TPU Pods already training on individual Cloud TPUs?
Since you can run the same code on individual Cloud TPUs and on slices of Cloud TPU Pods, developing and debugging your model on individual Cloud TPUs is the best way to prepare to scale up on Cloud TPU Pods.
Work in progress
Have you profiled your model on a single Cloud TPU device?
Cloud TPUs come with in-depth profiling tools that you can use to understand and optimize your model's performance before scaling up with Cloud TPU Pods. Further details are available here:
Work in progress
If you would like to share any other context, please do so here:
Please email me more information about the Cloud TPU Pod Alpha program as well as future updates about Cloud TPUs
Please only email me about the Cloud TPU Pod Alpha program
A copy of your responses will be emailed to the address you provided.