Gensyn Testnet - Deep Learning Poll
Hello everyone! As the Gensyn core team continues to work towards releasing the Testnet, we're interested in finding out more about the deep learning engineers/researchers/hobbyists in our community.

If this is you - please provide detail below!

Some notes:
* If you don't know the answer to a question, please say 'don't know'
* If you are receiving this and are not in the community, join here! -> https://discord.gg/cHNFXCv7Wr
What's your name on the Gensyn discord? *
Do you exclusively train deep learning models? *
Do you ever use non-gradient-based optimisers (e.g. swarm optimisation)? *
Do you train deep learning models for work? *
Roughly, what % of the time you spend training a model is dedicated to grid searching/hyperparameter optimisation? *
What services do you regularly use to train models? (select as many as you like) *
Roughly, how much do you pay for compute per hour? (e.g. cost of AWS instance) *
Do you do any form of cost optimisation? (e.g. sending models to train on different virtual machines) *
How large, on average, is the training data you work with? (in GB) *
What is the primary framework you use? (E.g. PyTorch) *
What is the main use case you work on? (e.g. 'computer vision for pose estimation') *
What is the largest problem you face when performing deep learning work? *
Are you under significant time pressure when training models? (e.g. you would pay significantly more to make them train significantly faster) *
Do you have issues accessing the scale of compute you need? (provide some detail if you can) *
Do you consider the training data secret or valuable? *
Do you consider the model architecture to be secret/valuable? *
Are the models you train easily decomposed into smaller training tasks that can be parallelised? *
In general, are you trying to solve new problems or work out more efficient/effective ways to solve existing problems? *
Very generally speaking, what's the usual batch size you train with? (e.g. for MNIST this might be 30 x 784, if you were using a batch of 30 flattened 28x28 images) *
Do you design your own model architectures from scratch, implement published architectures yourself, or use libraries with existing architectures pre-made (e.g. HuggingFace Transformers for NLP)? *
Do you typically train models on single GPUs or distribute over multiple GPUs / multiple computational nodes? *
How long (wall clock) do you typically train your models for? *
Anything else you'd like to add?
This form was created inside of Gensyn.