ML Engineering Process Survey
Howdy! Thanks for participating in our survey. Bytez is building an ML Engineer Agent, grounded in research and open-source models, to help developers find the right model for their projects. We'd love to learn about your workflow for evaluating and comparing pre-trained models. Your perspective shapes what we build next.

If you'd like to learn more about this project, reach out to Holly at holly@bytez.com.
Background Information
Full Name
Years of Experience in ML Engineering *
Primary Domain or Industry *
What's your role in your organization and what do you do?
Job Title *
Model Selection and Recommendations
When someone asks you a question about an AI model (e.g., performance, use case, value proposition), what's your approach to answering it? How do you find information about that model? *
When someone asks you to recommend a model for a project or application, what process do you follow? *
Model Evaluation

When someone asks you to run a pre-built/pre-trained model on a dataset, what's your process? *
When someone asks you to evaluate/compare several pre-trained models and recommend the best one, what's your process? *
Model Customization / Fine-Tuning

When someone asks you to customize/fine-tune a model on a dataset, what's your process? *
What factors influence the types of models you consider? *
Post-Deployment Monitoring and Maintenance

How do you evaluate a model's success in production? What's your process? *
When someone asks you to upgrade a model in production, what's your process? *
Additional Insights

What are the biggest challenges or pain points in evaluating or comparing models? *
How do you see the future of ML engineering tools evolving? What do you wish existed that would make your life easier? *
Thank you for your responses