MLflow User Survey (Dec 2019)
About You
Tell us about yourself and optionally provide contact info so we can reach you. We'll use your contact info only to ask for clarifications and to enter you in a raffle for an MLflow T-shirt.
Name [optional]
Email [optional]
Organization [optional]
T-Shirt Size
How did you hear about MLflow?
What resources did/do you use for learning MLflow?
What is your machine learning use case?
MLflow Features
MLflow has several components and features. You may not have used them all, but let us know which ones you find most useful or interesting.
Which MLflow components are you interested in?
Haven't looked at it
Not interested
Tried it
Use it occasionally
Use it regularly
MLflow Tracking (experiment and metric tracking)
MLflow Projects (code packaging for reproducible runs)
MLflow Models (model packaging & deployment)
MLflow Model Registry (model management)
If you've used the Model Registry component, please include any detailed feedback here
Which overall MLflow use cases are most important to you? List up to three.
Standardizing a process for building ML applications
Tracking & sharing metrics, code, models during experimentation
Tracking performance of production pipelines
Reproducible runs in different environments
Packaging models for deployment to production
Managing model lifecycles (CI/CD of production models)
Most important
Second most important
Third most important
Are there any APIs or concepts in MLflow that you think are confusing or should change for version 2.0?
Your ML Workflow
Tell us about which tools and processes you use for machine learning so we can better support them.
What ML frameworks / tools do you use?
Where do you perform ML model training?
What data sources do you use for training?
What takes up most of your time while doing ML development? Select up to three items.
Environment management (setting up conda envs, etc.)
Writing, debugging code to experiment with new features & models
Analyzing training runs (viewing loss curves, hyperparam tuning results)
Packaging & deploying models to production
Debugging production issues with models
Communicating your work (making dashboards, summaries, etc)
Managing data (finding the right data, versioning, preprocessing)
Most time-consuming
Second-most time-consuming
Third-most time-consuming
Are there specific languages, libraries or frameworks you'd like MLflow to add support for?
How do you deploy your ML applications?
How do you monitor ML application performance in production?
Future MLflow Development
We'd like to add a number of new features in 2020. Let us know what you think about these and other ideas.
How do you think we should prioritize the following MLflow features in 2020?
Low priority
Medium priority
High priority
Telemetry component (API to monitor metrics from deployed models)
Easier multi-step workflow support (e.g. DAG visualization, Airflow integration)
UI improvements (dashboards, easier metric comparison)
Hyperparameter search and AutoML
Manageability and scalability
Feature Store (autodetecting, cataloging, tracking features used to train models)
Integration with annotation/labeling solutions (e.g. Figure Eight)
Model analysis/interpretability
Data versioning & snapshotting
If you've contributed or considered contributing to MLflow, what (if anything) has been difficult?
Do you have any other feedback on MLflow? (Take as much space to answer as you'd like.)
Orgs Using & Contributing to MLflow Page
We're starting a page on the MLflow website to list organizations using and contributing to MLflow. If you want to be listed, check this box, and make sure you entered an organization name and email. We'll follow up formally over email :)
URL of logo to use on the page
This form was created inside of Databricks.