DevTools for Large Language Models: Unlocking the Future of AI-Driven Applications
Diego M. Oppenheimer - @doppenhe
Foundation Models (FMs) vs. Large Language Models (LLMs)
A quick walk down memory lane
Era: Model Size / Capabilities / Data
* Big Bang: no models / emptiness / nothing
* Self-Supervision: 125-350M / "vomit up Reddit" / small web/book dump
* Who Could Get to 100B First?: 1-100B / taskless text generation / all the web
* Instruction Tuned and Massive: 10-200+B / task generality, responds to feedback / heavily curated and labeled web-scale data (probably cost billions)
As model size and data quality increase, you get more generalization and in-context behavior, but at higher cost
Entering the “Holy $#A!” phase
Early stages of development around new platforms tend to produce simple wrappers
Thriving developer ecosystem
The Development Process with LLMs
GenAI today -> wrappers on LLMs
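Most of today's generative-AI products reduce to a prompt template wrapped around one model call. A minimal sketch, where `call_llm` and `summarize` are hypothetical stand-ins for a hosted model API and an app feature:

```python
# Minimal sketch of a "wrapper on an LLM": a prompt template plus one model call.
# `call_llm` is a hypothetical placeholder for any hosted model API client.

def call_llm(prompt: str) -> str:
    """Placeholder for a real API call (e.g. a chat-completion endpoint)."""
    return f"[model response to: {prompt!r}]"

def summarize(text: str) -> str:
    # The entire "product" is often just this: a template around one call.
    prompt = f"Summarize the following text in one sentence:\n\n{text}"
    return call_llm(prompt)

print(summarize("LLMs are large neural networks trained on web-scale text."))
```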
Orchestration, Experimentation and Prompting Tools
*not complete list
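The core pattern these orchestration tools automate is chaining: the output of one prompted call feeds the next. A toy sketch, with `call_llm` again a hypothetical stub (real frameworks add retries, tracing, and memory):

```python
# Toy sketch of prompt orchestration: feed each step's output into the next
# prompt template. `call_llm` is a hypothetical stand-in for a model API call.

def call_llm(prompt: str) -> str:
    return f"<answer to {prompt.splitlines()[0]!r}>"

def chain(templates, user_input: str) -> str:
    # Each template consumes the previous step's result via {input}.
    result = user_input
    for template in templates:
        result = call_llm(template.format(input=result))
    return result

pipeline = [
    "Extract the key claim from: {input}",
    "Write a counter-argument to: {input}",
]
print(chain(pipeline, "LLM tooling is consolidating fast."))
```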
Knowledge Retrieval and Vector Databases
*not complete list
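The idea underneath all of these: embed documents as vectors, then retrieve by similarity to the query embedding. A self-contained sketch using a toy bag-of-words `embed` in place of a real embedding model:

```python
# Core mechanic of a vector database: embed documents, retrieve by cosine
# similarity. `embed` is a toy bag-of-words stand-in for an embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "fine tuning adapts a pretrained model",
    "vector databases store embeddings for retrieval",
    "guardrails validate model output",
]
index = [(d, embed(d)) for d in docs]

def retrieve(query: str, k: int = 1):
    q = embed(query)
    return [d for d, v in sorted(index, key=lambda dv: -cosine(q, dv[1]))[:k]]

print(retrieve("store embeddings in a vector database"))
```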
Building V2 of LLM Features: Fine-Tuning Language Models
*not complete list
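The V2 step usually starts with data preparation: most fine-tuning APIs consume JSONL records of prompt/completion (or chat-message) pairs. A sketch with illustrative field names, not any vendor's exact schema:

```python
# Sketch of preparing a fine-tuning dataset as JSONL. The "prompt"/"completion"
# field names are illustrative assumptions, not a specific vendor's schema.
import json

examples = [
    ("Classify sentiment: 'great devtools!'", "positive"),
    ("Classify sentiment: 'docs are broken'", "negative"),
]

def to_jsonl(pairs) -> str:
    # One JSON object per line, as most fine-tuning endpoints expect.
    return "\n".join(json.dumps({"prompt": p, "completion": c}) for p, c in pairs)

print(to_jsonl(examples))
```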
Monitoring, Observability and Testing
*not complete list
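At minimum, observability here means recording latency and token counts per model call so regressions surface in logs. A minimal sketch, with `call_llm` a hypothetical stub and whitespace splitting standing in for real tokenization:

```python
# Minimal observability wrapper: log latency and rough token counts per call.
# `call_llm` is a hypothetical stub; word counts approximate real tokenizers.
import time

def call_llm(prompt: str) -> str:
    return "stub response"

metrics = []

def observed_call(prompt: str) -> str:
    start = time.perf_counter()
    response = call_llm(prompt)
    metrics.append({
        "latency_s": time.perf_counter() - start,
        "prompt_tokens": len(prompt.split()),    # crude whitespace count
        "response_tokens": len(response.split()),
    })
    return response

observed_call("Explain vector databases in one line.")
print(metrics[-1])
```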
Testing, Assurance and Guardrails
*not complete list
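A guardrail in its simplest form is a validation gate between the model's output and the user. A toy sketch (real guardrail tools use classifiers, output schemas, and policy engines; the blocklist and length budget here are illustrative):

```python
# Toy guardrail: validate model output against simple rules before returning
# it. The blocklist pattern and 280-char budget are illustrative assumptions.
import re

BLOCKLIST = re.compile(r"\b(password|ssn)\b", re.IGNORECASE)

def guard(output: str, max_chars: int = 280) -> str:
    if BLOCKLIST.search(output):
        raise ValueError("output blocked: matched a disallowed pattern")
    return output[:max_chars]  # enforce a length budget

print(guard("Here is a short, safe answer."))
```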
Some future predictions
* Iteration cycles define winning developer experiences
* Larger models, even more access, and more powerful wrappers
* GPT-You: MLOps tooling evolves to enable "personalized" FMs, trained on your own data and workflows
Thank you
Diego Oppenheimer
doppenheimer
@doppenhe
Credits
David Hershey
Laurel Orr
Matt Turk