//CoolPilot
Albert Sun, Andrew Fortner, Donald Thai
The Problem
Large Language Models (LLMs) are powerful, but there is little visibility into where the data we send them ends up. It could be used for training, leaked, or even surface in someone else's answer.
01
Our Solution
To an AI model, data is just numbers and patterns; it never needs the real raw data.
Companies want AI code tools to help their developers, but don't want to hand off their internal code to a black box.
CoolPilot trains models on encrypted text and returns encrypted text, enabling code generation without ever exposing the actual code.
Homomorphic Encryption
“The conversion of data into ciphertext that can be analyzed and worked with as if it were still in its original form”
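A minimal sketch of the idea in Python, using a toy Paillier-style additive scheme. The primes, key setup, and function names below are illustrative assumptions for demonstration only, not CoolPilot's actual scheme:

    import math
    import secrets

    # Toy keypair; real deployments use 2048-bit or larger moduli.
    p, q = 293, 433
    n = p * q
    n2 = n * n
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    g = n + 1
    mu = pow(lam, -1, n)  # modular inverse (Python 3.8+)

    def encrypt(m: int) -> int:
        r = secrets.randbelow(n - 1) + 1
        while math.gcd(r, n) != 1:  # r must be coprime to n
            r = secrets.randbelow(n - 1) + 1
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c: int) -> int:
        return (pow(c, lam, n2) - 1) // n * mu % n

    a, b = 42, 99
    # Multiplying ciphertexts adds the plaintexts underneath:
    assert decrypt(encrypt(a) * encrypt(b) % n2) == a + b

This "compute on ciphertexts, decrypt the result" property is what lets a model work with encrypted data without ever seeing the plaintext.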
Traditional LLMs
Unencrypted Prompt/Training Data → LLM API (GPT)
Data is not safe: our raw data is handed directly to an external provider.
CoolPilot
Encrypted Prompt/Training Data → CoolPilot API → LLM API (GPT)
Data is encrypted on-prem and stays encrypted while it is processed; the provider never sees plaintext.
How It Works
01
Client-side homomorphic encryption
02
Train LLM with encrypted data
03
Model outputs encrypted data
04
User decrypts data
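Putting the four steps together, here is a minimal hypothetical client-side round trip in Python. The endpoint URL, the byte-level tokenizer, and the reuse of the toy encrypt()/decrypt() pair from the sketch above are all illustrative assumptions, not CoolPilot's real API:

    import requests  # third-party HTTP client

    def tokenize(prompt: str) -> list[int]:
        # Hypothetical byte-level tokenizer, for illustration only.
        return list(prompt.encode("utf-8"))

    def detokenize(tokens: list[int]) -> str:
        return bytes(t % 256 for t in tokens).decode("utf-8", errors="replace")

    def complete(prompt: str) -> str:
        # Step 01: encrypt client-side before anything leaves the machine.
        enc = [encrypt(t) for t in tokenize(prompt)]
        # Steps 02-03: the server computes on ciphertexts and returns ciphertexts.
        resp = requests.post("https://coolpilot.example/v1/complete",  # hypothetical endpoint
                             json={"tokens": enc}, timeout=30)
        # Step 04: only the key holder can read the result.
        return detokenize([decrypt(t) for t in resp.json()["tokens"]])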
What We Tried
More Applications
DEMO
Future Challenges