Secure AI/ML-Driven Software Development (LFEL1012)
David A. Wheeler
Copyright © 2024 The Linux Foundation®. All rights reserved. The Linux Foundation has registered trademarks and uses trademarks.
Outline
Context
Overall context
Sources: [Linux Foundation2025], [BSI2024]
Not covered
Important but out-of-scope
Key AI concepts for secure development
AI, ML, Neural Networks, and LLMs
Machine Learning (ML)
Large Language Model (LLM) Approaches
Neural Network
Artificial Intelligence (AI)
Other basic terms & concepts
Fundamental weaknesses from LLM-like technology
An AI system takes inputs and produces outputs. The LLM (or similar) at its core is gullible, has a limited context window, makes errors & hallucinations, and has training issues.
Goal: Use it productively, in spite of these limitations.
Beware of the AI “Lethal Trifecta”
“Cartoon Robot” by Sirrob01, public domain.
Lethal Trifecta per [Willison2025-06-16]:
1. Exposure to untrusted content
2. Access to your private data
3. Ability to externally communicate
Security risks of using AI assistants
Real-world use of assistants
[Denisov-Blanch2025], [Google], [Warren2025]
AI can improve software developer productivity
[Denisov-Blanch2025], [Google], [Warren2025]
Two main kinds of security risks using AI assistants
1. Dev environment: security failures from the assistant
2. Results: AI-generated insecure results
1. Dev environment: Security failures from assistant
2. Results: AI-generated insecure results
Results: Slopsquatting is a new concern
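Slopsquatting: an assistant confidently suggests a package that does not exist, and an attacker registers that name with malicious code. One countermeasure is to confirm that any AI-suggested dependency actually exists on the official registry (and is established) before installing it. A minimal sketch using PyPI's public JSON API; the package names are illustrative:

    import json
    import urllib.error
    import urllib.request

    def package_exists_on_pypi(name: str) -> bool:
        """Return True if `name` is a registered project on PyPI."""
        url = f"https://pypi.org/pypi/{name}/json"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                data = json.load(resp)
            # Existence alone is not proof of safety: an attacker may have
            # registered the hallucinated name. Also review maintainers,
            # release history, and download statistics before installing.
            return "info" in data
        except urllib.error.HTTPError:
            return False  # 404: the suggested package does not exist

    # Verify an AI-suggested dependency before running `pip install`:
    print(package_exists_on_pypi("requests"))             # True
    print(package_exists_on_pypi("totally-made-up-pkg"))  # likely False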
What about “vibe coding”?
Sources: [Karpathy2025], [Willison2025-03-19]. Images by ChatGPT & Microsoft Designer.
Don’t be a vibe coding victim
Use AI wisely
As always: Manage your risks!
Risk assessment: identification, analysis, evaluation
Risk treatment: avoidance, reduction, transfer, acceptance
Source: ISO 31000:2018
Best practices for secure assistant use
Limit privilege of assistant
Cautiously use external data
Limit access to your private data
Limit ability to externally communicate
(Each of these slides highlights the lethal trifecta diagram: private data exposure and the ability to externally communicate.)
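A minimal sketch of how these practices can be enforced mechanically: interpose an allowlist between the assistant and its tools, so it only gets the privileges, data access, and communication channels you have deliberately granted. The workspace layout, tool names, and dispatcher here are hypothetical illustrations:

    import pathlib

    WORKSPACE = pathlib.Path("workspace").resolve()  # hypothetical layout

    def read_workspace_file(relpath: str) -> str:
        """Read a file, but only from inside the workspace directory."""
        target = (WORKSPACE / relpath).resolve()
        if not target.is_relative_to(WORKSPACE):  # blocks ../ traversal
            raise PermissionError(f"outside workspace: {relpath}")
        return target.read_text()

    # Allowlist of tools the assistant may call. Tools that expose private
    # data (e.g., reading $HOME) or communicate externally (e.g., HTTP POST)
    # are deliberately never registered: that breaks two trifecta legs.
    ALLOWED_TOOLS = {"read_workspace_file": read_workspace_file}

    def run_tool(name: str, *args, **kwargs):
        """Dispatch an assistant-requested tool call through the allowlist."""
        tool = ALLOWED_TOOLS.get(name)
        if tool is None:
            raise PermissionError(f"assistant may not call tool: {name}")
        return tool(*args, **kwargs)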
Consider any use of external systems to run assistants
Logging: for more, see [ISO/IEC 42001:2023] section B.6.2.8; [NIST AI 600-1] GV-1.2-001, GV-1.5-003, MS-2.8-003
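A sketch of what such logging might look like, assuming exchanges with an external assistant service pass through your own wrapper; all names are illustrative:

    import json
    import logging
    from datetime import datetime, timezone

    # Audit log of all traffic to/from an external assistant service.
    # The log itself is sensitive: prompts may contain private data.
    logging.basicConfig(filename="assistant_audit.jsonl",
                        level=logging.INFO, format="%(message)s")
    audit = logging.getLogger("assistant.audit")

    def log_exchange(model: str, prompt: str, response: str) -> None:
        """Append one prompt/response exchange as a JSON line."""
        audit.info(json.dumps({
            "time": datetime.now(timezone.utc).isoformat(),
            "model": model,
            "prompt": prompt,
            "response": response,
        }))

    log_exchange("example-model", "Refactor parse() safely", "(response text)")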
Be cautious about extensions & external configurations
Creating MCP servers
Sources: [Naamnih2025], [MCP]
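For scale, a minimal MCP server can be just a few lines. A sketch assuming the official MCP Python SDK (the `mcp` package); the `count_lines` tool is a toy example:

    from mcp.server.fastmcp import FastMCP

    # Minimal Model Context Protocol server exposing a single tool.
    mcp = FastMCP("demo-server")

    @mcp.tool()
    def count_lines(text: str) -> int:
        """Count the lines in the given text."""
        return len(text.splitlines())

    if __name__ == "__main__":
        mcp.run()  # serves over stdio by default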
Counter well-known attacks on MCP servers
Sources: [MCP-BP]
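One well-known attack class is command injection through tool arguments. A sketch of two counters: select commands from a fixed allowlist, and pass argument lists (never shell strings) to the OS. The tool name and command set are hypothetical:

    import subprocess

    # Tool input selects among known-safe commands; it is never
    # interpolated into a shell string.
    SAFE_COMMANDS = {
        "list_tests": ["pytest", "--collect-only", "-q"],
        "type_check": ["mypy", "src"],
    }

    def run_command(name: str) -> str:
        """Run an allowlisted command; reject everything else."""
        argv = SAFE_COMMANDS.get(name)
        if argv is None:
            raise ValueError(f"unknown command: {name}")
        # Argument list + shell=False (the default): shell metacharacters
        # in caller-supplied strings cannot inject extra commands.
        result = subprocess.run(argv, capture_output=True, text=True,
                                timeout=60)
        return result.stdout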
Writing more secure code with AI
How to improve security of code when using assistants
1. Apply basics of developing secure software
2. Expressly instruct assistant to generate secure software
3. Trust less/engage more
4. Generate tests
5. Verify with humans, tests, and other programs
(The diagram groups these steps into phases: basics, code, verify.)
1. Apply basics of developing secure software
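For example, "the basics" include using parameterized queries instead of string-concatenated SQL, whoever (or whatever) wrote the code. A minimal sketch with Python's built-in sqlite3:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

    def find_user(conn: sqlite3.Connection, name: str) -> list:
        # Parameterized query: `name` is passed as data, so input such as
        # "' OR '1'='1" cannot change the structure of the SQL statement.
        return conn.execute(
            "SELECT name, email FROM users WHERE name = ?", (name,)
        ).fetchall()

    print(find_user(conn, "alice"))        # [('alice', 'alice@example.com')]
    print(find_user(conn, "' OR '1'='1"))  # [] -- the injection attempt fails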
2. Expressly instruct assistant to generate secure code
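One way to do this consistently is to bake security requirements into a standing system prompt for all code-generation requests. The prompt wording below is only an illustration, and the role/content message format is the common chat-API convention, not a specific product's API:

    # Illustrative standing instructions; the exact wording is an example,
    # not an official recommendation.
    SECURE_CODING_PROMPT = """\
    You are generating production code. Requirements:
    - Validate and sanitize all external inputs.
    - Use parameterized queries; never concatenate SQL.
    - Never hard-code secrets, keys, or passwords.
    - Prefer well-maintained existing dependencies; never invent package names.
    - Note any security trade-offs you make.
    """

    def build_messages(user_request: str) -> list[dict]:
        """Prepend the security instructions to every code-generation request."""
        return [
            {"role": "system", "content": SECURE_CODING_PROMPT},
            {"role": "user", "content": user_request},
        ]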
3. Trust less/engage more
4. Generate tests: Overall
4. Generate tests: AI generated tests
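Negative tests are the kind worth asking an assistant to generate, and worth human review to confirm they check rejection rather than merely passing. A pytest-style sketch; `parse_port` is a hypothetical function under test:

    import pytest

    def parse_port(value: str) -> int:
        """Hypothetical function under test: a strict TCP-port parser."""
        if not value.isdigit():
            raise ValueError(f"not a number: {value!r}")
        port = int(value)
        if not 1 <= port <= 65535:
            raise ValueError(f"out of range: {port}")
        return port

    # A human should confirm each test checks the right property
    # (hostile input is refused), not just that the suite is green.
    @pytest.mark.parametrize("bad", ["", "-1", "0", "65536", "80; rm -rf /"])
    def test_rejects_invalid_ports(bad):
        with pytest.raises(ValueError):
            parse_port(bad)

    def test_accepts_valid_port():
        assert parse_port("8080") == 8080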
5. Verify with humans, tests, and other programs
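Part of this verification can be scripted: run the test suite and a static analyzer over any AI-touched code and fail loudly. A sketch assuming `pytest` and `bandit` are installed; substitute your own tools:

    import subprocess
    import sys

    # Chain automated verifiers; a human reviews whatever they flag.
    checks = [
        ["pytest", "-q"],               # the (human-reviewed) test suite
        ["bandit", "-r", "src", "-q"],  # static security analysis
    ]

    failed = False
    for argv in checks:
        print("running:", " ".join(argv))
        if subprocess.run(argv).returncode != 0:
            failed = True

    sys.exit(1 if failed else 0)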
Reviewing changes in a world with AI
Basics of reviewing proposed changes
Human review: Some things to consider
Beware of adding new dependencies
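Among the checks for a newly proposed dependency: is its name one edit away from a popular package (typosquatting)? A minimal sketch; the "popular" list is an illustrative stand-in for real ecosystem data:

    import difflib

    # Illustrative stand-in for real popularity data (e.g., top PyPI packages).
    POPULAR = ["requests", "numpy", "pandas", "urllib3", "cryptography"]

    def typosquat_suspects(name: str) -> list[str]:
        """Popular packages whose names are suspiciously close to `name`."""
        return [p for p in difflib.get_close_matches(name, POPULAR, cutoff=0.85)
                if p != name]

    print(typosquat_suspects("reqeusts"))  # ['requests'] -- review before adding!
    print(typosquat_suspects("requests"))  # [] -- exact match, no near-miss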
Human review: Reviewing others’ work with assistants
Countering low-quality external proposals
Wrap-up
Pragmatic Coders’ Recommendations
Source: Pragmatic Coders, Secure AI-Assisted Coding: A Definitive Guide, https://www.pragmaticcoders.com/blog/secure-aiassisted-coding-guide
Conclusions
Thank You
References
Legal Notice
Copyright © Open Source Security Foundation®, The Linux Foundation®, & their contributors. The Linux Foundation has registered trademarks and uses trademarks. All other trademarks are those of their respective owners.
Per the OpenSSF Charter, this presentation is released under the Creative Commons Attribution 4.0 International License (CC-BY-4.0), available at <https://creativecommons.org/licenses/by/4.0/>. You are free to share (copy and redistribute the material in any medium or format) and adapt (remix, transform, and build upon the material) for any purpose, even commercially.
The licensor cannot revoke these freedoms as long as you follow the license terms, which require that you give appropriate credit, provide a link to the license, and indicate if changes were made.