Why Johnny can’t tell if he is compromised
...and what you can do about it.
Keynote Area41
2nd of June 2014, Zurich, Switzerland
Robert Morris Sr.
Fundamental rules for IT security - a cynical view from more than 20 years ago:
Do not own a computer
Do not power it on
Do not use it
Situation does not seem to have gotten better
Hacking is addictive
Transitive trust relationships everywhere
Start to hack almost anywhere - compromise boundary grows exponentially
Only limit: Size of net, admin infrastructure
The now
All major nation states / global powers want to have “dominance”
Almost nobody is any good at defense
In the limit: Everything compromised (or on compromise boundary) by multiple parties
What does compromise mean?
Somewhat fuzzy concept
Installing malware is clearly a compromise
Illicitly obtaining authentication credentials is also a compromise
Compromise is about “control”
Ownership vs. possession
Legal distinction between ownership and possession of an object
I am the owner of my car, even if I have lent it to a friend and it is not in my possession
Networked computing devices have a third dimension: “Control”
Possession vs. control
Neither possession nor ownership of a networked computing device imply control
Being hacked is loss of control without change of ownership or possession
“Getting 0wned” = loss of control over your own computing infrastructure
Who is in control?
Establishing who is in control of your computer is nearly impossible
This talk: Exploration of all the ways we can’t tell if we are in control, and how to fix it.
Given a computer ...
… try to establish who is in control
For the exercise: Assume Windows
Where to start?
All highly-privileged code is in control
Code running with user privileges is partially in control
Control and software
Clearly, someone else is in control (third-party OS, various bits of third-party software)
This is OK - we have decided to trust these third parties and say “yes” to their software
We trust (some) software vendors to not backdoor us intentionally
Baseline checks
Verify signatures on all userspace binaries
Verify signatures on all kernel space binaries
Verify signatures on all BIOS components
Verify signatures on all device firmwares
Verify signatures on the Intel ME code
Verify that the signers know about their signatures
Verify origin of privileged scripts
Check 1: Userspace Code
Problem: Vendors don’t sign their executables (see the audit sketch below)
Problem: If they do, they don’t sign their DLLs
Problem: If they sign both executables and DLLs, they don’t sign executable extensions
Problem: 100+ trusted root CAs?
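A minimal sketch of what "verify signatures on all userspace binaries" could look like in practice on Windows, assuming Python and the signtool.exe utility from the Windows SDK are on the PATH; the target directory is only an example. Running it is exactly where the problems above surface: large numbers of unsigned executables and DLLs.

```python
import subprocess
from pathlib import Path

# Example target; a real audit would walk %ProgramFiles%, %SystemRoot%, etc.
TARGET = Path(r"C:\Program Files")

def is_signed(binary: Path) -> bool:
    """Ask signtool (Windows SDK) to verify the Authenticode signature."""
    # "/pa" selects the default Authenticode verification policy, "/q" is quiet.
    result = subprocess.run(
        ["signtool", "verify", "/pa", "/q", str(binary)],
        capture_output=True,
    )
    return result.returncode == 0

unsigned = []
for binary in TARGET.rglob("*"):
    if binary.is_file() and binary.suffix.lower() in (".exe", ".dll"):
        if not is_signed(binary):
            unsigned.append(binary)

print(f"{len(unsigned)} unsigned PE files found under {TARGET}")
for path in unsigned[:20]:
    print("  ", path)
```

Even a clean run only covers PE files; plug-in and extension formats that vendors load without signing never show up here.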
Baseline checks
Verify signatures on all userspace binaries
Verify signatures on all kernel space binaries
Verify signatures on all BIOS components
Verify signatures on all device firmwares
Verify signatures on the Intel ME code
Verify that the signers know about their signatures
Verify origin of privileged scripts
FAILURE
Check 2: Kernel Code
Number of CAs that can sign drivers is much smaller than for userspace (a quick audit is sketched below)
Irrelevant: Attackers use a signed driver with a known vulnerability to bootstrap their own code
Failure to verify userspace thus implies failure to verify kernel space
Not theoretical: Uroburos
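A rough sketch of how one might at least enumerate the loaded kernel drivers and check their on-disk signatures, again assuming Python and signtool; the SystemRoot path rewriting is deliberately crude, and a full check would use the kernel-mode driver signing policy rather than the plain Authenticode policy.

```python
import ctypes
import subprocess

psapi = ctypes.windll.psapi

# First call: ask how many bytes are needed for the array of driver base addresses.
needed = ctypes.c_ulong(0)
psapi.EnumDeviceDrivers(None, 0, ctypes.byref(needed))
count = needed.value // ctypes.sizeof(ctypes.c_void_p)

# Second call: fetch the load addresses of all currently loaded kernel drivers.
bases = (ctypes.c_void_p * count)()
psapi.EnumDeviceDrivers(bases, ctypes.sizeof(bases), ctypes.byref(needed))

for base in bases:
    buf = ctypes.create_unicode_buffer(1024)
    if psapi.GetDeviceDriverFileNameW(ctypes.c_void_p(base), buf, 1024):
        # Crude path rewrite; a real tool would resolve \SystemRoot\ properly.
        path = buf.value.replace("\\SystemRoot\\", "C:\\Windows\\")
        signed = subprocess.run(["signtool", "verify", "/pa", "/q", path],
                                capture_output=True).returncode == 0
        if not signed:
            print("unsigned or unverifiable driver:", path)
```

As the slide says, a clean result proves little: a validly signed but vulnerable driver is enough for an attacker to bootstrap unsigned kernel code.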
Baseline checks
Verify signatures on all userspace binaries
Verify signatures on all kernel space binaries
Verify signatures on all BIOS components
Verify signatures on all device firmwares
Verify signatures on the Intel ME code
Verify that the signers know about their signatures
Verify origin of privileged scripts
FAILURE
FAILURE
Check 3: BIOS Code
Per-vendor code signing (DELL, HP etc.)
No public documentation or third-party analysis about the way this works
No way for third parties to verify signatures
Even if verification were possible, the relevant flash regions can’t be read (see the dump attempt sketched below)
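For illustration, this is roughly how far a third party can get today, assuming the open-source CHIPSEC toolkit (chipsec_util.py) is installed together with its helper driver; on most machines the attempt stops exactly where the slide says it does.

```python
import subprocess

# Try to dump the platform SPI flash (BIOS image) with CHIPSEC so that it
# could, in principle, be compared against a known-good vendor image.
result = subprocess.run(
    ["python", "chipsec_util.py", "spi", "dump", "rom.bin"],
    capture_output=True, text=True,
)
print(result.stdout)

# On many platforms the flash descriptor marks regions (notably the ME
# region) as unreadable from the host, so there is nothing to verify a
# vendor signature against -- which is the point of this slide.
```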
Baseline checks
Verify signatures on all userspace binaries
Verify signatures on all kernel space binaries
Verify signatures on all BIOS components
Verify signatures on all device firmwares
Verify signatures on the Intel ME code
Verify that the signers know about their signatures
Verify origin of privileged scripts
FAILURE
FAILURE
FAILURE
Check 4: Device Firmwares
HDD controllers: Nobody knows how to verify code inside, but we know attackers can backdoor them
GPU firmware: People flash it for overclocking, yet there is no way to do third-party validation
Completely stranded
Baseline checks
Verify signatures on all userspace binaries
Verify signatures on all kernel space binaries
Verify signatures on all BIOS components
Verify signatures on all device firmwares
Verify signatures on the Intel ME code
Verify that the signers know about their signatures
Verify origin of privileged scripts
FAILURE
FAILURE
FAILURE
FAILURE
Check 5: Intel ME
ARC core on modern mainboards that can execute signed Java applets etc.
Communicates with the host OS via a shared memory region mapped over PCI
Highly opaque, no way to verify code running in ME from host OS
Baseline checks
Verify signatures on all userspace binaries
Verify signatures on all kernel space binaries
Verify signatures on all BIOS components
Verify signatures on all device firmwares
Verify signatures on the Intel ME code
Verify that the signers know about their signatures
Verify origin of privileged scripts
FAILURE
FAILURE
FAILURE
FAILURE
FAILURE
Check 6: Stolen Keys
Attackers have compromised software signing keys and CAs in the past
People with software signing keys can silently “lose” them without this ever being noticed
There is no equivalent of “Certificate Transparency” for code signing
Check 6: Stolen Keys
All PKI architectures assume an invincible CA and invincible signers
Reality has shown this assumption to be wrong
No way to verify if a file signed with a key was signed by the person the key was issued to
Check 6: Stolen Keys
After the breaches of recent years, the only safe assumption is:
Code signing keys of many software vendors and CAs have been silently stolen
No good way of detecting this
Baseline checks
Verify signatures on all userspace binaries
Verify signatures on all kernel space binaries
Verify signatures on all BIOS components
Verify signatures on all device firmwares
Verify signatures on the Intel ME code
Verify that the signers know about their signatures
Verify origin of privileged scripts
FAILURE
FAILURE
FAILURE
FAILURE
FAILURE
FAILURE
Check 7: Scripts
Lots of interpreters run code with privileges on your typical host
Javascript-based extensions to your browser
Java-based background tasks
Python and other interpreted languages
Check 7: Scripts
No good infrastructure exists to tie running interpreted code back to the scripts from which it was compiled
No good way to determine where the code running inside java.exe or python.exe is coming from (a sketch of what the host can see follows below)
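A small sketch of how little the host actually exposes, assuming the third-party psutil package: the command line is the only provenance hint for code running inside an interpreter, and it says nothing about code pulled in via exec(), -c, class loaders or downloaded plugins.

```python
import psutil  # third-party package: pip install psutil

# Interpreter host processes we care about (Windows image names).
INTERPRETERS = {"python.exe", "pythonw.exe", "java.exe", "javaw.exe",
                "wscript.exe", "cscript.exe", "powershell.exe"}

for proc in psutil.process_iter(["name", "cmdline"]):
    name = (proc.info["name"] or "").lower()
    if name in INTERPRETERS:
        # This is all the provenance the OS gives us about the code that
        # is actually executing inside the interpreter.
        print(name, proc.info["cmdline"])
```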
Baseline checks
Verify signatures on all userspace binaries
Verify signatures on all kernel space binaries
Verify signatures on all BIOS components
Verify signatures on all device firmwares
Verify signatures on the Intel ME code
Verify that the signers know about their signatures
Verify origin of privileged scripts
FAILURE
FAILURE
FAILURE
FAILURE
FAILURE
FAILURE
FAILURE
Failure on all levels
Given modern infrastructure, it is nearly impossible to determine if a machine is compromised
It is also nearly impossible to “un-infect” a machine once it has been infected
What needs to change?
Long-term view
Proposed measures will take many years to build
Fundamentally easy, though - no rocket science required
Hardest things to overcome: Organisational inertia, complacency, politics, broken incentive structures, cost
Step 0: Check trust
IT departments do not ask themselves enough questions about who they trust
Someone well-intentioned but incompetent at security will be the weak link that attackers exploit
This applies to vendors and suppliers!
Control and Power of attorney
Giving a third party “control” over your compute infrastructure is the same as granting it a delegable power of attorney over that infrastructure
This encompasses trusting a CA, allowing auto-update of software, and much more.
Control and Power of attorney
Legal departments are rightfully hesitant to issue powers of attorney to third parties
Delegable powers of attorney to random third parties are virtually unheard of
IT industry needs to learn from this
Step 1: Undo CA proliferation
Trusting a code-signing CA is equivalent to granting a delegable power of attorney over your compute assets
There are way too many code-signing CAs (a quick count is sketched below)
Only trust a CA that you know very well - which at the moment will be none
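A quick way to see the scale of the problem on a Windows machine, assuming Python is available: count the roots the machine store trusts (the store mixes TLS and code-signing roots, but that is part of the point).

```python
import subprocess

# Ask PowerShell how many certificates sit in the machine "Root" store.
out = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     r"(Get-ChildItem Cert:\LocalMachine\Root).Count"],
    capture_output=True, text=True,
)
print("Implicitly trusted root CAs:", out.stdout.strip())
```

Each entry in that store is an issuer you have, in effect, granted a delegable power of attorney.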
Step 2: Trust by-vendor
Most likely, arbitrarily delegable powers of attorney are a broken idea
Trust for executable code should be by vendor, not by CA
CA-based trust only for sandboxed web-pages / javascript
Step 3: Update transparency
All software vendors roll their own update mechanism
Allowing someone to update your software is also a delegable power of attorney
Software updates need to come in standardized packages and via standardized protocols (one possible shape is sketched below)
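One possible shape of such a standardized check, sketched with the third-party cryptography package; the manifest format, key handling and field names are invented for illustration.

```python
import hashlib
import json

# Third-party package: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_update(manifest_bytes: bytes, manifest_sig: bytes,
                  vendor_pubkey: bytes, package_path: str) -> bool:
    """Accept an update only if the manifest is signed by the vendor and
    the downloaded package matches the hash listed in the manifest."""
    try:
        Ed25519PublicKey.from_public_bytes(vendor_pubkey).verify(
            manifest_sig, manifest_bytes)
    except InvalidSignature:
        return False
    manifest = json.loads(manifest_bytes)          # e.g. {"sha256": "..."}
    with open(package_path, "rb") as fh:
        digest = hashlib.sha256(fh.read()).hexdigest()
    return digest == manifest["sha256"]
```

The point is not this particular scheme but that every vendor doing the same documented thing beats every vendor rolling their own opaque updater.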
Step 4: Signing transparency
Given likelihood of stolen signing keys, “code signing transparency” is needed
Vendors need to run a public ledger where they explicitly avow “yes, I have signed this binary”
Ideally with information about the exact SVN tag / git hash that was used to produce the binary
Step 4: Signing transparency
When a signed file is encountered, the public ledger can be checked (sketched below)
“Dear Vendor, are you aware that file XYZ has been signed with your key?”
Probably the only way to engineer “detectability of key theft” into our systems
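A sketch of what the lookup could look like from the client side; the ledger URL and the response format are entirely hypothetical, since no such service exists today.

```python
import hashlib
import json
import urllib.request

# Hypothetical ledger endpoint -- invented for illustration.
LEDGER_URL = "https://signing-ledger.example-vendor.com/lookup?sha256={digest}"

def vendor_avows(path: str) -> bool:
    """Ask the vendor's public signing ledger whether it knows this binary."""
    with open(path, "rb") as fh:
        digest = hashlib.sha256(fh.read()).hexdigest()
    with urllib.request.urlopen(LEDGER_URL.format(digest=digest)) as resp:
        entry = json.load(resp)
    # Ideally the entry would also carry the SVN tag / git hash the binary
    # was built from, as mentioned on the previous slide.
    return entry.get("avowed", False)
```

A signature that verifies cryptographically but is absent from the vendor’s ledger is exactly the “stolen key” signal we currently have no way to see.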
Step 5: Reduce firmware opacity
Firmware blobs for devices need to be readable by the main CPU without physical possibility of interference from the device firmware
Purchasers of hardware need to insist on this transparency
They also need to realize they have a right to demand this
Step 6: ME transparency
There is no excuse for a coprocessor on your mainboard whose code can’t be validated by you from your main CPU
Purchasers of hardware need to realize that they have a right to demand transparency about the code running on the ME
Step 7: Signed interpreters
To run a script with high privileges in an interpreter, the script needs to be signed, and the interpreter needs to be able to tie the executable form back to the original script (see the sketch below)
For non-privileged code (JS in a tight sandbox etc.) we may be able to make an exception
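A minimal sketch of the verify-before-execute idea for a Python-like interpreter, again using the third-party cryptography package; the detached-signature convention and the trusted key are assumptions for illustration.

```python
from pathlib import Path

# Third-party package: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def run_privileged_script(script: Path, detached_sig: Path,
                          trusted_pubkey: bytes) -> None:
    """Refuse to execute a privileged script unless its bytes verify
    against a detached signature from a trusted key."""
    code = script.read_bytes()
    try:
        Ed25519PublicKey.from_public_bytes(trusted_pubkey).verify(
            detached_sig.read_bytes(), code)
    except InvalidSignature:
        raise SystemExit(f"refusing to run unsigned script: {script}")
    # Only the verified bytes are handed to the interpreter, so what runs
    # can be tied back to exactly one signed source artifact.
    exec(compile(code, str(script), "exec"), {"__name__": "__main__"})
```

Here trusted_pubkey would be the raw 32-byte Ed25519 key the organisation has decided to trust for privileged scripts.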
Transparency vs. tamperproofing
Systems need to be engineered to be easily verified by the owner
Centralization of trust is a failed experiment, especially given government desire to “dominate cyber”
Demand systems whose integrity you can verify
Paradigm shift
“Security” hardware has opted for more opacity in the past
Fear of side-channel attacks, fear of physical attacks
Prioritized tamperproofing, sacrificed transparency and verifiability
Paradigm shift
Side-channel and physical attacks are a lesser concern than remote attacks provided you are in possession of your hardware
Remote attacks that you can never tell happened are the bigger threat
Re-prioritize verifiability
Will this give us security?
The proposed measures will not yield 100% security
Will give defenders a fighting chance to deny persistence to the attacker
Will give defenders a fighting chance to detect compromised suppliers
Will this give us security?
Hopefully, this will force attackers into exploiting & re-exploiting for persistence
Better software engineering can then slowly root out bugs
Move from cheap, stealthy mass compromise to individually tailored compromise: Costly
How to pay for it?
None of the proposed steps are “free”
None are terribly costly, either
Standardized software updating, better signing & verification will actually reduce IT maintenance costs
Stop buying snake oil?
Huge revenues are generated in our industry with colored appliances that only work as long as the attacker hasn’t looked at them
Often, these boxes want to be dropped onto privileged points in your infrastructure
Just say no. Spend your money wisely.
Thank you
Questions?