Sandboxing, Virtualisation, and Mobile Device Security
This work by Z. Cliffe Schreuders at Leeds Beckett University is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.
General notes about the labs
Introduction to application-oriented access controls and sandboxing
Copy-on-write sandboxes
Rule-based system-wide access controls
Coarse-grained rights
Optional C programming and capabilities exercise
Rule-based fine-grained controls
Using AppArmor to confine a simple program
Using AppArmor to confine a Trojan horse simulation
Using SELinux to confine a simple program
Mobile security: Android
Qubes and AppVMs
Often the lab instructions are intentionally open ended, and you will have to figure some things out for yourselves. This module is designed to be challenging, as well as fun!
However, we aim to provide a well planned and fluent experience. If you notice any mistakes in the lab instructions or you feel some important information is missing, please let me (Cliffe) know and I will try to address any issues.
The labs are written to be informative and, in order to aid clarity, instructions that you should actually execute are generally written in this colour. Note that all lab content is assessable for the module; the colour coding may help you skip to the “next thing to do”, but make sure you dedicate time to read and understand everything. Coloured instructions in italics indicate that you need to adapt them to your environment: for example, using your own IP address.
You should maintain a lab logbook / document, which should include your answers to the questions posed throughout the labs (in this colour).
If you are working on campus in the IMS labs using the oVirt online labs, click here for instructions on how to login on campus in the IMS labs and create VMs from templates.
If you are working remotely using the oVirt online labs, click here for instructions on how to login via VPN and create VMs from templates.
If you are working remotely having downloaded our VMs or by copying them when you were on campus, click here for instructions on how to download VMware Player and configure the VMs to run remotely.
If you are on campus using the IMS system, click here for instructions on how to use the IMS system and VM download scripts.
Start these VMs:
user: student, password: student
user: root, password: tiaspbiqe2r
user: student, password: student
You may prefer to create and start these as required through the lab.
There are many reasons for not trusting software: the authors may have designed the software to act maliciously (malware), or they may have made design or implementation mistakes that leave the software vulnerable to attack. This is where access controls come in. Access controls restrict what each subject on a system is authorised to do. Traditionally, access controls (such as Unix file permissions) have been user-oriented, focussing on restricting what each user on a system can do. However, over time this has proven to be insufficient, since it means that every program on a system is trusted with all of a user’s authorisation: any program can read or delete all of a user’s personal documents, Web history, and so on.
Sandboxing (or application-oriented access controls) involves restricting what a program or group of programs can do. This can significantly improve the security of a system, since a rogue program can do far less damage if it is restricted to only the permissions it requires to function correctly.
One approach to sandboxing is to run applications in isolated environments, with only access to resources (such as files) that are accessible from within the sandbox.
Container-based sandboxes share the kernel but have separate user-space resources, which is more efficient than system-level virtualisation. For example, chroot() is a system call on Unix systems that changes the root directory for a process and its children. The application’s new namespace limits it to accessing only files inside the specified directory tree. A wrapper program, “chroot”, can be used to launch programs into a “chroot jail”. Only root can perform a chroot, but the process should change identity as soon as possible: root can also escape a chroot jail (by performing another chroot()), so no program in a chroot should ever stay running as root.
chroot does not mediate other resources, such as process controls and networking. Other mechanisms, such as FreeBSD Jails, solve some of these problems.
You will create a chroot environment (a directory containing all the files that the “sandboxed” programs require), and run some programs inside it:
On your openSUSE system:
Create a directory:
sudo mkdir /opt/chrootdir
Copy everything needed to run ls inside a chroot into a directory.
First, we should look at the details of the ls executable:
ls -la /bin/ls
These details show that /bin/ls is actually a symbolic link to /usr/bin/ls. In order to copy the file and have it work correctly within the cage, you will need to bear this in mind.
Now try running the ldd command on ls:
ldd /bin/ls
You should get a result that looks something like this:
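ldd lists the shared libraries a dynamically linked binary needs; the exact output varies by system, but roughly:

```shell
# ldd lists the shared libraries a dynamically linked binary needs.
ldd /bin/ls
# Expect lines of the form (addresses and exact libraries vary by system):
#   libc.so.6 => /lib64/libc.so.6 (0x00007f...)
#   libpcre.so.1 => /usr/lib64/libpcre.so.1 (0x00007f...)
#   /lib64/ld-linux-x86-64.so.2 (0x00007f...)
```

Every library listed (plus the dynamic linker itself) must exist inside the cage for ls to run there.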
If you are patient you could copy each required file into your chroot directory. Otherwise, as you can see, all but one of the files can be found in /lib64/.
Try the following (copy all the /lib64/ files into the cage):
sudo rsync -av /lib64 /opt/chrootdir/
Then, copy the one file not found in /lib64/, and the file it is symlinked to:
sudo mkdir -p /opt/chrootdir/usr/lib64/
sudo rsync -av /usr/lib64/libpcre.so.1 /opt/chrootdir/usr/lib64/
sudo rsync -av /usr/lib64/libpcre.so.1.2.1 /opt/chrootdir/usr/lib64/
Finally, copy ls.
sudo mkdir /opt/chrootdir/bin/ /opt/chrootdir/usr/bin/
sudo rsync -av /bin/ls /opt/chrootdir/bin/
sudo rsync -av /usr/bin/ls /opt/chrootdir/usr/bin/
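The manual steps above can also be scripted: parse ldd’s output and copy each resolved library into the same place inside the cage. This is a sketch rather than part of the lab proper — the cage path is illustrative, and it assumes GNU ldd output:

```shell
#!/bin/sh
# Sketch: copy a binary plus the libraries ldd resolves into a cage directory.
CAGE=/tmp/chrootdir   # illustrative; the lab itself uses /opt/chrootdir
BIN=/bin/ls

mkdir -p "$CAGE/bin"
cp "$BIN" "$CAGE/bin/"

# ldd prints lines like "libc.so.6 => /lib64/libc.so.6 (0x...)";
# extract each absolute path and copy it to the same path in the cage.
for lib in $(ldd "$BIN" | grep -o '/[^ ]*'); do
    mkdir -p "$CAGE$(dirname "$lib")"
    cp "$lib" "$CAGE$lib"
done
```

Note that cp dereferences symlinks by default, so the cage gets the real library contents; use the lab’s rsync commands if you want the symlink structure preserved.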
Run ls in a chroot:
sudo chroot /opt/chrootdir/ /usr/bin/ls
Note that ls can only see the files in the chroot cage.
Run ls in the chroot, and attempt to view the root (/) directory:
sudo chroot /opt/chrootdir/ /usr/bin/ls /
Note that, again, as far as the chrooted program is concerned, what the rest of the system calls “/opt/chrootdir/” it sees as “/”. This is referred to as the program’s namespace.
Create a new complete chroot environment, for running any command line programs, including bash (the Linux shell command prompt).
Add an openSUSE repo config:
sudo zypper --root /opt/chrootdir/ ar http://download.opensuse.org/distribution/leap/42.1/repo/oss/suse leap421
Install some software into the directory:
sudo zypper --root /opt/chrootdir/ install rpm zypper wget vim
This may take a little while. You can continue the exercise in another bash window/tab, or read about bind mounting while you wait.
Once the install is complete, bind mount /sys, /dev, and /proc into the cage:
sudo mount --bind /dev /opt/chrootdir/dev
Repeat for /sys and /proc, making sure to change the path in both places:
sudo mount --bind /sys /opt/chrootdir/sys
sudo mount --bind /proc /opt/chrootdir/proc
Read “man mount” to see what this does, and understand the security consequences.
Why might it not be a good idea to bind mount the entire root “/” directory into the chroot cage?
Now run bash inside the chroot:
sudo chroot /opt/chrootdir /bin/bash
Create a new user within the chroot (you will be prompted to enter a password for the new user):
useradd -g users -d /home/username -m username; passwd username
Then make bash switch to this new user:
su - username
Note that you can now run commands and (mostly) only affect the chroot directory.
Experiment with what is possible from within the chroot cage. You may wish to copy across further files and their dependencies from the main system to run from inside the chroot.
Tip: to share the same X server (so you can run graphical programs from the chroot), run “export DISPLAY=:0”.
What can you see in /, /opt, /proc?
Can you access anything outside of the chroot?
Understand that you can still access the network and possibly do damage via the bind mounted directories.
In general, we would typically create a minimal install for a chroot environment; see, for example, https://wiki.debian.org/Debootstrap
Copy-on-write sandboxes allow applications to read all files, but confine any writes to a separate area. Upon termination, these kinds of sandboxes typically ask which changes to keep and which to discard (as illustrated below). Examples of copy-on-write sandboxes include Sandboxie, Pastures, and Alcatraz.
A copy-on-write sandbox isolates changes to the sandbox. Image: http://www.sandboxie.com/ by Sandboxie Holdings, LLC. All rights reserved.
On your Windows system:
Install Sandboxie (a copy-on-write sandbox).
The Sandboxie install can be found on http://www.sandboxie.com
Disable Windows defender, in order to download and extract the Netbus remote access malware. This can be done by opening the Windows Defender interface, clicking ‘Tools’ -> ‘Options’ -> Administrator, and unchecking ‘Use this program’.
The Netbus program can be found at the following URL: https://dl.packetstormsecurity.net/trojans/nb16_p04.zip. Note that depending on your browser, you may be presented with a warning message that you need to bypass before downloading. When prompted, the password for this archive is p4ssw0rd.
Run the Netbus server in your Sandbox, being careful to launch patch.exe into the sandbox (right click the executable, and choose from the context menu).
Connect to the Netbus infected machine by running NetBus.exe, and investigate what kind of damage you can do.
You will be able to access files on the hard drive.
Can you write to files?
Can you view all the files that have changed within the sandbox? Where are these stored on the storage disk?
Outside of the sandbox, are the files changed? Why not?
After restarting the infected computer is the Netbus server still running? Why not?
System-level sandboxes provide a complete environment for operating systems. You have already seen this in previous lab work, where virtual machines have been used to run separate operating systems on the same hardware; each OS can have a separate set of software with limited access to the host computer’s resources. A hypervisor, AKA virtual machine monitor (VMM), multiplexes the hardware to run hardware-level virtual machines (VMs).
Usually VMs are strongly isolated from each other, but as you can imagine, sometimes you may want to share data between VMs. For example, if you have a Linux PC and want to edit some files using a Windows program that runs in a VM, you need some way of sharing data across VMs.
In that case, you can use shared folders to “poke a hole” through the VM isolation.
Is it a good idea to give all your VMs write access to your home directory? Why not?
Can hardware emulation VMs be used to confine individual applications?
Rule-based system-wide access controls can control what each application is authorised to do. The security system enforces exactly which files or resources are accessible to each process. These schemes don't typically require applications to be launched into a sandbox; instead, rules are applied to any application that has a policy.
Some rule-based controls are quite coarsely grained. For example, Android (which we will explore later) grants permissions such as “Network”, “SD Card”, “Camera”, and “GPS” access.
Another coarsely grained system is Linux capabilities, which breaks up root’s special permissions so that some programs can be granted specific “capabilities” rather than run as root. Normally on Unix there are two types of users: privileged (uid=0) and unprivileged (uid!=0). The root user (0) is allowed to do practically anything, and bypasses all kernel permission checks. Capabilities divide these privileges, making it possible, for example, to allow a program raw network access (CAP_NET_RAW) or the ability to call chroot (CAP_SYS_CHROOT) without granting it all of root’s other privileges, such as the ability to access every file on the system.
See “man capabilities” for the full list of available capabilities.
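You can see a process’s current capability sets (as hexadecimal bitmasks) without any special tools; this works for any process whose /proc entry you can read:

```shell
# Capability sets of the current shell, shown as bitmasks:
grep Cap /proc/$$/status
# CapEff (the effective set) is normally all zeroes for an ordinary user
# and a full mask for root. capsh (from libcap) can decode a mask, e.g.:
#   capsh --decode=0000000000002000   # bit 13 = cap_net_raw
```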
One of the limitations of the traditional Unix approach to privilege is that programs requiring special permissions, such as ping, need to run as root. In ping's case, this is so it can do raw network operations -- something normal users can't do. However, when ping runs as root (via setuid), a programming mistake (vulnerability) in ping could possibly allow normal users root access.
Capabilities help to solve the problem. Instead of running the program setuid root, we can give it the capability to do what it needs without access to everything else that root is allowed to do.
Do the following to check whether ping is currently setuid:
ls -la /usr/bin/ping
-rwsr-xr-x 1 root root 39992 Oct 25 2015 /usr/bin/ping
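The “s” in the owner-execute position is the setuid bit: the program runs with the file owner’s identity (root, here) regardless of who launches it. You can experiment with the bit itself on a harmless scratch file (setting it on your own non-root file grants no extra privilege):

```shell
# Create a scratch file and set the setuid bit (octal 4xxx):
touch /tmp/setuid-demo
chmod 4755 /tmp/setuid-demo
ls -l /tmp/setuid-demo        # mode displays as -rwsr-xr-x
stat -c '%a' /tmp/setuid-demo # prints 4755
```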
Ping your Windows VM (you might need to disable its firewall), to check that the program works (and that your network connection is ok) (Ctrl-C to stop):
Now make a copy of ping:
sudo cp /bin/ping /tmp/ping
As your normal user (not root), try pinging your Windows VM again:
It won’t work, since it doesn't have the required permissions:
student@linux-djla:~> /tmp/ping 192.168.201.XX
ping: icmp open socket: Operation not permitted
Make sure setcap is installed; check the man page:
Hint: if not, sudo zypper install libcap-progs
Now set ping to use the capability, by attaching the capability to the file:
sudo /sbin/setcap cap_net_raw=ep /tmp/ping
Check that the program now has the capability, by running:
sudo /sbin/getcap /tmp/ping
You should now be able to use the /tmp/ping program as any user, and it will be able to ping as before:
The advantage is that the program now cannot do all the other things root can do, so a vulnerability in ping wouldn't expose your entire system.
SEED Lab (Computer SEcurity EDucation):
“The learning objective of this lab is for students to gain first-hand experiences on capability, to appreciate the advantage of capabilities in access control, and to master how to use capability in to achieve the principle of least privileges. In addition, through this lab, by dissecting the capability mechanism in Linux, students will gain insights on how capability is implemented in operating systems. This lab is based on POSIX 1.e capability, which is implemented in recent versions of Linux kernel.”
Lab available under the GNU Free Documentation License:
This is a fairly advanced look at Linux capabilities from a programmer's point of view. It involves C programming, and hacking at libraries. Recommended if you are happy programming C.
These SEED labs should run in most Linux systems, if you want or need their exact setup you can download the SEED VM, or follow the instructions here: http://www.cis.syr.edu/~wedu/seed/lab_env.html
Another approach taken by some schemes is to simply specify a list of all the resources each application is authorised to access. This is the approach taken by AppArmor and TOMOYO, which are Linux kernel security features.
You will use AppArmor to confine a harmless text editor and a Trojan horse simulation.
First you will create a policy using AppArmor that will confine the text editor KWrite, and make sure KWrite can edit a file named “/home/student/Demo/mine.txt”.
On the openSUSE Leap 42.1 VM:
Start by creating this file. From the command line:
mkdir -p /home/student/Demo/
echo "this is a test" > /home/student/Demo/mine.txt
echo "this is a test" > /home/student/Demo/other.txt
Graphical Tools for managing AppArmor can be found in YaST.
Start AppArmor Configuration by searching the system using the Application Menu, as shown below.
(user: root, password: tiaspbiqe2r)
Starting AppArmor Configuration
Once you have opened the configuration manager, start KWrite. Then, click “Settings” in AppArmor Configuration. Select the entry for KWrite and then click “Toggle Mode” to change its mode from “enforce” to “complain”, as shown below.
Complaining mode stops enforcing AppArmor profile rules and instead logs the behaviour of the program. Using genprof or logprof then steps you through each resource that the program accessed, so that you can decide whether or not to allow the program to access those resources in the future.
Switch back to the KWrite program, and open the file you want it to be able to access (/home/student/Demo/mine.txt). Close KWrite afterwards.
From a terminal window, enter the following command:
sudo genprof kwrite
Press the “S” key in order to “Scan system log for AppArmor events”.
You will then step through each of the things the program did, and you need to decide whether it should be able to perform this action in the future.
Vetting rules for inclusion in an AppArmor profile
For example, KWrite likely tried to access “/etc/xdg/kdeglobals” with read access. This file is a global configuration file for KDE, the desktop environment. Unlike most other items you will vet, this has a severity level of “unknown”. The severity level is a warning that ranges from 1 (low) to 10 (high) based on what the process is attempting. It should be considered carefully, as some processes require access to resources that rank high in terms of severity in order to perform their basic function.
There are a number of options available when vetting a log entry. You can use the abstraction “abstractions/kde”, which will also grant other privileges needed by KDE applications; click allow to permit access to just this specific file; or click glob to generalise the name using wildcards. You can also edit the rule manually using edit.
“Opts” can be used to choose whether to only allow the access when the user running the program owns the file being accessed; audit can also be set so that every access to the file is logged.
In this case, add the abstraction because you know it is a KDE application.
For each access attempt, decide whether the program requires access to the resources in order to function as expected, and add the permission using abstractions or direct permissions.
If the program being confined starts another program, the options available will be different: you choose how the subsequent process should be confined. Your choices include whether the new program inherits the same rules, has its own profile, or is unconfined.
Once you have finished vetting each of the privileges, you will be given the option to view the profile changes, save them, and so on. Save the changes.
Have a look at the file that has been created, by browsing to the file, as shown below.
Browsing to an AppArmor profile file
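Profiles live under /etc/apparmor.d/ as plain text. Your generated KWrite profile will reflect the exact accesses you vetted, but it will look roughly like this sketch (file name and rules illustrative):

```
# /etc/apparmor.d/usr.bin.kwrite -- illustrative sketch, not the generated file
#include <tunables/global>

/usr/bin/kwrite {
  #include <abstractions/base>
  #include <abstractions/kde>

  # allow read/write only to the vetted file, when owned by the user
  owner /home/student/Demo/mine.txt rw,
}
```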
Return to the AppArmor Configuration in YaST. Your new KWrite profile should be in enforce mode, meaning it will actually enforce your rules. Using the same procedure that you used to switch to “complain” mode, re-enable “enforce” mode instead.
Now, test the profile you created.
Run KWrite and check that you can use it to access the “mine.txt” file and function as required. If you attempt to access other files, KWrite should be denied that access. For example, try using KWrite to open another file (/home/student/Demo/other.txt). This should be denied.
If your rules do not seem to be having an effect, check that your profile is in enforcing mode and not in complaining mode. You can toggle this in the YaST AppArmor control panel.
Logprof can be used to extend profiles based on denied access attempts. Note that you can still develop policy while in enforcing mode, and this is much safer, since the program is confined while you develop the policy. Profiles can be created entirely in enforce mode: start the Add Profile Wizard and click finish straight away to create an empty profile, ensure the mode is set to enforce, then run the program and use the Update Profile Wizard.
Care needs to be taken when deciding which rules to include, since any recorded behaviour that you accept will always be allowed in the future.
Is there any security benefit to confining KWrite?
What kinds of programs should be confined using AppArmor to best protect a system from attack?
Use AppArmor to confine a benign network service of your choice.
This is a more practical use of AppArmor, since it is hard to stop malicious activity if the program is already misbehaving while we create its policy.
Optional extra: confine a vulnerable version of some software, and exploit a vulnerability. You should find that AppArmor can prevent the vulnerability from causing harm to the system.
Android is a (mostly) free open source OS, by Google. It is based on the Linux kernel, but has a user space unlike any “GNU/Linux” distro. Apps are usually written in Java, compiled to .dex, and run in the Dalvik virtual machine, a modified Java Virtual Machine (JVM). Applications are confined to the permissions that developers request in a manifest file. For example:
<uses-permission android:name="android.permission.RECEIVE_SMS" />
Review the list of built-in permissions:
Apps can also define their own permissions, and can communicate and provide services to one another (this needs to be carefully designed).
The Linux kernel enforces the rules. Each application has its own Unix UID (starting at UID 10000), and is assigned a number of GIDs representing what the application is allowed to do.
A config file assigns the permission names to GIDs:
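On most Android builds this is an XML permissions file (commonly /system/etc/permissions/platform.xml, though the path may differ on this live image); the mapping entries look roughly like:

```
<permission name="android.permission.INTERNET" >
    <group gid="inet" />
</permission>
<permission name="android.permission.CAMERA" >
    <group gid="camera" />
</permission>
```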
Via the IMS system, load the LiveAndroid - Live Disk VM. This is Android for x86 platforms.
Switch to the console (Alt-F1), and use the package manager (pm) command to find out about the permissions on the system:
pm list permissions
Also note that the permissions are sorted into groups:
pm list permission-groups
Open a file and look at the way that permissions, such as “camera”, are mapped to GIDs (Linux groups):
List the files in /bin:
This should look familiar: this Android build includes lots of the common GNU/Linux commands:
ls -la /bin
Notice that lots of the files are symlinks to BusyBox. Written in C, BusyBox implements many of the standard Unix commands, optimised for embedded systems.
Using ls, list the files in /system/app/; this is where applications live.
Download an application manually:
Because you have installed via the console, the system assumes the user has accepted the permissions; the app is free to use whatever permissions it asked for in its manifest. Usually the user would be asked to confirm the privileges before installing.
Let's look at what that new application is actually allowed to do:
Alt-F7 to switch back to the GUI.
Click the Context Menu Key (next to the Windows key on your keyboard).
Go to Settings, Applications, Manage Applications, Lookout.
What permissions does it have? What kinds of damage could it do with them? Do you trust this app? Why/why not?
Exit back to the main page (Esc).
Click the '<' to display the menu, click “Dev Tools”, “Package Browser”, “com.lookout”
What is the Linux UID that the program will run as?
Launch Lookout via the Menu, then switch back to the console and run:
Look for the process “com.lookout” and check what UID it is running as (it should match the above).
Note the PID (first number in the ps line output), and run:
Look at the output for GID 3003: this is the hard-coded GID for the network access permission, so the process can indeed access the Internet.
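The general technique works on any Linux system: a process's supplementary groups are listed in /proc/&lt;pid&gt;/status. As a sketch (inspecting your own shell rather than the Lookout process):

```shell
# Supplementary group IDs of the current shell; on Android, seeing
# 3003 ("inet") in this list is what grants a process network access.
grep Groups /proc/$$/status
```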
We have looked at the Linux-y aspects of Android security.
Keep in mind that programming for Android involves requesting the privileges you require, and being careful not to accidentally grant those permissions to other apps. As an end user, always think about whether you trust an app to behave with the permissions you are granting it.
You now know about a variety of ways to protect yourself from applications. If these programs or services have vulnerabilities that give a remote attacker control of the program, there is only a limited amount of damage they can cause. This is an active area of research, with lots of potential for innovation.
At this point you have: