Messer

1.0 Network Security

1.1 Implement security configuration parameters on network devices and other technologies

Switches work at layer 2 of the OSI model; bridging is done in hardware using application-specific integrated circuits (ASICs); switches can communicate with each other directly; they forward traffic based on MAC address; many ports and high bandwidth, which creates a security challenge because of the sheer data volume; everyone on a switch shares the same subnet

Routers move up to layer 3; they allow communication between switches on different subnets; a switch with routing capability is sometimes called a layer 3 switch; routers often connect diverse networks such as LAN, WAN, copper, and fiber

Good idea to keep components separate—switches as switches, routers as routers, firewalls as firewalls

Firewalls are the first and last line of defense; they filter traffic by port number, operating at OSI layer 4; can encrypt traffic; can serve as proxy servers; separate internal from external traffic; firewalls sit on the edge of the network; sometimes handle NAT, routing, etc.; most firewalls are also layer 3 devices

Load Balancers distribute the load over many physical servers; they can coordinate many servers at once, trying to distribute traffic evenly among a cluster; distribution can be based on load, content, and other criteria; running many servers creates a security challenge of its own, since it means lots of patching, updates, etc.
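
A minimal round-robin sketch of the distribution idea (the server pool and addresses here are hypothetical; real load balancers also weigh server load, content, and health checks):

from itertools import cycle

servers = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # hypothetical backend pool
rotation = cycle(servers)

def route_request(request_id):
    # Hand each incoming request to the next server in the rotation
    server = next(rotation)
    print(f"request {request_id} -> {server}")

for i in range(6):
    route_request(i)  # requests 0-5 are spread evenly across the pool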

Proxies sit between the user and the internet; a proxy receives a request, stops it, sends the request on the user's behalf, receives the response, and passes it back to the user; proxies can also cache material to improve performance; some proxies are invisible/transparent, meaning client settings don't have to be modified, but this can cause issues with some applications

Unified Threat Management (UTM)/Web Security Gateways are good for low-budget operations; may include URL filters and content inspection; can allow or deny traffic based on content; may also include a spam filter, router/switch/firewall functions, IDS/IPS, and a bandwidth shaper; the trade-off is that one box may not do all these things well

VPN Concentrators are becoming increasingly common, even in home offices; a remote user creates an encrypted tunnel through the internet to the concentrator; the concentrator does a lot of work because it handles all the encryption, which requires lots of CPU power, so concentrators are often purpose-built hardware

NIDS/NIPS: Network Intrusion Detection/Prevention Systems watch traffic to determine whether it is malicious; the difference between the two is that an IDS only alerts when it sees bad traffic, while an IPS takes the additional step of blocking it, stopping attacks such as XSS and buffer overflows; tuning requires a balancing act between blocking bad traffic and allowing good traffic

Identification technologies vary: signature-based systems look for exact matches against known data sets to identify malicious traffic; anomaly-based systems look for odd things, such as bandwidth spikes or unusual user logins; behavior-based systems watch what users are doing, such as activity trending toward inappropriate actions like inputting commands into a password box; heuristics-based systems analyze traffic for changes in the data, using a degree of artificial intelligence

Protocol Analyzers (Sniffers): a device that is able to collect packets and gather information about them; AKA network analyzer; Wireshark is a popular open-source option

Spam Filters stop unwanted mail at the gateway; sometimes this is done on a mail relay, other setups use cloud-based technology; one method is the whitelist, a list of approved senders; SMTP standards checking blocks anything that doesn't follow RFC standards; reverse DNS blocks email where the sender's domain doesn't match its IP address; tarpitting deliberately slows down the server conversation, which deters spammers because they expect fast responses
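
As a sketch of the reverse DNS check, the following assumes the sender's IP and the domain claimed in the SMTP exchange have already been extracted (the values here are hypothetical documentation addresses):

import socket

def reverse_dns_matches(sender_ip, claimed_domain):
    # Look up the PTR record for the sending IP and compare it
    # to the domain the sender claims to be from
    try:
        ptr_name, _, _ = socket.gethostbyaddr(sender_ip)
    except socket.herror:
        return False  # no PTR record at all is itself a red flag
    return ptr_name.lower().endswith(claimed_domain.lower())

print(reverse_dns_matches("192.0.2.25", "example.com"))  # documentation IP, so False here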

Web Application Firewalls (WAFs) look at web conversations to see whether traffic is legitimate; they allow or deny based on expected input, and unexpected content is a common method of exploiting an application; this can prevent SQL injection; WAFs are very common under the Payment Card Industry Data Security Standard (PCI DSS), which calls for them.

Application-Aware Security Devices work at the OSI application layer, i.e. on all the data in a packet; this requires some advanced decodes, since every packet must be analyzed and categorized before a security decision is made; network-based firewalls control traffic based on the application; IPSs identify applications and apply application-specific vulnerability signatures to the traffic; host-based firewalls work with the OS to determine the application

1.2 Given a scenario, use secure network administration principles

Firewall Rules match on criteria such as IP address, port number, and time of day. Rules are evaluated along a logical path, usually top to bottom, so the most specific rules go at the top. Good firewalls follow Implicit Deny: if traffic isn't explicitly allowed, it is denied. Firewalls tend to default to Implicit Deny.
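
A minimal sketch of that evaluation order, using a hypothetical rule list (real firewalls match on many more fields):

RULES = [
    {"proto": "tcp", "port": 443, "action": "allow"},  # specific allows at the top
    {"proto": "tcp", "port": 80,  "action": "allow"},
    {"proto": "udp", "port": 53,  "action": "allow"},
]

def evaluate(packet):
    # Walk the list top to bottom; the first matching rule wins
    for rule in RULES:
        if rule["proto"] == packet["proto"] and rule["port"] == packet["port"]:
            return rule["action"]
    return "deny"  # implicit deny: anything not explicitly allowed is dropped

print(evaluate({"proto": "tcp", "port": 443}))  # allow
print(evaluate({"proto": "tcp", "port": 23}))   # deny -- no rule for telnet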

VLANs logically separate subnets. VLANs can't communicate with each other unless there is a router. This helps security because it isolates parts of the organization: HR can be separated from shipping, for example, with the router/firewall acting as gatekeeper. Grouping is usually based on function. VLANs are often integrated with network access control, allowing users to be shifted based on credentials.

VLAN management allows a single VLAN to span multiple switches, and membership can be assigned by switch port rather than by function. This is helpful for security because it lets you arrange the network logically regardless of the physical wiring.

Loop Protection prevents a switching loop from consuming bandwidth. A loop is created when two switches are connected by more than one path: they forward frames to one another endlessly, bandwidth is consumed, and the network suffers. IEEE 802.1D, the Spanning Tree Protocol, prevents loops. Nearly every switch/bridge includes this protection.

There are three port roles in Spanning Tree: root, designated, and blocked. The primary bridge is the root. Root ports relay information back to the root. Designated ports forward frames as normal. A port is blocked if it runs the risk of forwarding frames in a loop. Spanning Tree can reconfigure itself to reach bridges when a link fails, which also creates redundancy. For security, this helps reduce the risk of DoS.

Secure Router Configuration is very important. First, change default user names and passwords. Audit devices regularly. Choose encrypted protocols whenever possible: HTTPS instead of HTTP, SFTP instead of FTP. Location also matters: devices should be physically secured. Configuration backups should be made and stored separately.

Access Control Lists are used to control permissions applied to objects. ACLs appear in various places: the NTFS file system uses ACLs, and routers and firewalls can also be configured with them.

802.1X is port-based Network Access Control. It uses EAP and RADIUS. Before you can use a switch port, you must authenticate. This prevents rogue users from simply plugging into a free jack, protecting the physical interfaces. Unneeded ports should be disabled. 802.1X also lets you prevent duplicate MACs and thus deter spoofing.

802.1X is a relatively complicated, software-based protocol. A supplicant is a device that knows how to speak 802.1X. The supplicant talks to an authenticator (the switch), which in turn communicates with the authentication server. The authenticator sends out EAP requests and listens for replies.

Flood Guards are common on Intrusion Prevention Systems. They keep track of network traffic and are designed to prevent DoS and DDoS, since flood attacks can overwhelm servers and bring them down. A SYN attack repeatedly sends SYN requests but never completes the handshake; the server keeps sending SYN/ACKs and holding half-open connections until resources are consumed. Firewalls should handle SYN attacks.

A Ping Flood overwhelms a network with echo requests, consuming resources until the network fails. (Ordinary pings are also used to scan a network.) A Port Flood directs massive amounts of traffic at an open port to exhaust the service behind it.

Network Separation is important in security. Networks can be separated logically, physically, or both. When a VLAN is not enough, routers/switches should be physically isolated to prevent attacks; with no physical connection there is no mechanism for traffic to bridge across. VLANs, though, are cheaper than duplicate physical components.

Log Analysis is a critical part of network security. Every device creates logs, and you must examine the log files for information about threats. Software is often used to make log files intelligible to people. Logs are great for post-event analysis; real-time analysis is possible but harder. Automate as much of this as possible.
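
As a small automation sketch, this counts failed SSH logins per source IP; the log path and line format are assumptions, since formats vary by system:

import re
from collections import Counter

failures = Counter()
pattern = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")

with open("auth.log") as log:   # hypothetical syslog-style auth log
    for line in log:
        match = pattern.search(line)
        if match:
            failures[match.group(1)] += 1

for ip, count in failures.most_common(5):
    print(ip, count)  # the noisiest sources deserve a closer look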

1.3 Explain network design elements and components

A DMZ is the part of the network that faces the Internet. It protects the internal network by directing inbound traffic to the DMZ rather than the general network. The firewall that guards it can apply very granular control and remove certain threats. Hosts in the DMZ still face danger because they remain exposed.

Subnetting involves logical segmentation of a network. From a security perspective, access should be limited to those who need it, and sometimes one part of the network needs more security than the rest; subnetting allows such segmentation. Each subnet sits behind a barrier, usually a router or, better yet, a firewall, while legitimate access continues to flow. Departments can be compartmentalized.
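
Python's ipaddress module shows the mechanics; the address block here is hypothetical:

import ipaddress

network = ipaddress.ip_network("10.10.0.0/16")
# Carve the block into /24 subnets, e.g. one per department
subnets = list(network.subnets(new_prefix=24))
print(subnets[0])    # 10.10.0.0/24 -- could be HR
print(subnets[1])    # 10.10.1.0/24 -- could be shipping
print(len(subnets))  # 256 subnets fit inside the /16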

VLANs can also span physical locations; this is done with trunking, where one link carries multiple VLANs between switches. A computer on VLAN X in Location Y can sit on the same logical network as VLAN X devices in Location X, even though it is connected to the Location Y switch.

Network Address Translation is used to effectively increase the number of usable IP addresses: a single public IP can front many internal private IPs. A firewall or other NAT device rewrites the internal source IP to its own, so to the Internet it looks as if the firewall sent the request rather than a computer on the inside. Destination NAT (DNAT) works the other way, letting outside hosts reach a computer with a private IP: the NAT device is configured to forward traffic arriving at its public address to that internal host.
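
A minimal sketch of the source-translation table a NAT device maintains (all addresses are hypothetical documentation values):

PUBLIC_IP = "203.0.113.5"  # the firewall's outside address
nat_table = {}             # (inside_ip, inside_port) -> public source port
next_port = 40000

def translate_outbound(inside_ip, inside_port):
    # Rewrite an internal source address/port to the firewall's own,
    # remembering the mapping so replies can be translated back
    global next_port
    key = (inside_ip, inside_port)
    if key not in nat_table:
        nat_table[key] = next_port
        next_port += 1
    return PUBLIC_IP, nat_table[key]

print(translate_outbound("192.168.1.20", 51515))  # ('203.0.113.5', 40000)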

Remote Access is an important requirement given increasing mobile use. It relies on encryption technologies to protect the data stream. Additional technologies that increase security include one-time passwords and token generators. When using remote access, logs should always be analyzed; check for logins from multiple locations.

Telephony is now largely digital, including Voice over IP (VoIP) running over data lines. It is a difficult technology to secure: ordinary firewalls generally don't handle VoIP well, so you need protocol-specific applications, and an application gateway is often necessary.

Network Access Control is useful in large environments. It requires a large infrastructure such as authentication, redundancy, etc.

Virtualization provides huge cost savings. However, it becomes harder to protect information against some threats: you can no longer physically access servers, additional insight is required, and it's harder to view intra-server communication. Again, logs are very important.

Cloud Computing is a great way to deploy applications with flexibility. With Platform as a Service, you build on someone else's infrastructure; the security risk is that you have no control over that infrastructure. Software as a Service is software on demand with no local installation; the security concern is that your data is out there and you don't have physical access to it. Google Mail is an example of SaaS. Infrastructure as a Service lets you outsource your equipment while retaining more control over security, and it allows rapid scalability.

There are several different cloud models. An organization can have its own private cloud; generally "cloud computing" refers to public clouds; a mix of the two is called a hybrid cloud.

Defense in Depth is a good security strategy: never rely on one security solution; layer them. A firewall is placed at the edge of the network, and firewalls can also be placed between equipment. A DMZ can be implemented as well. Authentication is a security layer in itself, and multi-factor authentication is stronger than single-factor. An intrusion detection system observes traffic and alerts security staff to abnormalities. VPN access encrypts data as it moves through the internet.

1.4 Given a scenario, implement common protocols and services

IPv4 and IPv6 are common protocols.

IPv4 uses a 32-bit address, which limits it to about four billion addresses. The 32 bits are written as four binary octets.

IPv6 uses 128 bit addresses. A vast number of addresses are available. IPv6 has other advantages. IPv6 uses hexadecimal. Leading zeros are optional. A valid IPv6 address might be:

fe80:0000:0000:0000:5d18:0652:cffd:8f52

Groups of zeros can be omitted. Thus, this address can be written as:

fe80::5d18:652:cffd:8f52

When a run of zero groups is omitted, this is indicated by a double colon, which can appear only once in an address. Note also that the leading zero in 0652 was dropped.
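
Python's ipaddress module applies the same compression rules, which makes it a quick way to check your shorthand:

import ipaddress

addr = ipaddress.IPv6Address("fe80:0000:0000:0000:5d18:0652:cffd:8f52")
print(addr)           # fe80::5d18:652:cffd:8f52 -- compressed form
print(addr.exploded)  # fe80:0000:0000:0000:5d18:0652:cffd:8f52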

DNS will become very important with the proliferation of IPv6 because of how complicated the addresses are.

IPSec (Internet Protocol Security) is a very common protocol operating at layer 3. It's used between firewalls, for VPNs, between end stations, and in many other places. It's an open standard providing confidentiality and integrity through encryption and packet signing. It does not pass cleanly through NAT.

ICMP (Internet Control Message Protocol) sends management messages between systems: an echo request is sent and a reply comes back, which is useful in determining whether a system is reachable. However, it gives away a lot of information and is typically blocked by firewalls. ICMP can also redirect and reroute packets.

SNMP (Simple Network Management Protocol) is used to gather details and metrics about devices and the network by querying management variables. Lots of information can be gathered, ranging from bandwidth usage to server temperature. SNMPv1 is cleartext, which is a security threat.

SNMPv2 offers more data types, bulk transfers, but it’s still in the clear.

SNMPv3 provides integrity, authentication, and encryption. Access to SNMP should be very limited because it exposes lots of potentially sensitive information; only authorized management stations should be able to query it.

Telnet (Telecommunication Network) allows you to log in to devices remotely, providing console access. However, it is not encrypted, which is a security threat; it's not a good choice for production systems, so avoid it where possible. A Telnet client is included by default in Linux.

SSH (Secure Shell) gives the same front end and looks like Telnet, but its traffic is encrypted, making it the better security option.

File Transfer Protocol is a very common, very old protocol. It’s not encrypted, and this is a security threat. Avoid FTP.

FTP over SSL (FTPS or FTP-SSL) adds SSL encryption to FTP.

Secure Copy (SCP) uses SSH to transfer files.

SSH FTP (SFTP) adds file-management functionality: resuming interrupted transfers, directory listings, etc.

Domain Name Services take names and convert them to IP addresses. When you enter a URL, a DNS server is queried; the IP address is returned and the browser then uses it. DNS is a critical resource: if it's not secured, it can be abused for DoS, phishing, and redirection, so DNS servers tend to be locked down tightly. NSLookup is a tool for querying DNS servers.
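
The same name-to-address lookup nslookup performs can be done in a line of Python (the hostname is just an example):

import socket

print(socket.gethostbyname("www.example.com"))  # prints the resolved IPv4 address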

Hypertext Transfer Protocol Secure (HTTPS) gives browsers an extra layer of encryption through TLS/SSL. TLS is the updated Internet Engineering Task Force version of SSL, and it's what web browsers typically use today. It can be used in more than just web browsing.

Network Attached Storage (NAS) connects shared storage devices across a network. It provides file-level access.

Storage Area Networks (SAN) operates much differently. It provides block level access. Acts like a storage device. It is very efficient at reading and writing.

Both systems use a lot of bandwidth. Both may use isolated networks and high speed network technologies to improve performance.

The bandwidth demand has driven a technology called Fibre Channel (FC). It provides rates up to 16 Gbit/s and uses dedicated switches. A server (the initiator) needs an FC interface. FC commonly carries SCSI commands.

Fibre Channel over Ethernet (FCoE) needs no special hardware beyond Ethernet and usually integrates with an existing Fibre Channel infrastructure. It's not routable.

Fibre Channel over IP (FCIP) encapsulates Fibre Channel data into TCP/IP packets. This is helpful for geographically separated devices.

Internet Small Computer Systems Interface (iSCSI) is an open standard. It provides SAN-style, block-level access, making remote storage operate like a local disk.

NetBIOS (Network Basic I/O System) is technically an application programming interface. Used in many networks through the years.

NetBIOS Extended User Interface (NetBEUI) is not routable and was deprecated as of Windows XP.

NetBIOS over TCP/IP (NBT) is the current implementation of NetBIOS. Provides name services, datagram services, and session services.

Common network ports ride on two types of transport: connection-oriented and connectionless.

TCP is connection-oriented. It establishes a reliable connection so that data isn't lost: an acknowledgement (ACK) is sent when a message is received, and if an ACK isn't received, the data can be resent. TCP can also reorder packets so that they arrive intelligible.

User Datagram Protocol (UDP) is connectionless. It is called unreliable because it doesn’t verify if data is sent properly. There is no confirmation of data sent or its integrity. Despite this, it has good properties. VoIP is very time sensitive, so TCP (which is slower because it resends) wouldn’t work.

Non-ephemeral ports are permanent port numbers, usually on a server or service: port 80 is always port 80, no matter what. The client side, by contrast, doesn't care which port it uses; it picks a temporary source port for each session, which is why it's called ephemeral.

Port numbers run from 0 to 65,535. Most services use non-ephemeral port numbers, but a service can run on an alternative port as long as every computer involved agrees on it. Port numbers are for communication, not security: a non-standard port won't hide a service.

Common port numbers are called "well known." TCP port numbers and UDP port numbers are separate number spaces.
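
A short sketch makes the ephemeral/non-ephemeral split visible; the target host is just an example:

import socket

conn = socket.create_connection(("www.example.com", 80))  # fixed, well-known server port
print(conn.getpeername())  # (server_ip, 80) -- the non-ephemeral service port
print(conn.getsockname())  # (local_ip, 49xxx or similar) -- an ephemeral port picked by the OS
conn.close()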

The OSI Model is a way of thinking about the layers of a network system

Layer 1 Physical: cables, NICs, even hubs

Layer 2 Datalink: the creation of frames, MAC addresses, switches

Layer 3 Network: IP addresses, routers, packets

Layer 4 Transport: TCP segment, UDP datagram

Layer 5 Session: control protocols, tunneling protocols

Layer 6 Presentation: application encryption (SSL/TLS)

Layer 7 Application: what you interact with

1.5 Given a scenario, troubleshoot issues related to wireless security

One of the dangers of wireless is that anyone can potentially tap into it; it's essentially a radio. The solution is to encrypt the data so that only select users can decrypt it. Use strong encryption, preferably WPA2.

The original standard was Wired Equivalent Privacy (WEP), which used 40- or 104-bit keys. Unfortunately, WEP used static keys and can be cracked in minutes.

Wi-Fi Protected Access succeeded WEP. There are three levels: WPA, WPA2, and WPA2-Enterprise.

WPA used TKIP (Temporal Key Integrity Protocol), which changed the key from packet to packet, providing key rotation. WPA was always meant to be a stopgap measure.

WPA2 uses Advanced Encryption Standard with CCMP for encryption. It is much stronger and better in most cases.

Extensible Authentication Protocol is an authentication framework offering many ways to authenticate based on RFC standards.

Lightweight EAP (LEAP) was Cisco-proprietary. It uses passwords only, with no detailed certificate management, and is based on MS-CHAP, whose well-known weaknesses make it a problem.

Protected EAP (PEAP) was created by Cisco, Microsoft, and RSA Security. It encapsulates EAP inside a TLS tunnel.

Media Access Control (MAC) Filtering is a way to offer some security to a network by limiting those with access to devices with approved MAC addresses. This offers some security and a lot of additional work. However, it’s easy to spoof MAC addresses. MAC filtering is not very strong by itself but is good for defense in depth.

Service Set Identifier (SSID) management should also be considered. Change the SSID; don't leave the default name in place. Disable SSID broadcasting if possible. However, this is not especially secure in itself and should not be the only defense.

TKIP is the foundation of WPA. Every packet gets its own unique encryption key, created by mixing the secret root key with an initialization vector (IV). This helps prevent replay attacks. TKIP also implemented a 64-bit message integrity check to protect against tampering.

Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP) replaced TKIP when WPA2 was published. It uses AES with a 128-bit key and 128-bit block size. This requires additional computing resources, which was problematic at first but no longer is. WPA2 AES-CCMP provides data confidentiality, strong authentication, and access control.

It's a good idea to control antenna power: set it as low as possible without hampering communication, since there's no need to transmit to the parking lot. This calibration can be tricky, and be aware that a high-gain receiver can still pick up low-power signals. Location is important: keep the access point away from windows and try to place it in the center of the building, if possible. There may be some channel overlap; clients will pick the strongest signal. Change the antenna if necessary.

Captive Portals provide a way to authenticate to a network or agree to terms and conditions; they are often seen in hotels, airports, and so forth. A new user is prevented from using the network until authentication is provided or terms are agreed to, sometimes with a username/password. Once authentication is provided, access is granted. Captive portals typically have time-out features.

There are several different Antenna Types. One of the most common is the Omnidirectional antenna, which provides the same signal strength in all directions. This is a good choice for many environments, especially centralized locations. However, an omnidirectional antenna can't focus its signal.

To focus a signal, you need a Directional Antenna. A DA can have greater range because power is being focused in one direction. Antenna performance is measured in dB.

The Yagi Antenna is very directional, very high gain.

A Parabolic Antenna focuses the received signal onto a single point, making it useful for long-range, point-to-point links.

Site Surveys are necessary for good security. The wireless landscape must be examined using scanning tools, spectrum analyzers, and so forth. Identify access points: you might not control all of them, and there may be bleed-over from other buildings/floors/offices. Work around existing frequencies and plan for interference; things like microwave ovens can be problematic. Repeat site surveys from time to time to catch changes, new weaknesses, shifted signals, and so forth.

Open wireless access is a security threat. Anyone on the network can see your traffic. One way around this is a Virtual Private Network (VPN). The VPN tunnel will encrypt all of your data. VPN software will open a secure tunnel through the internet to connect to a VPN Concentrator. The VPN concentrator decrypts signals and sends them on to the corporate network.

2.0 Compliance and Operational Security

2.1 Explain the importance of risk related concepts

National Institute of Standards and Technology develops standards for various systems, including control systems. NIST 800-53 provides three classes for Controls.

Technical: access control, audit, authentication, etc.

Management: assessments, authorization, risk assessment, services acquisition

Operational: awareness and training, configuration management, contingency planning, personnel security

These Controls are subdivided into 18 families.

AV software and IPSs may run into false positives or false negatives.

A False Positive says that there is a problem when there isn’t one. There isn’t actually a threat, but the system thinks there is. With IDS/IPS, the problem is usually caused by signatures. Always update to the latest signatures. VirusTotal.com, owned by Google, allows comparison of suspect files.

A False Negative occurs when malware or other intrusions get through the system. The system fails to alert you because it doesn’t detect the problem.

Your security is only as strong as your policy: if your policy sucks, your security probably sucks. Security policies cover everything from mantraps to locking doors, visitors, and USB disabling. Should the USB port be disabled? How much access should visitors have?

Incident Response is a critical policy. It covers legal repercussions, uptime requirements, and who should be contacted.

Annualized Rate of Occurrence (ARO) covers likelihood. How often can something be expected to occur?

Single Loss Expectancy (SLE) examines the monetary losses, such as replacing equipment, lost wages, and so forth. In the case of something like a hurricane, SLE can be broad. In the case of a laptop, the loss can be more accurately calculated.

The Annual Loss Expectancy (ALE) is simply ARO x SLE.
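
A worked example with hypothetical numbers -- say 25 laptops lost per year at $1,000 each:

aro = 25         # Annualized Rate of Occurrence (losses per year)
sle = 1000       # Single Loss Expectancy (dollars per loss)
ale = aro * sle  # Annual Loss Expectancy
print(ale)       # 25000 -> budget about $25,000/year for laptop loss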

Risk calculation isn't just quantitative; losses can be qualitative as well. How much has data been degraded? Has confidentiality been lost? This encourages things like encryption.

Business Impact Analysis examines risk for every resource and every threat as well as likelihood. Is the threat likely? How easily can it be defeated? What happens if an attack or a loss occurs? How much will the organization be hurt?

The Quantitative assessment examines the loss in monetary terms. How much money will be lost during a period of downtime? How much will it cost to replace damaged or lost equipment? Sometimes this is easy to calculate for well-known events (how often did the server crash last year?) but much harder for rare, unpredictable ones.

Qualitative assessments ask about significance.

A Vulnerability is a flaw that poses a security risk. This could be as simple as a broken lock or as complicated as a flawed OS. A vulnerability doesn't necessarily mean damage has been done; sometimes OS vulnerabilities exist for months or years before exploits are found.

The Threat Vector is the avenue of attack. Common paths include emails, web browsers, wireless hotspots, USB drives. Some vectors are more susceptible than others.

The Threat Probability should be assessed. This involves identifying actual and potential threats regardless of probability. Identify as many vulnerabilities as possible. Once this has been established, you can calculate the likelihood of a successful exploit.

Organizations should take part in Risk Avoidance. Disable or terminate risky behavior.

Transference involves things like buying insurance to cover losses.

At times, businesses must accept some risk. This is not necessarily a bad thing. However, risks should be mitigated.

Cloud Computing comes with its own risks. Data in the cloud can potentially be accessed by anyone. This poses a security threat. Security is managed elsewhere, by third parties. You have no control over management of security. What is the risk of account lockout? The Cloud doesn’t guarantee availability.

Risks associated with Virtualization should be considered. It reduces hardware demands and maintenance, but a failed server can bring down multiple virtual servers. If the virtualization layer is compromised, all virtual machines are threatened. This is a serious threat. There is little control over VM to VM communication. Virtual firewalls aren’t yet solid. Another risk is the loss of separation of duties.

Mean time to restore (MTTR) is the length of time to restore a service.

Mean time to failure (MTTF) is the expected lifetime of a system.

Mean time between failures (MTBF) predicts the time between failures.

Recovery Time Objectives (RTO) asks how long it will take to return to normal operations

Recovery Point Objectives (RPO) asks how much data loss is acceptable?

How much availability, expressed as a percentage, is best? Critical operations should have maximal availability.

99.999% is a common goal, called Five-Nines. In a given year, Five-Nines allows only about 5 minutes of downtime.
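
The arithmetic behind that figure:

minutes_per_year = 365.25 * 24 * 60
for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime = (1 - availability) * minutes_per_year
    print(f"{availability:.3%} uptime allows {downtime:,.1f} minutes of downtime per year")
# five nines (0.99999) works out to about 5.3 minutes per year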

2.2 Examine the security implications of integrating data and services with third parties

Onboarding is the process of bringing in a new partner, namely a third-party organization or person. Legal agreements about data must be made first; after the legal requirements are met, address the technical considerations. You need to establish a secure connection, best done with an IPsec tunnel or, if there's a shared data center, physical segmentation. After that, create an authentication method; with a third party this can be done using RADIUS or TACACS+. Audit the controls to ensure everything is as it should be.

Offboarding happens when the agreement comes to an end. Both sides should have already established the process to part ways. Questions to be answered include how to separate systems and return them to their owners, and what happens to the data. Who gets to keep it? Is it shared? Importantly, when will the final connection be terminated?

Social media can be problematic in the workplace, especially with a third party involved. Sometimes social media management is handed to a third party, which can be risky: social media data raises privacy concerns, and some of it is very valuable. The tone is as important as the message. Consider who physically controls the accounts; a mistake on one system can cause problems everywhere.

Interoperability Agreements are legal agreements between your organization and a third party. This is the legal side of information technology. This might be concerned with payroll, web hosting, firewall management, etc. Try to get your own legal department to consult the other organization’s legal department.

A Memorandum of Understanding is sent from one side to the other and lays out what is important, what needs to be understood, and so forth. It is not binding.

A Service Level Agreement sets minimum terms for services, including things like uptime, response time, etc.

A Business Partners Agreement defines the role of each side. Often seen between manufacturers and resellers.

An Interconnection Security Agreement is used by the Federal Government to define security standards.

Privacy Considerations should not be taken for granted; this concerns both personal and professional privacy. Customer data often contains a privacy component. When working with a third party, agreements should already be in place.

The privacy of data is also paramount. What are the technical/logical controls? What are the physical controls? Who owns the data if a third party agreement ends?

Have Risk Awareness when working with a third party. Consider the security aspects and establish principles beforehand. Understand the risks. Security policies should be explained, examined, and understood. Balance risk and reward; agreements are in place to facilitate these things.

Data Ownership is also crucial with third parties. The ownership should be clear and unmistakable. There must be a good, binding agreement. This understanding should extend through the end of the agreement. It should also include the destruction of data.

Data backups with third parties are often overlooked, yet they contain critical information, potentially every bit of data, and may be kept offsite with that third party. There are different kinds of backups with different requirements: health records are handled differently from operational secrets. Manage backups carefully.

A weak Security Policy is the weakest link; bad policies put data at risk. Protect information moving between vendors, partners, and customers. Avoid data modification and preserve CIA. Security policies should be kept up to date, because the threat landscape is very dynamic.

Third party agreements highlight the need for Security Compliance. Mechanisms should be in place to ensure that everything is as it should be. Make sure due diligence is being practiced. In certain cases (HIPAA, PCI DSS, FISMA) compliance standards are required. Perform gap analysis. Resolve issues when possible. Consider cost and benefit. Perform periodic audits.

2.3 Given a scenario, implement appropriate risk mitigation strategies

Change Management involves upgrading software, modifying switch ports, and so forth. This should be done with appropriate planning. With large organizations, this occurs often. Don’t let changes be overlooked or ignored. Provide a roll-back policy, and good policies in general.

Incident Management can help deal with problems that inevitably arise. Who should be contacted, both internally and externally? Who’s responsible? What are the technical steps? Again, sound policies will help mitigate damages and provide for a minimum amount of down time.

Risk can also be mitigated with good User Rights and Permissions policies. Privileges should be limited to what is needed to accomplish a task. Limit access to those who need it. HR doesn’t need access to operations management, for example. Audit permissions and rights.

Security Audits are very important: they make sure policies are established and properly implemented, and they improve security by ensuring things are being done as they should be. Make sure routine functions are scheduled, and consider a tool for log analysis, which can help identify abnormalities.

Privilege Audits help limit unnecessary access.

Usage Audits ensure that the proper resources are being used (or denied) as policy requires.

Preventing Data Loss/Theft is very important. It’s easy to physically bar access. Data theft is a little harder to manage. USB drives allow large amounts of data to be carried around freely. There are internal and external threats.

Data Loss Prevention (DLP) ensures that private data is kept secure. Different data require different levels of protection, and it's important to keep attackers from gaining access. Loss of data is called Data Leakage; have systems in place to watch for it.

Data in Use can be protected with Endpoint DLP.

Data in Motion is data on a network.

Data in Storage is data at rest, on a server.

2.4 Given a scenario, implement basic forensic procedures

Forensics deals with the collection and protection of data. RFC 3227 provides a baseline for forensics. There are three basic steps: acquisition, analysis, and reporting. Forensics must be detail oriented; take lots of extensive notes.

Volatility of Data refers to how long data sticks around before it is no longer available; some media are much more volatile than others, so gather the most volatile data first. The most volatile information is in registers and caches, which change in nanoseconds. Next come routing tables, ARP caches, and kernel statistics; then temporary file systems; then disk; followed by remote logging and monitoring data, physical configuration, and finally archival media.

Capturing System Images is important for forensics. It’s a good idea to make a complete copy of a drive. An image should be a bit-for-bit, byte-for-byte copy. Get everything, every last one and zero. Software tools make this possible. There are also physical options like bootable devices. You can use a hardware write-blocker. Such devices are specifically designed for forensics. They cannot write data to a drive. Get backup tapes.

Network Traffic/Logs are also important in forensics. Traffic logs are common: firewalls log lots of information, while switches and routers don't typically log user-level detail, and IDS/IPS tend to log only unusual traffic and traffic patterns. A stream-to-disk system records all traffic on a network; these systems tend to require terabytes of storage.

Video is a good source of forensic evidence: a moving record of the event that captures volatile information and lets it be shared. Using an external camera lets you record without compromising a system. Don't forget security cameras, though their footage tends to be overwritten quickly, so preserve video content promptly; it can be very important.

It’s important to take note of the time offset. Windows uses a 64 bit time stamp. Unix uses a 32 bit time stamp. Different file systems store timestamps differently. FAT stores time in local time. NTFS uses GMT. Thus you need to know what time zone is being used. The Windows Registry stores this information. It has many different values.

Hashing is a way to ensure a file hasn't been changed; a hash is also called a digital fingerprint. Message Digest 5 (MD5) is a hashing algorithm that produces a 128-bit value, displayed as 32 hexadecimal digits. A Cyclic Redundancy Check (CRC-32) is a simpler check, commonly used for error detection, that produces a 32-bit value. If authenticity is a concern, hash whatever data is at issue.
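
A minimal fingerprinting sketch with Python's standard hashlib; the evidence file name is hypothetical. Hash the data when it is collected and again later -- matching digests mean the file is unchanged:

import hashlib

def fingerprint(path):
    # Return the MD5 hex digest of a file, reading in chunks
    # so large disk images don't have to fit in memory
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            md5.update(chunk)
    return md5.hexdigest()

print(fingerprint("evidence.img"))  # 32 hex digits = 128 bits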

Sometimes it's difficult to reproduce the state of a screen, even with a disk image. A way around this problem is to grab a screenshot. This is best done with an external camera because it doesn't alter the contents of the hard drive; it's also possible to save screenshots directly to a USB drive so the disk contents aren't changed.

It is very important to maintain the integrity of evidence. Control the evidence: everyone who handles it becomes part of the chain of custody. Avoid tampering, use hashes, label and catalog everything, and seal and store items.

Big Data Analysis is more than just large collections of data. Incidents can create enormous amounts of data in many log formats and data types. Use visualization and query tools to make sense of the mass of data; tag clouds can make it easier to visualize correlations.

2.5 Summarize common incident responses

What happens after an incident? It's important to have good Incident Response policies in place. One of the first steps is to reestablish a secure working system. Preserve evidence of the incident, learn from the situation, and work to prevent recurrence. NIST SP 800-61 offers security incident handling guidance.

Be sure you have good communication methods in place. Make sure you have necessary hardware and software in place to handle the response. Documentation is key. It’s a good idea to have installation media or system images in case it’s necessary to start from scratch.

It’s better to prevent than respond to an incident. Perform analyses, prioritize risks, etc. Harden operating systems, use patches, and monitor for changes. Make sure network security is good. Use hardware and software solutions. Use anti-malware software on the hosts.

Incident Identification is not easy: attacks are varied, with many different levels of detail and perception, and attacks are incoming all the time. Use defense in depth to defeat them. Incident precursors help identify the possibility of an incoming attack; analyze web server logs, for example, to spot use of a vulnerability scanner. When Microsoft announces vulnerabilities, a flood of attacks follows.

Incident Indicators include things like buffer overflow attempts, which can be identified by IDS/IPS. AV software identifies malware, removes it from the OS, then notifies the administrator. Look for indicators in network traffic, such as activity falling outside the norm.

Incident Notification should be executed when an attack is suspected or underway. Maintain a good contact list and contact the appropriate people. Maintain good channels for communication, including alternative methods.

Incident Mitigation aims at limiting the scope of an attack. Most incidents require some kind of containment. Make sure there are plans in place before the attack. Keep critical resources up as long as possible. Collect as much information as possible. Consider how much potential damage is being done. Prevent the destruction whenever possible. Preserve the evidence.

After an incident is over, document Lessons Learned. Document findings, performance. Have a meeting with affected staff. Discuss what happened. Do this as soon as possible. Answer questions like what exactly happened, did incident plans work, what should be done differently? Were there indicators?

Incident Reporting is critical. Incidents produce large volumes of data. Use a logbook. Digital cameras can create a snapshot or a movie of a device. Use an audio recorder and transcribe it later. It’s a good idea to have a central space to store incident reporting. It should never connect to an internal network.

Document the status of the incident. Include summary information, look for relationships between incidents. Record actions taken by all parties, chain of custody information, contact information, and comments from incident handlers.

Have in place good recovery policies. Get things back to normal. Eradicate bugs when possible. It may be necessary to recover the system. Rebuild it from scratch, to a known good state. Replace compromised files. Tighten the perimeter. It may be necessary to use phased reconstitution. It may take a long time to return to normal. Be prepared for this. Break up tasks into small pieces and focus on the easy, quick, important items first.

First Responders should have clear, well documented tasks. Don’t disturb the environment, or disturb it as little as possible. Have multiple people involved. Don’t damage evidence. Follow the escalation policy.

Protect against Data Breaches. If data is stolen, lots of damage can be done. Preserve confidentiality. After an attack, identify what data was taken. It may be necessary to notify users. Try to identify the attacker. If you’ve been breached, go into recovery mode. Secure systems, change passwords, update firewall rules.

2.6 Explain the importance of security related awareness and training

It’s important to make sure policies are read and understood. Training classes are a good way to ensure policies are understood. This should cover the important subjects, the common threats, and how to deal with them. Sometimes it’s a good idea for job-specific training to help users who have special security considerations. Role-based training can be employed.

Personally Identifiable Information pertains to privacy policies. It encompasses things like personal addresses, phone numbers, and so forth.

Data has different classifications, including confidential, internal use only, etc. It varies based on the data.

In order to run an efficient organization, data labeling is a good idea. This is also a good idea from a security perspective. Data tends to stick around. Labeling helps mitigate the risk of lost data and dumpster diving. Document and label everything.

Disposal can be a legal issue if data is sensitive, protected, or regulated. It may need to be stored off-site.

Compliance is critical; non-compliance can have serious consequences. The Sarbanes-Oxley Act (SOX) deals with accounting and investor protection. HIPAA covers medical data storage, transmission, and so on. The Gramm-Leach-Bliley Act of 1999 requires financial institutions to disclose how they handle private information.

HIPAA violations can result in serious consequences, including heavy fines and imprisonment.

SOX violations can cause your organization to be pulled from an exchange. Fines can run in the millions of dollars. CEOs and CFOs can even face jail time (haha!)

User Habits need to be controlled or guided to best practices. Sticky notes are a bad idea. People need to store data properly. Public folders are a bad idea. Sometimes a clean desk policy is a way to avoid accidental exposures of data. Users must be trained to prevent tailgating.

There are thousands of new viruses each week, and firewalls and similar tools are put in place to filter them out; anti-virus technology is nearing the end of its viability, and new technology is needed. Phishing is an effective means of social engineering, and users should be trained to guard against it. Spyware gathers information about a user, capturing keystrokes, browsing information, etc. Zero-day exploits require quick reaction: until the manufacturer can patch the software, you are at risk.

Social Networking and Peer-to-Peer Networking can be a security nightmare. A compromised system may spread a threat to other systems. This can quickly overwhelm a large organization.

User Training should be measured: how well did it go, and was it effective? A Formative Assessment monitors training to target areas that need work. A Summative Assessment is a high-stakes exam at the end. Most assessments are automated. A Learning Management System (LMS) can consolidate training into a single system and provide detailed feedback.

2.7 Compare and Contrast Physical Security and Environmental Controls

Computer systems require a good HVAC system, and lots of engineering is involved; it must be integrated with the fire system. The data center should have its own separate HVAC: the equipment puts off a lot of heat, overheating is dangerous, and a dedicated system lets the temperature be controlled carefully.

A Closed-Loop Recirculating system keeps filtered air moving through the data center. Positive Pressurization means that when a door is opened, air rushes out rather than in, keeping out dust and even smoke.

Humidity should be kept under control: too much moisture corrodes components, while too little encourages static discharge.

Fire Suppression should be ready and in place. Water typically isn't used because it damages equipment. The first step is detection, which may involve smoke, flame, or heat detectors. Dry-pipe systems are empty and take some time to fill with water; this is sometimes a good thing because it allows time to determine whether the alarm is false.

Computers put out a lot of Electromagnetic Interference. Metal shielding helps reduce EMI. EMI can cause noise in a system. Use shielding when appropriate.

Hot and Cold Aisles control airflow through the data center. Cool air is supplied to the cold aisles and pulled through the equipment by the servers' fans; hot exhaust exits into the hot aisles, where it rises and is removed.

Environmental Monitoring is needed to optimize infrastructure cooling. Some cooling systems may be superfluous, but without monitoring you'll never know. Many servers have internal temperature sensors. Increases in server load and the time of day/year can cause fluctuations in temperature.

Physical Security helps limit access to facilities or sensitive areas. Such controls may be physical or electronic/token-based. A mantrap has two doors: while one door is open, the other stays locked, which can hold a potential intruder in place.

CCTV can replace physical guards. Depth of field and focal length must be considered, as well as illumination requirements. Cameras can be networked together.

Technical Control Types are technological. They may include hardware devices or OS controls. Administrative Controls involve security policies, best practices.

Deterrent controls don't keep people out, but they discourage them; warning signs are an example. Preventive controls, such as door locks and guards, do keep people out. Detective controls, such as motion detectors, are a third type: passive systems that can record intrusion attempts, often working in concert with preventive controls. Compensating controls don't prevent attacks; they restore function by other means, such as backups, images, or a hot site.

2.8 Summarize risk management best practices

Business continuity is all about keeping the business going. There are many, many things that can hurt an organization or disrupt operations. It’s important to start by asking what can happen? Determine critical functions, define important objectives. What will be impacted? How much revenue will be lost, what are the legal requirements? How long will you be affected? What’s the bottom line? All these ideas and questions fall under the idea of a Business Impact Analysis.

Tangible Assets include buildings, furniture, data, and paper documents.

Intangible Assets include ideas, commercial reputation, and brand.

Procedures include supply chains and the like.

It's necessary to identify critical systems. Make a list of critical systems, list business processes, and associate tangible and intangible assets with them. Sometimes it may be necessary to bring in third parties because this step is very complicated.

A Single Point of Failure is something, usually a device, that can bring down a system. Try to have two of each important device: two routers, two servers, two cooling systems. If one system fails, the backup can take over. Redundancy helps mitigate the risk of single points of failure. But there’s no way to remove all SPFs.

Continuity of Operations ensures that, in the event of a critical event, a business can continue operating. Almost everything depends on IT, so it’s important to get systems up and running quickly. Document everything, or as much as possible.

Disaster Recovery begins with a plan. With no plan in place, recovery will be ad hoc and disorganized. A third party, such as a data facility, can be contracted with in order to facilitate rapid return to operations. A disaster is “called”. Someone has to set the recovery process in motion.

Disaster Planning/Testing is necessary to implement recovery plans. You don’t want the disaster itself to be the first run-through of a recovery plan. Create a likely scenario.

Afterwards, document everything from beginning to end. What went wrong? What was done right? What should be modified?

NIST Special Publication 800-34 provides guidelines for creating recovery plans.

There are in general seven steps for contingency planning.

1. Develop a contingency planning policy statement

2. Conduct the business impact analysis

3. Identify preventative systems to reduce disruptions

4. Create contingency strategies to recover quickly and effectively

5. Develop an information system contingency plan

6. Ensure plan testing, training, and exercises

7. Ensure plan maintenance

Succession Planning prevents gaps in leadership. Such gaps can be detrimental. Leadership gaps can be caused by retirement, death, or termination. A deputy can be in place to assume the duties of leadership.

Tabletop Exercises simulate disasters to gauge readiness. In tabletop exercises, leadership or recovery teams simply talk and analyze potential problems. Involve as many people as possible. It may be a good idea not to give participants advanced notice. Participants determine and “implement” the proper plans.

Redundancy and fault tolerance are necessary to maintain uptime. You want to prevent hardware failures that cause downtime. Likewise, software should work in conjunction with other pieces of software to keep the system running.

Redundancy doesn’t necessarily mean availability. You may have to manually set the backup system in motion. High availability indicates a system that is ready to go instantaneously or nearly so, and with no human intervention. Many components will be necessary to work together to provide high availability.

You'll need multiple power supplies, components, and the like. Employ RAID to protect data. Uninterruptible Power Supplies (UPS) ensure that computers can keep operating, at least for a time; this is good when power is sketchy.

Clustering servers can help prevent downtime: the servers coordinate and communicate to keep things running as a logical collection. In an active/active cluster, all servers are readily available and share a data pool through a storage area network. In an active/passive cluster, one (or several) servers sit idle; when a system goes down, the backup engages.

A cold spare is equipment that isn’t prepared/configured. It sits in a supply room or something like that. A warm spare is ready to go, say, in a rack. It can be updated, patched, etc. A hot spare is always up, powered on, and running.

Load balancing ensures that components are loaded at safe levels and that if one system goes down the others can pick up the slack.

2.9 Given a scenario, select the appropriate control to meet the goals of security

The fundamentals of security are Availability, Integrity, and Confidentiality.

Availability means that systems are up and running and data is available.

Integrity ensures that no changes are made to information. It must be the same.

Confidentiality is concerned with keeping data secret.

With confidentiality, only certain people should have access. Encryption helps keep information confidential. Confidentiality can also be provided with access controls or steganography, which involves concealing information in another piece of information.

With integrity, data is stored and transferred as intended, without modification. Hashing can be used to determine whether data has changed: if two hashes are the same, the data is the same. Digital signatures are mathematical schemes to verify the integrity of data, and they work in conjunction with certificates. Non-repudiation provides proof of integrity and of origin.

Availability means information is accessible. Data always needs to be at your fingertips. Redundancy can provide for availability. Fault tolerant systems can continue to run even if a failure occurs. Patching closes security loopholes and thus provides stability.

3.1 Explain types of malware

A virus is a type of malware designed to replicate itself. It doesn't need you to click anything, only to execute a program: running an infected program spreads the virus through file systems or even networks. Sometimes viruses don't display malicious behavior, and some are almost benign; others corrupt, delete, or encrypt files. It is imperative that AV software is kept up to date.

A boot sector virus can sit in the boot sector. When the OS starts, the system becomes infected. These are difficult to remove because the boot sector may not be available when the OS is running.

Program viruses are embedded in or attached to applications. They can execute when a program is executed.

Script viruses can be problematic as well. JavaScript is a common vector.

Macro viruses are common in Microsoft Office. A Macro is essentially a miniature program within an office program.

Multipartite viruses combine multiple infection methods, such as a boot-sector component plus a program component, working in conjunction. They require a lot of thought to build and to remove.

Worms are a special kind of virus: they don't need you and can spread themselves without help; all they need is for the power to be on. They typically exploit vulnerabilities, using the network as a transmission medium, so removing a vulnerability can stop a worm by taking away its vector. Sometimes worms have been so efficient at replication that they have shut down networks by consuming bandwidth.

Nachi was a special kind of worm that tried to patch your computer. It had good intentions. The Conficker Worm is a special kind of worm that is controlled from outside, able to adapt by means of the conficker controller. Conficker is adaptive.

Firewalls and IDS/IPS can mitigate virus threats from outside, but they can’t do much once systems are infected.

Adware causes an abundance of ads: your computer becomes one big advertisement so someone can gain ad revenue. Adware can degrade performance. It can be installed accidentally or alongside other pieces of software. Be careful what you use to remove adware, because fake anti-adware software may actually install more adware; use trusted brands like McAfee, Symantec, etc.

Spyware tries to gather information about you. This can result in identity theft, affiliate fraud. Peer-to-Peer is a common source of spyware. Browser monitoring captures surfing habits. Keyloggers send back every key stroke to the mothership.

Adware/spyware is so abundant because it can be very profitable. More views/clicks means more revenue. If a million machines are infected, that’s a lot of potential revenue. Spyware/keyloggers can facilitate theft directly from banks.

A Trojan Horse is a unique kind of malware that infects your system by sneaking in. It is often embedded in legitimate or seemingly legitimate data/apps/software. It’s not concerned with replication but rather to fool you into letting it in. One of the first things it does is disable antivirus software. Disabled AV could indicate a Trojan horse.

A Backdoor gets around normal authentication. Sometimes programs include backdoors, intentionally or otherwise, and bad software can ship with them. After installing software, tests can be performed to make sure computers remain secure.

Scamware, or scareware, is a relatively new kind of malware. It scares the user into paying a third party to remove malware that may not even exist.

A rootkit modifies core system files and tries to become part of the kernel. If it is embedded deeply in the OS, it is much harder to remove; it can be invisible to the OS and won't show up in Task Manager. Sometimes a rootkit hides by inserting itself into a legitimate file system location, which is effective in a directory with thousands of files. Rootkits are difficult to remove, and sometimes very specific uninstall methods are needed.

A Logic Bomb is a threat designed to go off once a condition is met, such as reaching a certain date or a user event. These are often left by people who have a grudge. They can be difficult to identify because they aren't viruses or typical malware per se.

A Botnet is a network of computers under the control of a third party. Botnets can be used for DDoS attacks or for adware revenue. Your computer (a zombie) is infected through a Trojan or a worm. Most of the time a botnet doesn't do much; it waits for directions to act. ZeuS, a botnet built for stealing money, was one of the most successful and is still active and in play.

Ransomware locks down a computer until you pay a ransom or otherwise concede something to the attackers. Ransom messages often claim to be from the police or some other authority. Generally this kind of ransomware can be removed by a professional. Newer forms of ransomware encrypt your data until you pay the attacker; the OS remains functional.

Malware is typically identified with signatures. Some malware detection is based on heuristics, a more complicated approach. Polymorphic malware takes advantage of the weaknesses of signatures: the virus slightly changes itself so that signature-based detection engines no longer recognize it. Sometimes the attack code is encrypted, each time with a different key. Heuristics can detect polymorphic viruses, but it is more resource intensive.
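To see why signature matching breaks down, here is a minimal sketch; the byte strings are made-up stand-ins for a mutating payload. Changing a single byte produces a completely different hash, so a signature keyed to the old version no longer matches.

    import hashlib

    # Two payloads differing by one byte, standing in for a polymorphic
    # virus that re-encrypts its body with a new key on every infection
    payload_a = b"\x90\x90MALICIOUS-CODE\x00"
    payload_b = b"\x90\x90MALICIOUS-CODE\x01"

    print(hashlib.sha256(payload_a).hexdigest())
    print(hashlib.sha256(payload_b).hexdigest())  # entirely different digest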

An armored virus tries to obscure its code to avoid detection, making AV software and researchers look elsewhere. When a researcher tries to deconstruct the code, the added obfuscation conceals the purpose of the virus and makes the analysis harder.

3.2 Summarize various types of attacks

A Man-in-the-Middle attack involves an attacker situating himself between two systems and observing or redirecting traffic. Instead of information being sent to your router, you unintentionally send it to the MITM. Often ARP Poisoning is the means to accomplish this kind of attack, because machines trust one another during the ARP process; ARP has no security. Computers build an ARP cache as they communicate with other computers. ARP attacks can only take place on local networks.

The attacker sends an unsolicited message claiming to be another computer on the network, and the recipient adjusts its cache accordingly. The attacker then poisons the other victim computer. Once the ARP poisoning has taken place, the two victims communicate through the attacker. Cain and Abel is an effective tool for carrying out ARP poisoning.
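A minimal sketch of the poisoning step using scapy, for lab use only; the IP and MAC addresses are hypothetical placeholders.

    # Requires root privileges and the scapy package
    from scapy.all import ARP, send

    victim_ip, victim_mac = "192.168.1.50", "aa:bb:cc:dd:ee:ff"
    gateway_ip = "192.168.1.1"

    # op=2 is an ARP reply ("is-at"); the victim caches our MAC as the gateway
    poison = ARP(op=2, psrc=gateway_ip, pdst=victim_ip, hwdst=victim_mac)
    send(poison, inter=2, loop=1)  # keep re-sending so the cache stays poisoned

The same frame is sent in the other direction, with psrc set to the victim's IP and the gateway as the destination, so traffic flows through the attacker both ways.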

A Denial of Service entails preventing a service from operating. It can involve overloading a server with too many requests for services, or it can exploit vulnerabilities. An attacker may want to bring down a system for competitive gain. DoS can also be a precursor to other attacks, such as DNS Spoofing. DoS can also be as simple as shutting off power.

A Distributed Denial of Service (DDoS) involves an "army" of computers brought in to take down a service by consuming all the bandwidth or resources of a server. This is another reason botnets are in use; the Coreflood botnet employed up to 2.3 million hosts. A DDoS can be an asymmetric attack, meaning the attacker has fewer resources than the victim.

A Smurf Attack gets all the computers on a network to take part in the attack. The attacker pretends to be the victim and pings the network's broadcast address, and every host floods the victim with ping responses. Today, routers are configured to prevent such attacks.

A Replay Attack takes advantage of information being transmitted over a network. The attacker needs access to the raw data, gained through physical access, ARP poisoning, or malware, and gathers the information needed to impersonate the user. With this information, the attacker can authenticate to the server with the stolen credentials. Salting the hash with a session-unique value (a nonce) can prevent replay attacks.

With Spoofing, the attacker pretends to be something else: a fake web server, fake DNS, etc. Caller ID and email can also be spoofed. DNS Poisoning involves modifying the DNS server, which requires some skill; another approach is to modify the client's hosts file. Redirection to a bogus site is called Pharming, which is sometimes combined with phishing. Anti-virus/anti-malware has trouble detecting this because everything seems legitimate.

Spam is unsolicited email. Attackers send such a volume of email that some people will respond and fall victim to scams. Spam is often sent from botnets, and malware delivered by spam can make the recipient's machine part of the botnet in turn.

Spim is Spam over IM. Links can be deadly. These attacks are very directed, very pointed, very tricky.

Spit is Spam over Internet Telephony. VoIP makes bulk calls cheap to place, though Spit remains relatively uncommon.

Spam can be tackled in a few ways. The first is a Whitelist, a list of approved senders; the challenge is that legitimate traffic might be blocked. Alternatively, a Blacklist blocks specific known senders, but legitimate senders can wind up blacklisted. Bayesian Filtering looks for words and phrases to identify probable spam; it's not perfect, but it helps. Spam filters are built into modern email systems.
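A toy sketch of the Bayesian idea, with made-up training counts: estimate a per-word spam probability from past mail, then combine the word scores for a new message (Graham-style naive Bayes).

    spam_counts = {"winner": 40, "free": 35, "meeting": 2}
    ham_counts  = {"winner": 1,  "free": 5,  "meeting": 50}

    def word_spamminess(word):
        s = spam_counts.get(word, 1)
        h = ham_counts.get(word, 1)
        return s / (s + h)

    def message_score(words):
        # Combine per-word probabilities into one spam score
        p_spam = p_ham = 1.0
        for w in words:
            p = word_spamminess(w)
            p_spam *= p
            p_ham *= (1 - p)
        return p_spam / (p_spam + p_ham)

    print(message_score(["free", "winner"]))  # near 1.0: likely spam
    print(message_score(["meeting"]))         # near 0.0: likely legitimate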

Phishing is a kind of social engineering that takes advantage of misdirection: it makes it seem that you are logging into a legitimate account when you are in fact logging into something malicious. Typically an email directs you to the site, often indicating that there is a problem and you need to log in. Never click a link in an email; instead, type in the address of the website the link claims to be.

Spear Phishing goes after targets that can be very profitable. Spear phishing that goes after a valuable target like a CEO is called Whaling.

Vishing is phishing done over the phone. It provides a direct connection to you, and a human voice can carry more authority than an email; it seems more trustworthy. Sometimes vishing schemes put a phone number in an email and ask you to call to correct an issue. Scammers also make things seem professional with interactive voice response systems and Caller ID spoofing, which add legitimacy.

A Xmas Tree Attack sends a specially crafted TCP/IP packet in which the URG, PSH, and FIN header flags are all set (lit up like a Christmas tree). The way a remote system responds gives you an idea what device received the message. Routers can be susceptible to Xmas Tree Attacks, but an IPS can easily identify them, and with Wireshark you can see the flags in the packet.
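A hedged sketch of the probe with scapy; the target address is a hypothetical placeholder. Per RFC 793, a closed port answers a FIN/PSH/URG packet with RST, while an open port stays silent, which is how a scanner like nmap -sX draws its conclusions.

    from scapy.all import IP, TCP, sr1

    pkt = IP(dst="192.168.1.10") / TCP(dport=80, flags="FPU")  # FIN+PSH+URG
    reply = sr1(pkt, timeout=2, verbose=0)
    print("closed (RST received)" if reply else "open or filtered (no reply)")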

Privilege Escalation is gaining higher privileges than you are actually allowed; more access means more capability on a system. The goal is usually to achieve administrator or root privileges. There is also horizontal privilege escalation, where one user gains access to another user's resources at the same privilege level.

Systems should be patched as soon as possible, and anti-virus might be able to prevent exploitation of a known vulnerability. Data Execution Prevention marks memory regions as non-executable so code can run only where it should, which helps prevent privilege escalation.

Insider threats can be especially dangerous. Make sure to use the principle of least privilege. Users should only have enough rights and permissions to do their job. Important documents should be locked away when appropriate. In high security environments, users should turn off their screens and hide papers. Insider threats can damage your reputation, disrupt critical systems, cause data leakage.

A Transitive Attack exploits trust: A trusts B, B trusts C, and therefore A trusts C, so an attacker can use C to reach A. You might not want this configuration, but sometimes it's hard to avoid or detect. Today systems are designed to trust no one and nothing; this can be cumbersome, but it is more secure. Attackers are also moving to attacking clients, which bad programming allows. Vulnerable programs include media players, office applications, and email clients. Because of all this, the OS should be kept up to date and patched.

Usernames and login info are generally sent in plaintext; passwords are stored and checked as hashes. A Brute Force Attack tries every combination of characters to determine a password, and given enough time it will always succeed. To slow online brute force, websites can lock out accounts or impose a delay between attempts. Attackers get around this by obtaining the hashes, then trying to find the password offline without locking out an account.

Different operating systems use different hashing methods. Because of this, hashes calculated on Linux won’t be the same as Windows.

A Dictionary Attack uses a selected word list: if you're going to brute force, start with common words and commonly used passwords. These attacks succeed against low-hanging fruit, i.e., users who don't use good passwords. A Hybrid Attack combines brute force and dictionary attacks, trying variations on dictionary words such as leet spelling.
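A minimal offline-cracking sketch of the dictionary/hybrid idea; the "stolen" hash here is simply sha256 of a leet-speak password, computed for the demonstration.

    import hashlib

    stolen = hashlib.sha256(b"p4ssw0rd").hexdigest()
    wordlist = ["password", "letmein", "dragon"]
    leet = str.maketrans({"a": "4", "e": "3", "o": "0"})

    for word in wordlist:
        # Try the word itself plus simple hybrid variations
        for candidate in (word, word.translate(leet), word.upper()):
            if hashlib.sha256(candidate.encode()).hexdigest() == stolen:
                print("cracked:", candidate)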

A Birthday Attack takes advantage of the birthday paradox: with 23 people in a room, there's a 50% chance two will share a birthday. A Hash Collision takes place when two different plaintexts create the same hash. This isn't supposed to happen, but it does from time to time, and a collision can allow an attacker to log in without possessing the real password. Very large hashes limit the chances of a collision.
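The arithmetic behind the paradox fits in a few lines:

    # Probability that at least two of n people share a birthday
    def shared_birthday(n, days=365):
        p_unique = 1.0
        for i in range(n):
            p_unique *= (days - i) / days
        return 1 - p_unique

    print(round(shared_birthday(23), 3))  # ~0.507, just over 50% at 23 people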

A Rainbow Table contains optimized, pre-built sets of hashes, used to reverse-engineer password hashes. It allows a marked increase in speed, especially with longer passwords. Rainbow attacks won't work against salted hashes, because the tables are precomputed without the salt.
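A sketch of why salts defeat rainbow tables: each password gets a random salt, so a precomputed table of unsalted hashes never matches the stored value. (In practice a slow KDF such as PBKDF2 would replace the plain SHA-256 here.)

    import hashlib, os

    password = b"hunter2"
    salt = os.urandom(16)                      # unique per user
    stored = salt + hashlib.sha256(salt + password).digest()

    # Verification recomputes the hash with the stored salt
    salt, digest = stored[:16], stored[16:]
    assert hashlib.sha256(salt + password).digest() == digest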

URL Hijacking is when you think you're going to one website but end up going to another. The goal is often simply ad revenue: misspelled domain names generate traffic from typos. The owner of such a domain may offer to sell the misspelled name, or redirect traffic to a competitor. At other times, URL Hijacking is part of a phishing scheme.

Typo-squatting (or brandjacking) takes advantage of poor spelling, say, professor versus professer; they look the same and it's a common mistake, so the user accidentally goes to the wrong site. Attackers can also use different top-level domains, e.g., .org versus .com.

Shoulder surfing is very easy to do, and people do it because you have access to important information; motives include curiosity, espionage, and competitive advantage. Surfing can be done from a distance with binoculars or telescopes, or even through a webcam inside an organization. One defense is awareness: if people can see the screen, there's a risk. Privacy filters can also be very effective. Keep your monitor out of sight if at all possible.

Dumpster diving is very straightforward, so it's important to dispose of sensitive material appropriately: hard drives should be wiped or destroyed, paper documents shredded. Recovered information can be used for impersonation, with real names and phone numbers. Timing matters, because organizations dispose of material on predictable schedules. The dumpster should be fenced off and locked up.

Tailgating occurs when an attacker uses someone else to gain entrance to a building, a very simple way to gain access without permission; the victim lets the attacker in, usually without knowing any better. Policies need to be in place for visitors, and there must be penalties for employees who allow tailgating. Badges should be visible. Ideally, only one person should pass through an entrance at a time; mantraps serve that purpose. Don't be afraid to ask someone whether they belong or who they are.

Impersonation is when a person pretends to be someone else, perhaps a person with authority. The attacker may throw around a lot of technical jargon to confuse and overwhelm the victim and convince them to provide login information. Or the attacker may come off as a buddy.

Never volunteer personal information. The help desk doesn’t need your personal details to access your system. Always verify before revealing information. Verification should be encouraged.

Hoaxes are messages that seem legitimate but are not. They often consume lots of resources: forwarded emails, printed documents, wasted time. The hoax might show up as an email, a tweet, etc. A hoax about a virus can waste as many resources as a real virus. Verification is thus very important. Remember: trust nothing from the internet without verifying it.

Whaling goes after the big fish, the high value targets. Executives have access to valuable information. Very precise information is necessary to gain trust. Whaling is difficult to identify with traditional security techniques. To the firewall/IPS, it seems legitimate. Training is key.

3.4 Explain types of wireless attacks

A Rogue Access Point is one added to your network without your knowledge. This can create a serious backdoor. It poses huge security concerns. Unfortunately, access points are easy to plug in. Another problem is that a computer connected to a network can create a network share. To catch/defeat RAPs, it may be necessary to survey the facilities.

Network access control (802.1X) requires authentication. It won't prevent access points from being physically connected, but nothing gets on the network without authenticating.

When an attacker sets up a rogue access point, it’s for exploitation. This kind of attack is called an evil twin attack. It is configured to match an existing AP. It will have the same SSID, the same security settings, etc. If it has the strongest signal, it will appear to be the legitimate AP. Users will connect to the evil twin. Encryption/VPN will mitigate the risks if you unintentionally connect to an evil twin.

Wireless systems are susceptible to interference. Most of the time the cause is benign, such as power cables, microwaves, phones. Intentional jamming is illegal in the US. Interference degrades service. Attackers can even create denial of service attacks if the interference is strong enough. This kind of attack can go hand in hand with an evil twin by knocking out the legitimate WAP. It may be necessary to use a spectrum analyzer.

Wardriving is simply driving around with Wi-Fi monitoring equipment, perhaps nothing more than a laptop. GPS logs where you go while the computer automatically detects networks. Warchalking involved marking buildings or vicinities to indicate whether a node is open, closed, WEP, or mesh; this is mostly archaic.

Bluejacking is the sending of unsolicited messages to another device via Bluetooth. No mobile carrier is required. Basically spam for Bluetooth. Bluetooth has a limited distance, which means bluejackers will have to be nearby. However, this is more of an annoyance than anything.

Bluesnarfing is malicious: it is used to take data from a device, especially phones, stealing contact lists, email, pictures, videos, etc. This is an old vulnerability and has been patched.

Initialization Vectors are part of the encryption process. If a key is reused, it can be cracked, so an IV adds extra bits that vary the key for every use; it should change every time. Even so, this system isn't entirely secure.

One of the weaknesses of 802.11 WEP was that the government initially limited key sizes to 64 bits, later 128 bits. This produced weak keys: in practice, the system used a 40-bit key plus a 24-bit initialization vector for a total of 64 bits.

One of the problems with WEP is that everyone shares the same key; the system didn't require different keys per user or key rotation. Another problem is that a 24-bit IV is small, allowing only about 16.7 million (2^24) IVs. Some IVs were weak and didn't provide good encryption. Attackers created software that injected packets to generate IVs and watched for changes in the ciphertext.
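Keystream (key plus IV) reuse is what made these attacks possible. The sketch below uses an arbitrary byte sequence as a stand-in keystream: XORing two ciphertexts produced with the same keystream cancels the keystream out and leaks the XOR of the plaintexts.

    p1 = b"attack at dawn!"
    p2 = b"retreat at noon"
    keystream = bytes(range(37, 37 + len(p1)))  # same key+IV -> same keystream

    c1 = bytes(a ^ k for a, k in zip(p1, keystream))
    c2 = bytes(b ^ k for b, k in zip(p2, keystream))

    leak = bytes(a ^ b for a, b in zip(c1, c2))
    assert leak == bytes(a ^ b for a, b in zip(p1, p2))  # keystream cancelled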

A lot of information can be gathered by capturing packets. Most information is sent in the clear; encryption adoption is slow. It's harder to capture wired traffic, but it is possible given physical access. "Sniffer" is actually a registered trademark; "network analyzer" is the generic term.

Wireless networks are terribly easy to monitor. Every device on a network can hear from every other device. Software is readily available to capture and monitor traffic.

Use WPA2 or at least WPA. These two standards are much stronger than WEP. Avoid WEP. Some access points don’t even offer WEP. Use encryption for authentication, or VPN.

Near Field Communication (NFC) builds on RFID, connecting a mobile device to a third-party device at very short range. Google Wallet and MasterCard are working on this for point of sale, and it can also be used to gain access to a system or room. NFC does present security concerns because it is wireless: communication can be captured, and DoS is a potential problem, as is a man-in-the-middle attack.

Replay Attacks are a little easier on wireless networks; it's very easy to capture data on a hotspot. WEP was weak in part because of replay attacks: cracking WEP requires a massive number of IVs, but once you have the IV packets you can crack it in seconds.

WEP’s weaknesses led to the creation of WPA. It is more secure than WEP, though it has some vulnerabilities with TKIP.

WPA2 uses CCMP-AES, with no known cryptographic vulnerabilities, so other methods have to be used to crack it. WPA-PSK (pre-shared key) means everyone uses the same 256-bit key; to gain access, an attacker has to get that key, so make it long with a mix of symbols and alphanumerics. WPA-802.1X is used by large organizations and authenticates users against an authentication server such as RADIUS; there are no practical attacks.

Wifi Protected Setup (WPS) allows easy setup of mobile devices in many different ways: a PIN on the device, pushing a physical button, NFC, even USB. Unfortunately, WPS was flawed from the beginning. The PIN is just seven digits plus a checksum, so there are only 10,000,000 combinations, and WPS validates each half of the PIN separately: the first half has only 10,000 possibilities and the second 1,000. All of them can be checked in about four hours. Disable WPS.
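The arithmetic makes the flaw plain:

    # Seven usable digits should mean 10**7 guesses, but validating each
    # half of the PIN separately collapses the search space
    print(10 ** 7)            # 10,000,000 theoretical combinations
    print(10 ** 4 + 10 ** 3)  # 11,000 attempts in the actual worst case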

3.5 Explain types of application attacks

Code injection is an attack in which you insert your own code into a data stream. These vulnerabilities usually exist because of weak, insecure code. Input should be validated and sanitized so that it cannot carry injected code.

Structured Query Language (SQL) is the most common database system, and it can be vulnerable to injection, which can be done through a website if the database is connected to the internet. SQL Injection attacks can be especially devastating.
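A minimal sketch of the classic flaw and its fix, using Python's built-in sqlite3 module and a hypothetical users table: string concatenation lets input rewrite the query, while a parameterized query keeps input as data.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (name TEXT, password TEXT)")
    db.execute("INSERT INTO users VALUES ('alice', 'secret')")

    evil = "' OR '1'='1"

    # Vulnerable: the injected quote closes the string and changes the logic
    rows = db.execute(
        "SELECT * FROM users WHERE password = '" + evil + "'").fetchall()
    print(rows)  # returns every user despite a bogus password

    # Safe: the driver binds the value; it can never become SQL syntax
    rows = db.execute(
        "SELECT * FROM users WHERE password = ?", (evil,)).fetchall()
    print(rows)  # returns nothing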

Extensible Markup Language (XML) is a set of rules for data transfer and storage. Because it's a standard format, it is vulnerable to bad code and injection; a good program will validate XML input.

Lightweight Directory Access Protocol (LDAP) is typically used for name services; LDAP queries built from user input can be subject to injection as well.

A web server should be a closed environment, with the back-end servers behind it. Directory Traversal (e.g., requesting paths containing ../ sequences) allows an attacker to reach back-end data and inject commands. Again, this is partly the result of bad programming.

Integer Overflows result when there is no fixed boundary check on integer input. If input falls outside the expected range, it might wrap around or cause problems in some other way; one exploit is to make the program compute a negative memory address, which can cause crashes.
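Python's own integers never overflow, but ctypes can demonstrate the fixed-width wraparound the text describes:

    import ctypes

    max_int32 = 2 ** 31 - 1            # 2147483647
    wrapped = ctypes.c_int32(max_int32 + 1)
    print(wrapped.value)               # -2147483648: one past the max goes negative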

A program allocates the amount of memory it needs. A buffer overflow pushes data past the buffer's limits into adjacent memory. This is a complicated exploit: done carelessly, it just crashes the system or leaves it unstable; done properly, it allows an attacker to insert and execute commands.

Zero-day attacks take place when an attacker discovers a vulnerability and exploits it before it is publicly known. Attackers and security specialists/code developers both look for unknown vulnerabilities; researchers notify the application's creator. Because these attacks target unknown weaknesses, no patches exist for them. The creator of the application will try to make a temporary fix while it works on a patch. Known vulnerabilities are catalogued at cve.mitre.org.

Browser cookies are little files stored on your hard drive containing website information like preferences and session data. They are not executable, so they don't pose a direct risk, but information can be gleaned about the user; the risk is more one of privacy than direct security. Session IDs are often stored in the cookie to maintain state across multiple browser sessions.

Header manipulation begins with information gathering, such as with Wireshark or Kismet. The attacker can modify headers to impersonate the victim, and it is possible to modify or add cookies to take over a session.

Session hijacking can be prevented with end-to-end encryption: use HTTPS all the way to the web server, which hides cookies and other information. This does put an additional load on the server, so not all web servers support HTTPS. If you can't encrypt end to end, at least try to encrypt end to somewhere, though this is not as strong.

If you're using Adobe Flash, you're using Locally Shared Objects, or Flash Cookies, which is where Flash stores data; they apply across all your browsers. Ideally an LSO can only be read by the domain that created it. Almost anything can be stored in a Flash cookie, such as browsing history and preferences, and some websites won't tell you that they're storing information this way.

Attachments can be especially problematic. Unlike other attacks, the user adds/downloads/executes the malware. Add-ons can be very helpful, but they can also be dangerous. They seem to come from a trusted source, but this doesn’t necessarily imply security.

Nothing happens on a computer unless something is executing in memory; exploits ultimately need code to run, whether in an application or in the underlying software.

3.6 Analyze a scenario and select the appropriate type of mitigation and deterrent techniques

Review and study your logs. They contain lots of information that can be helpful for planning and operation, and applications are often needed to help study them. There are several different kinds of logs, including event, security, audit, and access logs. Logging can be automated.

Event Logs record things like when a file is opened, when email is open, logins, etc. This can be helpful for determining when something happened after the fact. Event logs can be quite large, so you’ll need lots of storage.

Audit Logs are similar to event logs, but they only tell you when things change. Changes must be controlled, sometimes very tightly. Audit logs might record that normal, approved changes happened; they can also reveal unauthorized changes. In some ways, audit logs are more important than event logs.

Access Logs report when someone gains access to a service, such as files or connections. There can be many different formats depending on the source: web servers, VPN concentrators, and so on. Access logs are important because they help you limit attack vectors. A user who is continually trying to authenticate but keeps getting denied could indicate an attempted attack, and access logs can also show when an attacker gained access.

Security logs are very focused. They’re not usually helpful to the rest of the organization. Security logs can be generated by many different devices.

Operating systems should be hardened, ideally before they are connected to a network. They need to be patched, kept up to date, and tweaked; often many different modifications are needed to harden a computer. Maintenance is critical: keep the systems patched and schedule a preventative maintenance cycle.

Disable unnecessary services. This is sometimes easier said than done. What is unnecessary isn’t always clear. Every service has the potential for trouble. Some have worse vulnerabilities than others. Research the services, vulnerabilities, and so forth. Trial and error may be necessary. The fewer services the better, generally.

Protect management interfaces. Devices can hold critical information, so they should be password protected appropriately. Change default user names and passwords, and add additional security such as third-party authentication and additional logins.

Password protection is important. Weak passwords can be hard to protect against. Captured hashes can be brute forced at the attacker’s leisure. Passwords should be relatively complex so that a brute force attack would take a prohibitively long time.

Disable unnecessary accounts. These include guest, root, mail, etc.

The inside of a network is often insecure. A lot of time is spent protecting the outside of the network to the detriment of the inside. Physical ports should also be secured. Without protection, people can plug into a wall and potentially gain access to a system. MAC filtering can help limit access. Unfortunately, you have to record all the MAC addresses and filter appropriately. This is time consuming. It’s also possible to spoof MAC addresses.

802.1x requires authentication to a central server. Because authentication is required, connecting to a physical port won’t do much for the attacker. Disable unused ports. Periodic reviews are a must. Use visual audits to check for incursions.

The security posture is built on a baseline. This process requires a lot of planning and a lot of thought. Determine a minimum level of protection; baselines may depend on the type of system. You must also consider legal and regulatory requirements, such as HIPAA and SOX. Maintaining the baseline can be difficult as systems change, so monitor systems.

What happens if a device doesn’t match a baseline? Remediation is the process of restoring a system to the baseline. If possible, use a remediation network. This can prevent a system from threatening the normal network.

Devices can generate a lot of information, usually more than a person can handle. You need to decide what metrics to monitor. Decide and set thresholds. This way the system will only generate reports if select criteria are met. Determine intervals and reporting time frames. Identifying trends can be helpful. Visualization facilitates this, providing a high-level view. Focus on security metrics.

Detection and Prevention both have advantages and disadvantages. Cameras are great at detection but can't prevent. Guards can prevent but might not detect quite as well; they are more expensive but can have a greater impact. In some cases, it's better to have both.

An Intrusion Detection System can be used to discover threats coming into a system. IDS come in various standards and devices. The IDS usually sits off to the side, in parallel with the inbound/outbound connection, captures traffic, and analyzes it. If the IDS detects something suspicious, it reports accordingly.

An Intrusion Prevention System (IPS) sits in line (series) with the inbound/outbound. If an exploit or threat arises, the IPS will drop the packets. IPS must be able to keep up with the traffic.

3.7 Given a scenario, use appropriate tools and techniques to discover security threats and vulnerabilities

Vulnerabilities are discovered daily. The National Vulnerability Database keeps a record of vulnerabilities for systems and services. Devices should be scanned to determine whether they are susceptible. Be careful when doing this, because scanning itself can cause problems with systems. Be aware that scanners aren't perfect; networks can be fickle, as can devices.

Employ assessment tools. These might be passive tools, which do not interact with systems. They simply look and listen. Packet capturing is passive. To really test a system, use active tools. These include honeypots, port scanners, banner grabbing. Application vulnerability scanners focus on specific applications. OS scanners look at the whole OS.

Honeypots are designed to look enticing and legitimate. The attacker will investigate the system. Multiple honeypots can be joined to create a honeynet.

A Port Scanner, such as nmap, is used to identify open ports. It will also identify firewalls and packet filters. A connect scan uses the three-way handshake to determine which ports are open.

Banners can give away information about a system, such as the service name and version.
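A minimal connect-scan and banner grab using only the standard socket library; the target and port list are hypothetical, and you should only scan systems you're authorized to test.

    import socket

    target = "192.168.1.10"
    for port in (21, 22, 25, 80):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(1)
        try:
            s.connect((target, port))     # full three-way handshake
            try:
                banner = s.recv(1024).decode(errors="replace").strip()
            except socket.timeout:
                banner = "(no banner)"    # some services wait for the client
            print(f"{port}/tcp open  {banner}")
        except OSError:
            pass                          # closed, filtered, or unreachable
        finally:
            s.close()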

Security risks can lead to compromised systems. Threats can be active or accidental. Understanding the risks is critical. Risks include the physical sphere such as doors and locks. They can be technical, like misconfigured firewalls or unnecessary services.

It’s important to establish a baseline. This will help determine risks and it’s useful for comparison after an incident. Establish which metrics and resources to monitor. This can provide useful information. The baseline is constantly changing. Update as needed.

Review the code from time to time, if you build your own. The idea is to examine the code for vulnerabilities and confirm you are protected against attacks like SQL injection or XSS. A design review helps everyone understand what the code is supposed to do and what the application's purpose is; this knowledge helps during the code review. Reduce the attack surface.

Examine what components an application uses, such as database engines, web servers, browsers, etc. Although SQL is a standard, different applications may employ SQL differently.

3.8 Explain the proper use of penetration testing versus vulnerability scanning

Penetration Testing (Pentest) simulates an attack. This is not the same as vulnerability scanning, which is more passive. Penetration testing is active. It tries to exploit vulnerabilities. Often times third parties are brought in to perform testing. Try to bypass controls, gain physical access, get around the firewall. Use the same tools that attackers use during penetration testing. In short, get into the mindset of the attacker and employ his methods.

During penetration testing, there is a limit to what should be done, however. Without planning and consideration, you can actually cause disruptions to the systems such as buffer overflows.

Penetration testing can be approached from the different directions, white box, gray box, and black box. A black box test is blind, a white box test has full knowledge, and a gray box test has some knowledge of the systems.

Keep up to date on the latest threats. NIST provides a vulnerability database. Perform regular vulnerability scans. Even watching the news can be helpful.

Vulnerability scanning is generally passive compared with penetration testing: a port scan probes for weaknesses but makes no attempt to exploit them, only to determine whether a weakness exists. Scans should be run from the inside as well as the outside, to help reduce internal threats. Scanners are very powerful. Use non-credentialed scans too; a credentialed scan can mimic an inside threat. Scanners can indicate missing firewalls, insecurely configured systems, and so forth.

4.1 Explain the importance of application security controls and techniques

Fuzzing is a vulnerability testing technique, also called fault injection, syntax testing, or negative testing. Random data is inserted into a field to look for server errors, application crashes, etc. This information helps determine aspects of the system and what its weaknesses might be. It's a very old technique, and very resource intensive, requiring many iterations; fuzzing engines try high-probability tests before less likely ones.
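A toy fuzzing loop; parse_record() is a hypothetical stand-in for whatever code is under test.

    import os, random

    def parse_record(data: bytes):
        # Stand-in target: rejects some inputs, as real parsers do
        if data and data[0] == 0xFF:
            raise ValueError("malformed header")
        return len(data)

    crashes = []
    for _ in range(10_000):
        blob = os.urandom(random.randint(0, 64))   # random input each round
        try:
            parse_record(blob)
        except Exception as exc:
            crashes.append((blob, exc))            # save inputs that break it

    print(f"{len(crashes)} crashing inputs found")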

Coding is a balance between time and quality, and security is unfortunately often a secondary concern. Quality assurance should test for security, though vulnerabilities will always exist. Input Validation is very important for secure coding: acceptable data should fall only within a selected range. Zip codes shouldn't have letters, and names shouldn't have special characters. You can use fuzzing to check input validation.

Input validation should also check for cross-site scripting and cross-site request forgery. Authentication should be encrypted and protected.
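A sketch of allow-list validation: accept only input matching a known-good pattern rather than trying to enumerate every bad character.

    import re

    ZIP_RE  = re.compile(r"^\d{5}(-\d{4})?$")        # US ZIP or ZIP+4
    NAME_RE = re.compile(r"^[A-Za-z][A-Za-z' -]{0,49}$")

    def valid_zip(value: str) -> bool:
        return bool(ZIP_RE.match(value))

    def valid_name(value: str) -> bool:
        return bool(NAME_RE.match(value))

    print(valid_zip("30301"), valid_zip("303O1"))         # True False
    print(valid_name("O'Brien"), valid_name("<script>"))  # True False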

If an error occurs, show the user a generic message; specific messages can exist internally where appropriate. Don't include specifics in what users see, because attackers can use those specifics to gather information about the application. This falls under error and exception handling.
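A sketch of that advice: log the specifics privately and hand the user a generic message with a reference ID. process() is a hypothetical stand-in for real work.

    import logging, uuid

    logging.basicConfig(level=logging.ERROR)
    log = logging.getLogger("app")

    def process(data):
        return 1 / int(data)              # stand-in work that can fail

    def handle_request(data):
        try:
            return process(data)
        except Exception:
            incident = uuid.uuid4().hex[:8]   # ties the log entry to the reply
            log.exception("request failed, incident %s", incident)
            return f"Something went wrong (ref {incident}). Please try again."

    print(handle_request("0"))            # user sees no stack trace or internals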

Determine a security baseline for every application. Monitor the baseline over time. Perform scheduled scans. The baseline will change constantly because of patches, service packs, etc. After major updates, reevaluate the baseline and confirm the system is secure.

These principles apply to the operating system as well. Make sure it is patched and kept up to date; this goes a long way toward keeping systems secure. Update application software, especially network-based security. Again, restrict access using least privilege, and limit an application's ability to change, move, or delete files.

Patch applications as well. Patches might include new features or upgrades, bug fixes, or fixes for security holes. Windows Update provides client-initiated updates and can update Windows systems automatically. Centralized patch management is provided by Windows Server Update Services (WSUS), and global policies can automate the process. The Apple menu's Software Update performs the same function, and Linux has many different versions with their own mechanisms. Keeping systems up to date can be difficult: patches and updates might adversely affect systems, or they might be completely unnecessary for yours. Check updates on computers set aside for testing.

SQL Databases are the most common. They allow for centralization. Its format allows for easy retrieval. When storing lots of information, this can be very helpful. The information is stored in a relational database management system. Data is stored in a table, each of which has rows and columns. You can correlate and compare information across databases. One of the advantages of SQL is that it is ubiquitous.

NoSQL (Not Only SQL) is useful for non-relational, non-associated data. This is a good choice for large datasets. NoSQL has to be able to scale to massive datasets that have no apparent structure. The difficulty is in finding relationships.

One way to attack a system is to provide it with unexpected information. With server-side validation, the server checks input before acting on it; this protects against malicious users. Client-side validation checks input before it's sent to the server, and only sound queries are transmitted, though this could filter legitimate information. Client-side validation also provides better performance. Use both kinds of validation.

4.2 Summarize mobile security concepts and technologies

Mobile Device Management is increasingly important with the proliferation of mobile devices. MDM is centralized management, usually running on a server, that can communicate with devices on or outside the network; it's specifically designed for mobile devices. You can set policies such as which apps are acceptable, disable the camera, or control the device remotely, and you can impose access controls with lock screens and PINs.

Encrypting a device is sometimes a good idea. This will protect against unwanted, third party access to data. Something to consider when using encryption is the strength of encryption. It might not be a good idea to encrypt non-sensitive information with the strongest encryption. It’s too resource intensive.

Mobile devices can often support remote wiping. This will sanitize the device remotely. It can be done via the MDM or a browser front-end. Have policies/plans in place for the possibility of having to do a remote wipe. iPhones can be setup to erase all data after 10 failed attempts to unlock it. Define a lockout policy.

GPS tracking is built into most modern phones. This can be very helpful, but it also poses some risks. It can allow you to be tracked.

MDMs can control nearly every aspect of a device. They may allow only approved applications to load and can remove unapproved ones. Advanced managers can create "partitions" separating company functions and data from personal use.

4.3 Given a scenario, select the appropriate solution to establish host security.

The operating system is a good starting point for security; it is the foundation of device security. There are hundreds of settings, of which you'll only change some, such as firewall settings and disabling guest accounts.

User Rights allow you to change who has access to what, and to what degree.

Log Settings determine what gets logged. Logs can also be forwarded.

File Permissions can be locked down, such as operating system files.

Account Policies determine what a user can and can’t do.

Patch management for applications is as important as operating system patches. Patches increase stability and security. Service Packs install many patches at once, which helps bring a computer up to date quickly rather than downloading many individual patches. Windows offers incremental, monthly updates, and critical updates may be released whenever the need arises; an out-of-band update is one that falls outside the normal update cycle. WSUS helps simplify the update process for lots of clients.

Updates and patches may introduce other problems. They can adversely affect other systems. You may have to pick and choose what patches to install. WSUS helps this process by controlling the patching system from a central place.

Any application can pose a threat. Whitelisting and Blacklisting help control the execution of applications. Whitelisting says that nothing can run unless it is approved. This can be very restrictive and it requires lots of maintenance. However, it is secure because nothing unapproved can be installed. Blacklisting provides a list of applications that can’t be installed. Everything else is permitted. This requires less administrative overhead, but it is less secure.

Hashes are used to identify programs. Program names can be changed, but hashes will reveal whether a program is actually a known virus; conversely, a changed hash shows that a program has been modified. Publishers often sign executables with certificates and digital signatures. You can configure systems to run executables only from a given path, and this can also be applied by network zone.
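A sketch of hash-based identification with hashlib: the digest identifies the content no matter what the file is named. The approved digest below is a hypothetical placeholder.

    import hashlib

    def file_digest(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    # Demo file; renaming it would not change its digest
    with open("demo.bin", "wb") as f:
        f.write(b"some executable bytes")
    print(file_digest("demo.bin"))

    # An allow-list maps known-good digests to applications
    approved = {"3f2a...placeholder...": "calc.exe"}
    print(file_digest("demo.bin") in approved)   # block anything unapproved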

A Trusted OS is one designed from a security standpoint, evaluated against an international standard known as the Common Criteria. The US government is a common user. A tested system is given an Evaluation Assurance Level (EAL) from 1 to 7; EAL4 is the most widely accepted minimum. Private organizations also rely on trusted systems.

Host-based firewalls are an excellent way to protect your system. Also called personal firewalls, they're often included by default in an operating system. Host-based firewalls are stateful, meaning they keep track of connections, and they can also manage individual applications. Windows Firewall can filter by port number and by application, so security can span multiple applications. Endpoint products today can include host-based IPS, examining traffic based on signatures.

A cable lock is temporary security, used to lock a device in place for a time. They’re not necessarily secure, because locks can be picked and cables can be cut. Safes can be used to securely lock up important hardware and media. This is a good idea for things like encryption keys. Use a safe that has environmental protection.

Data center hardware is managed by different groups. Responsibility lies with the owner. Use locking cabinets.

Security baselining is necessary to understand the particulars of a system. What resources are being used, what is network connectivity? Tighten down the operating system. Use a host based firewall and limit application execution. Limit access to certain folders. External security devices are also a good idea.

Applications may not be centralized. Parts may be on a data center here, and other parts over there. Other applications might be running on the same hardware. This can complicate establishing security policies. Redundancy might be required.

Virtualization allows one physical device to run many different operating systems. Each OS will use CPU, memory, network, etc. Usage is allocated when the OS is configured. On the enterprise level, there are servers that host virtual machines.

The hypervisor manages the virtual platform and guest operating systems. You may need a CPU that supports virtualization; without hardware support, the system may still run, just less efficiently. The hypervisor shares resources between physical and virtual systems and provides security separation between operating systems.

Every guest is self contained, stored in a single file. This allows for high portability. You can take snapshots, allowing you to roll back a system if something adverse happens. It’s also easy to roll back to an earlier date and time. This is helpful for historical analysis.

Big data centers have elasticity, meaning they can roll out more capacity when a system is under load. Systems can be scaled down when demand is low. The process can be automated. Virtualization is great for creating a custom host for security testing and software testing. It can serve as a sandbox.

4.4 Implement the appropriate controls to ensure data security

Data controls implemented in the cloud are similar to controls on your own systems. Make sure security controls such as rights and permissions are in place, and consider encryption for data at rest; encryption keeps data private even if it is leaked or stolen.

The Storage Area Network consolidates data in one place, so the SAN should be physically secure; protect the data center. Consider self-encrypting drives, which automatically encrypt data as it is written, and consider how data will be protected if it is taken from the protected area. Remember that encryption can consume considerable resources depending on its strength.

Big Data consists of massive datasets. Normal access controls may not work. Apply a need to know principle. If you’re in accounting, you can access accounting information in a normal database. With Big Data, it’s not clear what is what. Consider removing PII.

A problem with big data is that there can be lots of queries and a massive amount of data coming and going. It may be best just to store queries and check them later. Data Loss Prevention techniques could come in handy. These could prevent transferring PII or other sensitive information.

With full disk encryption, every bit and byte is encrypted. Nothing is omitted. Even the OS is encrypted. This is great for mobile devices, such as phones and laptops. BitLocker and FileVault are two built in encryption tools, from Windows and Mac respectively. PGP has a full drive encryption. As with all encryption, key management is critical. If you lose the key, you lose the data.

With a massive amount of data, it can be difficult to encrypt all of it. Database Management Systems might require or offer different encryption options. You can encrypt individual columns and fields, which keeps data secure while preserving quick access, but don't encrypt key fields if the database is relational.
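A sketch of field-level encryption using the third-party cryptography package (Fernet): only the sensitive field is encrypted, and the key must be managed outside the database.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()     # store in a key vault, never in the DB
    f = Fernet(key)

    # Encrypt only the sensitive column; the key field stays searchable
    row = {"customer_id": 1001, "ssn": f.encrypt(b"078-05-1120")}
    print(f.decrypt(row["ssn"]))    # b'078-05-1120'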

Individual encryption is also an option. Encryption software might be included in the OS or provided by a third party. With this technique, you have to determine what information is worth encrypting.

Removable media encryption is a significant concern. Administrators may require that data stored on a removable device be encrypted. Again, key management is critical: lose your key, lose your data. Keys can be escrowed automatically so they can be restored if a user loses them or moves on. Alternatively, an administrator might forbid removable devices entirely.

The Trusted Platform Module is a standardized cryptographic processor built onto a motherboard. The TPM includes a random number generator and a key generator. Unique keys burned into the hardware are called persistent memory; the TPM also provides versatile memory, storing things such as keys and hardware configuration. Information stored on the TPM is password protected, with protections against brute force/dictionary attacks.

High-end cryptography uses Hardware Security Modules. These provide key backup and cryptographic accelerators that offload encryption/decryption from other CPUs. HSMs are usually clustered and provided with redundant power.

Many USB flash drives now include hardware-based encryption, typically AES-256. They can also be used as secure tokens, and they often include remote management.

Hard drive encryption is invisible to the operating system. It is hardware based and can be integrated with a USB key. Cleartext goes in, cipher comes out. You can also configure this so that two USB keys are required to decrypt information.

Data in transit (data in motion) is data actively moving across the network, traveling through many different devices such as switches and routers. TLS and IPsec are used to encrypt it as it travels.

Data at rest is data that is stored, on a drive or in a SAN, etc. Sensitive data should be encrypted appropriately. Access control lists should be applied, as always, so that only authorized users can access data.

Data in use is found in the memory of a device: RAM, CPU registers, and/or cache. Data in use is almost always decrypted, because it can't be used while encrypted. This creates a window of threat: attackers could potentially take information directly from RAM, defeating encryption. This attack was employed against Target, where point-of-sale RAM was scraped.

An access control list is a set of permissions assigned to an object. They are used in various file systems, operating systems, network devices. ACLs can be quite complicated, allowing for very granular control.

Data policies include things like data wiping, retiring hardware, and so forth. Data policies can also be based around onboarding/offboarding. Other data policies include how to dispose of data. There can be legal or regulatory restrictions. Only store PII as long as necessary.

4.5 Compare and contrast alternative methods to mitigate security risks in static environments

Static environments can’t change much. This is good from a security perspective because changes to a system can degrade security. Embedded systems are static. They are designed very specifically. Often times these systems will have firmware updates, but nothing like what you see with PCs.

Supervisory Control and Data Acquisition (SCADA) technology is used on very large-scale industrial systems. Because of how critical SCADA systems can be, there is a huge emphasis on security: typically they aren't connected to the internet, they are protected with firewalls, and access is restricted.

HVAC can also run with static, embedded systems.

Printers, scanners, and faxes are often incorporated into a single machine sometimes called a multifunction device. They have their own memory and software, and they can store print jobs/faxes in the memory. An attacker may be able to print from memory. Logs are stored on the device.

A lot of time is spent securing computing devices. Static devices are those where the OS and the hardware are closely related, such as iPhones; if you need updates, you have to go to the manufacturer. Closed environments like curated app stores can provide security. Android is more open than iOS and is more susceptible to malware.

Mainframes use proprietary operating systems. They are still used for storing and processing bulk data, they are extremely reliable, and they can run for years on end. It's hard to attack mainframes from the outside because of their unique operating systems, so attacks tend to come from the inside, making the insider the main threat.

Static environments present unique risks. Some things don’t change, though. Employ defense in depth. The more layers of security the better. Security controls should be diverse. Use different firewalls, IPS, etc. Avoid single points of failure.

Segment the network into logical pieces, such as DMZ, internet, storage, etc. This can limit the impact of an attack. Such segmentation might be physical or logical.

TCP Wrappers are a very early form of application control, placing protection between the network and the service; ACLs were used to filter access to services. Today's firewalls filter by application and can provide very detailed application control, and they can also protect specialized applications such as SCADA.

Embedded systems aren’t updated often. Some cannot be updated at all but must be removed instead.

5.1 Compare and contrast the function and purpose of authentication services

Remote access administration is a critical part of modern networks. It allows for authentication, authorization, and accounting.

Remote Authentication Dial-In User Service (RADIUS) is a remote authentication service. It uses UDP port 1812 by default (1813 for accounting).

Terminal Access Controller Access-Control System (TACACS) is a remote access protocol created to control access to ARPANET. Cisco created XTACACS for additional support and accounting. The most commonly used version today is TACACS+, which is not backwards compatible; it uses TCP port 49.

Kerberos is a network authentication protocol. It allows you to authenticate once and then be trusted by the system. There’s no need to re-authenticate. It also allows you to be sure that you’re talking to the server. This helps protect against MITM attacks. Windows started using Kerberos in Windows 2000. It is supported by Linux and Mac as well.

Kerberos has three parts. One is the key distribution center, which vouches for the user’s identity on TCP 88 or UDP 88. The second is the authentication service. The third is the ticket granting service.

The first step in using Kerberos is sending the authentication service a request, encrypted using the user's password hash as the key. If the timestamp falls within a five-minute window, the service checks the other items; if everything is in order, it supplies a Ticket Granting Ticket (TGT) that includes the client name, IP, validity period, etc., encrypted with the KDC secret key. The ticket granting service key is used to encrypt communication between the client and the ticket granting service.

(return to this)

Lightweight Directory Access Protocol (LDAP) is a protocol for reading and writing directories over a network. It's very standardized, following the X.500 standard developed by the phone companies. LDAP uses TCP/389 or UDP/389. It uses distinguished names (e.g., cn=jsmith,ou=Sales,dc=example,dc=com) and creates a hierarchical tree: container objects store other objects, and within those containers are leaf objects such as users, computers, printers, and files.

LDAP v3 supports several authentication options: no authentication, anonymous authentication, or Simple Authentication and Security Layer (SASL), which can use something like Kerberos. Once authenticated, the user has either read access or read/write access; anonymous users typically only have read access.

Normally LDAP sends data in cleartext. LDAP over SSL (LDAPS) is a secure version using SSL/TLS to encrypt data transmitted over a network. Active Directory uses LDAPS over TCP Port 636.

Security Assertion Markup Language (SAML) allows you to authenticate to one site using an identity verified elsewhere, so the service doesn't need to hold your credentials. This is very important for resources in the cloud. Three pieces are typically needed: the service provider, the client (via the web browser), and the identity provider.

The client accesses the application URL on the resource server. The resource server sends a signed/encrypted SAML request and redirects the client to the authentication server, where the user logs in. If authentication is successful, a SAML token is generated and verified by the resource server; if everything checks out, access is granted.

5.2 Given a scenario, select the appropriate authentication, authorization, or access control.

Identification associates a user with an action, and the information is logged. The identifier should be something unique; Windows associates a Security Identifier (SID) with every account.

Authentication proves a user or process is who/what it claims to be. It’s not enough to make the claim. The proof often comes in the combination of a user name and secret passphrase. Additional kinds of authentication can be added, such as biometrics.

Once you are authenticated, authorization defines what rights and permissions are to be granted. Policy enforcement is typically put in place. Authorization provides the limits and access.

Access control revolves around authorization: the process ensures only authorized rights are exercised, through policy enforcement; determining those rights is policy definition. Often access control lists are employed. Discretionary access control (DAC) is owner-controlled; it's very flexible but also very weak. Role-based access control has administrators grant access based on the role of the user; in Windows, Groups are used to provide this. Governments use mandatory access control (MAC), where all data has a security clearance level.

Rule-based access control is a generic term for access governed by rules; role-based and mandatory access control both qualify.

Implicit deny means that unless someone is explicitly granted access, that person is denied access.

Time of day restrictions can also be put in place.

Single factor authentication uses only one factor: a PIN, a smart card, biometrics, etc. Most often we use passwords. Usernames are public, but they shouldn't necessarily be shared. A password or passphrase should be strong, employing letters, numbers, and special characters. PINs can also be used. Resetting a password may require PII.

One of the key weaknesses to SFA is that it’s not very strong. A password can be gained by shoulder surfing or stolen via phishing. Unfortunately, many passwords are easily guessed. Don’t use one password across multiple services.

Multifactor authentication is stronger, though it can be expensive at times (biometric scanners, security tokens). MFA draws on three types of factors: something you know, something you have, something you are. Remember that no single factor is foolproof. A smart card or token is an example of something you have; a PIN may be required in concert with the card.

Something you know is very cheap and ubiquitous. It is often just a password or PIN. Swipe patterns are used on Android.

A newer factor of authentication is somewhere you happen to be: access is allowed or denied based on location. If you should be in LA, a login attempt from New York will be denied. This is typically based on IP addresses and can be strengthened with GPS or nearby 802.11 networks.

Something you do is another form. This can be something like a signature. Handwriting analysis is very common.

A one-time password is used a single time and never again; it can cover a single session. The HMAC-based One-Time Password (HOTP) algorithm uses a keyed HMAC over an incrementing counter. This is often used for token-based authentication; hardware and software tokens are available.

Time-based One-Time Password (TOTP) derives the password from the time of day rather than an incremental counter. Time synchronization is critical, so timestamps are synced via NTP.
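Both schemes fit in a few lines. This is a minimal sketch of the RFC 4226 HOTP computation, with TOTP as the same function driven by a time-step counter; the secret is a placeholder.

    import hmac, hashlib, struct, time

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        # HMAC the 8-byte big-endian counter with the shared secret
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        # Dynamic truncation: low 4 bits of the last byte pick an offset
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
        # TOTP is HOTP with a time-step number as the counter, which is
        # why client and server clocks must stay synchronized via NTP
        return hotp(secret, int(time.time()) // step, digits)

    print(totp(b"shared-secret"))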

A series of protocols are involved in the authentication process. One of the first was the Password Authentication Protocol (PAP), which sends credentials in the clear with no encryption. It is terribly insecure.

Challenge-Handshake Authentication Protocol (CHAP) authenticates with a hashed exchange rather than sending the password itself; Microsoft's variant is MS-CHAP. CHAP uses a three-way handshake: after a link is established, the server sends a challenge, the client responds with a hash derived from the password, and the server compares the received hash against its own computation. By using hashes, passwords aren't transmitted directly.
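
A simplified sketch of that exchange: per RFC 1994, the response is an MD5 hash over the session identifier, the shared secret, and the challenge. The secret here is hypothetical.

    import hashlib, os

    def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
        # CHAP (RFC 1994): response = MD5(identifier || shared secret || challenge).
        return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

    challenge = os.urandom(16)                           # server's random challenge
    client = chap_response(1, b"hypothetical-secret", challenge)
    server = chap_response(1, b"hypothetical-secret", challenge)
    print(client == server)  # True -> authenticated; the secret never crossed the wire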

LAN Manager (LANMAN), created by Microsoft and 3Com, was an early network operating system. Like CHAP, it uses hashes, but it only allowed uppercase ASCII with a 14-character maximum; passwords longer than 7 characters were split in two, and the hashes were not salted.

LANMAN was succeeded by NTLM. Passwords are Unicode, up to 127 characters, stored as 128-bit MD4 hashes. Microsoft later released NTLMv2, which uses an HMAC-MD5 hash incorporating the username and server name. Both NTLM and NTLMv2 have weaknesses, including vulnerability to credential-forwarding attacks.

Single sign-on (SSO) allows you to authenticate once and gain access to everything. There are many options for this, including Kerberos and third-party applications. SSO doesn't show up much in smaller organizations. Kerberos is one of the most common SSO systems.

Software as a Service (SaaS) is changing the way we use applications. SSO helps streamline the process and reliably authenticate users. OneLogin is one such example.

Federation allows you to give access to people who aren't part of your organization, such as suppliers and customers. A federated network grants access via authentication through a third party; an example is logging into a site via Facebook. In this case, a trust relationship exists between the two parties. Trust relationships should be established early and be well planned.

In a one-way trust, A trusts B but B doesn't trust A. There are also transitive trusts, where A trusts B, B trusts C, and therefore A trusts C. Again, all of this needs to be well planned.

5.3 Install and configure security controls when performing account management, based on best practices

One challenge for security is assigning the right roles to users. Access is given based on one's role, and you need more than just "user" and "administrator." A member of accounting needs access to accounting data; if they are moved out of accounting, that access should go with them. Role definitions should be tight enough to apply security but not so tight as to cripple operations.

Logically, a user should hold the rights of only one role at a time. A shared account is one whose authentication details are known by more than one person. Sharing accounts makes auditing difficult and breaks non-repudiation, and shared accounts are much more likely to be compromised. If the password needs to change, you'll have to notify everyone.

Credentials should be protected. Passwords should not be embedded in an application; everything should reside on the server, not the client. Authentication traffic should be encrypted so it can never be read off the network.

Windows Group Policy Management allows you to apply security and administration settings across computers. It offers thousands of settings, making it very granular. This is not the same as NTFS or Share permissions; group policies change how the system is configured.

There are typically two kinds of group policies: administrative and security. Administrative policies control things like add/remove programs or allowing font downloads. Security policies are more focused, such as setting minimum password length, requiring smart cards, or enforcing user logon restrictions.

Passwords should be strong: not a single word, ideally with a mix of upper- and lowercase letters and special characters, but don't simply replace letters with look-alike numbers and the like. Passwords should ideally be longer than 8 characters, mixing alphanumeric and special characters. Prevent password reuse; this can be enforced with group policy settings. Passwords should also expire at regular intervals (critical systems might require changes every week), and resetting a password shouldn't be a trivial process.
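
As a loose illustration, a hypothetical policy checker might combine those rules like this; the thresholds are assumptions, not settings pulled from any real group policy.

    import re

    def meets_policy(password: str, history: list[str]) -> bool:
        # Length, mixed case, digits, special characters, and no reuse.
        rules = [
            len(password) > 8,
            re.search(r"[a-z]", password),
            re.search(r"[A-Z]", password),
            re.search(r"\d", password),
            re.search(r"[^A-Za-z0-9]", password),
            password not in history,
        ]
        return all(rules)

    print(meets_policy("Tr0ub4dor&3", ["OldPassw0rd!"]))  # True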

Privilege management can be tricky. The challenge is keeping track of everything. What rights does a user have? This can be complicated by policies, operating systems, group permissions, etc. Popular management types are user, group, and role-based.

User management is done on a user-by-user basis. It's unsophisticated: each user is given specific rights, so if something needs to be modified you have to change every affected user account.

Group management grants privileges based on what you do. If you are a member of accounting, you get accounting privileges. If you are removed from accounting, you lose accounting privileges. It is possible to be a member of multiple groups, each with different permissions for the same resources.

Role-based management goes one step further. It offers granular controls. Privileges can be based on the role of the user, rather than department. This can be complicated because of how granular it can be. However, it’s easier when you change positions. When organizations mandate job rotation, this can come in handy. Under this system, you can only be a member of one role.

Because all systems have flaws, access reviews should be performed, and often; hopefully they will find misconfigurations and changes in user policies. Disable unnecessary accounts, review ACLs, and monitor group membership. Specialized tools exist to identify red flags.

Monitor event logs, which keep a record of every action; there are specialized logs for applications, security, and audits. Maintain an audit trail: it answers questions about the past, though audit logs can be massive. Event logs allow you to detect unauthorized access to a resource, or access attempts.

6.1 Given a scenario, utilize general cryptography concepts

Cryptography is extremely important in security. It brings several things to the table: confidentiality, authentication and access control, non-repudiation, and integrity.

Plaintext is unencrypted data, also said to be "in the clear." Once encrypted, it is called ciphertext. The cipher is the algorithm used to encrypt and decrypt; cryptanalysis is the science of cracking encryption.

A substitution cipher (such as the Caesar cipher) involves substituting letters. It may run like this: take a text and shift every letter three places to the right; to decode, shift the letters back to the left.
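
A minimal Python sketch of that three-place shift:

    def caesar(text: str, shift: int) -> str:
        # Shift each letter by a fixed amount; non-letters pass through unchanged.
        out = []
        for ch in text:
            if ch.isalpha():
                base = ord("A") if ch.isupper() else ord("a")
                out.append(chr((ord(ch) - base + shift) % 26 + base))
            else:
                out.append(ch)
        return "".join(out)

    cipher = caesar("ATTACK AT DAWN", 3)  # 'DWWDFN DW GDZQ'
    print(caesar(cipher, -3))             # shifting back decodes it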

A transposition cipher changes the order of letters.

Frequency analysis is used to break substitution and transposition ciphers.
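
A quick sketch of the idea, run against the Caesar example above: counting letters exposes the most frequent ciphertext letter, which likely maps to a common plaintext letter such as 'e', 't', or 'a'.

    from collections import Counter

    ciphertext = "DWWDFN DW GDZQ"  # Caesar output from the earlier sketch
    counts = Counter(c for c in ciphertext.lower() if c.isalpha())
    print(counts.most_common(2))   # [('d', 4), ('w', 3)] -- 'd' is 'a' shifted by 3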

A key is used to encrypt and decrypt a message.

Two major methods of encryption are used. Symmetric encryption uses the same key to encrypt and decrypt, so it's also called shared-key encryption; if someone obtains the key, the data can be compromised. Symmetric encryption doesn't scale very well, but we still use it because it is very fast, and it is often combined with asymmetric encryption.
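
A minimal symmetric sketch, assuming the third-party cryptography package (pip install cryptography) is available; the same key both encrypts and decrypts.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # the shared secret both parties must hold
    f = Fernet(key)
    token = f.encrypt(b"wire transfer approved")
    print(f.decrypt(token))          # b'wire transfer approved'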

Asymmetric (public key) cryptography is relatively new. It uses two keys, a public and a private. The private key must never be shared; the public key can be shown to anyone, and anyone who needs to communicate with you needs it. Only the private key can decrypt the data.

Key generation builds both keys. They are constructed at the same time. A lot of math goes into the process. The mathematical relationship is what allows them to work in concert.

Digital signatures confirm that information has gone from point A to point B without being altered, providing non-repudiation. Signatures are created with the sender's private key; the recipient uses the sender's public key to verify both the sender and the validity of the message. Any change in the message invalidates the signature. A public and private key pair can also be used to derive a symmetric (shared secret) key.
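
A sign-and-verify sketch, again assuming the cryptography package; the private key signs, and anyone holding the public key verifies. The message text is illustrative.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    message = b"from point A to point B, unaltered"
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    signature = private_key.sign(message, pss, hashes.SHA256())
    # verify() raises InvalidSignature if the message or signature was altered.
    private_key.public_key().verify(signature, message, pss, hashes.SHA256())
    print("signature valid")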

Encryption is difficult and there are many ways to do it. Asymmetric encryption allows encryption keys to be shared without worry, but it is resource intensive and uses more CPU cycles; symmetric encryption is faster.

Out of band key exchange uses something other than the network to transmit the key. This could be a telephone, via courier, or face to face.

An in band key exchange is done over the network and so it requires additional security. This is more common. Asymmetric encryption is used to transmit symmetric keys.

Real-time encryption/decryption uses asymmetric encryption to transmit the symmetric key. When the server receives the asymmetric data, it uses its private key to decrypt it and uncover the symmetric key, which is then used for the session; hence it is called the session key. Make sure keys are random.

Block ciphers are used in symmetric encryption. They encrypt data in fixed blocks, usually of 64 or 128 bits, with padding added when a block can't be filled. Confusion, in this context, means the relationship between the key and the ciphertext should be as complicated as possible.

Diffusion means the output should be very different from the input: a minor change to the input should cause drastic changes to the output, ideally flipping more than 50% of the bits. The more the better.
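
One way to see diffusion, using SHA-256 as a stand-in for a cipher (hashes are built around the same avalanche property): changing a single input character flips roughly half of the 256 output bits.

    import hashlib

    a = hashlib.sha256(b"block cipher input").digest()
    b = hashlib.sha256(b"block cipher inpuu").digest()  # one character changed

    diff = sum(bin(x ^ y).count("1") for x, y in zip(a, b))
    print(f"{diff} of 256 output bits changed")  # typically close to 128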

A stream cipher is used with symmetric encryption. Stream ciphers encrypt one bit or one byte at a time; they run very quickly and can be implemented on minimal hardware. The starting state should never be the same twice: if it repeats, the initialization vector (IV) can be discovered, so the IV should always change.

Transport encryption aims to protect network communications, i.e. data in motion. Again, complex mathematics is at play to provide encryption. One common form of transport encryption is the Virtual Private Network. This creates an encrypted tunnel for secure communications. Information passes from your location to a VPN concentrator. It is then decrypted and passed to the network.

HTTP is sent in the clear. HTTPS is a secure form of HTTP using SSL/TLS.

Non-repudiation prevents a user from denying they transmitted something. Cryptography provides proof of integrity and proof of origin, giving high assurance of authenticity. Digital signatures are used for non-repudiation; in the case of email, the signature is usually appended to the message, but the message itself is not encrypted. The recipient uses your public key to verify authenticity.

A cryptographic hash represents data with a short string of text called a message digest; a massive amount of data can be represented by one digest. User A can create a hash and transmit the data, and user B can hash it too: if the hashes match, the data hasn't been changed. Passwords are generally stored as hashes, which provides security because the hash doesn't tell an attacker anything about the password. This provides confidentiality.
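
A small demonstration with Python's hashlib that a digest is a fixed-size fingerprint regardless of input size:

    import hashlib

    small = hashlib.sha256(b"hi").hexdigest()
    large = hashlib.sha256(b"x" * 10_000_000).hexdigest()
    print(len(small), len(large))  # 64 64 -- same digest length either way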

Hashes can also be used as digital signatures, providing authenticity and non-repudiation.

An escrow involves a third party holding something in trust. In the case of key escrow, a key can be stored in escrow. This can be critical if your organization encrypts lots of data. If a key is lost, key escrow allows it to be recovered. One simple form of key escrow is a safe, which is suitable for symmetric encryption. With asymmetric encryption, there’s no need to store the public key because it is readily available. The private key should be put in escrow. Keys in escrow can be broken up and stored separately so that multiple people will be needed to assemble it.

Steganography is a technique for hiding information in plain sight. It is considered security through obscurity, and it isn't particularly secure if an attacker knows what to look for. Data can be stored in pictures; the picture itself is the covertext.

Data can also be hidden within network packets themselves. Some printers print watermarks on their output that encode the printer's serial number.

Elliptic curve cryptography (ECC) was created because of the limitations of asymmetric encryption, which is resource intensive: traditional asymmetric encryption must work with very large prime numbers. ECC uses curves rather than large primes, allowing smaller storage and transmission requirements, which makes it great for mobile devices.

Quantum cryptography employs quantum mechanics, involving the mathematical description of particles. One form in use is quantum key distribution: if a third party tries to intercept the data, it is detectably disturbed.

Perfect forward secrecy (PFS) changes the way the key exchange operates: it uses ECC or Diffie-Hellman ephemeral keys rather than the server's long-term RSA private key. This process requires additional resources, and not every browser can handle PFS, so it may not always be an option.

6.2 Given a scenario, use appropriate cryptographic methods

The idea behind wireless encryption is that only people with the password can transmit and listen. Two standards have been used: WEP and WPA.

WEP was introduced with 802.11. It used 64- and 128-bit key sizes. Weaknesses were discovered in 2001: the first bytes of the output keystream are strongly non-random, so if you gather enough packets you can recover the key. Never use WEP.

Wi-Fi Protected Access (WPA) replaced WEP. It uses RC4 with TKIP (Temporal Key Integrity Protocol): the IV is sent across the network in an encrypted hash, and every packet gets a unique key. WPA was a short-term workaround.

WPA2 replaced RC4 with AES, and CCMP replaced TKIP. WPA2-Enterprise adds 802.1X, typically with RADIUS authentication.

Message Digest 5 (MD5) was first published in 1992, replacing MD4. It produces a 128-bit hash value, displayed in hexadecimal. Its key weakness is that it is not collision resistant: two different inputs can generate the same hash, making MD5 a weaker hashing algorithm.

The Secure Hash Algorithm (SHA) was developed by the NSA as a federal government hashing standard. SHA-1 uses a 160-bit digest, but collisions proved possible; SHA-2, with digests up to 512 bits, was meant to replace it.

RACE Integrity Primitives Evaluation Message Digest (RIPEMD) is a family of digest algorithms. The original version was found to have collision issues. RIPEMD-160 was put in place, and so far it hasn’t had collision issues. It is a relatively fast hash.

Hash-based Message Authentication Code (HMAC) combines a secret key with the hashing process. The recipient uses the same key to verify the message came from the expected sender and that the data hasn't been altered. Because the key is shared between the two parties, this proves authenticity. IPsec and TLS both use HMAC.
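
A minimal HMAC sketch from Python's standard library; the key and message are placeholders.

    import hashlib, hmac

    key = b"shared-secret"                     # known only to sender and receiver
    msg = b"transfer 100 to account 42"
    tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

    # The receiver recomputes the tag with the same key; compare_digest
    # avoids timing side channels during the comparison.
    ok = hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).hexdigest())
    print(ok)  # True -> message intact and sent by a key holder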

Rivest Cipher 4 (RC4) was part of WEP and was also used in SSL. It has a biased output: certain inputs create predictable output, though only in individual bytes.

The Data Encryption Standard (DES) was created as a US government standard. It is a 64-bit block cipher with a 56-bit key, which is very small by modern standards; even a mobile phone can crack DES. 3DES runs DES three times, either with a different key for each pass or the same key thrice.

The Advanced Encryption Standard (AES) is the modern standard, replacing DES in 2001. It is a 128-bit block cipher using 128-, 192-, or 256-bit keys. WPA2 uses AES.
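
A short AES sketch, assuming the third-party cryptography package; GCM (an authenticated mode) is used here because its API is compact, and the nonce must be unique per message under a given key.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)                       # never reuse a nonce with this key
    aes = AESGCM(key)

    ciphertext = aes.encrypt(nonce, b"plaintext block", None)
    print(aes.decrypt(nonce, ciphertext, None))  # b'plaintext block'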

Blowfish and Twofish were created to be in the public domain. Blowfish is a 64-bit block cipher with variable-length keys from 1 to 448 bits. Twofish, meant to be stronger, can use keys up to 256 bits.

RSA is one of the most popular public key algorithms. Its security rests on the difficulty of factoring the product of two very large prime numbers. It is now in the public domain.
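
A toy RSA walkthrough with textbook-sized primes, for intuition only and never for real use:

    p, q = 61, 53
    n = p * q                  # public modulus: the product of two primes
    phi = (p - 1) * (q - 1)
    e = 17                     # public exponent
    d = pow(e, -1, phi)        # private exponent: modular inverse (Python 3.8+)

    m = 42                     # the "message"
    c = pow(m, e, n)           # encrypt with the public key (e, n)
    print(pow(c, d, n))        # 42 -- decrypt with the private key (d, n)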

Diffie-Hellman (DH) is used to securely exchange keys; it is not meant to encrypt messages or authenticate. Ephemeral DH is used in perfect forward secrecy, and DH can be combined with ECC.
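
A toy Diffie-Hellman exchange with tiny numbers; real deployments use very large primes or elliptic curves.

    p, g = 23, 5               # public prime and generator (toy-sized)
    a, b = 6, 15               # each side's private value, never transmitted

    A = pow(g, a, p)           # Alice sends A
    B = pow(g, b, p)           # Bob sends B

    print(pow(B, a, p) == pow(A, b, p))  # True -- both derive the same secret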

A one-time pad dates to the early 20th century. It's quite simple and, when used properly (pads are never reused), it is unbreakable. The key is the same size as the plaintext, must be completely random, and must be destroyed after one use. There should be only two copies of the key: the sender's and the receiver's. If a key is intercepted, the encryption can be broken.
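
Mechanically, a one-time pad is just XOR with a random key exactly as long as the message, as this sketch shows:

    import os

    msg = b"ATTACK AT DAWN"
    pad = os.urandom(len(msg))                         # random, message-sized key

    cipher = bytes(m ^ k for m, k in zip(msg, pad))    # encrypt: XOR with the pad
    plain = bytes(c ^ k for c, k in zip(cipher, pad))  # decrypt: XOR again
    print(plain)  # b'ATTACK AT DAWN' -- then destroy the pad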

Secure Sockets Layer (SSL) is one of the first browser encryption methods.

Transport Layer Security (TLS) is derived from SSL. It is a worldwide standard.

When SSL/TLS is used to encrypt web server communication, you have HTTPS. By default it uses TCP/443, and it's built into most browsers.

Secure Shell (SSH) is a way to communicate with a server from a terminal. It uses TCP/22 and is very flexible, providing remote administration, SFTP, and SCP.

IPsec provides security at OSI layer 3; it was designed specifically for layer 3 packets. It provides confidentiality and integrity, allows for packet signing, and is very standardized. Its two core protocols are the Authentication Header (AH) and the Encapsulating Security Payload (ESP).

IPsec works something like this: in phase 1, Internet Key Exchange (IKE) handles mutual identification over UDP/500; once identified, the two systems exchange keys. In phase 2, the two systems negotiate ciphers and key sizes. After phase 2 is complete, communication is established.

The Authentication Header is a hash, usually MD5, SHA-1, or SHA-2, computed over the packet and the shared key, and inserted into the packet. ESP usually encrypts with 3DES or AES and adds a header, a trailer, and an integrity check value.

IPSec can be used for communication between a client and a server. It can be used in transport mode and tunnel mode.

PGP and AES are very strong encryption algorithms, while DES and WEP are weak. A weak key can be strengthened by processing it repeatedly: hash a password, then hash the hash, and so on. This is called key stretching. Bcrypt creates hashes from passwords using the Blowfish cipher; another option is the Password-Based Key Derivation Function 2 (PBKDF2), part of RSA's PKCS #5 standard.
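
A key-stretching sketch using PBKDF2-HMAC-SHA256 from Python's standard library; the iteration count is an arbitrary illustrative choice.

    import hashlib, os

    salt = os.urandom(16)
    stretched = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, 600_000)
    # Store the salt, iteration count, and hash -- never the password itself.
    print(stretched.hex())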

6.3 Given a scenario, use appropriate PKI, certificate management, and associated components

Certificate authorities (CAs) are what provide trust between entities. A certificate can be purchased, and web browsers trust certificates from known CAs. There are different trust levels.

Certificates can also be built in house, which helps medium to large organizations avoid purchasing them. OpenCA and Windows Certificate Services serve this purpose.

A Certificate Revocation List (CRL) is a list, maintained by a CA, of certificates that have been revoked for one reason or another. The Online Certificate Status Protocol (OCSP) checks the status of certificates over HTTP, though not all browsers support OCSP, especially older ones. In a web of trust, keys are managed by the key users themselves: you must find others to sign your certificates, and if you need to revoke a key, you create a revocation certificate.

A digital certificate binds a public key to a digital signature; the signature adds trust. PKI uses CAs for additional trust, while a web of trust relies on other users. Certificate creation can be built into the OS. X.509 defines a standard certificate format, including validity dates, subject, and so on; certificate extensions add more information.

A Public Key Infrastructure (PKI) is a mixture of many things working together: hardware, software, protocols, and policies. It requires a lot of planning because it is complicated. It builds certificates and binds them to people or resources.

The first step in the key management life cycle is key generation. Next, a certificate is generated in X.509 form, then distributed; storage should be in place, and it may be necessary to revoke keys. Finally there is expiration: keys have a limited life span, and once one expires it is no longer valid and the process starts over.

To avoid losing a key, keep a backup, but don't create too many; with no backup you have a single point of failure. Key recovery requires planning, and there are different approaches: simple backups and storage, or an "M of N" control, meaning a certain number of people out of a group must cooperate to recover the key. Some certificate services have built-in recovery techniques; this is not possible with public CAs.

The registration authority (RA) ensures the right people are matched with the right certificates, which is important for non-repudiation. Verification can be done casually or require multiple steps. One example is the Federal PKI authority.