
Chapter 10: Mass-Storage Systems

Silberschatz, Galvin and Gagne ©2013

Operating System Concepts – 9th Edition


Chapter 10: Mass-Storage Systems

  • Overview of Mass Storage Structure
  • Disk Structure
  • Disk Attachment
  • Disk Scheduling
  • Disk Management


Overview of Mass-Storage Systems

  • We present a general overview of the physical structure of secondary and tertiary storage devices.

Magnetic Disks

  • Magnetic disks provide the bulk of secondary storage for modern computer systems. Conceptually, disks are relatively simple (Figure 10.1). Each disk platter has a flat circular shape, like a CD. Common platter diameters range from 1.8 to 3.5 inches.
  • The two surfaces of a platter are covered with a magnetic material. We store information by recording it magnetically on the platters.
  • A read–write head “flies” just above each surface of every platter. The heads are attached to a disk arm that moves all the heads as a unit.
  • The surface of a platter is logically divided into circular tracks, which are subdivided into sectors. The set of tracks that are at one arm position makes up a cylinder.


Moving-head Disk Mechanism

Figure 10.1 Moving-head disk mechanism.


Overview of Mass Storage Structure

  • There may be thousands of concentric cylinders in a disk drive, and each track may contain hundreds of sectors. The storage capacity of common disk drives is measured in gigabytes.
  • When the disk is in use, a drive motor spins it at high speed. Most drives rotate 60 to 250 times per second, specified in terms of rotations per minute (RPM). Common drives spin at 5,400, 7,200, 10,000, and 15,000 RPM.
  • Disk speed has two parts. The transfer rate is the rate at which data flow between the drive and the computer. The positioning time, or random-access time, consists of two parts: the time necessary to move the disk arm to the desired cylinder, called the seek time, and the time necessary for the desired sector to rotate to the disk head, called the rotational latency.
  • Typical disks can transfer several megabytes of data per second, and they have seek times and rotational latencies of several milliseconds.
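As a back-of-the-envelope check on these figures: average rotational latency is half a revolution at the drive's RPM. A sketch in Python, where the seek time and transfer rate are assumed example values rather than numbers from the text:

```python
# Rough access-time arithmetic for an illustrative 7,200 RPM drive.
rpm = 7200
avg_seek_ms = 9.0            # assumed average seek time
transfer_mb_per_s = 100.0    # assumed sustained transfer rate

rev_time_ms = 60_000 / rpm                    # one full revolution, in ms
avg_rotational_latency_ms = rev_time_ms / 2   # on average, half a turn

# Time to transfer one 512-byte sector once the head is positioned:
transfer_ms = 512 / (transfer_mb_per_s * 1_000_000) * 1000

access_ms = avg_seek_ms + avg_rotational_latency_ms + transfer_ms
print(f"avg rotational latency: {avg_rotational_latency_ms:.2f} ms")
print(f"total access time:      {access_ms:.2f} ms")
```

Note that for small transfers the positioning time (seek plus rotational latency, both milliseconds) dwarfs the transfer time (microseconds), which is why the scheduling algorithms later in the chapter focus on head movement.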


Overview of Mass Storage Structure

  • Although the disk platters are coated with a thin protective layer, the head will sometimes damage the magnetic surface. This accident is called a head crash. A head crash normally cannot be repaired; the entire disk must be replaced.
  • A disk can be removable, allowing different disks to be mounted as needed. Removable magnetic disks generally consist of one platter, held in a plastic case to prevent damage while not in the disk drive.
  • Other forms of removable disks include CDs, DVDs, and Blu-ray discs as well as removable flash-memory devices known as flash drives.
  • A disk drive is attached to a computer by a set of wires called an I/O bus. Several kinds of buses are available, including advanced technology attachment (ATA), serial ATA (SATA), eSATA, universal serial bus (USB), and fibre channel (FC).


Overview of Mass Storage Structure

  • The data transfers on a bus are carried out by special electronic processors called controllers. The host controller is the controller at the computer end of the bus.
  • A disk controller is built into each disk drive. To perform a disk I/O operation, the computer places a command into the host controller, typically using memory-mapped I/O ports.
  • The host controller then sends the command via messages to the disk controller, and the disk controller operates the disk-drive hardware to carry out the command.


Solid-State Disks

  • Sometimes old technologies are used in new ways as economics change or the technologies evolve. An example is the growing importance of solid-state disks, or SSDs.
  • Simply described, an SSD is nonvolatile memory that is used like a hard drive. There are many variations of this technology, from DRAM with a battery (to allow it to maintain its state in a power failure) to flash-memory technologies such as single-level cell (SLC) and multilevel cell (MLC) chips.
  • SSDs have the same characteristics as traditional hard disks but can be more reliable because they have no moving parts and faster because they have no seek time or latency.
  • In addition, they consume less power. However, they are more expensive per megabyte than traditional hard disks, have less capacity than the larger hard disks, and may have shorter life spans than hard disks, so their uses are somewhat limited.


Solid-State Disks

  • One use for SSDs is in storage arrays, where they hold file-system metadata that require high performance. SSDs are also used in some laptop computers to make them smaller, faster, and more energy-efficient.
  • Because SSDs can be much faster than magnetic disk drives, standard bus interfaces can be a major limit on throughput. Some SSDs are designed to connect directly to the system bus (PCI, for example). SSDs are changing other traditional aspects of computer design as well.
  • Some systems use them as a direct replacement for disk drives, while others use them as a new cache tier, moving data between magnetic disks, SSDs, and memory to optimize performance.


Magnetic Tape

  • Magnetic tape was used as an early secondary-storage medium. Although it is relatively permanent and can hold large quantities of data, its access time is slow compared with that of main memory and magnetic disk.
  • In addition, random access to magnetic tape is about a thousand times slower than random access to magnetic disk, so tapes are not very useful for secondary storage.
  • Tapes are used mainly for backup, for storage of infrequently used information, and as a medium for transferring information from one system to another.
  • A tape is kept in a spool and is wound or rewound past a read–write head. Moving to the correct spot on a tape can take minutes, but once positioned, tape drives can write data at speeds comparable to disk drives.
  • Tape capacities vary greatly, depending on the particular kind of tape drive, with current capacities exceeding several terabytes.


Magnetic Tape

  • Some tapes have built-in compression that can more than double the effective storage. Tapes and their drives are usually categorized by width, including 4, 8, and 19 millimeters and 1/4 and 1/2 inch.


Disk Structure

  • Modern magnetic disk drives are addressed as large one-dimensional arrays of logical blocks, where the logical block is the smallest unit of transfer.
  • The size of a logical block is usually 512 bytes, although some disks can be low-level formatted to have a different logical block size, such as 1,024 bytes.
  • The one-dimensional array of logical blocks is mapped onto the sectors of the disk sequentially. Sector 0 is the first sector of the first track on the outermost cylinder.
  • The mapping proceeds in order through that track, then through the rest of the tracks in that cylinder, and then through the rest of the cylinders from outermost to innermost.
  • In practice, it is difficult to perform this translation, for two reasons. First, most disks have some defective sectors, but the mapping hides this by substituting spare sectors from elsewhere on the disk.
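The idealized sequential mapping just described can be sketched as a small conversion function. The fixed geometry below is an assumption (the text notes that real zoned drives do not have a constant number of sectors per track):

```python
def lba_to_chs(lba, heads_per_cyl, sectors_per_track):
    """Idealized logical-block -> (cylinder, head, sector) mapping.

    Assumes a constant number of sectors per track; this is the
    textbook mapping only, not what a real zoned drive does.
    """
    sectors_per_cyl = heads_per_cyl * sectors_per_track
    cylinder = lba // sectors_per_cyl
    head = (lba % sectors_per_cyl) // sectors_per_track
    sector = lba % sectors_per_track
    return cylinder, head, sector

# Logical block 0 is sector 0 of the first track on the outermost cylinder:
print(lba_to_chs(0, heads_per_cyl=4, sectors_per_track=32))    # (0, 0, 0)
print(lba_to_chs(200, heads_per_cyl=4, sectors_per_track=32))  # (1, 2, 8)
```

The mapping fills a whole track, then the remaining tracks of the cylinder, before moving inward to the next cylinder, exactly as the bullet above describes.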


Disk Structure

  • Second, the number of sectors per track is not a constant on some drives.
  • Let’s look more closely at the second reason. On media that use constant linear velocity (CLV), the density of bits per track is uniform. The farther a track is from the center of the disk, the greater its length, so the more sectors it can hold.
  • As we move from outer zones to inner zones, the number of sectors per track decreases. Tracks in the outermost zone typically hold 40 percent more sectors than do tracks in the innermost zone.
  • The drive increases its rotation speed as the head moves from the outer to the inner tracks to keep the same rate of data moving under the head. This method is used in CD-ROM and DVD-ROM drives.
  • Alternatively, the disk rotation speed can stay constant; in this case, the density of bits decreases from inner tracks to outer tracks to keep the data rate constant. This method is used in hard disks and is known as constant angular velocity (CAV).


Disk Attachment

  • Computers access disk storage in two ways. One way is via I/O ports (or host-attached storage); this is common on small systems. The other way is via a remote host in a distributed file system; this is referred to as network-attached storage.

Host-Attached Storage

  • Host-attached storage is storage accessed through local I/O ports. These ports use several technologies. The typical desktop PC uses an I/O bus architecture called IDE or ATA.
  • This architecture supports a maximum of two drives per I/O bus. A newer, similar protocol that has simplified cabling is SATA.
  • High-end workstations and servers generally use more sophisticated I/O architectures such as fibre channel (FC), a high-speed serial architecture that can operate over optical fiber or over a four-conductor copper cable.


Disk Attachment

  • FC has two variants. One is a large switched fabric having a 24-bit address space. This variant is expected to dominate in the future and is the basis of storage-area networks (SANs).
  • Because of the large address space and the switched nature of the communication, multiple hosts and storage devices can attach to the fabric, allowing great flexibility in I/O communication.
  • The other FC variant is an arbitrated loop (FC-AL) that can address 126 devices (drives and controllers).
  • A wide variety of storage devices are suitable for use as host-attached storage. Among these are hard disk drives, RAID arrays, and CD, DVD, and tape drives.
  • The I/O commands that initiate data transfers to a host-attached storage device are reads and writes of logical data blocks directed to specifically identified storage units.


Network-Attached Storage

  • A network-attached storage (NAS) device is a special-purpose storage system that is accessed remotely over a data network (Figure 10.2). Clients access network-attached storage via a remote-procedure-call interface such as NFS for UNIX systems or CIFS for Windows machines.

Figure 10.2 Network-attached storage.


Network-Attached Storage

  • The remote procedure calls (RPCs) are carried via TCP or UDP over an IP network—usually the same local area network (LAN) that carries all data traffic to the clients.
  • Thus, it may be easiest to think of NAS as simply another storage-access protocol. The network-attached storage unit is usually implemented as a RAID array with software that implements the RPC interface.
  • Network-attached storage provides a convenient way for all the computers on a LAN to share a pool of storage with the same ease of naming and access enjoyed with local host-attached storage. However, it tends to be less efficient and have lower performance than some direct-attached storage options.
  • iSCSI is the latest network-attached storage protocol. In essence, it uses the IP network protocol to carry the SCSI protocol. Thus, networks—rather than SCSI cables—can be used as the interconnects between hosts and their storage.


Storage Area Network

  • One drawback of network-attached storage systems is that the storage I/O operations consume bandwidth on the data network, thereby increasing the latency of network communication. This problem can be particularly acute in large client–server installations—the communication between servers and clients competes for bandwidth with the communication among servers and storage devices.
  • A storage-area network (SAN) is a private network (using storage protocols rather than networking protocols) connecting servers and storage units, as shown in Figure 10.3. The power of a SAN lies in its flexibility.
  • Multiple hosts and multiple storage arrays can attach to the same SAN, and storage can be dynamically allocated to hosts.
  • A SAN switch allows or prohibits access between the hosts and the storage. As one example, if a host is running low on disk space, the SAN can be configured to allocate more storage to that host.


Storage Area Network

Figure 10.3 Storage-area network.


Storage Area Network (Cont.)

  • SANs make it possible for clusters of servers to share the same storage and for storage arrays to include multiple direct host connections.
  • SANs typically have more ports—as well as more expensive ports—than storage arrays. FC is the most common SAN interconnect, although the simplicity of iSCSI is increasing its use.
  • Another SAN interconnect is InfiniBand — a special-purpose bus architecture that provides hardware and software support for high-speed interconnection networks for servers and storage units.


Disk Scheduling

  • One of the responsibilities of the operating system is to use the hardware efficiently. For the disk drives, meeting this responsibility entails having fast access time and large disk bandwidth.
  • For magnetic disks, the access time has two major components. The seek time is the time for the disk arm to move the heads to the cylinder containing the desired sector. The rotational latency is the additional time for the disk to rotate the desired sector to the disk head.
  • The disk bandwidth is the total number of bytes transferred, divided by the total time between the first request for service and the completion of the last transfer.
  • We can improve both the access time and the bandwidth by managing the order in which disk I/O requests are serviced.
  • Whenever a process needs I/O to or from the disk, it issues a system call to the operating system.


Disk Scheduling (Cont.)

  • The request specifies several pieces of information:
  • Whether this operation is input or output
  • What the disk address for the transfer is
  • What the memory address for the transfer is
  • What the number of sectors to be transferred is
  • If the desired disk drive and controller are available, the request can be serviced immediately. If the drive or controller is busy, any new requests for service will be placed in the queue of pending requests for that drive.
  • For a multiprogramming system with many processes, the disk queue may often have several pending requests. Thus, when one request is completed, the operating system chooses which pending request to service next. How does the operating system make this choice? Any one of several disk-scheduling algorithms can be used.


Disk Scheduling (Cont.)

  • Note that drive controllers have small buffers and can manage a queue of I/O requests (of varying “depth”).
  • Several algorithms exist to schedule the servicing of disk I/O requests.
  • The analysis is true for one or many platters.
  • We illustrate the scheduling algorithms with a request queue of cylinder numbers (0–199):

98, 183, 37, 122, 14, 124, 65, 67

Head pointer: 53


FCFS Scheduling

  • The simplest form of disk scheduling is, of course, the first-come, first-served (FCFS) algorithm. This algorithm is intrinsically fair, but it generally does not provide the fastest service.
  • If the disk head is initially at cylinder 53, it will first move from 53 to 98, then to 183, 37, 122, 14, 124, 65, and finally to 67, for a total head movement of 640 cylinders.
  • The wild swing from 122 to 14 and then back to 124 illustrates the problem with this schedule. If the requests for cylinders 37 and 14 could be serviced together, before or after the requests for 122 and 124, the total head movement could be decreased substantially, and performance could be thereby improved.
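The 640-cylinder total can be reproduced with a few lines of Python (the queue and head position are the running example from these slides):

```python
def fcfs_total_movement(head, requests):
    """Total head movement (in cylinders) when servicing in arrival order."""
    total = 0
    for cyl in requests:
        total += abs(cyl - head)  # seek distance to the next request
        head = cyl
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs_total_movement(53, queue))  # 640
```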


First-Come, First-Served (FCFS)

Illustration shows total head movement of 640 cylinders.



SSTF (Shortest Seek Time First)

  • It seems reasonable to service all the requests close to the current head position before moving the head far away to service other requests.
  • Selects the request with the minimum seek time from the current head position. Clearly, this algorithm gives a substantial improvement in performance.
  • For our example request queue, the closest request to the initial head position (53) is at cylinder 65. Once we are at cylinder 65, the next closest request is at cylinder 67. From there, the request at cylinder 37 is closer than the one at 98, so 37 is served next. Continuing, we service the request at cylinder 14, then 98, 122, 124, and finally 183.


SSTF (Shortest Seek Time First)

  • Illustration shows total head movement of 236 cylinders.
  • SSTF scheduling is essentially a form of SJF scheduling and, like SJF, it may cause starvation of some requests.
  • Remember that requests may arrive at any time. Suppose that we have two requests in the queue, for cylinders 14 and 186, and while the request from 14 is being serviced, a new request near 14 arrives. This new request will be serviced next, making the request at 186 wait. While this request is being serviced, another request close to 14 could arrive. In theory, a continual stream of requests near one another could cause the request for cylinder 186 to wait indefinitely.
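The service order walked through above can be reproduced with a short greedy simulation:

```python
def sstf(head, requests):
    """Service order and total movement for shortest-seek-time-first."""
    pending = list(requests)
    order, total = [], 0
    while pending:
        # Greedily pick the pending request closest to the current head.
        nxt = min(pending, key=lambda c: abs(c - head))
        total += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
        order.append(nxt)
    return order, total

order, total = sstf(53, [98, 183, 37, 122, 14, 124, 65, 67])
print(order)  # [65, 67, 37, 14, 98, 122, 124, 183]
print(total)  # 236
```

The starvation scenario in the bullet above corresponds to new requests entering `pending` mid-loop and repeatedly winning the `min`; this static sketch does not model arrivals.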



SCAN Scheduling

  • The disk arm starts at one end of the disk and moves toward the other end, servicing requests until it gets to the other end of the disk, where the head movement is reversed and servicing continues.
  • Sometimes called the elevator algorithm.
  • Before applying SCAN to schedule the requests on cylinders 98, 183, 37, 122, 14, 124, 65, and 67, we need to know the direction of head movement in addition to the head's current position.
  • Assuming that the disk arm is moving toward 0 and that the initial head position is again 53, the head will next service 37 and then 14.


SCAN Scheduling

  • At cylinder 0, the arm will reverse and will move toward the other end of the disk, servicing the requests at 65, 67, 98, 122, 124, and 183 as shown in the Figure.
  • If a request arrives in the queue just in front of the head, it will be serviced almost immediately; a request arriving just behind the head will have to wait until the arm moves to the end of the disk, reverses direction, and comes back.
  • Illustration shows total head movement of 236 cylinders.
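A sketch of the SCAN sweep for the running example, assuming cylinders 0–199 and an initial downward direction (the arm travels all the way to cylinder 0 before reversing, unlike LOOK):

```python
def scan(head, requests, direction="down", low=0, high=199):
    """SCAN: sweep to one end of the disk, then reverse."""
    down = sorted(c for c in requests if c <= head)
    up = sorted(c for c in requests if c > head)
    if direction == "down":
        order = down[::-1] + up
        # Travel down to cylinder `low`, reverse, then up to the farthest request.
        total = (head - low) + ((max(up) - low) if up else 0)
    else:
        order = up + down[::-1]
        total = (high - head) + ((high - min(down)) if down else 0)
    return order, total

order, total = scan(53, [98, 183, 37, 122, 14, 124, 65, 67])
print(order)  # [37, 14, 65, 67, 98, 122, 124, 183]
print(total)  # 236
```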



C-SCAN Scheduling

  • Circular SCAN (C-SCAN) scheduling is a variant of SCAN that provides a more uniform wait time.
  • Like SCAN, C-SCAN moves the head from one end of the disk to the other, servicing requests along the way.
  • When the head reaches the other end, however, it immediately returns to the beginning of the disk without servicing any requests on the return trip.
  • C-SCAN essentially treats the cylinders as a circular list that wraps around from the final cylinder to the first one.


C-LOOK

  • LOOK is a version of SCAN; C-LOOK is a version of C-SCAN.
  • The arm goes only as far as the final request in each direction, then reverses direction immediately, without first going all the way to the end of the disk.
  • Total number of cylinders?
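The question above can be checked with a quick simulation. Whether the long jump back counts as head movement is a convention; it is counted here, so treat the total as one answer under that assumption:

```python
def c_look(head, requests):
    """C-LOOK: service upward to the last request, then jump to the
    lowest pending request and continue upward.  The long jump back
    is counted as head movement here (conventions vary)."""
    up = sorted(c for c in requests if c >= head)
    low = sorted(c for c in requests if c < head)
    order = up + low
    total, pos = 0, head
    for c in order:
        total += abs(c - pos)
        pos = c
    return order, total

order, total = c_look(53, [98, 183, 37, 122, 14, 124, 65, 67])
print(order)  # [65, 67, 98, 122, 124, 183, 14, 37]
print(total)  # 322 with the jump counted
```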



Selecting a Disk-Scheduling Algorithm

  • Given so many disk-scheduling algorithms, how do we choose the best one?
  • SSTF is common and has a natural appeal because it increases performance over FCFS. SCAN and C-SCAN perform better for systems that place a heavy load on the disk, because they are less likely to cause a starvation problem.
  • With any scheduling algorithm, however, performance depends heavily on the number and types of requests.
  • For instance, suppose that the queue usually has just one outstanding request. Then, all scheduling algorithms behave the same, because they have only one choice of where to move the disk head: they all behave like FCFS scheduling.
  • Because of these complexities, the disk-scheduling algorithm should be written as a separate module of the operating system, so that it can be replaced with a different algorithm if necessary. Either SSTF or LOOK is a reasonable choice for the default algorithm.


Disk Management

  • The operating system is responsible for several other aspects of disk management, too. Here we discuss disk initialization, booting from disk, and bad-block recovery.

Disk Formatting

  • A new magnetic disk is a blank slate: it is just a platter of magnetic recording material. Before a disk can store data, it must be divided into sectors that the disk controller can read and write. This process is called low-level formatting, or physical formatting.
  • Low-level formatting fills the disk with a special data structure for each sector. The data structure for a sector typically consists of a header, a data area (usually 512 bytes in size), and a trailer.
  • The header and trailer contain information used by the disk controller, such as a sector number and an error-correcting code (ECC).


Disk Management

  • When the controller writes a sector of data during normal I/O, the ECC is updated with a value calculated from all the bytes in the data area.
  • When the sector is read, the ECC is recalculated and compared with the stored value. If the stored and calculated numbers are different, this mismatch indicates that the data area of the sector has become corrupted and that the disk sector may be bad.
  • The ECC is an error-correcting code because it contains enough information, if only a few bits of data have been corrupted, to enable the controller to identify which bits have changed and calculate what their correct values should be. It then reports a recoverable soft error. The controller automatically does the ECC processing whenever a sector is read or written.
  • Most hard disks are low-level-formatted at the factory as a part of the manufacturing process. This formatting enables the manufacturer to test the disk and to initialize the mapping from logical block numbers to defect-free sectors on the disk.
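The check-on-read idea above can be sketched as follows. zlib.crc32 is a detection-only stand-in for a real ECC, which (as the text notes) can also identify and correct a few flipped bits:

```python
import zlib

def format_sector(data: bytes) -> dict:
    """Store data together with a check value, as low-level formatting
    stores an ECC in the sector trailer.  CRC-32 only detects errors;
    a real ECC can also correct a small number of flipped bits."""
    return {"data": bytearray(data), "ecc": zlib.crc32(data)}

def read_sector(sector: dict) -> bytes:
    """Recompute the check value on read and compare with the stored one."""
    if zlib.crc32(bytes(sector["data"])) != sector["ecc"]:
        raise IOError("sector corrupted: stored and computed ECC differ")
    return bytes(sector["data"])

s = format_sector(b"x" * 512)
read_sector(s)            # passes: stored and recomputed values match
s["data"][0] ^= 0xFF      # simulate bits flipped on the platter
# read_sector(s) would now raise IOError
```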

10.42

Silberschatz, Galvin and Gagne ©2013

Operating System Concepts – 9th Edition

43 of 50

Disk Management (Cont.)

  • Formatting a disk with a larger sector size means that fewer sectors can fit on each track; but it also means that fewer headers and trailers are written on each track and more space is available for user data. Some operating systems can handle only a sector size of 512 bytes.
  • Before it can use a disk to hold files, the operating system still needs to record its own data structures on the disk. It does so in two steps. The first step is to partition the disk into one or more groups of cylinders. The operating system can treat each partition as though it were a separate disk.
  • The second step is logical formatting, or creation of a file system. In this step, the operating system stores the initial file-system data structures onto the disk. These data structures may include maps of free and allocated space and an initial empty directory.


Disk Management (Cont.)

  • To increase efficiency, most file systems group blocks together into larger chunks, frequently called clusters. Disk I/O is done via blocks, but file system I/O is done via clusters, effectively assuring that I/O has more sequential-access and fewer random-access characteristics.
  • Some operating systems give special programs the ability to use a disk partition as a large sequential array of logical blocks, without any file-system data structures. This array is sometimes called the raw disk, and I/O to this array is termed raw I/O. For example, some database systems prefer raw I/O because it enables them to control the exact disk location where each database record is stored.


Boot Block

  • For a computer to start running—for instance, when it is powered up or rebooted—it must have an initial program to run. This initial bootstrap program tends to be simple. It initializes all aspects of the system, from CPU registers to device controllers and the contents of main memory, and then starts the operating system.
  • To do its job, the bootstrap program finds the operating-system kernel on disk, loads that kernel into memory, and jumps to an initial address to begin the operating-system execution.
  • For most computers, the bootstrap is stored in read-only memory (ROM). This location is convenient, because ROM needs no initialization and is at a fixed location that the processor can start executing when powered up or reset.
  • And, since ROM is read only, it cannot be infected by a computer virus. The problem is that changing this bootstrap code requires changing the ROM hardware chips.


Boot Block

  • For this reason, most systems store a tiny bootstrap loader program in the boot ROM whose only job is to bring in a full bootstrap program from disk.
  • The full bootstrap program can be changed easily: a new version is simply written onto the disk. The full bootstrap program is stored in the “boot blocks” at a fixed location on the disk. A disk that has a boot partition is called a boot disk or system disk.


Booting from a Disk in Windows

Figure 10.9 Booting from disk in Windows.


Bad Blocks

  • Because disks have moving parts and small tolerances (recall that the disk head flies just above the disk surface), they are prone to failure. Sometimes the failure is complete; in this case, the disk needs to be replaced and its contents restored from backup media to the new disk.
  • More frequently, one or more sectors become defective. Most disks even come from the factory with bad blocks. Depending on the disk and controller in use, these blocks are handled in a variety of ways.
  • On simple disks, such as some disks with IDE controllers, bad blocks are handled manually. One strategy is to scan the disk to find bad blocks while the disk is being formatted. Any bad blocks that are discovered are flagged as unusable so that the file system does not allocate them.
  • More sophisticated disks are smarter about bad-block recovery. The controller maintains a list of bad blocks on the disk.


Bad Blocks

  • Low-level formatting also sets aside spare sectors not visible to the operating system. The controller can be told to replace each bad sector logically with one of the spare sectors. This scheme is known as sector sparing or forwarding.
  • A typical bad-sector transaction might be as follows:
  • The operating system tries to read logical block 87.
  • The controller calculates the ECC and finds that the sector is bad. It reports this finding to the operating system.
  • The next time the system is rebooted, a special command is run to tell the controller to replace the bad sector with a spare.
  • After that, whenever the system requests logical block 87, the request is translated into the replacement sector’s address by the controller.
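The sparing transaction above can be sketched with a tiny remap table; the spare-sector addresses below are invented for illustration:

```python
class SparingController:
    """Minimal sketch of sector sparing: a remap table maintained by the
    disk controller redirects bad logical blocks to spare sectors."""

    def __init__(self, spares):
        self.spares = list(spares)   # spare sectors set aside at formatting
        self.remap = {}              # bad logical block -> spare sector

    def mark_bad(self, block):
        """Replace a bad sector with a spare (the post-reboot command step)."""
        self.remap[block] = self.spares.pop(0)

    def translate(self, block):
        """Every later request for the block goes to its replacement."""
        return self.remap.get(block, block)

ctrl = SparingController(spares=[100001, 100002])  # hypothetical addresses
ctrl.mark_bad(87)
print(ctrl.translate(87))  # 100001: requests for block 87 are redirected
print(ctrl.translate(88))  # 88: good blocks pass through unchanged
```

Because the replacement sector may be far from the original, this redirection can defeat the operating system's disk-scheduling optimizations, which is why drives often keep a few spares in each cylinder.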


End of Chapter 10
