1 of 40

Process Synchronization

Dr. Noman Islam

Producer-Consumer problem

Producer

Consumer

  • Although the producer and consumer routines shown above are correct separately, they may not function correctly when executed concurrently.
  • A situation like this, where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the access takes place, is called a race condition.
  • To guard against the race condition above, we need to ensure that only one process at a time can be manipulating the variable counter.
  • To make such a guarantee, we require that the processes be synchronized in some way.
  • The growing importance of multicore systems has brought an increased emphasis on developing multithreaded applications.
  • Kernel data structures that are prone to possible race conditions include structures for maintaining memory allocation, for maintaining process lists, and for interrupt handling.

Interleaving of instructions

Structure of a process with critical section

Requirements for a critical section

1. Mutual exclusion. If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.

2. Progress. If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.

3. Bounded waiting. There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Preemptive vs. non-preemptive kernels

  • Two general approaches are used to handle critical sections in operating systems: preemptive kernels and non-preemptive kernels.
  • A preemptive kernel allows a process to be preempted while it is running in kernel mode.
  • A non-preemptive kernel does not allow a process running in kernel mode to be preempted; a kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields control of the CPU.
  • Obviously, a non-preemptive kernel is essentially free from race conditions on kernel data structures, as only one process is active in the kernel at a time.

Peterson’s solution

Synchronization hardware

  • Software solutions such as Peterson’s are not guaranteed to work on modern computer architectures.
  • A single-processor system could simply disable interrupts while shared data is being modified; unfortunately, this solution is not feasible in a multiprocessor environment.
  • Instead, modern machines provide special atomic hardware instructions, and critical regions can be protected through the use of locks built on them.

  • Although these hardware-based algorithms satisfy the mutual-exclusion requirement, they do not satisfy the bounded-waiting requirement.

Mutex Locks

  • The hardware-based solutions to the critical-section problem presented so far are complicated as well as generally inaccessible to application programmers.
  • Instead, operating-system designers build software tools to solve the critical-section problem. The simplest of these tools is the mutex lock.
  • We use the mutex lock to protect critical regions and thus prevent race conditions.
  • That is, a process must acquire the lock before entering a critical section; it releases the lock when it exits the critical section.
  • The acquire() function acquires the lock, and the release() function releases the lock.
  • Calls to either acquire() or release() must be performed atomically.
  • Thus, mutex locks are often implemented using one of the hardware mechanisms described above.

  • The main disadvantage of the implementation given here is that it requires busy waiting.
  • While a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the call to acquire().
  • In fact, this type of mutex lock is also called a spinlock because the process “spins” while waiting for the lock to become available.

Semaphores

  • A semaphore behaves similarly to a mutex lock but can also provide more sophisticated ways for processes to synchronize their activities.
  • A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait() and signal().
  • The value of a counting semaphore can range over an unrestricted domain.
  • The value of a binary semaphore can range only between 0 and 1.

  • Counting semaphores can be used to control access to a given resource consisting of a finite number of instances.
  • The semaphore is initialized to the number of resources available.
  • Each process that wishes to use a resource performs a wait() operation on the semaphore (thereby decrementing the count).
  • When a process releases a resource, it performs a signal() operation (incrementing the count).
  • When the count for the semaphore goes to 0, all resources are being used. After that, processes that wish to use a resource will block until the count becomes greater than 0.

  • We can also use semaphores to solve various synchronization problems.

Semaphore implementation

  • To overcome the need for busy waiting, we can modify the definition of the wait() and signal() operations as follows: When a process executes the wait() operation and finds that the semaphore value is not positive, it must wait. However, rather than engaging in busy waiting, the process can block itself.
  • The block operation places a process into a waiting queue associated with the semaphore, and the state of the process is switched to the waiting state.
  • Then control is transferred to the CPU scheduler, which selects another process to execute.

  • A process that is blocked, waiting on a semaphore S, should be restarted when some other process executes a signal() operation.
  • The process is restarted by a wakeup() operation, which changes the process from the waiting state to the ready state.
  • The process is then placed in the ready queue.

  • Each semaphore has an integer value and a list of waiting processes.
  • When a process must wait on a semaphore, it is added to that list.
  • A signal() operation removes one process from the list of waiting processes and awakens that process.

Deadlock and starvation

  • Another problem related to deadlocks is indefinite blocking or starvation, a situation in which processes wait indefinitely within the semaphore.
  • Indefinite blocking may occur if we remove processes from the list associated with a semaphore in LIFO (last-in, first-out) order.

Bounded buffer problem

Readers writers problem

  • If two readers access the shared data simultaneously, no adverse effects will result.
  • However, if a writer and some other process (either a reader or a writer) access the database simultaneously, chaos may ensue.

Dining philosopher problem

Solution to dining philosopher problem

Monitors

  • Using semaphores incorrectly can result in timing errors that are difficult to detect, since these errors happen only if particular execution sequences take place, and these sequences do not always occur.
  • To deal with such errors, researchers have developed high-level language constructs, one of which is the monitor type.
  • A monitor is an abstract data type (ADT): it encapsulates data with a set of functions that operate on that data, independent of any specific implementation of the ADT.

  • The monitor construct ensures that only one process at a time is active within the monitor.
  • A programmer who needs to write a tailor-made synchronization scheme can define one or more variables of type condition.
  • The only operations that can be invoked on a condition variable are wait() and signal().
