1 of 100

Real Time Operating Systems

Dr. Anupriya Koneru,

Associate Professor,

Department of IT

LBRCE

Reference: Douglas Wilhelm Harder, Jeff Zarnett, Vajih Montaghami and Allyson Giannikouris – A Practical Introduction to Real-Time Systems for Undergraduate Engineering

2 of 100

UNIT-III

  • Synchronization:
    • The need for synchronization
    • Petri nets—describing synchronizations graphically
    • Synchronization through token passing
    • Test-and-reset—a crude signal with polling
    • Semaphores—a better signal without polling
    • Problems in synchronization
    • Automatic synchronization
    • Summary of synchronization
  • Resource management:
    • Semaphores
    • Classification of resources
    • Device management
    • Resource managers
    • Priority and deadline inversion
    • Summary of resource management

3 of 100

Synchronization

  • Given multiple tasks that are attempting to coordinate or synchronize their activities (Greek: syn- “same” + chronos “time”; cf. synonym, “same name”), numerous issues can arise simply because all of them are accessing shared memory.
  • There are two forms of synchronization we will investigate:

1. mutual exclusion: preventing two tasks from accessing the same data, and

2. serialization: having two events occur one-after-the-other.

  • In the first case, mutual exclusion generally refers to where only one task is allowed to access a specific memory location (or other resource) at a time.
  • If another task wants access to the same memory or resource, it must wait.
  • The most obvious example is where two tasks may want to use the same printer or write to the same document simultaneously.

4 of 100

Synchronization

  • The second refers to having events occur in specified chronological orders:
  • Task B cannot continue executing until Task A has finished its execution.
  • The one means by which synchronization is never to be implemented is hard-coding wait times, under the assumption that the problem will have been solved by another task within the programmed time.
  • This is emphasized in Rule 7 of the JPL coding standard: Task synchronization shall not be performed through the use of task delays.
  • In this standard, they include the comment that “Specifically the use of task delays has been the cause of race conditions that have jeopardized the safety of spacecraft.
  • The use of a task delay for task synchronization requires a guess of how long certain actions will take.
  • If the guess is wrong, havoc, including deadlock, can be the result.”

5 of 100

The need for synchronization

  • Let’s look at three problems that demonstrate synchronization issues:

1. Two tasks trying to use the same data structure,

2. Two tasks attempting to share information, and

3. Two tasks attempting to access information from a third

Two tasks trying to use the same data structure:

First, suppose we have a shared linked list:

  • If only one task ever accesses the data structure, we are fine; however, what could happen if there were two copies of Task 1 executing? Call these copies 1.a and 1.b.
  • Now, suppose we started executing both copies at approximately the same time. Task 1.a gets its data ready first and calls push_front, but then a hardware interrupt occurs between the conditional statement and the increment of the size field.
  • At this point:

6 of 100

The need for synchronization

7 of 100

The need for synchronization

Two tasks trying to use the same data structure:

  • At this point:

1. the new node was allocated and initialized,

2. the fields head and tail were updated, but

3. the size field is still zero.

  • Suppose the hardware interrupt is dealt with, and the scheduler decides that Task 1.b will continue executing.
  • It gets to the conditional statement, whose body is executed since size is still zero, so both head and tail are updated.
  • It continues executing, and the size field is incremented.
  • At some point later, the scheduler is called, and Task 1.a continues running.
  • All it does at this point is increment the size field, leaving the data structure in an inconsistent state.

8 of 100

The need for synchronization

Two tasks trying to use the same data structure:

  • If we now try to call pop_front, the first node will be removed from the linked list and the associated memory freed; however, the tail pointer will not be updated, so it will still contain the address of a freed location—it is a dangling pointer.
  • Any attempt to access the object at the tail will result in either an invalid access (if we’re unlucky) or a segmentation fault (if we’re lucky).
  • Now, if this occurs in a program that is being run interactively, it is less of an issue: it will crash only once in a million times, or perhaps once in a billion times.
  • If the program is, however, embedded in a system, this may cause the system to fail.
  • If you’re lucky, the system may simply reset itself and start again. In a real-time system, however, this may result in deadlines being missed.

9 of 100

The need for synchronization

Two tasks communicating information:

  • Second, consider these two tasks trying to share information: Task 1 is preparing data which is to be sent to and used by Task 2.
  • The result should not be overwritten by Task 1 until Task 2 has completed using it.
  • Task 2 should not try to access the result until Task 1 has finished writing to it.

10 of 100

The need for synchronization

Two tasks communicating information:

  • Question: Is it possible that an error such as that shown in Example 1 will occur here?

No. Only after the data is set is the flag result_is_produced set to true.

  • Question: Are there any problems here?

Yes. Suppose that we have only a single processor or core. In this case, while Task 2 is executing the while loop, Task 1 cannot spend any time actually preparing a result.

  • Similarly, any time Task 1 is waiting for Task 2 to access the data is time that Task 2 is not spending processing that data.

One solution to this problem is to call a yield() command.

  • This would make a system call which would then call the scheduler:

11 of 100

The need for synchronization

Two tasks communicating information:

12 of 100

The need for synchronization

Two tasks communicating information:

  • If we have a round-robin scheduler, this should not pose an issue; however, if we are in a real-time system where Task 1 has a higher priority than Task 2 (or vice versa), then any call to yield() by Task 1 would always result in Task 1 being scheduled again before Task 2—and thus we arrive at an infinite loop.
  • We cannot, therefore, judge this solution without also knowing the characteristics of the scheduler and the priorities of the tasks.
  • Note that this is an example of a producer-consumer problem.
  • A single producer with a single consumer will allow a solution, even if it is processor intensive.

13 of 100

The need for synchronization

Multiple tasks communicating information:

  • Suppose instead we have two copies of Task 2 executing (call them Tasks 2.a and 2.b).
  • Now we have a different issue: we cannot have both consumers accessing the data structure simultaneously.
  • A first attempt at a solution: to prevent the other consumer from accessing the data, set the flag result_is_produced back to false immediately, as soon as the while loop finishes.

14 of 100

The need for synchronization

Multiple tasks communicating information:

  • Question: Why does this not help us with respect to the data?
  • Now it may be possible that the producer, Task 1, accesses the data before Task 2.a finishes reading the previously stored value.
  • Let’s introduce a second flag:

15 of 100

The need for synchronization

Multiple tasks communicating information:

  • Question: Why does this not really solve the problem?
  • At this point, an interrupt could occur between the checking that reading is false, and setting it to true.
  • Now, consider that this may only happen once in a million iterations; in a reasonably simple embedded system that executes millions of iterations, such an error is therefore certain, as opposed to sporadic.

Summary of problems:

  • The problems of synchronization between tasks executing are:
  • Serialization: ensuring that one task is performed after another, and
  • Mutual exclusion: preventing more than one task from accessing data.
  • Flags are not a suitable solution to either problem as there are at least two operations that must occur when using flags:

1. checking a flag, and

2. setting that flag.

16 of 100

The need for synchronization

Multiple tasks communicating information:

  • It is always possible that an interrupt can occur immediately between these two operations, and that a context switch may occur at that time.
  • Thus, this topic will first look at a graphical approach of displaying desired synchronizations. We will then look at four approaches to synchronization:

1. tokens,

2. a test-and-reset instruction,

3. semaphores, and

4. automatic equivalents to semaphores.

17 of 100

Petri nets—describing synchronizations graphically

  • Petri nets (PNs) were developed in 1939 by Carl Adam Petri, who invented them at the age of 13 and went on to start building an analog computer in 1941.
  • A PN is a mathematical description of the state of a computer system. In his doctoral thesis, Philip Meir Merlin added a timing extension to PNs, called time Petri nets (TPNs), making them appropriate for modeling real-time systems.
  • We will first discuss PNs and the next section will be on real-time extensions.
  • A PN is composed of

1. places or conditions, and

2. transitions between conditions.

18 of 100

Petri nets—describing synchronizations graphically

  • In many cases, “places” are conditions that must be satisfied, but at a higher level, it may simply indicate that a task is doing something.
  • Consequently, we use the terms “place” and “condition” interchangeably; “condition” where it is appropriate, and “place” when we are discussing a more abstract state.
  • For example, consider the PN shown in Figure 9-1.
  • There are two conditions:
    • memory is available and memory is required.
    • If both conditions are met, the memory is allocated, at which point, memory is no longer available.

19 of 100

Petri nets—describing synchronizations graphically

20 of 100

21 of 100

Allocation resources-tokens

Fig.: The resource server is not ready, and a token indicating the server is ready.

To represent the current state, we will use tokens.

It is possible for a server to have, for example, four tasks ready to provide the service. Consequently, this could be represented by four tokens, as is shown in Fig.

Fig.: A server with four tasks ready.

22 of 100

Synchronization scenarios using Petri nets

  • A transition can fire whenever every condition is satisfied.

There are several common scenarios in synchronization:

1. concurrency,

2. conflict,

3. synchronization, and

4. merging

23 of 100

Synchronization scenarios using Petri nets(cont’d)

24 of 100

Synchronization through token passing

  • Recall the producer-consumer problem where there are multiple consumers. It is potentially dangerous if two consumers attempt to simultaneously acquire the data produced. One solution for synchronizing the consumers is to essentially have a token that is passed between the consumers. The implementation has the properties that:

1. Each consumer has a unique identifier,

2. The token has the value of one of the identifiers, and

3. When a consumer has accessed the result, it updates the token to be the identifier of the next consumer.

25 of 100

Test-and-reset—a crude signal with polling

  • As hardware designers became aware of the issues with synchronization, they moved toward solving such problems by providing machine instructions that support synchronization.
  • We will now look at a mechanism for achieving synchronization through a single variable and a single machine instruction. Recall our attempt to check and set a global variable to allow us to enter the critical region:

26 of 100

Test-and-reset—a crude signal with polling (Cont’d)

The problem with this is that an interrupt can occur between the variable being tested and the variable being set to false. Instead, we must have some means of testing and setting the variable such that no interrupt can occur between the two operations. Thus, we require that if the variable is

1. false, it remains false, and

2. true, it is set to false, but the value returned must still appear to be true.

27 of 100

Test-and-reset—a crude signal with polling (Cont’d)

  • The function test_and_reset(…) is translated to a single machine instruction whose behaviour can be thought of as follows:

28 of 100

Test-and-reset—a crude signal with polling (Cont’d)

  • It is important to remember that all of these operations must be performed by a single machine instruction. If this is not the case, an interrupt can occur between any pair of instructions and this will result in loss of synchronization.
  • If ready is false, the value false is returned, the value is unchanged, and when the function returns, we yield and loop.
  • If ready is true, the value true is returned, but ready is set to false, and when the function returns, we exit the loop.
  • In the second case, if ready is ever set to true, the first task that calls test_and_reset on it will immediately set it to false, so all other tasks will continue looping.
  • If any other task calls test_and_reset, even if this is immediately after, the value is again false, and will go back into the loop.

29 of 100

Test-and-reset—a crude signal with polling (Cont’d)

  • One weakness of providing a test-and-reset mechanism is that we still have the issue of idle waiting—the loop could be called numerous times prior to the variable ready being set to true.
  • In addition, in a real-time system, the task busy-waiting on the variable ready may have a higher priority than the task that successfully set ready to false and now needs to finish executing the critical region before it sets ready back to true; consequently, the task in the critical region will never execute.
  • Therefore, while a test-and-reset instruction is a necessary step toward providing synchronization, it is only a first step.
  • We will use this instruction to build a more robust data structure in the next section: semaphores.

30 of 100

Semaphores—a better signal without polling

  • As an alternate solution to synchronization, we will look at a more advanced data structure described by Dijkstra in 1965: the semaphore, a class of data structures for signalling.
  • A semaphore is simply a variable that is non-negative and shared between threads.
  • A semaphore is a signaling mechanism, and a thread that is waiting on a semaphore can be signaled by another thread.
  • It uses two atomic operations, 1) Wait, and 2) Signal for the process synchronization.
  • A semaphore either allows or disallows access to the resource, which depends on how it is set up.

31 of 100

Semaphores—a better signal without polling

  • The two common kinds of semaphores are
      • Counting semaphores
      • Binary semaphores.

Binary Semaphores

  • Binary semaphores are quite similar to counting semaphores, but their value is restricted to 0 and 1. In this type of semaphore, the wait operation succeeds only if semaphore = 1, and the signal operation succeeds when semaphore = 0.
  • Binary semaphores are easier to implement than counting semaphores.

Wait and Signal Operations in Semaphores

  • Both of these operations are used to implement process synchronization. The goal of this semaphore operation is to get mutual exclusion.

32 of 100

Semaphores—a better signal without polling

Binary Semaphores

  • The P operation is also called the wait, sleep, or down operation, and the V operation is also called the signal, wake-up, or up operation.
  • Both operations are atomic, and the semaphore s is initialized to one. Here atomic means that the read, modify, and update of the variable happen as one step, with no preemption: in between the read, modify, and update, no other operation that may change the variable is performed.
  • A critical section is surrounded by the two operations to implement process synchronization: the critical section of process P lies between the P and V operations, as the code on the next slide shows.

33 of 100

Semaphores—a better signal without polling

Binary Semaphores

P(semaphore s)
{
    if (s.value == 1) {
        s.value = 0;
    } else {
        // add the calling process P to the semaphore's waiting queue
        s.q.push(P);
        sleep();
    }
}

V(semaphore s)
{
    if (s.q is empty) {
        s.value = 1;
    } else {
        // select a process from the waiting queue
        Process p = s.q.pop();
        wakeup(p);
    }
}

34 of 100

Semaphores—a better signal without polling

Application 1: Mutual exclusion

  • P1

Down(s)
// CS
Up(s)

  • P2

Down(s)
// CS
Up(s)

35 of 100

Semaphores—a better signal without polling

Application 2: Producer–Consumer Problem

  • Producer_item(item p)

Down(Empty)
Down(s)
// CS
Buffer[In] = Item
In = (In + 1) mod n
Up(s)
Up(Full)

  • Consumer

Down(Full)
Down(s)
// CS
Item c = Buffer[Out]
Out = (Out + 1) mod n
Up(s)
Up(Empty)

[Figure: a circular buffer of N = 8 slots, with slots 0–2 holding A, B, C. Initially Full = 0 (number of filled slots) and Empty = N (number of empty slots); in the state shown, Full = 3, Empty = 5, and S = 1.]

36 of 100

Semaphores—a better signal without polling

Binary semaphores

  • A binary semaphore is a variable that takes on one of two values, 0 or 1
  • There are four operations on binary semaphores:

binary_semaphore_init( binary_semaphore_t *const p_this, unsigned int n )

Initialize the semaphore to either 0 or 1.

binary_semaphore_wait( binary_semaphore_t *const p_this )

If the semaphore is 1, set it to 0 and continue executing.

Otherwise, flag this task as being blocked on this semaphore and prevent it from being scheduled.

binary_semaphore_post( binary_semaphore_t *const p_this )

If the semaphore is 1, do nothing.

If the semaphore is 0 and there is at least one task blocked on this semaphore, unblock one of those tasks;

otherwise, set the semaphore to 1.

37 of 100

Semaphores—a better signal without polling

Binary semaphores

  • What differentiates a binary semaphore from a test-and-reset variable is that any task waiting on a binary semaphore that is 0 is blocked from even being scheduled.

  • When a post is issued and there are tasks waiting on the semaphore, one of those waiting tasks is unblocked.

  • If nothing else, the semaphore interface must communicate with the scheduler, being able to flag a task as being blocked.

  • The scheduler need not know why the task is blocked; only that it is not to be scheduled.

Applications of binary semaphores

  1. Multiple tasks sharing a data structure (mutual exclusion)
  2. A producer and consumer communicating via a common memory location, and
  3. Multiple producers and multiple consumers communicating via a common memory location.

38 of 100

Semaphores—a better signal without polling

Applications of binary semaphores

  1. Multiple tasks sharing a data structure (mutual exclusion).
  • Recall the example of a linked list. We could assign each linked list its own semaphore.
  • When a semaphore is used for mutual exclusion, the variable name usually contains the substring mutex.
  • We will add such a field to each single-list data structure.

39 of 100

Semaphores—a better signal without polling

Applications of binary semaphores

40 of 100

Semaphores—a better signal without polling

Applications of binary semaphores

Two tasks communicating information:

  • Consider our second example, with a producer and a consumer.
  • If there is only a single producer and a single consumer, we have already seen that this can be simply solved with a Boolean-valued flag; however, this results in busy waiting. Instead, we will use semaphores.
  • Note, however, that unlike a Boolean flag, which can be checked for values of either “true” or “false”, semaphores can only be waited upon.
  • Therefore, we must consider any conditions under which a task may be required to wait:

1. The producer will be required to wait if the consumer has not yet processed the data, and

2. The consumer will be required to wait if the producer has not yet created the data.

  • Thus, we will require two separate semaphores:

1. ready_to_write: the data is ready to be written to by a producer, and

2. ready_to_read: the data is ready to be read by a consumer

41 of 100

Semaphores—a better signal without polling

Applications of binary semaphores

Two tasks communicating information:

42 of 100

Semaphores—a better signal without polling

Applications of binary semaphores

Multiple tasks communicating information:

    • The above example with one producer and one consumer is actually not that specific.
    • With multiple tasks acting as producers and/or multiple tasks acting as consumers, it is still possible for all the tasks to share, and to have exclusive access to, the shared variable result.
    • There is one weakness, however: after one producer has assigned a value to shared_result, any other producer must wait until a consumer comes along.

Implementation of binary semaphores

In this section, we will consider

    • the implementation of semaphores,
    • data structures for priority queues in semaphores, and
    • updating the priorities of, or killing, tasks within priority queues.

We will look at each here.

43 of 100

Semaphores—a better signal without polling

Implementation of binary semaphores

  • The most basic information we need for a binary semaphore is:
  • 1. a value that is either true or false (1 or 0), and
  • 2. a queue for those tasks waiting on the semaphore.
  • If we go back to the topic on multiple threads, we would therefore see that the data structure must be

typedef struct {
    bool is_acquirable;         // 1 if it can be acquired, 0 otherwise
    tcb_queue_t waiting_queue;
} binary_semaphore_t;

  • In order to initialize this, we must set the value and initialize the queue:

void binary_semaphore_init( binary_semaphore_t *const p_this, bool state )
{
    p_this->is_acquirable = state;
    TCB_QUEUE_INIT( p_this->waiting_queue );  // Initialize the queue
}

44 of 100

Semaphores—a better signal without polling

Implementation of binary semaphores

  • Now, we will implement a wait function to request the semaphore.
  • This function will

1. call test-and-reset on the field is_acquirable,

2. if it returns false, we enter a loop that blocks the currently executing task and places it on the waiting queue associated with the semaphore,

3. otherwise, we return: the task has been granted the semaphore.

  • The post will therefore have to ensure that if a task is waiting on the semaphore, then that task must be placed on the ready queue

Data structures for semaphores:

  • In our example, we used simple linked lists for our queue. This allows for a FCFS approach, but is sub-optimal for a real-time system. The alternatives are:
  • Using a linked-list queue with a searching pop - If we use a linked list, we must search through the linked list to find the task with the highest priority. If there are multiple tasks of the highest priority, the one closest to the front is the one that has been waiting the longest. The run time of this scheme is linear in the number of tasks in the queue: O(n). If it is guaranteed that there are never many tasks waiting on a particular semaphore, this may be appropriate.

45 of 100

Semaphores—a better signal without polling

Implementation of binary semaphores

Data structures for semaphores:

Using a binary min-heap - A binary min-heap is an array-based data structure. In this case, all required operations are O(ln(n)) with the push operation having an average run-time of O(1) if the insertions are uniformly distributed among the priorities. Unfortunately, the queue must also be stored as an array, and therefore the maximum size of the array must be known. For very small priority queue sizes, it is likely faster to use a linked-list queue as described in the previous section. If the queues are significantly bigger, there is a possibility of a lot of wasted space. Thus, we should consider a different approach.

A leftist heap - A leftist heap is similar to an AVL tree in that all operations are O(ln(n)), but like an AVL tree, it is necessary to track the null-path length of any node: that is, the shortest path to a non-full descendant node. Tracking and maintaining this data may be expensive.

A skew heap - A skew heap is similar to a leftist heap, but some operations are automatic as opposed to being based on null-path lengths: push into the right sub-heap, but then swap the left and right sub-heap. Consequently, the amortized run times are O(ln(n)), but the worst case run times may be O(n)

46 of 100

Semaphores—a better signal without polling

Implementation of binary semaphores

Data structures for semaphores: Analysis

  • In implementing a binary heap, this would require a fixed amount of memory.
  • For real-time applications, the worst-case access time may be the most interesting measure.
  • Of the tested priority queues, the implicit binary heap had the best worst-case performance for individual operations, which was better than O(n).
  • Thus, if guaranteed performance better than O(n) is required, this is the only choice.
  • The Splay Tree and the Skew Heap, however, showed better amortized worst cases and we did not observe any individual access to these queues with worse time complexity than O(log(n)).
  • Thus they are good alternatives for real-time systems without hard real time requirements.

47 of 100

Semaphores—a better signal without polling

Implementation of binary semaphores

Priority inversion

  • One issue we haven’t discussed up to this point is what happens if a high-priority task waits on a semaphore that is currently being held by a lower-priority task.

  • There are two situations:

1. a high-priority task may be waiting for a lower-priority task for serialization; that is, for it to complete some action (perhaps reaching a rendezvous), or

2. a high-priority task may be waiting on a semaphore for mutual exclusion.

We will consider both of these issues, as the solutions differ.

  • In the first case, where serialization is the issue at hand, this should be dealt with at the design phase: if it is known that a lower-priority task is going to be required to reach some point to allow the higher-priority task to continue executing, this should be understood during design, and priorities should be appropriately updated at that time.

48 of 100

Semaphores—a better signal without polling

Implementation of binary semaphores

Priority inversion

  • In the second, the problem becomes more difficult if the semaphore for mutual exclusion is used to restrict access to, for example, a resource.
  • In this case, when the resource will be required by a higher-priority task may not be obvious at design time—the high-priority task may simply be responding to unpredictable external events.
  • In this case, it is quite likely that a high priority task may suddenly require access to a binary semaphore held by another task.
  • The easy option is to kill the low-priority task and restart it, thereby freeing the semaphore for the higher-priority task; however, this will fail if, for example, the semaphore is protecting a data structure that is mid-update.
  • In the next chapter on resource management, we will consider a solution for when a high-priority task is waiting on a lower-priority task—an inversion in priorities.

49 of 100

Semaphores—a better signal without polling

Counting semaphores:

  • There are scenarios in which more than one process needs to execute in the critical section simultaneously.
  • A counting semaphore can be used when we need to have more than one process in the critical section at the same time.
  • The code below shows an implementation of a counting semaphore: the structure of the semaphore and the logic by which entry into and exit from the critical section are performed.

50 of 100

Semaphores—a better signal without polling: Counting semaphores

Down(Semaphore S)
{
    S.value = S.value - 1;
    if (S.value < 0)
    {
        put_process(PCB) in L;
        sleep();
    }
    else
        return;
}

Up(Semaphore S)
{
    S.value = S.value + 1;
    if (S.value <= 0)
    {
        select a process from L;
        wake-up();
    }
}

51 of 100

Semaphores—a better signal without polling: Counting semaphores

  • In this mechanism, entry into and exit from the critical section are performed on the basis of the value of the counting semaphore.
  • The value of the counting semaphore at any point in time indicates how many more processes can enter the critical section at that moment.
  • A process that wants to enter the critical section first decrements the semaphore value by 1 and then checks whether the result is negative.
  • If it is negative, the process is pushed onto the list of blocked processes (L); otherwise, it enters the critical section.
  • When a process exits the critical section, it increments the counting semaphore by 1 and then checks whether the result is negative or zero. If so, at least one process is waiting in the blocked state; hence, to ensure bounded waiting, the first process on the list of blocked processes is woken up and enters the critical section.
  • The processes on the blocked list are woken in the order in which they slept. If the value of the counting semaphore is negative, its magnitude gives the number of processes in the blocked state; if it is positive, it gives the number of slots available in the critical section.

52 of 100

Semaphores—a better signal without polling: Counting semaphores

  • The design of counting semaphores - This type of issue is so ubiquitous in real-time, embedded and operating systems that it is often easier to abstract such counting into a semaphore that facilitates this approach.
  • Such a semaphore is a counting semaphore with the following behaviour:

1. the counting semaphore has an integer value (as opposed to just a binary true-or-false) whose initial value is non-negative (0 or greater),

2. a wait decrements the value of the counting semaphore and if the resulting number is strictly negative, the waiting task is blocked on the semaphore, and

3. a post increments the value of the counting semaphore, and if the initial value was strictly negative, this indicates at least one task is waiting on the semaphore, and therefore one task must be made ready and placed on the ready queue.

  • If the value n of the counting semaphore is strictly negative, then –n is the number of tasks waiting on the semaphore.

53 of 100

Semaphores—a better signal without polling: Counting semaphores

An application of counting semaphores

  • Suppose a queue (a pipe) has room for n entries, and multiple tasks are attempting to access it simultaneously, some pushing items into the queue, others popping them.
  • The benefit of using counting semaphores is that if the pipe is ever empty or full, any tasks attempting to pop or push, respectively, will be blocked until another task either inserts a new entry into or removes an entry from the pipe.
  • It would be much more difficult to achieve this without a counting semaphore.

Implementation of counting semaphores:

  • We will look at semaphores in POSIX, the Keil RTX RTOS and CMSIS-RTOS RTX.

54 of 100

Semaphores—a better signal without polling: Counting semaphores

POSIX semaphores

  • Multiple threads in the same process share memory, so they can use an ordinary semaphore; different processes do not share memory by default, so a semaphore shared between processes must be set up in a shared memory location.

55 of 100

Semaphores—a better signal without polling: Counting semaphores

Keil RTX RTOS semaphores:

  • The Keil RTX RTOS has both mutual exclusion and counting semaphores.
  • The mutual exclusion semaphore is named after its primary application: achieving mutual exclusion:

56 of 100

Semaphores—a better signal without polling: Counting semaphores

Keil RTX RTOS semaphores:

  • The function OS_RESULT os_mut_wait( OS_MUT, U16 ) takes a second argument, a 16-bit unsigned integer, which indicates the number of system intervals it will wait before timing out.
  • A system interval has a default value of 10 ms, but this is configurable. The argument 0xffff indicates to wait forever.
  • The return value indicates what occurred:

57 of 100

Semaphores—a better signal without polling: Counting semaphores

Keil RTX RTOS semaphores:

58 of 100

Semaphores—a better signal without polling: Counting semaphores

Counting Semaphore vs. Binary Semaphore

Here are some major differences between counting and binary semaphores:

  • Counting semaphore: does not by itself provide mutual exclusion. Binary semaphore: provides mutual exclusion.
  • Counting semaphore: may take any integer value. Binary semaphore: value only 0 and 1.
  • Counting semaphore: more than one slot in the critical section. Binary semaphore: only one slot.
  • Counting semaphore: can admit a set of processes. Binary semaphore: has a mutual exclusion mechanism.

59 of 100

Problems in synchronization

There are three categories of problems in synchronization:

1. Basic problems in synchronization:

1. signalling, and

2. rendezvous.

2. Intermediate problems in synchronization:

1. multiplexing,

2. group rendezvous,

3. light-switch, and

4. condition variables.

3. Advanced problems in synchronization:

1. the dining philosophers’ problem, and

2. the readers-writers problem.

60 of 100

Problems in synchronization

Basic problems in synchronization

  • We have already discussed mutual exclusion as the first aspect of synchronization, but we will consider one further issue.
  • The second is serialization.
  • There are two basic serialization patterns that will form the basis of other solutions:

1. signalling, and

2. rendezvous.

  • We will look at solutions for this problem when there are two tasks.
  • Mutual exclusion - Binary semaphores can be used for mutual exclusion, but there is one weakness: any task can issue a post to a binary semaphore. If the data structure tracks the task or thread that issued the wait and only allows that task or thread to post, it is often referred to as a mutual exclusion data structure, or mutex. A mutex often comes with an additional safeguard: the initial value is always 1, and the only task or thread that can post is the one that waited on it.

61 of 100

Problems in synchronization

Basic problems in synchronization

Mutual exclusion

  • As a quick review, mutual exclusion with semaphores may be achieved through a shared binary semaphore initialized to 1:

binary_semaphore_t mutex;

binary_semaphore_init( &mutex, 1 );

  • Then, any critical region need only be preceded by a wait and followed by a post:

binary_semaphore_wait( &mutex ); {

// Critical region

} binary_semaphore_post( &mutex );
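A minimal runnable sketch of this pattern in Python, with `threading.Semaphore(1)` standing in for the binary semaphore (variable names mirror the pseudocode above):

```python
import threading

# A binary semaphore initialized to 1 guards the critical region.
# Without the wait/post pair the tasks could interleave their
# read-modify-write on `counter`.
mutex = threading.Semaphore(1)   # plays the role of binary_semaphore_t
counter = 0

def task(iterations):
    global counter
    for _ in range(iterations):
        mutex.acquire()          # binary_semaphore_wait( &mutex );
        counter += 1             # critical region
        mutex.release()          # binary_semaphore_post( &mutex );

threads = [threading.Thread(target=task, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because every increment happens inside the wait/post pair, the final count is exactly the sum of all iterations.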

62 of 100

Problems in synchronization

Basic problems in synchronization

Signalling - A semaphore can be used by one task to signal that an event has occurred; any task waiting on that signal can then proceed in its execution. To achieve a signal, the semaphore is initialized to zero, and the signalling task posts to that semaphore to signal the event. Any task that waits on the signal before it is sent is blocked until the signal occurs; any task that waits on the signal after it is sent continues in its execution.

binary_semaphore_t signal;

binary_semaphore_init( &signal, 0 );

void task_1( void ) {

// ...

// Prepare data

binary_semaphore_post( &signal );

// ...

}

void task_2( void ) {

// ...

binary_semaphore_wait( &signal );

// Process data

// ...

}
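The signalling pattern above can be sketched in Python; note that the semaphore starts at 0, so `task_2` blocks until `task_1` posts:

```python
import threading

# Signalling sketch: the semaphore starts at 0, so the waiting task
# cannot proceed until the signalling task posts.
signal = threading.Semaphore(0)
events = []                      # illustration-only ordering record

def task_1():
    events.append("prepare data")
    signal.release()             # binary_semaphore_post( &signal );

def task_2():
    signal.acquire()             # binary_semaphore_wait( &signal );
    events.append("process data")

t2 = threading.Thread(target=task_2)
t2.start()                       # task_2 blocks on the signal
t1 = threading.Thread(target=task_1)
t1.start()
t1.join()
t2.join()
```

The data is always prepared before it is processed, regardless of which task is scheduled first.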

63 of 100

Problems in synchronization

Basic problems in synchronization

  • Rendezvous - Suppose we have two tasks executing, but neither should continue executing beyond a certain point until both have gotten to that point.

void task_0( void ) {

// Prepare data...

// Rendezvous

// Process data...

}

void task_1( void ) {

// Prepare data...

// Rendezvous

// Process data...

}

64 of 100

Problems in synchronization

Basic problems in synchronization

Rendezvous –

  • In order to solve this problem, let us recall that there are two events here that require signalling:

1. The first event is that Task 0 is ready for the rendezvous, and

2. The second event is that Task 1 is ready for the rendezvous.

  • As soon as each task reaches its rendezvous point, it must signal on its own semaphore that it has arrived. Each then waits for the other:

65 of 100

Problems in synchronization

Basic problems in synchronization

Rendezvous –

binary_semaphore_t signal[2];

binary_semaphore_init( &signal[0], 0 );

binary_semaphore_init( &signal[1], 0 );

void task_0( void ) {

// Prepare data...

binary_semaphore_post( &signal[0] ); // Signal that this task is ready

binary_semaphore_wait( &signal[1] ); // Wait for the other task

// Process data...

}

void task_1( void ) {

// Prepare data ...

binary_semaphore_post( &signal[1] ); // Signal that this task is ready

binary_semaphore_wait( &signal[0] ); // Wait for the other task

// Process data ...

}
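A runnable sketch of the rendezvous in Python (the bookkeeping lists are illustration-only, added to show that both tasks have prepared before either proceeds):

```python
import threading

# Rendezvous sketch: each task posts its own semaphore and waits on
# the other's, so neither processes data until both have prepared.
signal = [threading.Semaphore(0), threading.Semaphore(0)]
prepared = []
prepared_count_at_pass = []

def task(my_id, other_id):
    prepared.append(my_id)               # prepare data
    signal[my_id].release()              # signal that this task is ready
    signal[other_id].acquire()           # wait for the other task
    prepared_count_at_pass.append(len(prepared))  # both must have prepared

t0 = threading.Thread(target=task, args=(0, 1))
t1 = threading.Thread(target=task, args=(1, 0))
t0.start()
t1.start()
t0.join()
t1.join()
```

Each task records how many tasks had prepared when it passed the rendezvous; by construction that number is always two.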

66 of 100

Intermediate problems in synchronization

Now we will look at

1. multiplexing,

2. group rendezvous,

3. light-switch, and

4. condition variables.

  • The first is a generalization of mutual exclusion and the next two are possible generalizations of rendezvous. In all cases, we will consider multiple tasks.
  • Multiplexing - A multiplex is mutual exclusion where at most n tasks can access the critical section. This is a generalization of mutual exclusion, except that now up to n tasks can access the region. One justification is processor usage: if more than n tasks perform a processor-intensive operation, the quality of service degrades for all tasks. Restaurants providing services do this all the time: the building has a maximum capacity based on the number of available seats, and allowing more people into the restaurant than there are seats would only reduce the quality of service.

67 of 100

Intermediate problems in synchronization

Multiplexing –

  • To achieve this, we replace the binary semaphore with a counting semaphore:

counting_semaphore_t multiplex;

counting_semaphore_init( &multiplex, n );

counting_semaphore_wait( &multiplex ); {

// Shared critical region

} counting_semaphore_post( &multiplex );

  • We use the same formatting convention to have the shared critical region stand out.
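The multiplex can be sketched in Python, with illustration-only occupancy counters added to show that at most n tasks are ever inside the region:

```python
import threading

# Multiplex sketch: a counting semaphore initialized to n admits at
# most n tasks into the shared critical region at once. `inside` and
# `peak` are illustration-only bookkeeping to observe the bound.
n = 3
multiplex = threading.Semaphore(n)
bookkeeping = threading.Lock()
inside = 0
peak = 0

def task():
    global inside, peak
    multiplex.acquire()              # counting_semaphore_wait
    with bookkeeping:
        inside += 1
        peak = max(peak, inside)
    # ... shared critical region ...
    with bookkeeping:
        inside -= 1
    multiplex.release()              # counting_semaphore_post

threads = [threading.Thread(target=task) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Ten tasks pass through the region, but the observed peak occupancy never exceeds n.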

Group rendezvous and turnstile - The rendezvous we discussed works well for synchronizing two tasks, but it does not work in general for an arbitrary number of tasks. You could have one semaphore for each task, but this would be excessive.

68 of 100

Intermediate problems in synchronization

Group rendezvous and turnstile –

  • Instead, we would like a general mechanism to allow n tasks rendezvousing, and only after all n tasks have gotten to that point are they all allowed to continue.

A sub-optimal solution –

  • Your first thought might be to have n – 1 tasks wait on a semaphore, and have the last task to reach the rendezvous issue n – 1 posts to allow the waiting tasks to pass through the turnstile.
  • The problem with this is that the last task is likely also the lowest in priority. It may happen that the first post requires a context switch to the highest priority task. Then, when it is finished, you must return to the posting task, which then issues its second post. This is unnecessarily expensive, as it requires, in the worst case, 2n context switches. Instead, we’d like to reduce this to n context switches.

69 of 100

Intermediate problems in synchronization

  • Group rendezvous and turnstile –

A better solution

  • To achieve this, there must be a global variable storing the number of tasks that are to rendezvous:

size_t tasks_not_at_rendezvous = ALL_TASKS;

// the number of tasks executing the rendezvous

  • One solution may be to have all the tasks wait until the last one gets there, and it will release all other tasks.
  • This, however, causes more problems. Instead, we will simply have the last task arriving let a task through, and then each subsequent task that is let through will signal the next, and so on.
  • Such an approach is said to be a turnstile.
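The turnstile can be sketched as follows (a minimal Python sketch of the classic pattern; the last arrival posts once, and each released task posts for the next):

```python
import threading

# Turnstile sketch: the last of ALL_TASKS arrivals posts once; each
# task passing through then posts for the next, so releasing the group
# costs one wakeup per task rather than a burst of context switches.
ALL_TASKS = 5
tasks_not_at_rendezvous = ALL_TASKS
mutex = threading.Semaphore(1)
turnstile = threading.Semaphore(0)
arrivals_seen_at_release = []    # illustration-only bookkeeping

def task():
    global tasks_not_at_rendezvous
    with mutex:
        tasks_not_at_rendezvous -= 1
        if tasks_not_at_rendezvous == 0:
            turnstile.release()  # last arrival opens the turnstile
    turnstile.acquire()          # wait to be let through
    turnstile.release()          # let the next task through
    arrivals_seen_at_release.append(ALL_TASKS - tasks_not_at_rendezvous)

threads = [threading.Thread(target=task) for _ in range(ALL_TASKS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

No task records fewer than ALL_TASKS arrivals, confirming that none passed the turnstile before the whole group reached it.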

70 of 100

Intermediate problems in synchronization

  • Light-switches - If a room is being used for an event, the first person who enters the room marks the room as used. Others there for the event may also enter the room, but the room is blocked from others not associated with the event. This can be thought of as light-switch: the first user entering turns on the light, the last user leaving turns off the light.
  • Events - A synchronization tool similar to a binary semaphore is an event (waiting for a situation, or condition, to occur), only an event does not have a memory—it never temporarily stores tokens. If a task posts to an event and there is no task waiting on that event, the occurrence of that event is lost.
  • The interface for an event is similar, but slightly different from that of a counting semaphore. While the wait function is similar, we are not posting tokens; instead, we are simply signalling any waiting tasks. In this case, we have two options:

1. signalling a single task waiting for the event to occur, or

2. signal all (broadcast to all) tasks waiting for the event to occur.
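The light-switch described above can be sketched as a small class (a hypothetical Python sketch; `room` is any binary semaphore the group holds while the room is occupied):

```python
import threading

# Light-switch sketch: the first task in locks the shared `room`
# semaphore, the last one out unlocks it; tasks in the same group can
# overlap freely while the group holds the room.
class Lightswitch:
    def __init__(self):
        self.counter = 0
        self.mutex = threading.Semaphore(1)

    def lock(self, room):
        with self.mutex:
            self.counter += 1
            if self.counter == 1:
                room.acquire()       # first one in turns on the light

    def unlock(self, room):
        with self.mutex:
            self.counter -= 1
            if self.counter == 0:
                room.release()       # last one out turns it off
```

While any member of the group is inside, outsiders attempting to acquire `room` are blocked; once the last member leaves, the room is free again.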

71 of 100

Advanced problems in synchronization

  • We will look at:

1. the dining philosophers’ problem, and

2. the readers-writers problem.

Dining philosophers’ problem

  • Five philosophers are in a room with a round table with five plates of rice with a chopstick between each of the plates.
  • The philosophers walk around the room, talk and think, and when they get hungry, they go to sit down, grab one of the chopsticks on either side of the plate, then grab the other chopstick on the other side of the plate, then eat, and then place both chopsticks back down again.
  • This works well until all philosophers sit down nearly simultaneously and each grabs the left chopstick.
  • As all chopsticks are currently in someone’s hand, nobody can pick up the second chopstick they would need to begin eating.
  • None of the philosophers can eat—that is, they will starve.

72 of 100

Advanced problems in synchronization

Dining philosophers’ problem

  • What strategies could the philosophers use to ensure that:

1. they do not end up in a deadlock where none can eat, and

2. none of the philosophers starve.

  • Humour: Some textbooks pose this problem with five forks and spaghetti instead of chopsticks and rice.

First left, then right—deadlock

  • We could start by just having each philosopher lock the left chopstick, and then the right. If one is locked, that philosopher will wait until it is freed.
  • We immediately run into a problem: suppose that every philosopher attempts to grab a chopstick at approximately the same time, so that each manages to lock the left chopstick. Now, all philosophers are locked on the other chopstick, so none will eat and—even worse—none will have a chance to think.

73 of 100

Advanced problems in synchronization

Dining philosophers’ problem

Accessing both chopsticks simultaneously

  • One possible solution is to try to grab both chopsticks at the same time, using the rules:

1. Try to grab both chopsticks at the same time,

2. If you cannot, flag yourself as hungry and wait for someone to nudge you,

3. If you can, lock both chopsticks, eat, unlock both, and nudge any neighbor who may be hungry.

  • Thus, we cannot end up with the situation that each philosopher has exactly one chopstick, and thus we avoid deadlock.

Ordering the resources

Another possible solution is to observe that there is a cycle that can develop as a result of the various locks. Each philosopher is holding on to a resource which the previous philosopher requires to proceed. By ordering the resources and requiring that each philosopher picks up the resources in order, it is not possible to generate a cycle, and therefore it is possible to avoid deadlock.

74 of 100

Advanced problems in synchronization

Dining philosophers’ problem

Ordering the resources


  • Now, only one of Philosopher 0 and Philosopher 4 will be able to access Chopstick 0—the other will be blocked.
  • Consequently, there are now only four philosophers trying to lock five chopsticks, so by the pigeonhole principle at least one philosopher will be able to lock two chopsticks, eat, and release the chopsticks
  • Restricting access to the table - Using multiplexing, another solution is to restrict the number of philosophers that have access to the table to four. In this case, again, the pigeonhole principle tells us that at least one of the four philosophers at the table will have access to two chopsticks.
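The resource-ordering strategy can be sketched in Python: each philosopher always picks up the lower-numbered chopstick first, so a cycle of holds cannot form and every philosopher eventually eats:

```python
import threading

# Resource-ordering sketch: each philosopher acquires the
# lower-numbered chopstick first, breaking the hold-and-wait cycle
# that causes deadlock. All five threads eat and terminate.
N = 5
chopsticks = [threading.Semaphore(1) for _ in range(N)]
ate = []                          # illustration-only record of who ate

def philosopher(i):
    first, second = sorted((i, (i + 1) % N))  # order the resources
    chopsticks[first].acquire()
    chopsticks[second].acquire()
    ate.append(i)                 # eat
    chopsticks[second].release()
    chopsticks[first].release()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join(5)
```

Philosopher 4, who sits between Chopstick 4 and Chopstick 0, is the one whose order is reversed by the sort, which is exactly what breaks the cycle.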

75 of 100

Advanced problems in synchronization

Dining philosophers’ problem

  • Starvation -One question we must ask is, is it possible that a hungry philosopher may not be fed? This is always possible if there is a priority among the philosophers; however, these philosophers are egalitarian. In that case, is it possible for a philosopher to never get a chance to eat?

1. Suppose Philosopher 1 and 3 are eating, and Philosopher 0 comes to the table and promptly goes to sleep.

2. Philosopher 4 comes to the table and goes to sleep, after which Philosopher 3 leaves, waking up Philosopher 4 and he starts eating.

3. Philosopher 2 comes to the table and goes to sleep, after which Philosopher 1 leaves. Philosopher 1 may try to wake up Philosopher 0, but Philosopher 4 has Philosopher 0’s other chopstick, so Philosopher 0 goes back to sleep. Philosopher 2 is woken up and starts eating.

4. Philosopher 1 comes to the table and goes to sleep, after which Philosopher 2 leaves, waking up Philosopher 1.

76 of 100

Advanced problems in synchronization

Dining philosophers’ problem

  • Starvation -

5. Philosopher 3 comes to the table and goes to sleep, after which Philosopher 4 leaves. Philosopher 4 may try to wake up Philosopher 0, but Philosopher 1 has Philosopher 0’s other chopstick, so Philosopher 0 goes back to sleep. Philosopher 3 is woken up and starts eating.

6. Go back to Step 2.

Readers-writers problem

  • When accessing any resource that may be both read and written to, it is often possible that any number of readers can have access to the resource simultaneously, but only one writer at a time can ever change the resource. This may apply to something as simple as a data structure, or something more complex such as a file or database.
  • Consider the C++ const modifier for member functions: there is no reason that two or more separate tasks could not call such functions simultaneously; however, as we have previously seen, it would be potentially disastrous if two different tasks tried to modify the data structure simultaneously.

77 of 100

Advanced problems in synchronization

Readers-writers problem

  • We will consider a number of variations on this scenario:

1. readers may access the resource so long as there is no writer accessing the resource, and writers may access the resource so long as there are no readers or writers accessing the resource;

2. first-come, first-served (FCFS), where any number of readers may appear and are granted access to the critical section, but as soon as a writer appears, it waits until all readers have finished accessing the resource, and any subsequent readers or writers must wait until this writer completes;

3. reader-priority where all writers that are not currently accessing the resource must wait so long as even one reader is waiting to access the resource; and

4. writer-priority where all readers that are not currently accessing the resource must wait so long as even one writer is waiting to access the resource.

78 of 100

Advanced problems in synchronization

Readers-writers problem

  • We have two tasks, one reading a resource, the other writing to the resource.

What do we require?

1. When one writer is accessing the resource, nothing else can access the resource.

2. When one or more readers are accessing the resource, additional readers can access the resource, but no writers can access the resource.
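These two requirements can be met by combining one semaphore for the resource with a reader counter (the light-switch pattern). A minimal Python sketch, with an illustration-only log:

```python
import threading

# Readers-writers sketch: `room` grants exclusive access; readers
# share it through a counter so that only the first reader waits on
# `room` and only the last reader releases it.
room = threading.Semaphore(1)
read_mutex = threading.Semaphore(1)
readers = 0
log = []                         # illustration-only record of accesses

def reader(i):
    global readers
    with read_mutex:
        readers += 1
        if readers == 1:
            room.acquire()       # first reader locks out writers
    log.append(("read", i))      # read the resource
    with read_mutex:
        readers -= 1
        if readers == 0:
            room.release()       # last reader lets writers in

def writer(i):
    room.acquire()               # exclusive access to the resource
    log.append(("write", i))     # write to the resource
    room.release()

threads = [threading.Thread(target=reader, args=(i,)) for i in range(3)]
threads.append(threading.Thread(target=writer, args=(0,)))
for t in threads:
    t.start()
for t in threads:
    t.join(5)
```

Readers overlap freely with one another, while the writer's access is exclusive of everything else.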

79 of 100

Advanced problems in synchronization

Readers-writers problem

  • Readers wait for one writer - Suppose n readers are currently accessing the resource and one writer appears. If the writer does not block new readers, the writer may have to wait forever.
  • Consequently, we will force the readers to pass through a turnstile; if a writer ever appears, it will lock the turnstile. One semaphore can regulate access to the resource, and a light-switch can be used to regulate access for readers.
  • Now, if there are multiple writers, and those writers have a higher priority than the readers, then writers will always be able to access the document as soon as the last reader gives up that resource.
  • If another writer appears while a writer holds the resource, and its priority is higher than the priority of any readers that may have arrived in the meantime, the next writer nevertheless has priority.

80 of 100

Advanced problems in synchronization

Readers-writers problem

81 of 100

Automatic synchronization

  • Recall that with memory allocation, the optimal solution is to require explicit allocations and deallocations (manual memory management). This, however, significantly increases development costs, as it is much more difficult to ensure functional code; therefore, there are automatic alternatives to memory management such as garbage collection, but they come at an additional cost.
  • We will now consider

1. the weaknesses of semaphores, and

2. automatic alternatives.

  • Specifically, we will describe the synchronization tools in Java and Ada, but we will also see a result that indicates that for purposes of synchronization, semaphores are the standard.

82 of 100

Automatic synchronization

  • Weaknesses of semaphores - The most serious issue with semaphores is that, like memory allocation in C, they are manual.
  • It is up to the programmer to ensure that every wait and post to a semaphore is correctly in place; failure to have even one statement in the right place could cause a system failure: either simultaneous access to a critical section or deadlock.
  • We will consider the automatic synchronization provided by Java. Languages such as Ada that target embedded and real-time applications have an even richer collection of automatic synchronization tools.
  • Automatic alternatives to semaphores for mutual exclusion –
    • There are numerous solutions that are equivalent to the use of semaphores—that is, any synchronization achieved through semaphores may be achieved using protected objects in Ada, monitors in numerous other programming languages, and Java’s concept of synchronized blocks and methods, and vice versa.

83 of 100

Automatic synchronization

  • Automatic alternatives to semaphores for mutual exclusion –
    • That is, neither is more powerful than the other, only it may be easier to achieve synchronization using one paradigm over another.
    • We will describe Java’s implementation, as it provides a concrete version.
    • The Java synchronized keyword may be used either on a method or a block of code.
    • For example, in a singly linked list class that is intended to be shared by multiple tasks, the pushFront method may be declared to be synchronized, indicating that if another thread attempts to call the same method, it will have to wait its turn.
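Since the slides describe Java's `synchronized` keyword, here is a hypothetical Python analogue of a class with synchronized methods: every method runs under one internal lock, so concurrent pushFront calls take turns (the class and method names are illustrative):

```python
import threading

# Monitor-style sketch analogous to Java synchronized methods: one
# internal lock guards every method, so only one thread at a time can
# be inside any method of the object.
class SynchronizedList:
    def __init__(self):
        self._lock = threading.RLock()   # one lock guards all methods
        self._items = []

    def push_front(self, value):
        with self._lock:                 # like a synchronized method
            self._items.insert(0, value)

    def size(self):
        with self._lock:
            return len(self._items)
```

Four threads each pushing 100 items always leave exactly 400 items in the list, because no two insertions can interleave.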

84 of 100

Resource Management

  • Semaphores
  • Classification of resources
  • Device Management
  • Resource managers
  • Priority and Deadline Inversion
  • Summary Of Resource Management

85 of 100

Classification of Resources

  • A finite number of resources is available in the system. These resources are distributed among a number of competing processes.
  • Resources fall into two general categories:

1. Reusable resources.

2. Consumable resources.

86 of 100

Classification of resources (Cont’d)

Reusable Resource

  • A reusable resource is one that can be safely used by only one process at a time and is not depleted by that use.
  • Processes obtain resource units that they later release for reuse by other processes.
  • Examples of reusable resources include processor, I/O channels, I/O devices, primary and secondary memory, files, database, semaphores etc.

Consumable Resources:

  • A consumable resource is one that can be created and destroyed.
  • There is no limit on the number of interrupts, signals, messages and information in I/O buffers.
  • A process must request a resource before using it, and must release the resource after using it.
  • The number of resources requested may not exceed the total number of resources available in the system.
  • If the system has 4 printers, then a request for printers must be for 4 or fewer.
  • A process may utilize a resource only in the following sequence: request, use, release.

87 of 100

Classification of resources (Cont’d)

Some resources cannot be shared between tasks and threads while others can: we classify these as

1. exclusive resources, and

2. shared resources

  • An exclusive resource is one that can be used by only one task at a time. A printer is the most obvious exclusive resource.

88 of 100

Classification of resources (Cont’d)

Reusable resources may either have to be dedicated to a single task or thread until it is finished with the resource, or it may be possible to temporarily borrow the resource while putting the task to sleep. We call these two types of resources

1. Pre-emptible

  • A pre-emptible resource is one that can temporarily be taken away from a task or thread and returned once another task has finished using it.
  • A task or thread can be interrupted and other tasks, threads or interrupt service routines can execute.

2. Non-pre-emptible.

  • Non-preemptive scheduling is used when a process terminates, or a process switches from the running to the waiting state.
  • Non-preemptive scheduling does not interrupt a process running on the CPU in the middle of its execution.

89 of 100

Preemptive vs. Non-preemptive Scheduling

  • Basic: In preemptive scheduling, resources (CPU cycles) are allocated to a process for a limited time. In non-preemptive scheduling, once resources are allocated to a process, the process holds them until it completes its burst time or switches to the waiting state.
  • Interrupt: A preemptive process can be interrupted in between. A non-preemptive process cannot be interrupted until it terminates itself or its time is up.
  • Starvation: In preemptive scheduling, if a high priority process frequently arrives in the ready queue, a low priority process may starve. In non-preemptive scheduling, if a process with a long burst time is running on the CPU, a later process with a shorter burst time may starve.
  • Overhead: Preemptive scheduling has the overhead of scheduling the processes; non-preemptive scheduling does not.
  • Flexibility: Preemptive scheduling is flexible; non-preemptive scheduling is rigid.

90 of 100

Key functions of an OS

  • From this discussion, the management of certain resources is sufficiently distinct from general resource management that they are given special consideration, including:

1. task or thread management,

2. memory management,

3. device management, and

4. file management

91 of 100

Memory Management

  • Memory management refers to management of Primary Memory or Main Memory. Main memory is a large array of words or bytes where each word or byte has its own address.
  • Main memory provides fast storage that can be accessed directly by the CPU. For a program to be executed, it must be in main memory. An Operating System does the following activities for memory management −
  • Keeps tracks of primary memory, i.e., what part of it are in use by whom, what part are not in use.
  • In multiprogramming, the OS decides which process will get memory when and how much.
  • Allocates the memory when a process requests it to do so.
  • De-allocates the memory when a process no longer needs it or has been terminated.

92 of 100

File Management

  • A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories.
  • An Operating System does the following activities for file management −
  • Keeps track of information, location, uses, status etc. The collective facilities are often known as the file system.
  • Decides who gets the resources.
  • Allocates the resources.
  • De-allocates the resources.

93 of 100

Device Management

  • An Operating System manages device communication via their respective drivers. It does the following activities for device management −
  • Keeps track of all devices. The program responsible for this task is known as the I/O controller.
  • Decides which process gets the device when and for how much time.
  • Allocates the device in the efficient way.
  • De-allocates devices.

94 of 100

Device Management Representation

95 of 100

Functions of Device Manager

96 of 100

Resource Manager

  • An operating system is essentially a resource manager.

  • One issue a resource manager will have to deal with is priority inversion: a lower priority task or thread may hold an exclusive resource that is required by a higher priority task or thread. Consequently, it is necessary to increase the priority of the lower priority task or thread to that of the higher priority one until the resource is released.

  • One benefit of a resource manager is that when a task or thread is terminated, the resource manager can immediately reclaim all resources associated with that task, making them once again available to other tasks and threads.

97 of 100

Priority Inversion

  • Priority inversion is an operating system scenario in which a higher priority process is effectively preempted by a lower priority process. This implies an inversion of the priorities of the two processes.

Problems due to Priority Inversion

  • Some of the problems that occur due to priority inversion are given as follows:
  • A system malfunction may occur if a high priority process is not provided the resources it requires.
  • Priority inversion may also force corrective measures to be taken, such as resetting the entire system.
  • The performance of the system can be reduced due to priority inversion, because it is imperative for higher priority tasks to execute promptly.
  • System responsiveness decreases, as high priority tasks may have strict time constraints or real-time response guarantees.
  • Sometimes there is no harm caused by priority inversion, as the late execution of the high priority process is not noticed by the system.

98 of 100

Solutions of Priority Inversion

Some of the solutions to handle priority inversion are given as follows:

Priority Ceiling

  • All of the resources are assigned a priority that is equal to the highest priority of any task that may attempt to claim them. This helps in avoiding priority inversion.

Disabling Interrupts

  • There are only two priorities in this case i.e. interrupts disabled and pre-emptible. So priority inversion is impossible as there is no third option.

99 of 100

Solutions of Priority Inversion

Priority Inheritance:

  • This solution temporarily elevates the priority of the low priority task that holds the resource to that of the highest priority task waiting for the resource. This means that medium priority tasks cannot intervene and cause priority inversion.

No Blocking:

  • Priority inversion can be avoided by avoiding blocking, as it is the low priority task blocking the high priority task that causes the inversion.

Random Boosting:

  • The priority of the ready tasks can be randomly boosted until they exit the critical section.

100 of 100