Real Time Operating Systems
Dr. Anupriya Koneru,
Associate professor,
Department of IT
LBRCE
Reference: Douglas Wilhelm Harder, Jeff Zarnett, Vajih Montaghami and Allyson Giannikouris, A Practical Introduction to Real-Time Systems for Undergraduate Engineering
UNIT-III
Synchronization
Synchronization addresses two problems:
1. mutual exclusion: preventing two tasks from accessing the same data simultaneously, and
2. serialization: having two events occur one after the other.
Synchronization
The need for synchronization
1. Two tasks trying to use the same data structure,
2. Two tasks attempting to share information, and
3. Two tasks attempting to access information from a third
Two tasks trying to use the same data structure:
First, suppose we have a shared linked list:
The need for synchronization
Two tasks trying to use the same data structure:
If an interrupt occurs partway through an insertion, another task may observe an inconsistent state where:
1. the new node was allocated and initialized,
2. the fields head and tail were updated, but
3. the size field is still zero.
The need for synchronization
Two tasks trying to use the same data structure:
The need for synchronization
Two tasks communicating information:
The need for synchronization
Two tasks communicating information:
No. Only after the data is set is the flag result_is_produced set to true.
Yes. Suppose that we have only a single processor or core. In this case, while Task 2 is executing the while loop, Task 1 cannot spend any time actually preparing a result.
One solution to this problem is to call a yield() command.
The need for synchronization
Multiple tasks communicating information:
Summary of problems:
In each case, the root of the problem is that two separate operations are involved:
1. checking a flag, and
2. setting that flag.
An interrupt may occur between these two operations.
The need for synchronization
Solutions we will consider include:
1. tokens,
2. a test-and-reset instruction,
3. semaphores, and
4. automatic equivalents to semaphores.
Petri nets—describing synchronizations graphically
A Petri net describes a synchronization using two kinds of components:
1. places or conditions, and
2. transitions between conditions.
Petri nets—describing synchronizations graphically
Allocating resources—tokens
Fig: The resource server when it is not ready, and with a token indicating the server is ready.
To represent the current state, we will use tokens.
It is possible for a server to have, for example, four tasks ready to provide the service. Consequently, this could be represented by four tokens, as is shown in Fig.
Fig:A server with four tasks ready.
Synchronization scenarios using Petri nets
There are several common scenarios in synchronization:
1. concurrency,
2. conflict,
3. synchronization, and
4. merging.
Synchronization scenarios using Petri nets(cont’d)
Synchronization through token passing
1. Each consumer has a unique identifier,
2. The token has the value of one of the identifiers, and
3. When a consumer has accessed the result, it updates the token to be the identifier of the next consumer.
Test-and-set—a crude signal with polling
The problem with this is that an interrupt can occur between the variable being tested and the variable being set to false. Instead, we must have some means of testing and setting the variable so that no interrupt can occur between the two operations. Thus, we require an atomic test-and-reset: if the variable is
1. false, it remains false and the operation reports false, and
2. true, it is set to false, but the operation still reports the original value true.
Test-and-set—a crude signal with polling (Cont’d)
Semaphores—a better signal without polling
Binary Semaphores
Wait and Signal Operations in Semaphores
Semaphores—a better signal without polling
Binary Semaphores
P(semaphore s)
{
    if (s.value == 1) {
        s.value = 0;
    } else {
        // add the calling process to the waiting queue
        s.q.push(current_process);
        sleep();
    }
}

V(semaphore s)
{
    if (s.q is empty) {
        s.value = 1;
    } else {
        // select a process from the waiting queue
        Process p = s.q.pop();
        wakeup(p);
    }
}
Semaphores—a better signal without polling
Application 1: Mutual exclusion
Task 1:
  Down(s)
  // CS
  Up(s)
Task 2:
  Down(s)
  // CS
  Up(s)
Semaphores—a better signal without polling
Application 2: Producer-Consumer Problem
Producer:
  Down(Empty)
  Down(s)
  // CS
  Buffer[In] = Item
  In = (In + 1) mod n
  Up(s)
  Up(Full)
Consumer:
  Down(Full)
  Down(s)
  // CS
  Item c = Buffer[Out]
  Out = (Out + 1) mod n
  Up(s)
  Up(Empty)
Example: a ring buffer with N = 8 slots, where slots 0, 1 and 2 hold A, B and C:
  0 | A |
  1 | B |
  2 | C |
  3 |   |
  4 |   |
  5 |   |
  6 |   |
  7 |   |
N = 8
Full = number of filled slots = 3 (initially 0)
Empty = number of empty slots = 5 (initially N)
S = 1
In and Out index the next slots to be filled and emptied, respectively.
Semaphores—a better signal without polling
Binary semaphores
binary_semaphore_init( binary_semaphore_t *const p_this, unsigned int n )
Initialize the semaphore to either 0 or 1.
binary_semaphore_wait( binary_semaphore_t *const p_this )
If the semaphore is 1, set it to 0 and continue executing.
Otherwise, flag this task as being blocked on this semaphore and prevent it from being scheduled.
binary_semaphore_post( binary_semaphore_t *const p_this )
If the semaphore is 1, do nothing.
If the semaphore is 0 and there is at least one task blocked on this semaphore, unblock one of those tasks;
otherwise, set the semaphore to 1.
Semaphores—a better signal without polling
Binary semaphores
Applications of binary semaphores
Two tasks communicating information:
1. The producer will be required to wait if the consumer has not yet processed the data, and
2. The consumer will be required to wait if the producer has not yet created the data.
We use two binary semaphores:
1. ready_to_write: the data is ready to be written to by a producer, and
2. ready_to_read: the data is ready to be read by a consumer.
Semaphores—a better signal without polling
Applications of binary semaphores
Multiple tasks communicating information:
Implementation of binary semaphores
In this section, we will consider both the data structure used to represent a binary semaphore and the implementation of its operations. We will look at each here.
Semaphores—a better signal without polling
Implementation of binary semaphores
typedef struct {
    bool is_acquirable;        // true if it can be acquired, false otherwise
    tcb_queue_t waiting_queue;
} binary_semaphore_t;

void binary_semaphore_init( binary_semaphore_t *const p_this, bool state )
{
    p_this->is_acquirable = state;
    TCB_QUEUE_INIT( p_this->waiting_queue );   // Initialize the waiting queue
}
Semaphores—a better signal without polling
Implementation of binary semaphores
1. call test-and-reset on the field is_acquirable,
2. if it returns false, we enter a loop that blocks the currently executing task and places it on the waiting queue associated with the semaphore,
3. otherwise, we return: the task has been granted the semaphore.
Data structures for semaphores:
Using a binary min-heap - A binary min-heap is an array-based data structure. In this case, all required operations are O(ln(n)) with the push operation having an average run-time of O(1) if the insertions are uniformly distributed among the priorities. Unfortunately, the queue must also be stored as an array, and therefore the maximum size of the array must be known. For very small priority queue sizes, it is likely faster to use a linked-list queue as described in the previous section. If the queues are significantly bigger, there is a possibility of a lot of wasted space. Thus, we should consider a different approach.
A leftist heap - A leftist heap is similar to an AVL tree in that all operations are O(ln(n)); but, just as an AVL tree must track heights, a leftist heap must track the null-path length of each node: that is, the length of the shortest path to a non-full descendant node. Tracking and maintaining this data may be expensive.
A skew heap - A skew heap is similar to a leftist heap, but some operations are automatic as opposed to being based on null-path lengths: push into the right sub-heap, but then swap the left and right sub-heaps. Consequently, the amortized run times are O(ln(n)), but the worst-case run times may be O(n).
Semaphores—a better signal without polling
Implementation of binary semaphores
Data structures for semaphores: Analysis
Semaphores—a better signal without polling
Implementation of binary semaphores
Priority inversion
1. a high-priority task may be waiting for a lower-priority task for serialization; that is, for it to complete some action (perhaps reaching a rendezvous), or
2. a high-priority task may be waiting on a semaphore for mutual exclusion.
We will consider both of these issues, as the solutions differ.
Semaphores—a better signal without polling
Counting semaphores:
Down(Semaphore S)
{
    S.value = S.value - 1;
    if (S.value < 0)
    {
        // add the calling process's PCB to the waiting list L
        put_process(PCB);
        sleep();
    }
    else
        return;
}

Up(Semaphore S)
{
    S.value = S.value + 1;
    if (S.value <= 0)
    {
        // select a process p from the waiting list L
        wakeup(p);
    }
}
Semaphores—a better signal without polling
Counting semaphores:
1. the counting semaphore has an integer value (as opposed to just a binary true-or-false) where the initial value is non-negative (0 or greater),
2. a wait decrements the value of the counting semaphore and if the resulting number is strictly negative, the waiting task is blocked on the semaphore, and
3. a post increments the value of the counting semaphore, and if the initial value was strictly negative, this indicates at least one task is waiting on the semaphore, and therefore one task must be made ready and placed on the ready queue.
Semaphores—a better signal without polling
Counting semaphores:
An application of counting semaphores
Implementation of counting semaphores:
Semaphores—a better signal without polling
Counting semaphores:
POSIX semaphores
Semaphores—a better signal without polling
Counting semaphores:
Keil RTX RTOS semaphores:
Counting Semaphore vs. Binary Semaphore
Here are some major differences between counting and binary semaphores:
Counting Semaphore | Binary Semaphore |
No mutual exclusion by itself | Mutual exclusion |
Any integer value | Value only 0 or 1 |
More than one slot | Only one slot |
Can be provided to a set of processes | Has a mutual exclusion mechanism |
Problems in synchronization
There are three categories of problems in synchronization:
1. basic problems:
   signalling, and
   rendezvous;
2. intermediate problems:
   multiplexing,
   group rendezvous,
   light-switch, and
   condition variables; and
3. advanced problems:
   the dining philosophers’ problem, and
   the readers-writers problem.
Problems in synchronization
Basic problems in synchronization
1. signalling, and
2. rendezvous.
Problems in synchronization
Basic problems in synchronization
Mutual exclusion
binary_semaphore_t mutex;
binary_semaphore_init( &mutex, 1 );
binary_semaphore_wait( &mutex ); {
    // Critical region
} binary_semaphore_post( &mutex );
Problems in synchronization
Basic problems in synchronization
Signalling - A semaphore can be used by one task to signal that an event has occurred. Any task waiting on that signal can then proceed in its execution. To achieve a signal, the semaphore is initialized to zero, and the signalling task posts to that semaphore to indicate the event. Any task that waits on that signal before it is sent is blocked until the signal occurs. Any task that waits on that signal after it is sent continues in its execution without blocking.
binary_semaphore_t signal;
binary_semaphore_init( &signal, 0 );
void task_1( void ) {
// ...
// Prepare data
binary_semaphore_post( &signal );
// ...
}
void task_2( void ) {
// ...
binary_semaphore_wait( &signal );
// Process data
// ...
}
Problems in synchronization
Basic problems in synchronization
void task_0( void ) {
// Prepare data...
// Rendezvous
// Process data...
}
void task_1( void ) {
// Prepare data...
// Rendezvous
// Process data...
}
Problems in synchronization
Basic problems in synchronization
Rendezvous –
1. The first event is that Task 0 is ready for the rendezvous, and
2. The second event is that Task 1 is ready for the rendezvous.
Problems in synchronization
Basic problems in synchronization
Rendezvous –
binary_semaphore_t signal[2];
binary_semaphore_init( &signal[0], 0 );
binary_semaphore_init( &signal[1], 0 );
void task_0( void ) {
// Prepare data...
binary_semaphore_post( &signal[0] ); // Signal that this task is ready
binary_semaphore_wait( &signal[1] ); // Wait for the other task
// Process data...
}
void task_1( void ) {
// Prepare data ...
binary_semaphore_post( &signal[1] ); // Signal that this task is ready
binary_semaphore_wait( &signal[0] ); // Wait for the other task
// Process data ...
}
Intermediate problems in synchronization
Now we will look at
1. multiplexing,
2. group rendezvous,
3. light-switch, and
4. condition variables.
Intermediate problems in synchronization
Multiplexing – allowing up to n tasks to be inside the shared critical region simultaneously, using a counting semaphore initialized to n:
counting_semaphore_t multiplex;
counting_semaphore_init( &multiplex, n );
counting_semaphore_wait( &multiplex ); {
// Shared critical region
} counting_semaphore_post( &multiplex );
Group rendezvous and turnstile - The rendezvous we discussed works well for synchronizing two tasks, but it does not work in general for an arbitrary number of tasks. You could have one semaphore for each task, but this would be excessive.
Intermediate problems in synchronization
Group rendezvous and turnstile –
A sub-optimal solution –
Intermediate problems in synchronization
A better solution
size_t tasks_not_at_rendezvous = ALL_TASKS;
// the number of tasks that have not yet reached the rendezvous
Intermediate problems in synchronization
When the event occurs, a condition variable allows either:
1. signalling a single task waiting for the event to occur, or
2. signalling all (broadcasting to all) tasks waiting for the event to occur.
Advanced problems in synchronization
1. the dining philosophers’ problem, and
2. the readers-writers problem.
Dining philosophers’ problem
Advanced problems in synchronization
Dining philosophers’ problem
A solution must ensure that:
1. they do not end up in a deadlock where none can eat, and
2. none of the philosophers starve.
First left, then right—deadlock
Advanced problems in synchronization
Dining philosophers’ problem
Accessing both chopsticks simultaneously
1. Try to grab both chopsticks at the same time,
2. If you cannot, flag yourself as hungry and wait for someone to nudge you,
3. If you can, lock both chopsticks, eat, unlock both, and nudge any neighbor who may be hungry.
Ordering the resources
Another possible solution is to observe that there is a cycle that can develop as a result of the various locks. Each philosopher is holding on to a resource which the previous philosopher requires to proceed. By ordering the resources and requiring that each philosopher picks up the resources in order, it is not possible to generate a cycle and therefore it is possible to avoid deadlock
Advanced problems in synchronization
Dining philosophers’ problem
1. Suppose Philosopher 1 and 3 are eating, and Philosopher 0 comes to the table and promptly goes to sleep.
2. Philosopher 4 comes to the table and goes to sleep, after which Philosopher 3 leaves, waking up Philosopher 4 and he starts eating.
3. Philosopher 2 comes to the table and goes to sleep, after which Philosopher 1 leaves. Philosopher 1 may try to wake up Philosopher 0, but Philosopher 4 has Philosopher 0’s other chopstick, so Philosopher 0 goes back to sleep. Philosopher 2 is woken up and starts eating.
4. Philosopher 1 comes to the table and goes to sleep, after which Philosopher 2 leaves, waking up Philosopher 1.
Advanced problems in synchronization
Dining philosophers’ problem
5. Philosopher 3 comes to the table and goes to sleep, after which Philosopher 4 leaves. Philosopher 4 may try to wake up Philosopher 0, but Philosopher 1 has Philosopher 0’s other chopstick, so Philosopher 0 goes back to sleep. Philosopher 3 is woken up and starts eating.
6. Go back to Step 2.
Readers-writers problem
Advanced problems in synchronization
Readers-writers problem
1. readers may access the resource so long as there is no writer accessing the resource, and writers may access the resource so long as there are no readers or writers accessing the resource;
2. a first-come first-served (FCFS) manner, where any number of readers may appear and are granted access to the critical section; but as soon as a writer appears, it waits until all current readers have finished accessing the resource, and any subsequent readers or writers must wait until this writer completes;
3. reader-priority where all writers that are not currently accessing the resource must wait so long as even one reader is waiting to access the resource; and
4. writer-priority where all readers that are not currently accessing the resource must wait so long as even one writer is waiting to access the resource.
Advanced problems in synchronization
Readers-writers problem
What do we require?
1. When one writer is accessing the resource, nothing else can access the resource.
2. When one or more readers are accessing the resource, additional readers can access the resource, but no writers can access the resource.
Automatic synchronization
1. the weaknesses of semaphores, and
2. automatic alternatives.
Resource Management
Classification of Resources
1. Reusable resources.
2. Consumable resources.
Classification of resources (Cont’d)
Reusable resources:
Consumable resources:
Classification of resources (Cont’d)
Some resources cannot be shared between tasks and threads while others can: we classify these as
1. exclusive resources, and
2. shared resources.
Classification of resources (Cont’d)
Reusable resources may either have to be dedicated to a single task or thread until it is finished with the resource, or it may be possible to temporarily borrow the resource while putting the task to sleep. We call these two types of resources
1. Pre-emptible
2. Non-pre-emptible.
Parameter | Preemptive Scheduling | Non-preemptive Scheduling |
Basic | Resources (CPU cycles) are allocated to a process for a limited time. | Once resources (CPU cycles) are allocated to a process, the process holds them until it completes its burst time or switches to the waiting state. |
Interrupt | A process can be interrupted in between. | A process cannot be interrupted until it terminates itself or its time is up. |
Starvation | If a high-priority process frequently arrives in the ready queue, a low-priority process may starve. | If a process with a long burst time is running on the CPU, a later process with a shorter CPU burst time may starve. |
Overhead | Has the overhead of scheduling the processes. | Has no scheduling overhead. |
Flexibility | Flexible | Rigid |
Key functions of an OS
1. task or thread management,
2. memory management,
3. device management, and
4. file management.
Memory Management
File Management
Device Management
Device Management Representation
Functions of Device Manager
Resource Manager
Priority Inversion
Problems due to Priority Inversion
Solutions of Priority Inversion
Some of the solutions to handle priority inversion are given as follows:
Priority Ceiling
Disabling Interrupts
Solutions of Priority Inversion (Cont’d)
Priority Inheritance:
No Blocking:
Random Boosting: