homework2: write a review paper about process synchronization, submit one choice question for the exam database and one dialog for self tutor. Version 0
👤 Author: by bhupeshaawasthi952gmailcom 2020-05-25 10:14:33

Process Synchronization

Process synchronization is the coordination of processes that use shared data. It arises in an operating system among cooperating processes, i.e., processes that share resources. When many concurrent processes execute, synchronization keeps shared data consistent and coordinates the execution of the cooperating processes. Accesses must be ordered so that concurrent access to shared data does not create inconsistencies. Data inconsistency can result in what is called a race condition: a situation in which two or more operations on shared data execute concurrently, and the final result depends on the particular order in which the accesses happen to take place.
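The lost update at the heart of a race condition can be made concrete by hand-simulating one bad interleaving of two hypothetical processes (a sketch; the step ordering below is an assumed worst-case schedule, not real scheduling):

```python
# Shared counter that two "processes" both try to increment.
counter = 0

# Each increment is really three steps: read, add, write.
# Simulate the bad interleaving: both processes read before either writes.
p1_local = counter          # P1 reads 0
p2_local = counter          # P2 reads 0 (before P1 writes back)
counter = p1_local + 1      # P1 writes 1
counter = p2_local + 1      # P2 overwrites with 1 -- one increment is lost

print(counter)  # 1, not the expected 2
```

Process synchronization exists precisely to rule out interleavings like this one.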

Critical Section

A critical section is a segment of code that can be executed by only a single process at any instant in time. It accesses shared data resources that other processes also need. Entry to the critical section is controlled by the wait() operation, traditionally written P(), and exit by the signal() operation, written V(). Only one process can execute inside the critical section at a time; other processes wishing to enter must wait until the current process finishes its critical section.
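The entry/exit protocol can be sketched with Python's threading primitives, using a binary semaphore as the lock: acquire() plays the role of wait()/P and release() the role of signal()/V (the shared counter and thread counts are illustrative choices):

```python
import threading

mutex = threading.Semaphore(1)   # binary semaphore, initially "unlocked"
shared = 0

def worker(n):
    global shared
    for _ in range(n):
        mutex.acquire()          # entry section: wait() / P(mutex)
        shared += 1              # critical section: touch shared data
        mutex.release()          # exit section: signal() / V(mutex)

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(shared)  # 40000 -- no updates lost
```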

The Critical-Section Problem

  • The classic producer-consumer problem is a specific example of a more general situation known as the critical section problem. The general idea is that in a number of cooperating processes, each has a critical section of code, with the following conditions and terminologies:

    • Only one process in the group can be allowed to execute in their critical section at any one time. If one process is already executing their critical section and another process wishes to do so, then the second process must be made to wait until the first process has completed their critical section work.

    • The code preceding the critical section, and which controls access to the critical section, is termed the entry section. It acts like a carefully controlled locking door.

    • The code following the critical section is termed the exit section. It generally releases the lock on someone else's door, or at least lets the world know that they are no longer in their critical section.

    • The rest of the code not included in either the critical section or the entry or exit sections is termed the remainder section.





Figure 5.1 - General structure of a typical process Pi

  • A solution to the critical section problem must satisfy the following three conditions:

    1. Mutual Exclusion - Only one process at a time can be executing in their critical section.

    2. Progress - If no process is currently executing in their critical section, and one or more processes want to execute their critical section, then only the processes not in their remainder sections can participate in the decision, and the decision cannot be postponed indefinitely. ( I.e. processes cannot be blocked forever waiting to get into their critical sections. )

    3. Bounded Waiting - There exists a limit as to how many other processes can get into their critical sections after a process requests entry into their critical section and before that request is granted. ( I.e. a process requesting entry into their critical section will get a turn eventually, and there is a limit as to how many other processes get to go first. )
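As one concrete illustration (not part of the discussion above), Peterson's classic two-process software solution satisfies all three requirements. A Python sketch, relying on the CPython interpreter executing the individual reads and writes of flag and turn atomically:

```python
import sys
import threading

sys.setswitchinterval(1e-4)      # encourage frequent thread switches

flag = [False, False]            # flag[i]: process i wants to enter
turn = 0                         # whose turn it is to defer
count = 0                        # shared data the critical section updates

def process(i, iters):
    global turn, count
    j = 1 - i
    for _ in range(iters):
        flag[i] = True           # entry section: announce intent...
        turn = j                 # ...and give the other process priority
        while flag[j] and turn == j:
            pass                 # busy-wait while the other is interested
        count += 1               # critical section
        flag[i] = False          # exit section

t0 = threading.Thread(target=process, args=(0, 500))
t1 = threading.Thread(target=process, args=(1, 500))
t0.start(); t1.start(); t0.join(); t1.join()
print(count)  # 1000 -- mutual exclusion preserved, no updates lost
```

Mutual exclusion holds because a process enters only when the other is uninterested or it is its own turn; progress and bounded waiting hold because turn is always set in favor of the other process before waiting.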



  • We assume that all processes proceed at a non-zero speed, but no assumptions can be made regarding the relative speed of one process versus another.

  • Kernel processes can also be subject to race conditions, which can be especially problematic when updating commonly shared kernel data structures such as open file tables or virtual memory management. Accordingly kernels can take on one of two forms:

    1. Non-preemptive kernels do not allow processes to be interrupted while in kernel mode. This eliminates the possibility of kernel-mode race conditions, but requires kernel mode operations to complete very quickly, and can be problematic for real-time systems, because timing cannot be guaranteed.

    2. Preemptive kernels allow for real-time operations, but must be carefully written to avoid race conditions. This can be especially tricky on SMP systems, in which multiple kernel processes may be running simultaneously on different processors.



  • Non-preemptive kernels include Windows XP, 2000, traditional UNIX, and Linux prior to 2.6; Preemptive kernels include Linux 2.6 and later, and some commercial UNIXes such as Solaris and IRIX. 


SEMAPHORES 

Execution of a critical section is commonly controlled by a semaphore. A semaphore is simply a variable that stores an integer value, accessed only through two atomic operations: wait() and signal(). For a mutual-exclusion semaphore s initialized to 1, a process invokes P(s) before entering the critical section, which decrements s to 0 (or blocks the process if s is already 0); when the process exits the critical section, V(s) restores s to 1 (or wakes a blocked process). Below is a definition of the two operations:

P(S): if S ≥ 1 then S := S - 1
      else <block and enqueue the process>;

V(S): if <some process is blocked on the queue>
      then <unblock a process>
      else S := S + 1;
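The P(S)/V(S) pseudocode above can be mirrored with a condition variable (a sketch; the class is illustrative, and Python's built-in threading.Semaphore already provides the same behavior):

```python
import threading

class Semaphore:
    """Counting semaphore implementing the P/V pseudocode above."""
    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()

    def P(self):                         # wait()
        with self._cond:
            while self._value < 1:       # S < 1: block the caller
                self._cond.wait()
            self._value -= 1             # S := S - 1

    def V(self):                         # signal()
        with self._cond:
            self._value += 1             # S := S + 1
            self._cond.notify()          # wake one blocked process, if any

s = Semaphore(1)
s.P()            # enter critical section
s.V()            # leave critical section
print(s._value)  # 1
```

Where the pseudocode's V either unblocks a waiter or increments S, this sketch always increments and notifies; a woken waiter immediately decrements again, which is functionally equivalent.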

There are two types of semaphores: Binary Semaphores and Counting Semaphores 

Binary Semaphores: These can take only the values 0 and 1. They are also known as mutex locks, since they provide mutual exclusion. All processes share a mutex semaphore initialized to 1. A process must wait until the semaphore is 1; it then sets it to 0 and enters its critical section. When the process completes its critical section, it resets the semaphore to 1 so that some other process can enter its critical section.

Counting Semaphores: These can take any non-negative value and are not restricted to 0 and 1. They can be used to control access to a resource that allows only a limited number of simultaneous accesses. The semaphore is initialized to the number of instances of the resource. Whenever a process wants to use the resource, its wait() succeeds only if at least one instance remains available, decrementing the semaphore by 1. When the process finishes using the instance, it performs signal(), adding 1 back to the number of available instances of the resource.
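The resource-pool behavior of a counting semaphore can be sketched as follows (the pool size of 3 and the number of workers are illustrative choices):

```python
import threading
import time

MAX_INSTANCES = 3
pool = threading.Semaphore(MAX_INSTANCES)   # 3 instances of the resource

in_use = 0
peak = 0
meter = threading.Lock()                    # protects the two counters above

def use_resource():
    global in_use, peak
    pool.acquire()                          # wait(): claim an instance
    with meter:
        in_use += 1
        peak = max(peak, in_use)
    time.sleep(0.01)                        # pretend to use the resource
    with meter:
        in_use -= 1
    pool.release()                          # signal(): return the instance

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(peak <= MAX_INSTANCES)  # True -- never more than 3 concurrent users
```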

Limitations of Semaphores 

  • Priority inversion is a major limitation of semaphores.

  • Their correct use is not enforced by the system, but is by programming convention only.

  • With improper use, processes may block indefinitely waiting on each other. Such a situation is called a deadlock.

Deadlocks and Starvation

  • One important problem that can arise when using semaphores to block processes waiting for a limited resource is the problem of deadlocks, which occur when multiple processes are blocked, each waiting for a resource that can be freed only by one of the other (blocked) processes.


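A minimal sketch of such a deadlock: two threads acquire two semaphores in opposite orders, and a barrier forces the interleaving in which each holds the resource the other needs (the timeouts are used only so the demonstration terminates instead of hanging forever):

```python
import threading

S = threading.Semaphore(1)
Q = threading.Semaphore(1)
barrier = threading.Barrier(2)   # force both threads to hold their first lock
outcome = {}

def p0():
    S.acquire()                              # P0 holds S...
    barrier.wait()
    outcome["p0"] = Q.acquire(timeout=0.5)   # ...and waits for Q (held by P1)

def p1():
    Q.acquire()                              # P1 holds Q...
    barrier.wait()
    outcome["p1"] = S.acquire(timeout=0.5)   # ...and waits for S (held by P0)

t0 = threading.Thread(target=p0)
t1 = threading.Thread(target=p1)
t0.start(); t1.start(); t0.join(); t1.join()
print(outcome["p0"], outcome["p1"])  # False False -- each waits for the other
```

The standard remedy is to make every process acquire the semaphores in the same global order, which makes the circular wait impossible.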

  • Another problem to consider is that of starvation, in which one or more processes get blocked forever and never get a chance to take their turn in the critical section. For example, the semaphore definition above did not specify the algorithm for adding processes to the waiting queue in the wait() call, or for selecting one to be removed from the queue in the signal() call. If a FIFO queue is used, then every process will eventually get its turn; but if a LIFO queue is implemented instead, the first process to start waiting could starve.


Priority Inversion

  • A challenging scheduling problem arises when a high-priority process gets blocked waiting for a resource that is currently held by a low-priority process.

  • If the low-priority process gets pre-empted by one or more medium-priority processes, then the high-priority process is essentially made to wait for the medium priority processes to finish before the low-priority process can release the needed resource, causing a priority inversion. If there are enough medium-priority processes, then the high-priority process may be forced to wait for a very long time.

  • One solution is a priority-inheritance protocol, in which a low-priority process holding a resource for which a high-priority process is waiting temporarily inherits the high priority from the waiting process. This prevents the medium-priority processes from preempting the low-priority process until it releases the resource, avoiding the priority inversion.

  • A famous example is how a priority inversion almost doomed the Mars Pathfinder mission, and how the problem was solved by enabling priority inheritance on the mutex involved.
