Process Synchronization in OS
Process Synchronization in OS is a mechanism for managing processes that use shared data. It takes place among cooperating processes, i.e., processes that share resources. These processes must be coordinated so that concurrent access to shared data does not create inconsistencies.
Data inconsistencies can result in what is known as a race condition. A race condition arises when two or more processes execute at the same time, are not scheduled in the proper order, and do not exit the critical section correctly, so the final result depends on the order in which the shared data is accessed.
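To make the idea concrete, here is a minimal sketch of a race condition, assuming POSIX threads are available; the function names and iteration counts are illustrative only and are not from the original article. Two threads increment a shared counter with no synchronization, so updates can be lost and the final value is usually less than expected.

```c
#include <pthread.h>
#include <stdio.h>

/* Shared data accessed by both threads without any synchronization. */
static long counter = 0;

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++)
        counter++;              /* non-atomic read-modify-write: a race */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Typically prints a value below 200000 because increments are lost. */
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}
```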
What is the Critical Section Problem?
A critical section is a segment of code that can be accessed by only a single process at a given point in time. The section contains shared data or resources that also need to be accessed by other processes.
- Entry to the critical section is controlled by the wait() function, denoted as P().
- Exit from the critical section is controlled by the signal() function, denoted as V().
Only a single process can execute in the critical section at a time. Other processes that are waiting to execute their critical sections must wait until the current process completes its execution.
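The following is a minimal sketch of this pattern, assuming POSIX semaphores and threads; the names worker and shared_data are illustrative only. sem_wait() plays the role of P()/wait() in the entry section and sem_post() plays the role of V()/signal() in the exit section, so only one thread is inside the critical section at a time.

```c
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t mutex;             /* binary semaphore guarding the critical section */
static long shared_data = 0;    /* shared resource */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);       /* entry section: P() */
        shared_data++;          /* critical section */
        sem_post(&mutex);       /* exit section: V() */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);     /* binary semaphore, initial value 1 */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_data = %ld (expected 200000)\n", shared_data);
    sem_destroy(&mutex);
    return 0;
}
```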
Rules for Critical Section
A solution to the critical section problem must enforce the following three rules:
Mutual Exclusion:
Mutual exclusion is usually enforced with a special binary semaphore (a mutex) that regulates access to the shared resource; such a mutex may include a priority-inheritance mechanism to avoid prolonged priority-inversion problems. No more than one process may execute in its critical section at a time.
Progress:
This rule applies when no process is in the critical section and some process wants to enter. In that case, only the processes that are not in their remainder sections may take part in deciding which process enters next, and the decision must be made in finite time.
Bounded Waiting:
After a process requests entry to its critical section, there is a bound on the number of times other processes are allowed to enter their critical sections before that request is granted. Once the limit is reached, the system must grant the waiting process entry to its critical section.
Solutions to the Critical Section Problem
In process synchronization, the critical section plays the central role, so the problem must be resolved.
Below are some widely used methods for solving the critical section problem.
Peterson's Solution
Peterson's solution is a widely used software solution to the critical section problem. The algorithm was developed by computer scientist Gary L. Peterson, which is why it is known as Peterson's solution.
In this solution, while one process is executing in its critical section, the other process executes only the rest of its code, and vice versa. This ensures that only one process runs in the critical section at any given time.
For example:
PROCESS Pi:
    FLAG[i] = true
    while ((TURN != i) AND (CS is not free)) { wait; }
    // CRITICAL SECTION
    FLAG[i] = false
    TURN = j;   // select another process to enter the CS
- Suppose there are N processes (P1, P2, …, PN), and at some point every process needs to enter the critical section.
- A FLAG[] array of size N is maintained, with every entry false by default. Whenever a process wants to enter the critical section, it sets its flag to true; for instance, if Pi wants to enter, it sets FLAG[i] = TRUE.
- Another variable named TURN indicates the process whose turn it is to enter the critical section.
- The process that enters the critical section changes TURN, while exiting, to another number from the list of ready processes.
- For instance, if TURN is 2, then P2 enters the critical section; while exiting, it sets TURN = 3, so P3 breaks out of its wait loop. (A C sketch of the two-process form of the algorithm follows.)
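The sketch below shows the classic two-process form of Peterson's algorithm, not the N-process FLAG/TURN variant described above; it is an illustration under stated assumptions rather than a definitive implementation. It assumes C11 atomics and POSIX threads, and the names lock, unlock, worker, and counter are illustrative only. Sequentially consistent atomics are used so that the store/load ordering the algorithm relies on actually holds on modern hardware.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdio.h>

static atomic_bool flag[2];        /* flag[i]: process i wants to enter */
static atomic_int  turn;           /* whose turn it is to wait */
static long counter = 0;           /* shared data protected by the lock */

static void lock(int i) {
    int j = 1 - i;
    atomic_store(&flag[i], true);  /* announce interest */
    atomic_store(&turn, j);        /* give the other process priority */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                          /* busy-wait until it is safe to enter */
}

static void unlock(int i) {
    atomic_store(&flag[i], false); /* leave the critical section */
}

static void *worker(void *arg) {
    int i = (int)(long)arg;        /* 0 or 1 */
    for (int k = 0; k < 100000; k++) {
        lock(i);
        counter++;                 /* critical section */
        unlock(i);
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, (void *)0L);
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}
```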
Introduction to Deadlock in Operating Systems
A process in an operating system uses various resources, and it uses a resource in the following sequence:
1) Requests the resource
2) Uses the resource
3) Releases the resource
A deadlock is a situation in which a set of processes are blocked because each process is holding a resource and waiting for another resource that is held by some other process.
Example:
When two trains approach each other on the same track and there is only one track, neither train can move once they are in front of each other. A similar situation occurs in operating systems when two or more processes hold some resources and wait for resources held by the other(s).
Deadlock can occur only if the following four conditions hold simultaneously (necessary conditions):
Mutual Exclusion:
At least one resource must be non-shareable, i.e., only one process can use the resource at a time.
Hold and Wait:
A process is holding at least one resource and waiting to acquire additional resources that are held by other processes.
No Preemption:
A resource cannot be forcibly taken away from a process; it can only be released voluntarily by the process holding it.
Circular Wait:
A set of processes are waiting for each other in a circular chain.
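A minimal sketch of how these four conditions produce a deadlock, assuming POSIX threads; the lock names and the sleep() used to force the unlucky interleaving are illustrative only. Each thread holds one mutex and then waits for the one held by the other thread, acquiring the locks in opposite order, so the program usually hangs.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void *thread1(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock_a);   /* hold A ...                          */
    sleep(1);                      /* give the other thread time to grab B */
    pthread_mutex_lock(&lock_b);   /* ... and wait for B (held by thread2) */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

static void *thread2(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock_b);   /* hold B ...                          */
    sleep(1);
    pthread_mutex_lock(&lock_a);   /* ... and wait for A (held by thread1) */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_create(&t2, NULL, thread2, NULL);
    pthread_join(t1, NULL);        /* never returns once the deadlock occurs */
    pthread_join(t2, NULL);
    printf("finished (no deadlock this run)\n");
    return 0;
}
```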
Methods for Handling Deadlock
There are three main methods for handling deadlock:
1) Deadlock prevention or avoidance (a lock-ordering sketch follows this list)
2) Deadlock detection and recovery
3) Ignore the problem altogether
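One common prevention technique is to break the circular-wait condition by acquiring locks in a fixed global order. The sketch below illustrates that idea under the same assumptions as the previous example (POSIX threads, illustrative names); because every thread takes lock_a before lock_b, a circular wait can never form.

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;
static long shared = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000; i++) {
        pthread_mutex_lock(&lock_a);   /* always take A first ...          */
        pthread_mutex_lock(&lock_b);   /* ... then B: one global order     */
        shared++;                      /* critical section uses both locks */
        pthread_mutex_unlock(&lock_b); /* release in reverse order         */
        pthread_mutex_unlock(&lock_a);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %ld, no deadlock\n", shared);
    return 0;
}
```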