
Solution to Critical Section Problem

Last Updated : 09 Jan, 2025

The critical section is a part of a program where shared resources like memory, data structures, CPU or I/O devices are accessed.

Only one process can execute the critical section at a time to prevent conflicts. The operating system faces challenges in deciding when to allow or block processes from entering the critical section. The critical section problem involves creating protocols to ensure that race conditions (where multiple processes interfere with each other) never occur.

To solve this, various synchronization techniques ensure that only one process accesses the critical section at a time. In this article, we explore practical solutions to the Critical Section Problem and the three requirements any correct solution must satisfy: mutual exclusion, progress, and bounded waiting. These solutions are essential for reliable resource management in modern computing systems.

Types of Solutions to Critical Section Problem

There are three main types of solutions to the Critical Section Problem:

  • Software Based
  • Hardware Based
  • OS Based

Software Based Solutions

These solutions are implemented using programming logic and algorithms, without relying on hardware. They help processes work together and prevent problems when accessing shared resources in the critical section.

Various software solutions to the Critical Section Problem are:

  • Lock Variable
  • Strict Alternation
  • Peterson's Algorithm
  • Dekker's Algorithm

Lock Variable:
This is a simple synchronization method that works in user mode and is a busy-waiting solution for multiple processes. It uses a single shared variable, lock, to manage access to the critical section.

  • The lock variable can have two values:
    • 0: Indicates the critical section is free.
    • 1: Indicates the critical section is in use.

When a process wants to enter the critical section, it first checks the lock variable:

  • If the value is 0, the process sets it to 1 and enters the critical section.
  • If the value is 1, the process waits until it becomes 0.
// Entry section
while (lock != 0);
lock = 1;
// Critical section
// Exit section
lock = 0;

Initially, the lock variable is set to 0. When a process wants to enter the critical section, it executes the entry section, checking the lock in a while loop.

  • The process keeps waiting in the loop as long as the lock value is 1, i.e. until it becomes 0.
  • If the critical section is free (lock is 0), the process exits the loop, sets the lock to 1 to indicate the section is in use, and enters the critical section.

When the process finishes its work in the critical section, it goes through the exit section and sets the lock back to 0, making the critical section available for other processes.
The lock variable fails to satisfy bounded waiting: a fast process can re-enter the critical section many times in a row while other processes are still waiting for their turn. Worse, because checking the lock and setting it are two separate steps rather than one atomic action, two processes can both read lock as 0 and enter the critical section together, so even mutual exclusion is not guaranteed.
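The protocol above can be sketched as a runnable Python demo with two threads (a purely illustrative sketch; the shared counter stands in for any shared resource):

```python
import threading

lock_var = 0      # 0: critical section free, 1: in use
counter = 0       # a shared resource the threads update

def worker(iterations):
    global lock_var, counter
    for _ in range(iterations):
        # Entry section: the check and the set below are two separate
        # steps, not one atomic action -- this gap is exactly why the
        # lock-variable scheme is unsafe.
        while lock_var != 0:
            pass              # busy-wait
        lock_var = 1
        counter += 1          # critical section
        lock_var = 0          # exit section: release

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)    # at most 2000; may be lower if the race fires
```

Under CPython's interpreter lock the race rarely fires, but nothing in the protocol itself prevents it.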

Read more about Lock Variable Synchronization Mechanism.

Strict Alternation:
The Turn Variable or Strict Alternation Approach is a simple software mechanism used in user mode. It is a busy-waiting solution designed specifically for two processes.

  • A turn variable acts as a lock, determining which process can access the critical section.
  • The turn alternates between the two processes, ensuring they take turns to enter the critical section.

This method works only for two processes and is not suitable for systems with more than two processes.

For Process Pi:
// Non-critical section
while (turn != i);
// Critical section
turn = j;
// Non-critical section

For Process Pj:
// Non-critical section
while (turn != j);
// Critical section
turn = i;
// Non-critical section

A process can enter the critical section only when the turn variable matches its own index. The turn variable can take only two values: i or j.

  • In the entry section, process Pi cannot enter the critical section until the turn variable is i. Similarly, process Pj cannot enter until the turn variable is j.
  • Initially, the turn variable is set to i, so Pi gets the first chance to enter the critical section. The turn variable stays i while Pi is in the critical section.
  • Once Pi finishes its work in the critical section, it sets the turn variable to j. This allows Pj to enter the critical section, and the turn variable stays j until Pj finishes.

This ensures that the two processes alternate access to the critical section.

Strict Alternation never guarantees progress: if it is Pj's turn but Pj has no interest in entering the critical section, Pi remains blocked even though the section is free.
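The two pseudocode fragments above can be combined into one runnable Python sketch (threads stand in for processes; purely illustrative). The log shows the strict 0, 1, 0, 1 alternation:

```python
import threading

turn = 0     # whose turn it is to enter the critical section
log = []     # records the order of critical-section entries

def process(my_id, other_id, rounds):
    global turn
    for _ in range(rounds):
        while turn != my_id:   # entry section: wait for my turn
            pass               # busy-wait
        log.append(my_id)      # critical section
        turn = other_id        # exit section: hand over the turn

p0 = threading.Thread(target=process, args=(0, 1, 5))
p1 = threading.Thread(target=process, args=(1, 0, 5))
p0.start(); p1.start()
p0.join(); p1.join()
print(log)   # [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
```

If one thread were given more rounds than the other, it would spin forever once its partner finished, which is the progress failure described above.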

Peterson’s Algorithm:
To solve the Critical Section (CS) problem for two processes, Peterson proposed an algorithm that satisfies mutual exclusion, progress, and bounded waiting.
• Suppose two processes, Pi and Pj, each need to enter the critical section at some point.
• A flag[] array with one entry per process, false by default, records interest: whenever a process wants to enter the critical section, it sets its own flag to true, i.e. if Pi wants to enter it sets flag[i] = TRUE.
• A shared variable called turn breaks ties. Before waiting, each process sets turn to the other process's number, politely yielding priority; if both want to enter at once, the process that wrote turn last is the one that waits.

var flag: array [0..1] of boolean;
    turn: 0..1;
{ flag[k] means that process k is interested in the critical section }
flag[0] := FALSE;
flag[1] := FALSE;
turn := random(0..1);
After initialization, each process, which is called process i in the code (the other process is process j), runs the following code:

repeat
    flag[i] := TRUE;
    turn := j;
    while (flag[j] and turn = j) do no-op;
    CRITICAL SECTION
    flag[i] := FALSE;
    REMAINDER SECTION
until FALSE;

Information common to both processes:
turn = 0
flag[0] = FALSE
flag[1] = FALSE

EXAMPLE (Process 0 runs with i = 0, j = 1; Process 1 runs with i = 1, j = 0)

Process 0:
  • flag[0] := TRUE
  • turn := 1
  • checks (flag[1] = TRUE and turn = 1); the condition is false because flag[1] = FALSE, so there is no waiting in the while loop
  • enters the critical section
  • happens to lose the processor

Process 1:
  • flag[1] := TRUE
  • turn := 0
  • checks (flag[0] = TRUE and turn = 0); the condition is true, so it keeps busy waiting until it loses the processor

Process 0:
  • resumes and continues until it finishes in the critical section
  • leaves the critical section and sets flag[0] := FALSE
  • starts executing the remainder section (anything else a process does besides using the critical section)
  • happens to lose the processor

Process 1:
  • checks (flag[0] = TRUE and turn = 0) again; this time the condition fails because flag[0] = FALSE
  • no more busy waiting: enters the critical section
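The pseudocode can be translated directly into a runnable Python sketch (threads stand in for processes; CPython's interpreter lock supplies the sequentially consistent memory the algorithm assumes):

```python
import threading

flag = [False, False]   # flag[k]: process k wants to enter
turn = 0
counter = 0             # shared resource

def process(i, iterations):
    global turn, counter
    j = 1 - i
    for _ in range(iterations):
        flag[i] = True               # entry: declare interest
        turn = j                     # yield priority to the other process
        while flag[j] and turn == j:
            pass                     # busy-wait
        counter += 1                 # critical section
        flag[i] = False              # exit section

threads = [threading.Thread(target=process, args=(i, 1000)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 2000: every increment happened under mutual exclusion
```

On real hardware with relaxed memory ordering, the writes to flag and turn would additionally need memory barriers for this to be correct.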

Dekker's Algorithm
Dekker's Algorithm is a software-based solution to the critical section problem for two processes. It ensures mutual exclusion by using flags and a turn variable, allowing only one process to access the critical section at a time while preventing deadlock and race conditions.

For more detail, refer to Dekker's Algorithm in OS.

Hardware Based Solutions

Hardware-based solutions to the critical section problem use special instructions like Test-and-Set and Swap. These instructions help manage access to shared resources by allowing only one process to enter the critical section at a time. They are fast and efficient, making them ideal for systems with advanced hardware support.

Various hardware solutions to the Critical Section Problem are:

  • Test and Set
  • Swap
  • Unlock and Lock

For more detail, refer to Hardware Synchronization Algorithms.

OS Based Solutions

Operating system-based solutions to the critical section problem use tools like semaphores, Sleep-Wakeup and monitors. These mechanisms help processes synchronize and ensure only one process accesses the critical section at a time. They are widely used in multitasking systems for efficient resource management.

Different OS based solutions for critical section problem are:

  • Semaphores
  • Monitors
  • Sleep-Wakeup

Semaphores

Semaphores are synchronization tools provided by operating systems to manage process access to the critical section. A semaphore is a variable that helps coordinate processes by signaling when a shared resource is free or busy.

There are two types of semaphores:

  1. Binary Semaphore (Mutex):
    • Acts like a lock with values 0 or 1.
    • 0 indicates the resource is occupied, and 1 indicates it is free.
    • Used for mutual exclusion.
  2. Counting Semaphore:
    • Allows a fixed number of processes to access a shared resource simultaneously.
    • Commonly used for managing resource pools, like printers or database connections.

Working of Semaphore:

  • Wait (P Operation): Decrements the semaphore value if it’s greater than 0. If it’s 0, the process waits.
  • Signal (V Operation): Increments the semaphore value to indicate a resource is available.
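Both operations can be tried out with Python's threading.Semaphore (the printer-pool setup here is an illustrative assumption, not part of any real API):

```python
import threading

printers = threading.Semaphore(2)   # counting semaphore: 2 printers
mutex = threading.Semaphore(1)      # binary semaphore guarding the log
finished = []

def print_job(job_id):
    printers.acquire()           # wait (P): blocks if both printers busy
    try:
        mutex.acquire()          # wait (P) on the binary semaphore
        finished.append(job_id)  # critical section: update shared log
        mutex.release()          # signal (V)
    finally:
        printers.release()       # signal (V): a printer is free again

jobs = [threading.Thread(target=print_job, args=(i,)) for i in range(5)]
for j in jobs:
    j.start()
for j in jobs:
    j.join()
print(sorted(finished))   # [0, 1, 2, 3, 4]
```

The counting semaphore lets up to two jobs proceed at once, while the binary semaphore serializes updates to the shared list.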

For more information about semaphores, refer to Semaphores in Process Synchronization.

Monitors

Monitors are advanced tools provided by operating systems and programming languages to control access to shared resources. They group shared variables, procedures, and synchronization methods into one unit, ensuring that only one process can use the monitor's procedures at a time.

Key Features of Monitors:

  1. Automatic Mutual Exclusion:
    • Only one process can access the monitor at a time, eliminating the need for manual locks.
  2. Condition Variables:
    • Monitors use condition variables with operations like wait() and signal() to handle process synchronization:
      • Wait(): Makes a process wait until a certain condition is met.
      • Signal(): Wakes up a waiting process when the condition is satisfied.
  3. Built-In Synchronization:
    • The operating system ensures that no two processes are inside the monitor simultaneously, simplifying the synchronization process.

How Monitors Work:

  • Processes invoke monitor procedures to access shared resources.
  • The monitor ensures mutual exclusion by automatically blocking other processes until the current process finishes.
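Python has no monitor construct built into the language, but the pattern can be approximated with a class whose methods all acquire one shared lock, plus a condition variable providing wait() and signal() (called notify() in Python). A minimal sketch, assuming a bounded counter as the shared resource:

```python
import threading

class BoundedCounter:
    """Monitor-style class: one lock implicitly guards every method."""
    def __init__(self, limit):
        self._cond = threading.Condition()   # lock + condition variable
        self._value = 0
        self._limit = limit

    def increment(self):
        with self._cond:                     # automatic mutual exclusion
            while self._value >= self._limit:
                self._cond.wait()            # wait(): sleep until signalled
            self._value += 1

    def decrement(self):
        with self._cond:
            self._value -= 1
            self._cond.notify()              # signal(): wake one waiter

    def value(self):
        with self._cond:
            return self._value

c = BoundedCounter(2)
c.increment()
c.increment()
c.decrement()
print(c.value())   # 1
```

A thread calling increment() on a full counter sleeps inside wait() until some other thread's decrement() signals it, mirroring the wait()/signal() semantics described above.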

For more information about monitors, refer to Monitors in Process Synchronization.

Sleep-Wakeup

The Sleep and Wakeup mechanism is an operating system-based solution to the critical section problem. It helps processes to avoid busy waiting while waiting for access to shared resources.

How It Works:

  1. Sleep:
    • When a process wants to enter the critical section but finds the resource is already being used by another process, it doesn’t keep checking in a loop.
    • Instead, it is put to sleep. This means the process temporarily stops running and frees up CPU resources, allowing other processes to execute.
  2. Wakeup:
    • Once the resource becomes available (e.g., the current process in the critical section finishes its work), the operating system or another process sends a wakeup signal to the sleeping process.
    • The sleeping process then resumes execution and checks if it can now enter the critical section.

Example:

  • Imagine two processes, P1 and P2, want to use a printer (a shared resource):
    • If P1 is already using the printer, P2 goes to sleep instead of repeatedly checking if the printer is free.
    • When P1 finishes printing, it sends a wakeup signal to P2 letting it know the printer is now available.
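The printer example can be sketched with Python's threading.Event, where wait() plays the role of sleep (the thread blocks without spinning) and set() plays the role of the wakeup signal. Names and timings below are illustrative assumptions:

```python
import threading
import time

printer_free = threading.Event()   # starts unset: P1 holds the printer
trace = []

def p2():
    trace.append("P2 sleeps")      # printer busy, so P2 blocks...
    printer_free.wait()            # ...without burning CPU cycles
    trace.append("P2 prints")

def p1():
    trace.append("P1 prints")
    time.sleep(0.05)               # P1 is using the printer
    trace.append("P1 wakes P2")
    printer_free.set()             # wakeup signal to the sleeper

t2 = threading.Thread(target=p2)
t1 = threading.Thread(target=p1)
t2.start()
time.sleep(0.01)                   # let P2 go to sleep first
t1.start()
t1.join(); t2.join()
print(trace)
```

While P2 is blocked in wait(), the scheduler is free to run other work; P2 consumes no CPU until P1's set() call wakes it.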

Benefits:

  • Avoids Busy Waiting: Processes don’t waste CPU time checking for resource availability.
  • Efficient Resource Usage: The CPU can focus on other processes while one waits.
  • Simplifies Synchronization: Processes are paused and resumed automatically based on resource availability.

Conclusion

The Critical Section Problem is essential to address for ensuring proper synchronization in systems with shared resources. Solutions like software-based methods (e.g., Lock Variables, Peterson’s Algorithm), hardware-based techniques (e.g., Test-and-Set, Swap), and OS-based mechanisms (e.g., Semaphores, Monitors, Sleep-Wakeup) provide efficient ways to manage access and avoid race conditions. These solutions maintain system reliability, fairness, and efficiency, making them vital for modern multitasking environments.

