INTRODUCTION
CPU scheduling is the basis of multiprogrammed operating systems. By switching the CPU among processes, the operating system can make the computer more productive.
CPU scheduling allows one process to use the CPU while the execution of another process is on hold (in the waiting state) due to the unavailability of some resource, such as I/O, thereby making full use of the CPU. The aim of CPU scheduling is to make the system efficient, fast, and fair.
Scheduling techniques are used everywhere resources need to be allocated. Whenever there is contention for a resource, a queue builds up, and a scheduling technique is used to determine the sequence in which the resource should be allocated to satisfy requests. This happens almost everywhere in real life. From food counters to banks, we queue for service in a number of ways. In many places people are served in First Come First Serve (FCFS) order. In some places, a dedicated counter serves customers with only a small number of items. At food counters, on the other hand, everyone gets a little bit of service at a time, which can be viewed as CPU sharing.
Relating this to the computer world, complementary advances in storage, processing power, network bandwidth, and data compression techniques have enabled computers to run new kinds of applications, and to run combinations of applications that were previously infeasible. For example, a modern personal computer can simultaneously decode and display a high-quality video stream, encode an audio stream in real time, and accurately recognize continuous speech; any one of these would have been impossible on an inexpensive machine just a few years ago. Also, market pressure is encouraging vendors to migrate functionality previously performed in dedicated hardware onto the main processor; this includes real-time tasks such as sound mixing and modem signal processing. Furthermore, home and office networks, both wired and wireless, are making personal computers into attractive storage and processing servers for resource-limited networked devices such as stereos, digital still and video cameras, and personal digital assistants.
CPU SCHEDULING ALGORITHM
First Come First Serve (FCFS) Scheduling
In the First Come First Serve scheduling algorithm, as the name suggests, the process that arrives first gets executed first; in other words, the process that requests the CPU first gets the CPU allocated first. FCFS works just like a FIFO (First In First Out) queue data structure: the element added to the queue first is the one that leaves the queue first. A perfect real-life example of FCFS scheduling is buying tickets at a ticket counter.
Let’s consider processes P1, P2, P3, and P4, given in the table below, arriving for execution in that order, each with an arrival time of 0 and the given burst time. Let's find the average waiting time using the FCFS scheduling algorithm.

The average waiting time will be 18.75 ms.
For the above processes, P1 will be allocated the CPU first:
- Hence, the waiting time for P1 will be 0 ms.
- P1 requires 21 ms to complete, hence the waiting time for P2 will be 21 ms.
- Similarly, the waiting time for P3 will be the execution time of P1 plus the execution time of P2, which is (21 + 3) ms = 24 ms.
- For P4 it will be the sum of the execution times of P1, P2, and P3.
The Gantt chart above represents the waiting time of each process.
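The FCFS waiting times above can be computed with a short sketch. The 21 ms and 3 ms bursts for P1 and P2 come from the text; the 6 ms burst for P3 is inferred as the value consistent with the stated 18.75 ms average, and P4's burst (which does not affect any waiting time) is a hypothetical placeholder.

```python
# FCFS waiting-time calculation: each process waits for the total
# burst time of every process that arrived before it.
def fcfs_waiting_times(bursts):
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # time spent waiting before first run
        elapsed += burst        # CPU is busy for this process's burst
    return waits

# Bursts for P1 and P2 come from the text (21 ms and 3 ms); P3's 6 ms
# is inferred from the 18.75 ms average; P4's 2 ms is a placeholder.
bursts = [21, 3, 6, 2]
waits = fcfs_waiting_times(bursts)
print(waits)                    # [0, 21, 24, 30]
print(sum(waits) / len(waits))  # 18.75
```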
Challenges of FCFS Algorithm
- It is a non-preemptive algorithm, which means process priority doesn't matter. If a low-priority, long-running process (such as a routine daily backup) is executing and a high-priority process suddenly arrives (such as an interrupt handler needed to avoid a system failure), the high-priority process must wait, and the system may fail simply because of improper process scheduling.
- The average waiting time is not optimal.
- Parallel utilization of resources is not possible: a single long CPU-bound process can make every process behind it wait (the Convoy Effect), leading to poor utilization of resources (CPU, I/O devices, etc.).
Shortest-Job-First (SJF) Scheduling
Shortest Job First scheduling selects the process with the shortest burst time (duration) to run first.
- It gives the minimum average waiting time for a given set of processes.
- This is used in Batch Systems.
- It is of two types:
- Non Pre-emptive
- Pre-emptive
- To implement it successfully, the burst time (duration) of each process must be known to the processor in advance, which is not always feasible in practice.
- This scheduling algorithm is optimal if all the jobs/processes are available at the same time (either the arrival time is 0 for all, or the arrival time is the same for all).
Consider the processes below, available in the ready queue for execution, all with arrival time 0 and the given burst times. This is a non-preemptive Shortest Job First example.

As you can see in the Gantt chart above, process P4 is picked up first as it has the shortest burst time, then P2, followed by P3, and finally P1.
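With all arrival times equal to 0, non-preemptive SJF reduces to sorting the processes by burst time. A minimal sketch, using hypothetical burst times chosen only to reproduce the order described above (P4, then P2, P3, and P1):

```python
# Non-preemptive SJF with all arrival times 0: run the processes
# in increasing order of burst time.
def sjf_order(bursts):
    # bursts: dict mapping process name -> burst time in ms
    return sorted(bursts, key=bursts.get)

# Hypothetical burst times; only their relative order matters here.
bursts = {"P1": 21, "P2": 3, "P3": 6, "P4": 2}
print(sjf_order(bursts))  # ['P4', 'P2', 'P3', 'P1']
```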
Priority Scheduling
In the Shortest Job First scheduling algorithm, the priority of a process is effectively the inverse of its CPU burst time: the larger the burst time, the lower the priority of that process.
In priority scheduling, the priority is not necessarily the inverse of the CPU burst time; it can be set internally or externally. Either way, scheduling is done on the basis of process priority: the most urgent process is executed first, followed by the others in decreasing order of priority.
Processes with same priority are executed in FCFS manner.
Types of Priority Scheduling Algorithm
Priority scheduling can be of two types:
- Preemptive Priority Scheduling: If a new process arriving in the ready queue has a higher priority than the currently running process, the CPU is preempted, which means the processing of the current process is stopped and the incoming higher-priority process gets the CPU for its execution.
- Non-Preemptive Priority Scheduling: If a new process arrives with a higher priority than the currently running process, the incoming process is placed at the head of the ready queue, meaning it will be processed after the current process finishes.
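The preemptive variant can be sketched as a unit-time simulation in which the ready queue is a min-heap keyed on priority (lower number = higher priority). The workload below is hypothetical:

```python
import heapq

# Preemptive priority scheduling sketch: at every time unit the
# runnable process with the best (lowest-numbered) priority runs;
# a higher-priority arrival immediately takes over the CPU.
def priority_schedule(procs):
    # procs: list of (name, arrival, burst, priority)
    remaining = {name: burst for name, _, burst, _ in procs}
    pending = sorted(procs, key=lambda p: p[1])  # by arrival time
    ready, timeline, t = [], [], 0
    while pending or ready:
        while pending and pending[0][1] <= t:    # admit arrivals
            name, _, _, prio = pending.pop(0)
            heapq.heappush(ready, (prio, name))
        if not ready:                            # CPU idle until next arrival
            t = pending[0][1]
            continue
        prio, name = ready[0]                    # best-priority process
        timeline.append(name)                    # runs for one time unit
        remaining[name] -= 1
        t += 1
        if remaining[name] == 0:
            heapq.heappop(ready)
    return timeline

# Hypothetical workload: P2 arrives at t=1 with a better priority,
# preempts P1, and P1 resumes only after P2 finishes.
print(priority_schedule([("P1", 0, 3, 2), ("P2", 1, 2, 1)]))
# ['P1', 'P2', 'P2', 'P1', 'P1']
```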
For example, let’s consider the table below of processes with their respective CPU burst times and priorities.

Challenges
The main challenge of the priority scheduling algorithm is the chance of indefinite blocking, or starvation.
A process is considered blocked when it is ready to run but has to wait for the CPU because some other process is currently running.
In priority scheduling, if new higher-priority processes keep arriving in the ready queue, the lower-priority processes waiting there may have to wait for very long durations before getting the CPU. A common remedy is aging: gradually increasing the priority of processes that have waited in the system for a long time.
Reportedly, when the IBM 7094 at MIT was shut down in 1973, a low-priority process was found that had been submitted in 1967 and had not yet been run.
Round Robin (RR) Scheduling
In this scheduling, a fixed time slice, called the time quantum, is allotted to each process for execution. Once a process has executed for the given quantum, it is preempted and another process executes for its quantum. Context switching is used to save the state of preempted processes.
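A minimal Round Robin sketch, assuming all processes arrive at time 0 and ignoring context-switch overhead; the process names and bursts are hypothetical:

```python
from collections import deque

# Round Robin sketch: each process runs for at most one quantum,
# then goes to the back of the queue if it still needs CPU time.
def round_robin(bursts, quantum):
    # bursts: list of (name, burst); all assumed to arrive at t=0
    queue = deque(bursts)
    timeline = []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        timeline.append((name, run))        # one scheduling slice
        if remaining > run:                 # unfinished: re-queue it
            queue.append((name, remaining - run))
    return timeline

print(round_robin([("P1", 5), ("P2", 3)], quantum=2))
# [('P1', 2), ('P2', 2), ('P1', 2), ('P2', 1), ('P1', 1)]
```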

Multilevel Queue Scheduling
Another class of scheduling algorithms has been created for situations in which processes are easily classified into different groups.
For example: A common division is made between foreground (or interactive) processes and background (or batch) processes. These two types of processes have different response-time requirements, and so might have different scheduling needs. In addition, foreground processes may have priority over background processes.
A multi-level queue scheduling algorithm partitions the ready queue into several separate queues. The processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type. Each queue has its own scheduling algorithm.
For example: separate queues might be used for foreground and background processes. The foreground queue might be scheduled by Round Robin algorithm, while the background queue is scheduled by an FCFS algorithm.
In addition, there must be scheduling among the queues, which is commonly implemented as fixed-priority preemptive scheduling. For example: The foreground queue may have absolute priority over the background queue.
Let us consider an example of a multilevel queue-scheduling algorithm with five queues:
- System Processes
- Interactive Processes
- Interactive Editing Processes
- Batch Processes
- Student Processes
Each queue has absolute priority over lower-priority queues. No process in the batch queue, for example, could run unless the queues for system processes, interactive processes, and interactive editing processes were all empty. If an interactive editing process entered the ready queue while a batch process was running, the batch process would be preempted.
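The fixed-priority selection among the five queues can be sketched as follows; the process names are illustrative:

```python
from collections import deque

# Multilevel queue sketch: fixed-priority preemptive selection among
# queues; the dispatcher always serves the highest non-empty queue.
QUEUE_ORDER = ["system", "interactive", "interactive-editing", "batch", "student"]

def pick_next(queues):
    # queues: dict mapping queue name -> deque of process names
    for level in QUEUE_ORDER:   # highest-priority queue first
        if queues[level]:
            return queues[level].popleft()
    return None                 # all queues empty: CPU idle

queues = {name: deque() for name in QUEUE_ORDER}
queues["batch"].append("backup-job")
queues["interactive"].append("shell")
print(pick_next(queues))  # 'shell' -- interactive outranks batch
```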

Multilevel Feedback Queue Scheduling
In a multilevel queue-scheduling algorithm, processes are permanently assigned to a queue on entry to the system. Processes do not move between queues. This setup has the advantage of low scheduling overhead, but the disadvantage of being inflexible.
Multilevel feedback queue scheduling, however, allows a process to move between queues. The idea is to separate processes with different CPU-burst characteristics. If a process uses too much CPU time, it will be moved to a lower-priority queue. Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation.

An example of a multilevel feedback queue can be seen in the below figure.
In general, a multilevel feedback queue scheduler is defined by the following parameters:
- The number of queues.
- The scheduling algorithm for each queue.
- The method used to determine when to upgrade a process to a higher-priority queue.
- The method used to determine when to demote a process to a lower-priority queue.
- The method used to determine which queue a process will enter when that process needs service.
The definition of a multilevel feedback queue scheduler makes it the most general CPU-scheduling algorithm. It can be configured to match a specific system under design. Unfortunately, it also requires some means of selecting values for all the parameters to define the best scheduler. Although a multilevel feedback queue is the most general scheme, it is also the most complex.
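These parameters can be made concrete with a minimal sketch: three queues, hypothetical quanta of 2, 4, and 8, and only a demotion rule (a process that exhausts its quantum moves down one level; the aging/promotion rule is omitted for brevity).

```python
from collections import deque

# Multilevel feedback queue sketch. Queue 0 is the highest priority;
# each queue has its own (hypothetical) time quantum.
QUANTA = [2, 4, 8]

def mlfq(bursts):
    # bursts: list of (name, burst); all arrive at t=0 in the top queue
    queues = [deque() for _ in QUANTA]
    queues[0].extend(bursts)
    timeline = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        name, remaining = queues[level].popleft()
        run = min(QUANTA[level], remaining)
        timeline.append((name, level, run))  # (process, queue level, slice)
        if remaining > run:                  # used its whole quantum: demote
            queues[min(level + 1, len(queues) - 1)].append((name, remaining - run))
    return timeline

# P1 is CPU-hungry and sinks to lower queues; P2 finishes in one quantum.
print(mlfq([("P1", 7), ("P2", 2)]))
# [('P1', 0, 2), ('P2', 0, 2), ('P1', 1, 4), ('P1', 2, 1)]
```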
CRITERIA FOR SELECTING CPU SCHEDULING ALGORITHMS
CPU Utilization
To make the best use of the CPU and not waste any CPU cycle, the CPU should be working most of the time (ideally 100% of the time). In a real system, CPU utilization should range from about 40% (lightly loaded) to 90% (heavily loaded).
Throughput
It is the total number of processes completed per unit time, or in other words, the total amount of work done in a unit of time. This may range from 10 per second to 1 per hour, depending on the specific processes.
Turnaround Time
It is the amount of time taken to execute a particular process, i.e., the interval from the time of submission of the process to the time of its completion (wall-clock time).
Waiting Time
It is the sum of the periods a process spends waiting in the ready queue to acquire control of the CPU.
Load Average
It is the average number of processes residing in the ready queue, waiting for their turn on the CPU.
Response Time
It is the amount of time from when a request is submitted until the first response is produced. Remember, it is the time until the first response, not the completion of process execution (the final response).
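Turnaround, waiting, and response time can all be derived from a process's arrival, burst, first-run, and completion times; the example values below are hypothetical:

```python
# Computes the three per-process criteria defined above.
def metrics(arrival, burst, completion, first_run):
    turnaround = completion - arrival  # submission to completion
    waiting = turnaround - burst       # time spent in the ready queue
    response = first_run - arrival     # submission to first CPU slice
    return turnaround, waiting, response

# Hypothetical process: arrives at t=0, needs 6 ms of CPU,
# first runs at t=2, and finishes at t=10.
print(metrics(arrival=0, burst=6, completion=10, first_run=2))  # (10, 4, 2)
```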
REFERENCES
Michael B. Jones and Stefan Saroiu. Predictability Requirements of a Soft Modem. In Proceedings of the ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, Cambridge, MA, June 2001.
Michael B. Jones, John Regehr, and Stefan Saroiu. Two Case Studies in Predictable Application Scheduling Using Rialto/NT. In Proceedings of the 7th Real-Time Technology and Applications Symposium (RTAS 2001), Taipei, Taiwan, May 2001.
https://www.studytonight.com/operating-system/cpu-scheduling