Mohammad Saddam Mashuri's OS202
CPU Scheduling: Definition
I’m sorry if this is getting repetitive, but really, fundamentals are important. In a system with a single CPU core, only one process can run at a time. The purpose of CPU scheduling is to keep the CPU as busy as possible, maximizing its utilization. With multiprogramming, scheduling becomes more complex. Thankfully, this week’s course should clarify what we need to do to solve the problems that arise when scheduling processes on the CPU.
CPU Scheduling: CPU-I/O Burst Cycle and Scheduler
Process execution begins with a CPU burst, which is followed by an I/O burst, and the two then alternate until the process terminates. The success of CPU scheduling depends on this alternation between the two states. The CPU scheduler selects one process from among those in memory that are ready to execute, and allocates the CPU to it.
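The alternating burst pattern can be modeled as a short sketch. This is a toy model of my own, not something from the textbook: a process is just a list of burst lengths, where even positions are CPU bursts and odd positions are I/O bursts (execution begins and ends with a CPU burst).

```python
# Toy model (my own, illustrative): a process as an alternating
# sequence of CPU and I/O burst lengths.
process_bursts = [4, 2, 5, 3, 4]  # CPU, I/O, CPU, I/O, CPU

def total_cpu_time(bursts):
    # Even indices are CPU bursts: execution starts and ends on the CPU.
    return sum(b for i, b in enumerate(bursts) if i % 2 == 0)

def total_io_time(bursts):
    # Odd indices are the I/O bursts in between.
    return sum(b for i, b in enumerate(bursts) if i % 2 == 1)

print(total_cpu_time(process_bursts))  # 13
print(total_io_time(process_bursts))   # 5
```

An I/O-bound process would have many short CPU bursts in this list; a CPU-bound process would have a few long ones.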
CPU Scheduling: Preemptive and Non-Preemptive Scheduling
Preemptive scheduling is used when a process switches from the running state to the ready state, or from the waiting state to the ready state. The resources (mainly CPU cycles) are allocated to a process for a limited amount of time and then taken away; if the process still has CPU burst time remaining, it is placed back in the ready queue, where it stays until it gets its next chance to execute. Non-preemptive scheduling is used when a process terminates or switches from the running state to the waiting state. Here, once the resources (CPU cycles) are allocated to a process, the process holds the CPU until it terminates or reaches a waiting state. Non-preemptive scheduling never interrupts a process in the middle of its execution; instead, it waits until the process completes its CPU burst, and only then allocates the CPU to another process.
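The difference shows up clearly in a small simulation. Below is a hedged sketch of my own (FCFS as a stand-in for non-preemptive scheduling, round-robin for preemptive; burst lengths and the quantum are made up) comparing completion times for the same workload.

```python
from collections import deque

def fcfs(bursts):
    """Non-preemptive FCFS: each process runs its whole burst
    to completion before the next one starts."""
    t, completion = 0, []
    for b in bursts:
        t += b
        completion.append(t)
    return completion

def round_robin(bursts, quantum):
    """Preemptive round-robin: a process that exceeds the quantum
    is taken off the CPU and placed back in the ready queue."""
    queue = deque((i, b) for i, b in enumerate(bursts))
    t = 0
    completion = [0] * len(bursts)
    while queue:
        i, remaining = queue.popleft()
        run = min(quantum, remaining)
        t += run
        if remaining > run:
            queue.append((i, remaining - run))  # back to the ready queue
        else:
            completion[i] = t
    return completion

print(fcfs([5, 3, 1]))            # [5, 8, 9]
print(round_robin([5, 3, 1], 2))  # [9, 8, 5]
```

Note how the short third process finishes at time 5 under round-robin instead of waiting until time 9 behind the longer ones, at the cost of the first process finishing later.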
CPU Scheduling: Dispatcher
The dispatcher is the module that gives control of the CPU’s core to the process selected by the CPU scheduler. When triggered by an interrupt, the dispatcher receives the full register set of the program that was running at the time of the interrupt (with the exception of the program counter, which is presumably propagated through a mutually-agreed-upon ‘volatile’ register or some such). The dispatcher must therefore be carefully written so that its first operation upon being triggered is to save the current state of the register banks.
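To make the save-then-restore order concrete, here is a purely illustrative sketch of my own (real dispatchers do this in assembly on actual register banks, not Python dictionaries): registers are a dict, and each process control block stores a saved context.

```python
# Toy dispatcher sketch (illustrative names, not a real OS API).
def dispatch(cpu_registers, current_pcb, next_pcb):
    # First operation: save the interrupted process's register
    # state into its process control block.
    current_pcb["context"] = dict(cpu_registers)
    # Then restore the selected process's saved context onto the CPU.
    cpu_registers.clear()
    cpu_registers.update(next_pcb["context"])
    return cpu_registers

regs = {"pc": 100, "sp": 2048}          # state of the running process
p1 = {"context": {}}                     # the interrupted process
p2 = {"context": {"pc": 500, "sp": 4096}}  # the process selected to run
dispatch(regs, p1, p2)
print(regs)           # {'pc': 500, 'sp': 4096}
print(p1["context"])  # {'pc': 100, 'sp': 2048}
```

The time this save/restore takes is the dispatch latency, which the OS wants to keep as small as possible.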
Thread Scheduling
Thread scheduling differs depending on whether the OS supports user-level threads, kernel-level threads, or both. With user-level threads, the kernel is not aware that threads exist: it operates as it always does, picking a process and giving that process control of the CPU for its quantum. With kernel-level threads, the kernel picks a particular thread to run. It does not have to take into account which process the thread belongs to, although it can if it wants to. The thread is given a quantum and is forcibly suspended if it exceeds it.
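One consequence of this difference can be shown with a toy calculation of my own (the quantum and thread counts are made-up numbers): when the kernel schedules processes, a process's CPU share per round is fixed; when it schedules threads, a process with more threads gets more total CPU time.

```python
QUANTUM = 10  # ms per scheduling decision (illustrative value)

def cpu_per_process_user_level(threads_per_proc):
    # Kernel is unaware of threads: each process gets one quantum
    # per round, no matter how many threads it contains.
    return [QUANTUM for _ in threads_per_proc]

def cpu_per_process_kernel_level(threads_per_proc):
    # Kernel schedules threads directly: a process's total share
    # grows with its thread count, one quantum per thread.
    return [QUANTUM * n for n in threads_per_proc]

procs = [1, 3]  # process A has 1 thread, process B has 3
print(cpu_per_process_user_level(procs))    # [10, 10]
print(cpu_per_process_kernel_level(procs))  # [10, 30]
```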
Multi-Processor Scheduling
In multiple-processor scheduling, multiple CPUs are available, so load sharing becomes possible. However, multiprocessor scheduling is more complex than single-processor scheduling. In many cases the processors are identical. One approach, called asymmetric multiprocessing, has all scheduling decisions and I/O processing handled by a single processor, the master server, while the other processors execute only user code. A second approach, symmetric multiprocessing, has each processor schedule itself.
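A minimal sketch of the symmetric, self-scheduling idea, under assumptions of my own (tasks are just burst lengths, and "self-scheduling" is approximated by the least-loaded CPU pulling the next task from a common ready queue):

```python
from collections import deque

def symmetric_schedule(tasks, n_cpus):
    """Toy symmetric-multiprocessing model: each CPU pulls its next
    task from a shared ready queue (load sharing). Returns the
    finish time of each CPU."""
    queue = deque(tasks)
    finish = [0] * n_cpus
    while queue:
        # The least-loaded CPU grabs the next task.
        cpu = finish.index(min(finish))
        finish[cpu] += queue.popleft()
    return finish

print(symmetric_schedule([4, 2, 3, 1, 5], 2))  # [10, 5]
```

In a real SMP system the shared ready queue (or per-CPU queues with load balancing) must be protected against concurrent access, which is part of why multiprocessor scheduling is harder.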
Real-Time CPU Scheduling
CPU scheduling for real-time operating systems involves special issues. In general, we can distinguish between soft real-time systems and hard real-time systems. Soft real-time systems provide no guarantee as to when a critical real-time process will be scheduled. They guarantee only that the process will be given preference over noncritical processes. Hard real-time systems have stricter requirements. A task must be serviced by its deadline; service after the deadline has expired is the same as no service at all.
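A classic policy for meeting deadlines is Earliest-Deadline-First (EDF). Here is a simplified, non-preemptive sketch of my own (job names, bursts, and deadlines are invented): run jobs in deadline order and check whether every deadline is met, since in a hard real-time system a missed deadline is as bad as no service at all.

```python
def edf_schedule(jobs):
    """Toy non-preemptive EDF: jobs are (name, burst, deadline).
    Returns the execution order and whether all deadlines were met."""
    order, t, all_met = [], 0, True
    for name, burst, deadline in sorted(jobs, key=lambda j: j[2]):
        t += burst
        order.append(name)
        if t > deadline:
            all_met = False  # hard real time treats this as failure

    return order, all_met

jobs = [("A", 2, 10), ("B", 3, 4), ("C", 1, 6)]
print(edf_schedule(jobs))  # (['B', 'C', 'A'], True)
```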
Sources:
Abraham Silberschatz, Operating System Concepts, Wiley (2018)
http://pages.cs.wisc.edu/~remzi/OSTEP/
https://stackoverflow.com/questions/4782388/how-dispatcher-works
https://www.geeksforgeeks.org/cpu-scheduling-in-operating-systems/
https://www.geeksforgeeks.org/multiple-processor-scheduling-in-operating-system/
https://codescracker.com/operating-system/multiprocessor-scheduling.htm
https://www.guru99.com/preemptive-vs-non-preemptive-scheduling.html