Mohammad Saddam Mashuri's OS202
What is Concurrency?
Concurrency is the execution of multiple instruction sequences at the same time. In an operating system, it happens when several process threads run in parallel. These threads communicate with each other through shared memory or message passing. Because concurrency involves sharing resources, it can lead to problems such as deadlocks and resource starvation.
Thread
A thread is the basic unit of CPU utilization; it comprises a thread ID, a program counter (PC), a register set, and a stack. It shares with other threads belonging to the same process its code section, data section, and other operating-system resources, such as open files and signals.
Process Concept
A process is a program in execution. The execution of a process must progress in a sequential fashion. When a program is loaded into memory and becomes a process, it can be divided into four sections ─ stack, heap, text, and data.
Interrupts
This is something that has been studied before in the IDS / PSD course, so this is more of a review. An interrupt is a signal emitted by hardware or software when a process or an event needs immediate attention. It alerts the processor to a high-priority process, requiring interruption of the currently running process. In a nutshell, when an interrupt happens, the processor calls the Interrupt Service Routine (ISR), runs the routine defined there, then goes back to what it was doing.
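To make the "processor calls the ISR, then goes back to its own stuff" idea concrete, here is a toy Python sketch of interrupt dispatch. This is purely illustrative ─ real ISRs live in the kernel and are invoked by hardware; the IRQ numbers and handler names here are made up.

```python
# Toy model of interrupt dispatch (illustrative only).

def timer_isr():
    return "timer tick handled"

def keyboard_isr():
    return "key press handled"

# The "interrupt vector table": maps an IRQ number to its service routine.
interrupt_vector = {0: timer_isr, 1: keyboard_isr}

def cpu_step(pending_irq=None):
    if pending_irq is not None:
        # Save state, jump to the ISR, then resume normal execution.
        return interrupt_vector[pending_irq]()
    return "normal execution"

print(cpu_step(pending_irq=0))  # the ISR for IRQ 0 runs first
print(cpu_step())               # no interrupt: execution continues as usual
```

The dictionary plays the role of the interrupt vector table: the "CPU" looks up the pending IRQ number and transfers control to the matching routine before resuming.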
Atomic Operations
Atomic operations are one of the most powerful underlying techniques in building computer systems, from computer architecture, to concurrent code, to file systems, database management systems, and even distributed systems. During an atomic operation, a processor can read and write a location during the same data transmission. In this way, another input/output mechanism or processor cannot perform memory reading or writing tasks until the atomic operation has finished.
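Python doesn't expose hardware atomic instructions directly, but we can sketch the idea: a read-modify-write like `counter += 1` is not atomic on its own, so we wrap it in a lock to make the whole update indivisible. A minimal sketch, assuming Python's standard `threading` module:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # Without the lock, the read-modify-write can interleave between
        # threads and updates get lost; holding the lock makes the whole
        # update behave atomically with respect to the other threads.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000: no updates lost
```

On real hardware the same guarantee comes from instructions like compare-and-swap; here the lock stands in for that.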
Process Scheduling
Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy. Process scheduling is a vital part of multiprogramming in an OS. Usually, the OS maintains its scheduling queues as queues of PCBs.
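The "remove the running process, pick the next one" cycle can be sketched with a simple ready queue. This hypothetical round-robin example (process names and the `schedule` helper are made up for illustration) dispatches the process at the head of the queue and puts it back at the tail when its time slice expires:

```python
from collections import deque

# Hypothetical ready queue of process names (stand-ins for PCBs).
ready_queue = deque(["P1", "P2", "P3"])

def schedule(quantum_rounds):
    """Run a round-robin scheduler for a number of time slices."""
    order = []
    for _ in range(quantum_rounds):
        running = ready_queue.popleft()   # dispatch: pick the next process
        order.append(running)
        ready_queue.append(running)       # time slice expires: back to the queue
    return order

print(schedule(5))  # ['P1', 'P2', 'P3', 'P1', 'P2']
```

Real schedulers use richer strategies (priorities, multilevel feedback queues), but the queue-of-PCBs structure is the common core.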
Synchronization Primitive and Implementation
Synchronization primitives are simple software mechanisms provided by the OS to its users for the purpose of supporting thread or process synchronization. One way to implement this in code is with a ThreadQueue or a Semaphore, which are both very general synchronization primitives. However, they are both too simple to be used effectively. Another way is to use a Lock or a condition variable, which is, frankly, more useful.
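To show why a lock plus a condition variable is so useful, here is a small bounded-buffer sketch using Python's `threading.Condition`. The `Buffer` class and its `put`/`get` names are made up for this example; the wait-in-a-loop-then-notify pattern is the standard condition-variable idiom:

```python
import threading
from collections import deque

class Buffer:
    """A tiny bounded buffer guarded by a condition variable."""
    def __init__(self, capacity):
        self.items = deque()
        self.capacity = capacity
        self.cond = threading.Condition()  # a lock + wait/notify in one object

    def put(self, item):
        with self.cond:
            while len(self.items) >= self.capacity:
                self.cond.wait()           # sleep until a slot opens up
            self.items.append(item)
            self.cond.notify_all()         # wake any waiting consumers

    def get(self):
        with self.cond:
            while not self.items:
                self.cond.wait()           # sleep until an item arrives
            item = self.items.popleft()
            self.cond.notify_all()         # wake any waiting producers
            return item

buf = Buffer(capacity=2)
results = []
consumer = threading.Thread(target=lambda: results.extend(buf.get() for _ in range(3)))
consumer.start()
for i in range(3):
    buf.put(i)
consumer.join()
print(results)  # [0, 1, 2]
```

Note the `while` (not `if`) around each `wait()`: a woken thread must re-check the condition, since another thread may have changed the state first.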
Thread Libraries
Thankfully, we don’t need to code thread creation and management from scratch. Thread libraries provide the programmer with an API for creating and managing threads. This is a more coding-heavy topic; to get good at it and get used to it, practice is necessary, and the practice provided in the slides, demos, and the textbook should suffice.
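As a taste of what such an API looks like, here is the minimal create/start/join cycle using Python's standard `threading` library (the worker function and its messages are just placeholders):

```python
import threading

def worker(name, out):
    # Each thread appends its greeting to a shared list.
    out.append(f"hello from {name}")

out = []
threads = [threading.Thread(target=worker, args=(f"thread-{i}", out))
           for i in range(3)]
for t in threads:
    t.start()   # begin executing worker() in a new thread
for t in threads:
    t.join()    # wait for each thread to finish

print(sorted(out))
```

Other libraries (Pthreads in C, `std::thread` in C++, Java threads) follow the same create/start/join shape with different syntax.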
Process Control Block
The Process Control Block (PCB) is one of the data structures that the OS uses to manage processes. The PCB stores many data items that are needed for efficient process management.
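A heavily simplified sketch of what a PCB might hold, written as a Python dataclass. The fields here are only a sample ─ a real PCB also records CPU registers, scheduling priority, memory-management information, accounting data, and more:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    """Hypothetical, heavily simplified PCB."""
    pid: int                        # process identifier
    state: str = "new"              # e.g. new, ready, running, waiting, terminated
    program_counter: int = 0        # where execution resumes
    open_files: list = field(default_factory=list)  # I/O status information

pcb = ProcessControlBlock(pid=42)
pcb.state = "ready"                 # the process is admitted to the ready queue
print(pcb.pid, pcb.state)           # 42 ready
```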
Process State Diagram
The Process State Diagram illustrates the states a process can be in. It also shows how, and from where, a process can reach each particular state.
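The edges of the classic five-state diagram can be written down as a transition table. This is a sketch assuming the textbook's five states; the `TRANSITIONS` dict and `can_transition` helper are made up for illustration:

```python
# Allowed transitions in the classic five-state process diagram.
TRANSITIONS = {
    "new": {"ready"},                               # admitted
    "ready": {"running"},                           # scheduler dispatch
    "running": {"ready", "waiting", "terminated"},  # preempt, I/O wait, exit
    "waiting": {"ready"},                           # I/O or event completes
    "terminated": set(),                            # no way out
}

def can_transition(src, dst):
    return dst in TRANSITIONS.get(src, set())

print(can_transition("running", "waiting"))  # True: process blocks on I/O
print(can_transition("waiting", "running"))  # False: must pass through ready
```

Note that a waiting process never jumps straight back to running ─ it must first return to the ready queue and be dispatched again.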
Sources:
Abraham Silberschatz, Peter B. Galvin, Greg Gagne, Operating System Concepts, Wiley (2018)
http://pages.cs.wisc.edu/~remzi/OSTEP/
https://people.eecs.berkeley.edu/~kubitron/courses/cs162-F09/hand-outs/synch.html