Processes exist in multiple states so that system resources can be used efficiently: if one process is waiting, another can take its place on the CPU.
"Waiting" was a bit of a simplification for the sake of the previous lesson, as there is much to consider in how processes are stored while in certain states. For example, many processes may have to wait in the ready state, and the way they are stored while waiting can differ greatly from how processes are stored in the blocked state. This is because each state has different goals, and the chosen storage format can help achieve them.
Waiting is commonly done in a queue, which is a first-in, first-out (FIFO) data structure: the first process added to the queue is the first to be executed by the processor later on. This works well for the blocked state, where many processes may want to write to the same file; the most intuitive procedure is to let them proceed one at a time, in the order of their requests. However, this data structure may need some modifications to best accommodate the ready state.
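The FIFO behavior described above can be sketched with Python's `collections.deque`. This is a minimal illustration, not a real OS implementation; the process names (P1, P2, P3) are hypothetical.

```python
from collections import deque

# A FIFO wait queue, as an OS might use for the blocked state.
blocked_queue = deque()

# Three processes request the same file and are blocked in arrival order.
for process in ("P1", "P2", "P3"):
    blocked_queue.append(process)

# When the file becomes free, processes are unblocked first in, first out.
served = []
while blocked_queue:
    served.append(blocked_queue.popleft())

print(served)  # the first process to arrive is the first to proceed
```

Because a plain queue never reorders its contents, the order of service always matches the order of arrival, which is exactly the intuitive one-at-a-time behavior described above.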
Different processes may have different priorities, and with that, a priority queue becomes the more relevant abstraction. Here, processes are organized by priority instead of arrival time, with the highest-priority process always executed first. How priority is calculated is determined by the scheduling algorithm; it may be represented as an integer value or as a category such as low, medium, or high.
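A priority-ordered ready queue can be sketched with Python's `heapq` module. The process names and priority values below are hypothetical; following a common convention, a lower number means a higher priority, so the heap's minimum element is the process to dispatch next.

```python
import heapq

# A ready queue ordered by priority rather than arrival time.
ready_queue = []
arrival = 0  # tie-breaker so equal-priority processes stay in arrival order

# Processes arrive with (priority, name); lower number = higher priority.
for priority, name in [(2, "editor"), (0, "kernel_task"), (1, "backup")]:
    heapq.heappush(ready_queue, (priority, arrival, name))
    arrival += 1

# The scheduler always dispatches the highest-priority process first,
# regardless of when it arrived.
dispatch_order = []
while ready_queue:
    _, _, name = heapq.heappop(ready_queue)
    dispatch_order.append(name)

print(dispatch_order)
```

Note how the dispatch order differs from the arrival order: even though `editor` arrived first, `kernel_task` runs first because it has the highest priority.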
How could the blocked state be further broken down to be more descriptive?