While there is a common set of queues and states for processes, how processes move through these data structures depends on the scheduling algorithm used and the goals of the system.
The most basic scheduling algorithm is first come, first served (FCFS), in which processes are placed in a standard FIFO queue and executed in the order they arrived. An example of processes being executed in arrival order can be seen on the right.
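The queue behavior described above can be sketched in a few lines. This is a minimal illustration, not a real scheduler; the process names and the `admit`/`schedule` helpers are hypothetical.

```python
from collections import deque

ready_queue = deque()  # standard FIFO queue of ready processes

def admit(process):
    """New arrivals join the back of the queue."""
    ready_queue.append(process)

def schedule():
    """Run each process to completion, strictly in arrival order."""
    completed = []
    while ready_queue:
        completed.append(ready_queue.popleft())  # dispatch the front
    return completed

for p in ["P1", "P2", "P3"]:
    admit(p)
print(schedule())  # ['P1', 'P2', 'P3']
```

Because FCFS never preempts, "running" a process here is just removing it from the front of the queue; nothing else influences the order.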
This algorithm has drawbacks that reserve it for special use cases. Throughput is generally low due to the convoy effect, in which a long process solely occupies the CPU while doing minimal computation as shorter processes wait behind it. Similarly, there is no concept of priority, so latency and wait times can be excessively long: when a process runs depends solely on its position in the queue and the arbitrary amount of time the processes ahead of it take.
However, the simplicity does have benefits, such as minimal scheduling overhead: a context switch occurs only when a process ends. Also, assuming each process eventually completes, every process is guaranteed to run, so no process suffers starvation by never being executed.
In the example animation of First Come, First Served to the right, the processes with the earliest arrival times are executed by the processor first. The arrival time of P1 is 1:00, the earliest of all the processes, so it is the first to be sent to the CPU by the scheduler. P2 is the next earliest arrival at 1:15, so it is sent to the CPU after P1 finishes executing. Following that, P3 arrived at 1:25, so it is the next to run, and P4, the last to arrive at 1:40, is the last to run.
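The animation's scenario can be reproduced by sorting the processes by arrival time, since that is the only criterion FCFS uses. The clock values come from the text; everything else (the `to_minutes` helper, the data layout) is an illustrative assumption.

```python
# (name, arrival clock time) pairs from the example
processes = [("P2", "1:15"), ("P4", "1:40"), ("P1", "1:00"), ("P3", "1:25")]

def to_minutes(clock):
    """Convert an 'h:mm' clock string to minutes for comparison."""
    hours, minutes = clock.split(":")
    return int(hours) * 60 + int(minutes)

# FCFS dispatch order: earliest arrival first, each run to completion
order = [name for name, arrival in sorted(processes, key=lambda p: to_minutes(p[1]))]
print(order)  # ['P1', 'P2', 'P3', 'P4']
```

Note that the list is deliberately shuffled before sorting: no matter how the processes are presented to the scheduler, FCFS dispatches them purely by arrival time.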