Computers have a limited supply of resources, whether processor cores, hard drives, or network links, yet the number of tasks they may need to run is ever-growing. Therefore, a form of rationing is needed to keep the system responsive and provide each process with the resources it needs.

The method of assigning these resources can differ greatly depending on the intended goal:

  • Maximizing the total number of processes completed per unit of time, or throughput.
  • Maximizing the fairness with which computer resources are distributed among processes.
  • Minimizing the amount of time a process spends waiting in the ready queue before it runs, or wait time.
  • Minimizing the amount of time between a process becoming ready and its completion, also known as latency or response time.
  • Completing all tasks before a set deadline.
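To make these metrics concrete, here is a minimal sketch (not from the lesson) that computes wait time, turnaround time, and throughput for a simple first-come, first-served schedule. The function name and the arrival/burst times are hypothetical, chosen only for illustration.

```python
def fcfs_metrics(arrivals, bursts):
    """Compute per-process wait and turnaround times, plus overall
    throughput, under a first-come, first-served (FCFS) schedule."""
    time = 0
    waits, turnarounds = [], []
    for arrival, burst in sorted(zip(arrivals, bursts)):
        time = max(time, arrival)        # CPU may sit idle until the process arrives
        waits.append(time - arrival)     # time spent ready but not yet running
        time += burst                    # run the process to completion
        turnarounds.append(time - arrival)
    throughput = len(bursts) / time      # processes completed per unit of time
    return waits, turnarounds, throughput

# Three processes arriving at times 0, 1, 2 with burst times 5, 3, 2:
waits, turnarounds, throughput = fcfs_metrics([0, 1, 2], [5, 3, 2])
# waits == [0, 4, 6], turnarounds == [5, 7, 8], throughput == 0.3
```

Running the same workload through a different scheduling order would change the wait times and turnaround times while leaving throughput largely intact, which is exactly the kind of tradeoff this lesson explores.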

Not all of these goals can be met simultaneously, as some of them, such as throughput and latency, fundamentally conflict. For example, in a machine like a car, where each component has a strict deadline on how quickly its commands must be processed, there will likely always be a reserve of compute resources kept free to handle these last-second commands. This reserve makes it possible to complete high-priority tasks the moment they arrive, but it lowers the throughput of the system because it sits idle while waiting to be used.

In this lesson, we will build on our previous knowledge of process states to discuss the tradeoffs made by the different schedulers and algorithms used in process scheduling.

