
Greedy Algorithm Explained

Learn what a greedy algorithm is, its key traits, how it works, and real-world uses like Coin Change, Fractional Knapsack, and Dijkstra’s Algorithm.

When solving problems, we often rely on strategies that seem the most efficient at the moment. Imagine we are collecting coins to reach a specific amount, and we always pick the highest-value coin available. This might seem like a smart approach; grabbing the highest-value coin first should get us to the total faster. This idea represents what a greedy algorithm is all about.

In this article, we will look at what a greedy algorithm is, how it works, and some common problems that can be solved with a greedy approach. First, let’s understand what a greedy algorithm is.

What is a greedy algorithm?

A greedy algorithm is an optimization technique that solves problems step by step, always selecting the best possible choice at each moment. It follows a straightforward strategy: at every stage, it picks the option that looks most favorable right now without considering the long-term impact of that decision.

Greedy algorithms work by choosing the best option at each step, hoping that these choices lead to the best overall solution.

Unlike more complex algorithms that use backtracking or dynamic programming, greedy algorithms do not revisit past decisions or adjust their approach based on future possibilities. Once a choice is made, it is never reconsidered. This makes greedy algorithms fast and efficient, often running in linear or logarithmic time, depending on the problem. However, this same property also means the greedy algorithm does not always guarantee the best possible solution in every case.


How does a greedy algorithm work?

To understand how a greedy algorithm works, let’s break it down into simple steps:

  1. Make a choice – At each step, pick the best available option based on a specific criterion.
  2. Proceed to the next step – Move forward and repeat the process until the problem is solved.
  3. Check the final outcome – The algorithm arrives at a solution that is either optimal or close to optimal.

For example, consider the following tree structure, where each node represents a weight, and our goal is to find the longest path (maximum sum) from root to leaf. So, the criterion in this example is selecting the largest weight at each step.

Greedy Algorithm Tree Example

In this tree, each number represents a weight, and we need to find the path from the root (10) to a leaf node that yields the highest sum.

Applying the greedy algorithm

  • Start at the root (10).
    • Left child: 15
    • Right child: 12
    • Greedy choice: Pick 15 (since it’s the largest).
  • Move to node (15).
    • Left child: 18
    • Right child: 14
    • Greedy choice: Pick 18 (since it’s the largest).
  • Move to node (18).
    • Left child: 25
    • No right child.
    • Greedy choice: Pick 25 (since it’s the only option).

Final Path Chosen by Greedy Algorithm:

  • 10 → 15 → 18 → 25
  • Total sum: 10 + 15 + 18 + 25 = 68
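The walk above can be sketched in a few lines of Python. This is a minimal sketch: the nested-tuple tree layout and the `greedy_path` helper are our own illustration, with the node weights taken from the example.

```python
# Each node is (weight, children); the structure mirrors the example tree.
tree = (10, [
    (15, [
        (18, [(25, [])]),
        (14, []),
    ]),
    (12, [(20, [])]),
])

def greedy_path(node):
    """Descend the tree, always stepping to the heaviest child."""
    weight, children = node
    path = [weight]
    while children:
        # Greedy choice: pick the child with the largest weight.
        weight, children = max(children, key=lambda child: child[0])
        path.append(weight)
    return path

path = greedy_path(tree)
print(path, "sum =", sum(path))  # [10, 15, 18, 25] sum = 68
```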

Does the greedy approach find the optimal solution?

Let’s check all possible paths to confirm that the greedy algorithm was correct:

  • Path: 10 → 15 → 18 → 25 → Sum = 68
  • Path: 10 → 15 → 14 → Sum = 39 (14 is a leaf in this tree)
  • Path: 10 → 12 → 20 → Sum = 42

Since 68 is the highest sum possible, the greedy algorithm successfully found the best solution in this case!

The greedy algorithm can occasionally fail, depending on the choices it makes. It is effective when locally optimal choices consistently lead to the best overall solution, but it may fail when a short-term decision blocks access to a better long-term option.

Let’s change the structure a bit to show how making a local greedy choice can lead to a less optimal overall solution.

Greedy Algorithm Example that fails

Applying the greedy algorithm to see if it fails

  • Start at the root (10)
    • Left child: 15
    • Right child: 12
    • Greedy choice: Pick 15 (since it’s larger)
  • Move to node (15)
    • Left child: 18
    • Right child: 14
    • Greedy choice: Pick 18 (since it’s larger)
  • Move to node (18)
    • Left child: 25
    • No right child
    • Greedy choice: Pick 25 (only option)

The final path chosen:

  • 10 → 15 → 18 → 25
  • Total sum = 68

Checking the optimal path

Now, let’s explore all possible paths:

  • Greedy path:
    • 10 → 15 → 18 → 25
    • Sum = 68
  • Alternative path:
    • 10 → 15 → 14 → 30
    • Sum = 10 + 15 + 14 + 30 = 69 (higher than the greedy result!)
  • Other path:
    • 10 → 12 → 20
    • Sum = 42 (Lower)

Why did the greedy approach fail?

The greedy algorithm made local optimal choices:

  • Chose 15 over 12 (Correct choice here)
  • Chose 18 over 14 (Mistake! Because 14 leads to 30, which gives a better sum!)

The best global solution was 10 → 15 → 14 → 30, with a sum of 69, but the greedy algorithm missed it because it made decisions greedily at each step instead of considering the full path. Now, let’s discuss these characteristics of the greedy algorithm in the next section.
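The failure can be verified in code by comparing the greedy walk against an exhaustive search over all root-to-leaf paths. This is a sketch: the tree mirrors the second figure (with 30 as a child of 14), and both helper functions are our own.

```python
# Second tree from the example: 14 now leads to 30, so the greedy
# choice of 18 over 14 locks us out of the best path.
tree = (10, [
    (15, [
        (18, [(25, [])]),
        (14, [(30, [])]),
    ]),
    (12, [(20, [])]),
])

def greedy_sum(node):
    """Always step to the heaviest child (the greedy strategy)."""
    weight, children = node
    total = weight
    while children:
        weight, children = max(children, key=lambda c: c[0])
        total += weight
    return total

def best_sum(node):
    """Exhaustively check every root-to-leaf path for the true optimum."""
    weight, children = node
    if not children:
        return weight
    return weight + max(best_sum(c) for c in children)

print(greedy_sum(tree))  # 68 -- the locally optimal path
print(best_sum(tree))    # 69 -- the true optimum via 10 -> 15 -> 14 -> 30
```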

Key characteristics of greedy algorithms

As we know, greedy algorithms always make the best possible decision at each step based on the available information. The following are the key characteristics of the greedy algorithm:

  • Once a decision is made, greedy algorithms do not revisit or revise it. Each choice is final, which makes the algorithm faster but less flexible.
  • Greedy algorithms build their solutions step by step, with each new step depending on the previous one. This incremental approach makes them efficient in solving problems that can be solved with a series of local optimal choices.
  • They are mainly used for optimization problems, where the goal is to find the best solution among many possible choices, typically maximizing or minimizing a value.

Problems solved with greedy algorithm

Greedy algorithms are effective tools for optimization. Some well-known problems where greedy strategies come into play include the Coin Change Problem, the Fractional Knapsack Problem, the 0/1 Knapsack Problem (where greedy works only as a fast heuristic), and Dijkstra’s Shortest Path Algorithm. Let’s explore each of these problems to understand how we can approach them with greedy algorithms.

Coin change problem

The Coin Change problem is a classic example of a greedy algorithm in action. The task is: given a set of coin denominations, determine the minimum number of coins required to make a specific amount.

Greedy Approach:

To solve this, the greedy algorithm picks the largest coin denomination first and reduces the target amount by that coin’s value. This process repeats until the entire amount is made up using the least number of coins possible.

Example:

Imagine we have coin denominations of {1, 5, 10, 25}, and we need to make 30.

  • Start with the largest coin (25). Select one 25-cent coin.
  • Remaining amount: 30 - 25 = 5
  • Next, choose the largest coin less than or equal to the remaining amount (5). Select one 5-cent coin.
  • Remaining amount: 5 - 5 = 0

In this case, the greedy algorithm successfully uses just two coins (one 25-cent coin and one 5-cent coin) to make 30.

The greedy approach works here because the problem’s coin set allows for the largest coins to provide the optimal solution. However, it’s important to note that this strategy doesn’t always yield the best result.
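The selection process above can be sketched as follows. This is a minimal sketch; `greedy_coin_change` is our own helper name.

```python
def greedy_coin_change(coins, amount):
    """Repeatedly take the largest coin that still fits into the amount."""
    used = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            used.append(coin)
            amount -= coin
    return used

print(greedy_coin_change([1, 5, 10, 25], 30))  # [25, 5]
```

The caveat from the paragraph above is easy to demonstrate: with denominations {1, 3, 4} and amount 6, this function returns [4, 1, 1] (three coins), even though [3, 3] uses only two.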

Fractional knapsack

The Fractional Knapsack problem is another classic problem where a greedy approach is used. In this problem, we’re given a set of items, each with a weight and a value, and we need to determine the maximum value that can fit into a knapsack with a limited capacity. Unlike the 0/1 knapsack problem, the Fractional Knapsack allows us to take fractions of items.

Greedy Approach:

To solve this problem, the greedy algorithm selects items based on their value-to-weight ratio, starting with the highest ratio and continuing until the knapsack is full.

Example:

Suppose we have the following items:

  1. Item 1: Value = 60, Weight = 10
  2. Item 2: Value = 100, Weight = 20
  3. Item 3: Value = 120, Weight = 30
  4. Knapsack Capacity = 50
  • Calculate the value-to-weight ratio for each item:

    • Item 1: 60/10 = 6
    • Item 2: 100/20 = 5
    • Item 3: 120/30 = 4
  • Sort the items based on their value-to-weight ratio (highest first):

    • Item 1: 6, Item 2: 5, Item 3: 4
  • Start with Item 1. The knapsack can hold 50 weight units, and Item 1 weighs 10. Select the entire Item 1.

    • Remaining capacity: 50 - 10 = 40
  • Move to Item 2. The knapsack can still hold 40 weight units, and Item 2 weighs 20. Select the entire Item 2.

    • Remaining capacity: 40 - 20 = 20
  • Move to Item 3. The knapsack can hold 20 more weight units, and Item 3 weighs 30. Take a fraction of Item 3 (20/30 of Item 3’s value).

    • Remaining capacity: 0

In this case, the greedy algorithm selects Item 1, Item 2, and a fraction of Item 3, giving a total value of 240.
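The step-by-step selection above can be sketched in Python. This is a minimal sketch; the `(value, weight)` representation and the function name are our own.

```python
def fractional_knapsack(items, capacity):
    """Take items by descending value-to-weight ratio, splitting the last one."""
    # items: list of (value, weight) pairs
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)      # whole item, or the fraction that fits
        total += value * (take / weight)  # proportional value for a fraction
        capacity -= take
    return total

items = [(60, 10), (100, 20), (120, 30)]
print(fractional_knapsack(items, 50))  # 240.0
```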

The greedy algorithm is effective here because it maximizes the value in the knapsack by focusing on items with the highest value-to-weight ratio. Since we can take fractions of items, the greedy approach ensures that the knapsack is filled in the most valuable way possible.

0/1 Knapsack

The 0/1 Knapsack problem asks us to determine the maximum value that can fit into a knapsack with a limited capacity. Unlike the Fractional Knapsack problem, in the 0/1 Knapsack problem each item must be taken entirely or not at all (0 or 1); we cannot take fractions.

Greedy Approach:

To solve this problem using a greedy approach, we select items based on their value-to-weight ratio, starting with the highest ratio and continuing until the knapsack is full.

Example:

Suppose we have the following items:

  1. Item 1: Value = 60, Weight = 10
  2. Item 2: Value = 100, Weight = 20
  3. Item 3: Value = 120, Weight = 30
  4. Knapsack Capacity = 50
  • Calculate the value-to-weight ratio for each item:

    • Item 1: 60/10 = 6
    • Item 2: 100/20 = 5
    • Item 3: 120/30 = 4
  • Sort the items based on their value-to-weight ratio (highest first):

    • Item 1: 6
    • Item 2: 5
    • Item 3: 4
  • Step-by-Step selection:

    • Start with Item 1:
      • The knapsack can hold 50 weight units, and Item 1 weighs 10.
      • Select the entire Item 1.
      • Remaining capacity: 50 - 10 = 40
    • Move to Item 2:
      • The knapsack can still hold 40 weight units, and Item 2 weighs 20.
      • Select the entire Item 2.
      • Remaining capacity: 40 - 20 = 20
    • Move to Item 3:
      • The knapsack can hold 20 more weight units, but Item 3 weighs 30.
      • Since fractions are not allowed in 0/1 Knapsack, we skip Item 3 because it doesn’t fit.

Final selection:

  • Item 1 → Value = 60
  • Item 2 → Value = 100
  • Total Value = 60 + 100 = 160

The greedy algorithm selects Item 1 and Item 2, for a total value of 160. However, this is not optimal: taking Item 2 and Item 3 instead (combined weight 20 + 30 = 50) fits exactly and yields 100 + 120 = 220.

This illustrates that the greedy method does not guarantee the optimal solution for the 0/1 knapsack problem. Dynamic programming or branch-and-bound algorithms should be used instead when an optimal solution is required.
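For the same three items, a quick comparison between the greedy heuristic and a dynamic-programming solution shows the gap. This is a sketch; both function names are our own.

```python
def greedy_knapsack_01(items, capacity):
    """Greedy heuristic: take whole items by descending value/weight ratio."""
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0
    for value, weight in items:
        if weight <= capacity:  # no fractions allowed; skip items that don't fit
            total += value
            capacity -= weight
    return total

def dp_knapsack_01(items, capacity):
    """Exact solution via the classic 1-D dynamic-programming table."""
    best = [0] * (capacity + 1)
    for value, weight in items:
        # Iterate capacities downwards so each item is used at most once.
        for cap in range(capacity, weight - 1, -1):
            best[cap] = max(best[cap], best[cap - weight] + value)
    return best[capacity]

items = [(60, 10), (100, 20), (120, 30)]
print(greedy_knapsack_01(items, 50))  # 160 -- greedy by ratio
print(dp_knapsack_01(items, 50))      # 220 -- optimal: Item 2 + Item 3
```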

Dijkstra’s shortest path algorithm

Dijkstra’s Shortest Path Algorithm is one of the most famous examples of a greedy approach. It’s used to find the shortest path from a starting node to all other nodes in a weighted graph. The key idea behind Dijkstra’s algorithm is to always choose the node with the smallest known distance, making it a prime example of how a greedy algorithm can be applied to graph traversal.

Greedy Approach:

Dijkstra’s algorithm begins at the source node and iteratively selects the node with the smallest tentative distance. It then explores its neighbours, updating their distances based on the current node’s known distance. This process continues until the shortest paths to all nodes are found.

Example:

Consider the following weighted graph:

Dijkstra's shortest path algorithm Example

  • Start from node A. Set the initial distances: A = 0, B = ∞, C = ∞, D = ∞.
  • Explore A’s neighbours and update their distances:
    • Distance from A to C = 5
    • Distance from A to B = 10
    • Distance from A to D = ∞ (not yet reachable)
  • Move to node C, as it has the smallest tentative distance (5). Update its neighbour D, giving a total distance of 6 (A to C + C to D = 5 + 1).
  • Move to node D (6), then node B (10). All the shortest paths are now found.

In this case, the shortest path from A to D is 6 (via C), from A to C is 5, and from A to B is 10.

Dijkstra’s algorithm uses a greedy approach by always selecting the node with the smallest known distance. This ensures that the shortest path is found efficiently, and no unnecessary suboptimal paths are explored.
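The walkthrough above can be sketched using Python’s `heapq` module as the priority queue. This is a minimal sketch: the edge weights (A–B = 10, A–C = 5, C–D = 1) are inferred from the example, and the adjacency-list representation is our own.

```python
import heapq

def dijkstra(graph, source):
    """Greedily settle the unvisited node with the smallest known distance."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:
            continue  # stale heap entry; a shorter path was already found
        for neighbour, weight in graph[node]:
            if d + weight < dist[neighbour]:
                dist[neighbour] = d + weight
                heapq.heappush(heap, (dist[neighbour], neighbour))
    return dist

# Undirected edges assumed from the example: A-B = 10, A-C = 5, C-D = 1
graph = {
    "A": [("B", 10), ("C", 5)],
    "B": [("A", 10)],
    "C": [("A", 5), ("D", 1)],
    "D": [("C", 1)],
}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 10, 'C': 5, 'D': 6}
```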

The greedy approach is used in real scenarios too. For example, greedy algorithms power Huffman coding, which underlies data compression formats like ZIP, GZIP, and JPEG.

Huffman Encoding

Huffman Encoding is a lossless compression algorithm that reduces the size of files by encoding characters based on their frequency. It uses a greedy approach to build an optimal prefix tree for encoding data efficiently.

How the Greedy Algorithm Works in Huffman Encoding

  • Count Character Frequencies:
    • Given a text, count how many times each character appears.
  • Build a Min-Heap:
    • Insert each character (along with its frequency) into a min-heap (priority queue).
  • Construct the Huffman Tree:
    • Repeatedly remove the two smallest frequency nodes.
    • Merge them into a new node with their combined frequency.
    • Insert this new node back into the heap.
    • Repeat until there is only one node left (the root of the Huffman Tree).
  • Assign Binary Codes:
    • Traverse the tree:
      • Left branch = 0
      • Right branch = 1
    • Assign unique binary codes to each character.
  • Encode the Data:
    • Replace each character with its binary code, significantly reducing the file size.

Example:

  • Input String: “AAABBCCCCDDDD”

Step 1: Character Frequency Count

Character | Frequency
--------- | ---------
A         | 3
B         | 2
C         | 4
D         | 4

Step 2: Build the Min-Heap (Greedy Choice)

  • Insert all characters into a priority queue (min-heap) sorted by frequency.
  • Pick two smallest elements and merge them into a new node with their combined frequency.
  • Repeat until a single tree is formed.

Step 3: Generate Huffman Codes

Example Huffman tree:

Huffman Coding Tree Example

From this tree, we generate the Huffman codes:

Character | Huffman Code
--------- | ------------
A         | 00
B         | 01
C         | 10
D         | 11

Step 4: Encode the Data

  • Original Text: “AAABBCCCCDDDD”
  • Huffman Encoded: “00000001011010101011111111”

Compressed data size is reduced! The 13-character input would take 13 × 8 = 104 bits in a plain 8-bit encoding, while the Huffman-encoded version uses only 26 bits.
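The construction above can be sketched with Python’s `heapq` and `collections.Counter`. This is a minimal sketch: the exact 0/1 assignment depends on how ties are broken, so the codes may differ from the table above, but every code here still has length 2 and the encoded result is 26 bits either way.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build Huffman codes by repeatedly merging the two rarest nodes."""
    freq = Counter(text)
    # Heap entries: (frequency, tie_breaker, tree); a tree is a char or a pair.
    heap = [(count, i, char) for i, (char, count) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # two smallest frequencies...
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, tie, (left, right)))  # ...merged greedily
        tie += 1
    codes = {}
    def assign(tree, prefix):
        if isinstance(tree, str):
            codes[tree] = prefix or "0"      # single-symbol edge case
        else:
            assign(tree[0], prefix + "0")    # left branch = 0
            assign(tree[1], prefix + "1")    # right branch = 1
    assign(heap[0][2], "")
    return codes

codes = huffman_codes("AAABBCCCCDDDD")
encoded = "".join(codes[ch] for ch in "AAABBCCCCDDDD")
print(codes)
print(len(encoded), "bits vs", 13 * 8, "bits uncompressed")  # 26 bits vs 104 bits
```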

Conclusion

In this article, we covered the following:

  • A greedy algorithm solves problems by making the best possible choice at each step, aiming for a globally optimal solution.
  • Greedy algorithms are fast and efficient, often running in linear or logarithmic time, but they do not always guarantee the best possible solution for all problems.
  • Once a choice is made, it is final and not revisited. This makes greedy algorithms faster but less flexible.
  • Greedy algorithms are used for optimization problems like the Coin Change problem, Fractional Knapsack, and Dijkstra’s Shortest Path Algorithm.
  • Greedy algorithms are commonly applied to problems such as resource allocation, pathfinding, and making decisions under constraints.
  • They can fail if a locally optimal choice blocks access to a better long-term solution.

Now that you know how greedy algorithms work, you can learn more with the Learn Data Structures and Algorithms with Python course on Codecademy for free.

Author

Codecademy Team
