The K-Means clustering algorithm is more than half a century old, but it is not falling out of fashion; it remains the most popular clustering algorithm in Machine Learning.
However, its first step can be problematic. In the traditional K-Means algorithm, the starting positions of the centroids are initialized completely randomly. This can result in suboptimal clusters.
In this lesson, we will go over another version of K-Means, known as the K-Means++ algorithm. K-Means++ changes the way centroids are initialized to try to fix this problem.
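To make the idea concrete, here is a minimal sketch of the K-Means++ initialization scheme (this is an illustration, not the lesson's own code): the first centroid is picked uniformly at random, and each later centroid is picked with probability proportional to its squared distance from the nearest centroid chosen so far, so the starting centroids tend to be spread out.

```python
import numpy as np

def kmeans_pp_init(points, k, rng):
    """Pick k initial centroids using the K-Means++ scheme:
    the first uniformly at random, each later one with probability
    proportional to its squared distance from the nearest centroid
    chosen so far."""
    centroids = [points[rng.integers(len(points))]]
    for _ in range(k - 1):
        chosen = np.array(centroids)
        # squared distance from every point to its nearest chosen centroid
        d2 = np.min(
            ((points[:, None, :] - chosen[None, :, :]) ** 2).sum(axis=-1),
            axis=1,
        )
        probs = d2 / d2.sum()
        centroids.append(points[rng.choice(len(points), p=probs)])
    return np.array(centroids)

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 2))  # toy 2-D data for illustration
init_centroids = kmeans_pp_init(data, 3, rng)
print(init_centroids.shape)  # (3, 2)
```

Plain random initialization, by contrast, would just sample k points uniformly, which can land two starting centroids inside the same true cluster.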
Run the program in script.py to cluster Codecademy learners into two groups using K-Means and K-Means++.
The only difference between each algorithm is how the cluster centroids are initialized.
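In scikit-learn (which the workspace uses), that difference comes down to a single argument. The snippet below is a sketch of what script.py is doing, using toy data in place of the learner dataset, which is not shown here:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# toy data standing in for the Codecademy learner dataset in script.py
X, _ = make_blobs(n_samples=200, centers=2, random_state=42)

# traditional K-Means: centroids start at randomly chosen points
random_model = KMeans(n_clusters=2, init="random", n_init=1, random_state=1)
random_model.fit(X)

# K-Means++: centroids start spread out via the ++ initialization scheme
plus_model = KMeans(n_clusters=2, init="k-means++", n_init=1, random_state=1)
plus_model.fit(X)

print(random_model.inertia_)
print(plus_model.inertia_)
```

`inertia_` is the sum of squared distances from each point to its assigned centroid; it is the number printed in the workspace for each model.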
It’s hard to see, but the clusters are different. Look at the point at y=1: on the top graph it should be purple, but on the bottom graph it is yellow.
Which one of these clusterings is better? We have printed the inertia of each model in the workspace. The model with the lower inertia has more coherent clusters, so you can think of it as being “better”.
Which model performs better clustering?
Continue to the next exercise to see why random initialization of centroids can result in poorer clusters.