You used K-Means and found three clusters in the
samples data. But it gets cooler!
Since you have created a model that computed K-Means clustering, you can now feed new data samples into it and obtain their cluster labels using the .predict() method.
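As a reminder, a minimal sketch of how such a model could have been fit with scikit-learn (assuming the standard KMeans estimator and the bundled Iris data) looks like this:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

samples = load_iris().data    # 150 flowers x 4 measurements

model = KMeans(n_clusters=3)  # look for three clusters
model.fit(samples)            # compute the cluster centroids
```

After fit() runs, the model stores one centroid per cluster, and predict() will assign any new sample to its nearest centroid.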
So, suppose we went to the florist and bought 3 more Irises with the measurements:
[[ 5.7 4.4 1.5 0.4 ] [ 6.5 3. 5.5 0.4 ] [ 5.8 2.7 5.1 1.9 ]]
We can feed this new data into the model and obtain the labels for them.
First, store the new samples in a 2D NumPy array:

import numpy as np

new_samples = np.array([[5.7, 4.4, 1.5, 0.4],
                        [6.5, 3. , 5.5, 0.4],
                        [5.8, 2.7, 5.1, 1.9]])
To test if it worked, print the new_samples array.
Then use the model to predict labels for new_samples, and print the predictions.
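Putting those steps together (assuming a fitted KMeans model named model, sketched here by refitting on the Iris data):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

# Refit a 3-cluster model on the Iris measurements (stands in for the
# model you already trained earlier in the lesson).
model = KMeans(n_clusters=3).fit(load_iris().data)

new_samples = np.array([[5.7, 4.4, 1.5, 0.4],
                        [6.5, 3. , 5.5, 0.4],
                        [5.8, 2.7, 5.1, 1.9]])

print(new_samples)                   # confirm the array was stored correctly
labels = model.predict(new_samples)  # assign each sample to its nearest centroid
print(labels)                        # e.g. [0 2 2] -- numbering varies run to run
```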
The output might look like:
[0 2 2]
Those are the predicted labels for our three new flowers. If you see different labels, don't worry! Since the cluster centroids are randomly initialized, repeated runs on the same input data can find essentially the same clusters but number them differently, so label 0 in one run might be label 2 in another.
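If you want the labels to come out the same every time, scikit-learn's KMeans accepts a random_state parameter that fixes the centroid initialization. A quick sketch:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

samples = load_iris().data

# With the same random_state, the initialization is deterministic,
# so repeated runs produce identical label numbering.
a = KMeans(n_clusters=3, random_state=42).fit_predict(samples)
b = KMeans(n_clusters=3, random_state=42).fit_predict(samples)
print((a == b).all())  # True
```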