Stochastic Gradient Descent
Stochastic Gradient Descent (SGD) is an optimization algorithm used to minimize the loss function in machine learning and deep learning models. It is a variant of the traditional Gradient Descent (GD) algorithm. SGD updates a model's weights and biases, such as those in an Artificial Neural Network (ANN), using the gradients computed during backpropagation.
The term stochastic refers to the randomness involved in the algorithm. Instead of using the entire dataset to compute gradients as in batch gradient descent, SGD uses a single randomly selected data point (or a small mini-batch) to perform each update. For instance, if the dataset contains 500 rows, SGD will update the model parameters 500 times in one epoch, each time using a different randomly chosen data point (or small batch).
This approach significantly reduces computation time, especially for large datasets, making SGD faster and more scalable. SGD is used for training models like neural networks, support vector machines (SVMs), and logistic regression. However, it introduces more noise into the learning process, which can lead to less stable convergence but also helps escape local minima, making it suitable for non-convex problems.
Algorithm Steps
- At each iteration, a random sample is selected from the training dataset.
- The gradient of the cost function with respect to the model parameters is computed based on the selected sample.
- The model parameters are updated using the computed gradient and the learning rate.
- The process is repeated for multiple iterations until convergence or until a specified number of epochs is reached (a minimal sketch of this loop follows).
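These steps map directly to a short loop. Below is a minimal sketch in Python; the toy data, one-parameter model, and learning rate are placeholders for illustration, not taken from the article's later example:

```python
import numpy as np

# Toy data from the line y = 2x; the single parameter w is the slope
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])
w = 0.0        # model parameter
alpha = 0.01   # learning rate

for epoch in range(100):
    for i in np.random.permutation(len(x)):  # 1. pick samples at random
        grad = 2 * (w * x[i] - y[i]) * x[i]  # 2. gradient on one sample
        w -= alpha * grad                    # 3. update the parameter
# 4. the outer loop repeats for a fixed number of epochs
print(w)  # approaches 2.0
```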
Formula
$$ \large \theta = \theta - \alpha \cdot \nabla J(\theta ; x_i, y_i) $$
Where:
- θ represents the model parameter (weight or bias) being updated.
- α is the learning rate, a hyperparameter that controls the step size of the update.
- ∇J(θ; xᵢ, yᵢ) is the gradient of the cost or loss function J with respect to the model parameter θ, computed based on a single training sample (xᵢ, yᵢ).
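As a quick worked example with made-up numbers: if θ = 0.5, α = 0.01, and the gradient on the sampled point is ∇J(θ; xᵢ, yᵢ) = −4, the update gives θ = 0.5 − 0.01 · (−4) = 0.54.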
Advantages
- Faster Convergence: SGD updates parameters more frequently, so it often converges in less time than batch gradient descent, especially on large datasets.
- Reduced Computation Time: Each update uses only a single sample or small batch of the dataset, which keeps individual steps cheap and makes large datasets manageable.
- Avoids Local Minima: The noise introduced by updating parameters with individual data points or small batches can help escape local minima, potentially leading to better solutions in complex, non-convex optimization problems.
- Online Learning: SGD can be used in scenarios where data arrives sequentially (online learning), allowing models to be updated continuously as new data comes in (see the sketch after this list).
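As an illustration of the online-learning case, scikit-learn's SGDRegressor exposes a partial_fit method that applies SGD updates to each incoming batch. The data stream below is simulated, and the hyperparameters are illustrative:

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

# A linear model trained incrementally with SGD
model = SGDRegressor(learning_rate="constant", eta0=0.01)

# Simulate data arriving in small batches from the stream y = 2x
rng = np.random.default_rng(0)
for _ in range(200):
    X_batch = rng.uniform(0, 5, size=(4, 1))
    y_batch = 2 * X_batch.ravel()
    model.partial_fit(X_batch, y_batch)  # one SGD pass over the new batch

print(model.coef_, model.intercept_)  # should approach [2.] and [0.]
```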
Disadvantages
- Noisy Updates: Updates are based on a single data point or small batch, which introduces variability in the gradient estimates. This noise can cause the algorithm to converge more slowly or oscillate around the optimal solution.
- Convergence Issues: The noisy updates can lead to less stable convergence and might make it harder to reach the exact minimum of the loss function. Fine-tuning the learning rate and other hyperparameters becomes crucial to achieving good results.
- Hyperparameter Sensitivity: SGD’s performance is sensitive to the choice of learning rate and other hyperparameters, and finding the right set often requires experimentation and tuning (a brief demonstration follows this list).
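To make the learning-rate sensitivity concrete, the minimal sketch below (toy data and learning rates chosen purely for illustration) trains the same one-parameter model with two different rates; after the same number of epochs, the smaller rate is still far from the true slope of 2:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2 * x  # true slope is 2

for alpha in (0.0005, 0.02):
    w = 0.0
    for epoch in range(50):
        for i in np.random.permutation(len(x)):
            # SGD update on one sample: w -= alpha * dL/dw
            w -= alpha * 2 * (w * x[i] - y[i]) * x[i]
    print(f"alpha={alpha}: w = {w:.4f}")
```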
Example
The following code demonstrates Stochastic Gradient Descent (SGD) to fit a line to data points. Starting with initial guesses for the slope (m) and intercept (b), it updates these values iteratively by calculating the gradients of the Mean Squared Error (MSE) loss. The parameters are adjusted step-by-step based on the gradients, reducing the error between predicted and actual values:
```python
import numpy as np

# Data points (x, y) where the true line is y = 2x
x = np.array([1, 2, 3, 4, 5])
y = np.array([2, 4, 6, 8, 10])

# Initial guess for parameters (slope, intercept)
params = np.array([0.0, 0.0])

# Learning rate and epochs
learning_rate = 0.01
epochs = 1000

# Model: y = mx + b
def model(params, x):
    m, b = params
    return m * x + b

# MSE loss function
def loss(pred, actual):
    return np.mean((pred - actual) ** 2)  # Using mean instead of sum

# Compute gradients (partial derivatives)
def gradients(params, x, y):
    m, b = params
    pred = model(params, x)
    grad_m = 2 * (pred - y) * x  # Gradient for m
    grad_b = 2 * (pred - y)      # Gradient for b
    return np.array([grad_m, grad_b])

# Training history
history = []

# SGD: Update parameters
for epoch in range(epochs):
    total_loss = 0

    # Shuffle data
    indices = np.random.permutation(len(x))
    x_shuffled = x[indices]
    y_shuffled = y[indices]

    for i in range(len(x)):
        # Forward pass
        pred = model(params, x_shuffled[i])
        loss_value = loss(pred, y_shuffled[i])

        # Compute gradients
        grads = gradients(params, x_shuffled[i], y_shuffled[i])

        # Update parameters
        params -= learning_rate * grads
        total_loss += loss_value

    # Store loss for plotting
    avg_loss = total_loss / len(x)
    history.append(avg_loss)

    if epoch % 100 == 0:  # Print loss every 100 epochs
        print(f"Epoch {epoch}, Loss: {avg_loss:.6f}")

print(f"Final parameters: m = {params[0]:.4f}, b = {params[1]:.4f}")
```
The output of the code is as follows:
```
Epoch 0, Loss: 22.414958
Epoch 100, Loss: 0.001293
Epoch 200, Loss: 0.000037
Epoch 300, Loss: 0.000001
Epoch 400, Loss: 0.000000
Epoch 500, Loss: 0.000000
Epoch 600, Loss: 0.000000
Epoch 700, Loss: 0.000000
Epoch 800, Loss: 0.000000
Epoch 900, Loss: 0.000000
Final parameters: m = 2.0000, b = 0.0000
```
Note: The output may vary depending on factors like the initial parameter values, learning rate, and number of epochs.
Codebyte Example
Here’s a Python code snippet demonstrating how to implement SGD for linear regression:
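The interactive snippet is not reproduced here; below is a minimal sketch of the same idea using scikit-learn's SGDRegressor, with hyperparameter values chosen purely for illustration:

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

# Same toy data as above: y = 2x
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([2.0, 4.0, 6.0, 8.0, 10.0])

# SGDRegressor fits a linear model with per-sample SGD updates
model = SGDRegressor(learning_rate="constant", eta0=0.01,
                     max_iter=1000, tol=None, random_state=42)
model.fit(X, y)

print(f"Slope: {model.coef_[0]:.4f}, Intercept: {model.intercept_[0]:.4f}")
```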