Research suggests that individuals put more into retirement accounts or pension plans if their employer matches a portion of their contributions.
Imagine that a country’s legislature passes a law that requires employers with more than 300 employees to provide a retirement contribution matching program. Using tax data, lawmakers compile a dataset with the following information from each of 200 different companies:
size: Number of employees
group: Contribution Matching Program Group (“No Program” vs. “Program”)
contribution: Average monthly employee contributions (in dollars)
The number of employees, acting as the forcing variable with a cutoff at 300 employees, dictates whether a company has a contribution matching program.
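As a minimal sketch of this sharp assignment rule (the data values and pandas setup here are hypothetical; only the 300-employee cutoff comes from the scenario):

```python
import numpy as np
import pandas as pd

# Hypothetical companies; only `size` and the 300-employee cutoff
# come from the scenario above.
companies = pd.DataFrame({"size": [150, 299, 300, 301, 600]})

# Sharp assignment: the forcing variable alone determines treatment.
# The law covers employers with MORE than 300 employees.
cutoff = 300
companies["group"] = np.where(
    companies["size"] > cutoff, "Program", "No Program"
)
print(companies)
```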
We want to assess whether the policy caused an increase in average monthly retirement contributions by employees. This example is a perfect candidate for a new technique called regression discontinuity design (RDD).
RDD works by focusing only on the points near the cutoff. Companies with close to 300 employees are probably very similar to one another. So the ONLY difference, on average, between companies with 299 employees and those with 301 employees should be whether they have the contribution matching program. This means we should have treatment and control groups that look a lot like those of a randomized experiment!
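To make the idea concrete, here is a minimal sketch of a sharp RDD estimate on simulated data (the simulated numbers, the 30-employee bandwidth, and the statsmodels approach are illustrative assumptions, not part of the lesson's dataset):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulated stand-in for the 200-company dataset described above:
# a 50-dollar jump in contributions at the 300-employee cutoff.
size = rng.integers(50, 600, size=200)
treated = (size > 300).astype(int)
contribution = 150 + 0.2 * size + 50 * treated + rng.normal(0, 15, size=200)
companies = pd.DataFrame(
    {"size": size, "treated": treated, "contribution": contribution}
)

# Focus only on companies near the cutoff, where treated and
# untreated companies should be comparable on average.
cutoff, bandwidth = 300, 30
near = companies[(companies["size"] - cutoff).abs() <= bandwidth].copy()
near["size_centered"] = near["size"] - cutoff

# Local linear regression with separate slopes on each side;
# the coefficient on `treated` estimates the jump at the cutoff.
model = smf.ols("contribution ~ treated * size_centered", data=near).fit()
print(model.params["treated"])  # should be near the true jump of 50
```

Because the forcing variable is centered at the cutoff, the `treated` coefficient reads off the size of the discontinuity at exactly 300 employees.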
Instructions
Take a look at the plot in the learning environment, which shows the contribution matching program data. The number of employees (forcing variable) is on the x-axis, and the average monthly contribution (outcome variable) is on the y-axis.
Do you think that companies with 5 employees are similar to companies with 600? What about companies with 200 employees versus 400 employees?
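If you want to reproduce a plot like the one described, here is a sketch using the same simulated stand-in data as above (the real dataset lives in the learning environment):

```python
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(42)

# Simulated stand-in: contributions jump at the 300-employee cutoff.
size = rng.integers(50, 600, size=200)
treated = (size > 300).astype(int)
contribution = 150 + 0.2 * size + 50 * treated + rng.normal(0, 15, size=200)

# Forcing variable on the x-axis, outcome on the y-axis,
# with a vertical line marking the 300-employee cutoff.
plt.scatter(size, contribution, c=treated, cmap="coolwarm", alpha=0.7)
plt.axvline(x=300, linestyle="--", color="gray")
plt.xlabel("Number of employees")
plt.ylabel("Average monthly contribution ($)")
plt.title("Contribution matching program data")
plt.show()
```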