Learn

So why are there two different types for decimal numbers in C? The short answer is that there are different types for different situations.

A float has less precision than a double, roughly 6 versus 15 significant decimal digits respectively, and therefore takes up less memory (4 vs. 8 bytes). A double, however, runs faster on many systems, so you gain speed at the cost of more memory usage.
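
If you want to check these numbers on your own machine, here is a small standalone sketch (not part of the exercise code) that prints the size of each type with sizeof and the precision constants FLT_DIG and DBL_DIG from <float.h>:

#include <stdio.h>
#include <float.h>

int main(void) {
  // Sizes are typically 4 bytes for float and 8 for double,
  // but the exact values depend on your platform.
  printf("float:  %zu bytes, %d significant decimal digits\n",
         sizeof(float), FLT_DIG);
  printf("double: %zu bytes, %d significant decimal digits\n",
         sizeof(double), DBL_DIG);
  return 0;
}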

The other thing to be aware of is that the system rounds the values you store in either type. This can cause unexpected results, especially with a float, since it has less precision. This is why you will see double used whenever accuracy is important, such as in scientific, medical, or financial applications.
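
To see that rounding directly, here is a minimal sketch (again, separate from the exercise) that stores 0.1 in each type and then prints more digits than either type can actually hold:

#include <stdio.h>

int main(void) {
  float  f = 0.1f; // kept to roughly 6-7 significant digits
  double d = 0.1;  // kept to roughly 15-16 significant digits

  // Asking for 20 decimal places exposes the rounding that
  // happened when each value was stored.
  printf("float:  %.20f\n", f); // prints something like 0.10000000149011611938
  printf("double: %.20f\n", d); // prints something like 0.10000000000000000555
  return 0;
}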

Instructions

1.

In this example, much of the code is already in place. Don't worry if you can't follow everything that is going on yet; you will learn about loops in a later lesson. For now, experiment with numOfLoops and keep increasing its value until something unusual happens with the output, specifically the float value.

In general, the program takes a double and a float and adds 0.1 to each, numOfLoops times. So if you set numOfLoops to 10, each variable has 0.1 added ten times (0.1 x 10), and the output should be 1.0. Both float and double give you this value at first, but keep increasing numOfLoops and the float will start to give unexpected results, showing its lower precision and accumulated rounding error.
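
If it helps to see the idea in code, here is a rough sketch of the kind of program the exercise uses; only numOfLoops comes from the lesson, the other names are placeholders, and the real exercise code may look different:

#include <stdio.h>

int main(void) {
  int numOfLoops = 10; // try 10, then much larger values like 1000000

  float  floatTotal  = 0.0f;
  double doubleTotal = 0.0;

  // Add 0.1 to each variable numOfLoops times.
  for (int i = 0; i < numOfLoops; i++) {
    floatTotal  += 0.1f;
    doubleTotal += 0.1;
  }

  // With small numOfLoops both print the expected value;
  // with a large enough numOfLoops the float total visibly drifts.
  printf("float total:  %f\n", floatTotal);
  printf("double total: %f\n", doubleTotal);
  return 0;
}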
