1, and so on. Depending on the language, numbers with decimals are of a different type. Floating-point arithmetic involves a trade-off between precision and range: the number of bits allocated to the floating-point representation determines the range of values that can be represented and how precisely.
In many languages, 0.1 + 0.2 evaluates to 0.30000000000000004, whereas adding whole numbers doesn't produce this kind of error. But you also need to keep the integer limit in mind: the largest value an integer type can represent exactly. Past that limit, arithmetic overflows or silently loses precision.
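Both effects can be demonstrated in a few lines. Here is a small Python sketch (Python's own integers are arbitrary precision, so the 64-bit limit shown is the one a language like C or Java would impose):

```python
import math

# Floating-point precision: 0.1 and 0.2 have no exact binary
# representation, so their sum is slightly off.
total = 0.1 + 0.2
print(total)             # 0.30000000000000004
print(total == 0.3)      # False

# Compare with a tolerance instead of exact equality.
print(math.isclose(total, 0.3))  # True

# Integer limit: a 64-bit signed integer tops out at 2**63 - 1.
INT64_MAX = 2**63 - 1
print(INT64_MAX)         # 9223372036854775807

# Floats lose exact integer precision above 2**53: the next
# integer after 2**53 cannot be represented as a double.
print(float(2**53) == float(2**53 + 1))  # True
```

This is why comparing floats with `==` is discouraged: a tolerance-based comparison such as `math.isclose` accounts for the accumulated representation error.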