Basically, precision.

Float: ~7 significant digits (32-bit)

Double: ~15-16 significant digits (64-bit)

Decimal: 28-29 significant digits (128-bit)

Float and double are binary floating-point types, so they round any value that has no exact binary representation. For that reason, they are recommended when you don't care about a small rounding error here or there. They are widely used for scientific calculations.
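The answer is about C#'s types, but the same binary rounding shows up in any language whose floats follow IEEE 754. A minimal Python sketch of the "rounding here or there":

```python
# 0.1 and 0.2 have no exact binary representation, so the sum is rounded.
a = 0.1 + 0.2
print(a)         # 0.30000000000000004, not 0.3
print(a == 0.3)  # False
```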

Decimal is different: it stores values in base 10, so we use it when we need the exact decimal value to be preserved. We usually want this when we are working with money, right?
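To see the difference concretely, here is a small Python sketch using `decimal.Decimal` as a stand-in for C#'s `decimal` (both are base-10 types, though their precision limits differ):

```python
from decimal import Decimal

# Three payments of 0.10 added as binary floats drift away from 0.30...
print(0.1 + 0.1 + 0.1 == 0.3)  # False

# ...but added as base-10 decimals, the result is exact.
print(Decimal("0.1") + Decimal("0.1") + Decimal("0.1") == Decimal("0.3"))  # True
```

Note that the decimals are built from strings: `Decimal(0.1)` would capture the already-rounded binary value.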

Because of that extra accuracy, decimal arithmetic is done in software rather than directly in hardware, so working with decimals is noticeably slower.

In terms of magnitude, a double can store much larger numbers than a decimal, but with less precision.

A decimal covers a smaller range, but with more precision.
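A quick Python sketch of that trade-off. The range figures are for IEEE 754 doubles (C#'s `double` tops out near ±1.8×10^308, while `decimal` tops out near ±7.9×10^28); the second line shows the price paid for that huge range:

```python
import sys

# A 64-bit double reaches roughly 1.8e308...
print(sys.float_info.max)      # 1.7976931348623157e+308

# ...but carries only ~15-16 significant digits, so at 10^16
# adding 1.0 is simply lost to rounding.
print(10**16 + 1.0 == 10**16)  # True
```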