The go-to type for any financial value is decimal - it was designed to hold a large number of significant digits (28-29) at full precision. The main issue with using float or double (apart from them being only a quarter and half the size of decimal in bits, respectively) is that they store values internally as binary (base-2) fractions, whereas decimal stores values as base-10 fractions, which maps much more naturally onto money. For example, 0.1 written as a base-2 fraction is a non-terminating (repeating) binary fraction, so it can't be represented exactly and gets rounded to the nearest float or double. Those rounding errors can accumulate until the results are unfit for purpose.
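To make the accumulation point concrete, here's a minimal sketch (the class name MoneyRoundingDemo is just illustrative, not from any article) that sums 0.1 ten times with both types:

```csharp
using System;

class MoneyRoundingDemo
{
    static void Main()
    {
        double doubleTotal = 0;
        decimal decimalTotal = 0;

        for (int i = 0; i < 10; i++)
        {
            doubleTotal += 0.1;    // accumulates a tiny base-2 rounding error each iteration
            decimalTotal += 0.1m;  // 0.1m is stored exactly as a base-10 fraction
        }

        Console.WriteLine(doubleTotal == 1.0);          // False
        Console.WriteLine(decimalTotal == 1.0m);        // True
        Console.WriteLine(doubleTotal.ToString("G17")); // 0.99999999999999989 - just under 1
        Console.WriteLine(decimalTotal);                // 1.0
    }
}
```

Ten additions is already enough for the double total to fail a straight equality check against 1.0, which is exactly the kind of surprise you don't want in billing or ledger code.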
This article is worth reading to dig deeper into the main differences between these types:
https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/floating-point-numeric-types#characteristics-of-the-floating-point-types
Particularly this paragraph:
The decimal type is appropriate when the required degree of precision is determined by the number of digits to the right of the decimal point. Such numbers are commonly used in financial applications, for currency amounts (for example, $1.00), interest rates (for example, 2.625%), and so forth. Even numbers that are precise to only one decimal digit are handled more accurately by the decimal type: 0.1, for example, can be exactly represented by a decimal instance, while there's no double or float instance that exactly represents 0.1
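And a quick check of the quoted point about 0.1 itself (again just an illustrative snippet, not taken from the docs):

```csharp
using System;

class ExactTenthDemo
{
    static void Main()
    {
        Console.WriteLine(0.1m);                // 0.1 (represented exactly as a decimal)
        Console.WriteLine(0.1.ToString("G17")); // 0.10000000000000001 (nearest double, shown to 17 digits)
    }
}
```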