Is there any way to fix the precision problems in C# without using the decimal datatype?

Hrishikesh P 0 Reputation points

So I'm using the double datatype for most of my calculations, and when I compare my results with the calculations from an online calculator, there are sometimes differences in the decimal part of the number. Also, even during calculations, 3 * 3.5 = 9.600000000000001. Is there any way I can fix these mistakes without using decimal as my datatype?


2 answers

  1. Olaf Helper 34,286 Reputation points

    "Double" is an imprecise floating-point data type, and calculation results can have more decimal places than expected.

    Also even during calculations 3 * 3.5 = 9.600000000000001.

    3 * 3.5 = 10.5; if you get 9.6, then you are doing something completely wrong.
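    A minimal sketch of the distinction: 3.5 (7/2) is exactly representable in binary, so that product is exact, while a nearby value like 3.2 is not. (The 3.2 example here is my own illustration; the question's printed result is consistent with a product involving such a non-representable operand.)

    ```csharp
    using System;

    class Program
    {
        static void Main()
        {
            // 3.5 is exactly representable in base 2, so this product is exact.
            Console.WriteLine(3 * 3.5);               // 10.5

            // 3.2 is NOT exactly representable in base 2, so the product
            // picks up a tiny rounding error and is not exactly 9.6.
            Console.WriteLine((3 * 3.2).ToString("R"));
            Console.WriteLine(3 * 3.2 == 9.6);        // False
        }
    }
    ```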


  2. Bruce 44,631 Reputation points

    The issue is that double is a floating-point number, but in base 2 instead of base 10. You probably know that in base 10, 1/3 = 0.33333…; in base 2 there are other repeating fractions. 1/10 in base 2 is a repeating fraction, which makes floating point bad for money calculations.
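    The repeating-fraction point can be seen directly; a small sketch:

    ```csharp
    using System;

    class Program
    {
        static void Main()
        {
            // Neither 0.1 nor 0.2 is exact in base 2, so the sum is not exactly 0.3.
            double sum = 0.1 + 0.2;
            Console.WriteLine(sum == 0.3);          // False
            Console.WriteLine(sum.ToString("R"));   // shows the trailing error

            // Adding 0.1 ten times does not land exactly on 1.0 either.
            double total = 0.0;
            for (int i = 0; i < 10; i++) total += 0.1;
            Console.WriteLine(total == 1.0);        // False
        }
    }
    ```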

    The decimal datatype gets around this by storing an integer value and an implied decimal point. The math operations are done as integer math, then rounded to the scaling factor.
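    Because decimal is scaled base 10, the same comparison that fails for double succeeds for decimal:

    ```csharp
    using System;

    class Program
    {
        static void Main()
        {
            // decimal stores a base-10 integer plus a scale, so 0.1m is exact.
            decimal d = 0.1m + 0.2m;
            Console.WriteLine(d == 0.3m);   // True

            // The equivalent double comparison fails.
            double f = 0.1 + 0.2;
            Console.WriteLine(f == 0.3);    // False
        }
    }
    ```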

    The usual fix when using floating point is to first multiply by a scale factor (10 raised to the number of decimal places you want), round, do the calculation, and then divide by the scale factor.
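    A minimal sketch of that scale/round/unscale workaround, assuming two decimal places are wanted (the names and the 0.1 + 0.2 example are mine; note this only masks error beyond the chosen precision, it does not make double exact):

    ```csharp
    using System;

    class Program
    {
        static void Main()
        {
            double raw = 0.1 + 0.2;                  // carries a tiny base-2 error

            // Scale to 2 decimal places, round to a whole number, then scale back.
            const double scale = 100.0;
            double fixedUp = Math.Round(raw * scale) / scale;

            Console.WriteLine(fixedUp == 0.3);       // True
            Console.WriteLine(fixedUp);              // 0.3
        }
    }
    ```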
