Precision and accuracy of DateTime

The DateTime struct represents dates as a 64-bit number that measures the number of “ticks” since a particular start date. Ten million ticks equals one second.
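
For instance, a minimal sketch of what that representation means, using the standard TimeSpan.TicksPerSecond constant:

using System;

class TickDemo
{
    static void Main()
    {
        // One tick is 100 nanoseconds; ten million ticks make one second.
        Console.WriteLine(TimeSpan.TicksPerSecond); // 10000000

        // A DateTime is just a tick count measured from its start date,
        // DateTime.MinValue (midnight, January 1, 0001).
        DateTime oneSecondIn = new DateTime(TimeSpan.TicksPerSecond);
        Console.WriteLine(oneSecondIn); // one second after DateTime.MinValue
    }
}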

That’s quite a high degree of precision. You can represent dates and times to sub-microsecond precision with a DateTime, which is typically more precision than you need. Not always, of course; on modern hardware you can probably execute a couple hundred instructions in one tick, so if you want timings at the level of precision needed to talk about individual instructions, the tick is too coarse a measure.

The problem that arises with having that much precision is of course that it is very easy to assume that a given value is as accurate as it is precise. But that’s not warranted at all! I can represent my height in a double-precision floating point number as 1.799992352094 metres; though precise to a trillionth of a metre, it’s only accurate to about a hundredth of a metre, because I do not have a device which can actually measure my height to a trillionth of a metre, or even a thousandth of a metre. There is way more precision than accuracy here.

The same goes for dates and times. Your DateTime might have precision down to the sub-microsecond level, but does it have accuracy? I synchronize my computers with time.gov fairly regularly. But if I don’t do so, their clocks typically wander by a couple of seconds a year. Suppose my clock loses one second a year. There are 31.5 million seconds in a year and 10 million ticks in a second, so it is losing one tick every 3.15 seconds. Even if my clock were miraculously accurate down to the level of a tick at some point, within ten seconds it’s already well off. Within a day much of the precision will be garbage.
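
The arithmetic, as a quick sketch:

using System;

class DriftMath
{
    static void Main()
    {
        // A clock losing one second per year loses ten million ticks
        // spread over roughly 31.5 million seconds.
        const double secondsPerYear = 31.5e6;
        const double ticksPerSecond = 10e6;
        Console.WriteLine(secondsPerYear / ticksPerSecond); // ~3.15 seconds per lost tick
    }
}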

If you do a little experiment you’ll see that the operating system actually gives you roughly a hundred thousand times less accuracy than precision when asked “what time is it?”

long ticks = DateTime.Now.Ticks;
while (true)
{
    if (ticks != DateTime.Now.Ticks)
    {
        // The reported time finally changed; remember and display it.
        ticks = DateTime.Now.Ticks;
        Console.WriteLine(ticks);
    }
    else
    {
        // The clock still reports exactly the same tick count.
        Console.WriteLine("same");
    }
}

On my machine this says “same” eight or nine times, and then suddenly the Ticks property jumps by about 160000, which is 16 milliseconds, a 64th of a second. (Different flavours of Windows might give you different results, depending on details of their thread timing algorithms and other implementation details.)

As you can see, the clock appears to be precise to the sub-microsecond level, but in practice it is only precise to about 16 milliseconds. (And of course whether it is accurate to that level depends on how accurately the clock is synchronized to the official time signal.)

Is this a flaw in DateTime.Now? Not really. The purpose of the “wall clock” timer is to produce dates and times for typical real-world uses, like “what time does Doctor Who start?” or “when do we change to daylight saving time?” or “show me the documents I edited last Thursday after lunch.” These are not operations that require sub-microsecond accuracy.

(And incidentally, in VBScript the “wall clock” timer methods built into the language actually round off times we get from the operating system to the nearest second, not the nearest 64th of a second.)

In short, the question “what time is it?” really should only be answered to a level of precision that reflects the level of accuracy inherent in the system. Most computer clocks are not accurately synchronized to even within a millisecond of official time, and therefore precision beyond that level of accuracy is a lie. It is rather unfortunate, in my opinion, that the DateTime structure does surface as much precision as it does, because it makes it seem like operations on that structure ought to be accurate to that level too. But they almost certainly are not that accurate.

Now, the question “how much time has elapsed from start to finish?” is a completely different question than “what time is it right now?” If the question you want to ask is about how long some operation took, and you want a high-precision, high-accuracy answer, then use the Stopwatch class. It really does have very high precision (its resolution is reported by Stopwatch.Frequency) and accuracy that is close to its precision.
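
For example, a minimal sketch of timing an operation with Stopwatch (SomeOperation is a placeholder):

using System;
using System.Diagnostics;
using System.Threading;

class TimingDemo
{
    static void Main()
    {
        // Stopwatch reads the high-resolution performance counter,
        // not the wall clock.
        Stopwatch watch = Stopwatch.StartNew();
        SomeOperation();
        watch.Stop();

        Console.WriteLine("Elapsed: {0} ms", watch.Elapsed.TotalMilliseconds);
        Console.WriteLine("Counter frequency: {0} counts/sec", Stopwatch.Frequency);
    }

    static void SomeOperation()
    {
        Thread.Sleep(100); // placeholder workload for illustration
    }
}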

Remember, you don’t need to know what time it is to know how much time has elapsed. Those can be two different things entirely.

Comments

  • Anonymous
    April 07, 2010
The problem with the Stopwatch class is that, while it is extremely precise, it is not guaranteed to be accurate. The source it uses for its tick count may be different on different CPUs, causing incorrect results when you stop the clock on a different CPU than you start it on. Furthermore, it may count at a different frequency in power-saving modes, which could be perfect for microbenchmarking code, but useless as an indicator of when an Ethernet packet arrived.

    I would also add that I rather like that DateTime has so much precision built in (even if it implies that DateTime.Now has more precision than it does), so I can use the same data structures and functions on data that represents birthdays, system times, and when Ethernet packets arrived. This is much preferable to other systems that require different representations and thus different libraries for each of those situations.

  • Anonymous
    April 08, 2010
I don't know that I would go so far as to say DateTime supporting more precision than is provided by the hosting hardware platform is a bad thing. At no extra cost to developers DateTime can support more precise hosts in the future; in my book that's a good thing. --Ifeanyi Echeruo

  • Anonymous
    April 08, 2010
Stopwatch is definitely one of those great little utility classes that many people are under-aware of. Being able to roll your own code performance timer using Stopwatch is invaluable in cases where you want to profile a very narrow area of code and you don't have time to break out an actual code profiler. Something that often goes hand in hand with Stopwatch is MethodBase.GetCurrentMethod(), which reports the reflection info of the currently executing method. Unfortunately, you can't centralize this into a utility helper method - since GetCurrentMethod() reports the method actually running. What would be nice would be a GetCallingMethod() method that looks at the call stack frame just above the current one. You can of course write your own stack-frame walking code ... but who wants to do that :)

  • Anonymous
    April 08, 2010
    @Leo Bushkin: new StackFrame(1).GetMethod()
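
    That one-liner is the whole trick; as a sketch, the helper Leo described (the name GetCallingMethod is hypothetical, and note that JIT inlining can make stack-frame walking unreliable in release builds):

    using System.Diagnostics;
    using System.Reflection;

    static class CallerInfo
    {
        public static MethodBase GetCallingMethod()
        {
            // Frame 0 is this helper; frame 1 is whoever called it, which
            // is exactly what GetCurrentMethod() cannot report from inside
            // a shared helper.
            return new StackFrame(1).GetMethod();
        }
    }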

  • Anonymous
    April 08, 2010
    The irony is that we've taken this on board for Noda Time as well - as the common "smallest unit of time" in .NET is a tick, we felt we needed to support that in Noda too. Joda Time - which we're porting from - only supports down to millisecond precision. On the other hand, I suppose it means we can use the same types for stopwatch measurements and other measurements. My biggest gripe about Stopwatch is that it uses ticks as well - but to mean something entirely different from ticks in DateTime/TimeSpan. Grrr. Fun fact: for time zones, we actually represent offsets in milliseconds. It's possible that that's overkill - seconds would probably have been okay. Minutes wouldn't have been, however - there have been time zones with offsets from UTC of "9 minutes and 21 seconds" and similar. I don't have too much problem with DateTime having too much accuracy, so long as everyone knows it.

  • Anonymous
    April 08, 2010
    Doh - amendment to final comment... I don't have a problem with DateTime having too much precision so long as everyone knows it. Precision, not accuracy.

  • Anonymous
    April 08, 2010
    Pass GetCurrentMethod() as a parameter to the logging function.

  • Anonymous
    April 08, 2010
An excellent (as usual) post. One additional item to mention is to NOT use DateTime.Now for calculations. Two reasons (in no particular order):

  1. it is MUCH higher overhead than DateTime.UtcNow
  2. It WILL give you errors in most US locations twice a year. I actually was involved with one company (Eastern US) who used "Bogota" time to avoid the time jumping as Daylight Savings kicked on/off. The side effect was that ALL computer clocks were off by 1 hour during the summer.....DELIBERATELY!!!
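
  A sketch of the second pitfall; the spring-forward date below is illustrative (US Eastern, 2010):

    using System;

    class UtcDemo
    {
        static void Main()
        {
            // 01:30 and 03:30 local look two hours apart, but only one real
            // hour elapsed, because the local clock jumped from 02:00 to 03:00.
            DateTime before = new DateTime(2010, 3, 14, 1, 30, 0, DateTimeKind.Local);
            DateTime after = new DateTime(2010, 3, 14, 3, 30, 0, DateTimeKind.Local);
            Console.WriteLine(after - before); // 02:00:00 -- wrong as elapsed time

            // UTC has no such jumps, so differences of UtcNow readings
            // measure real elapsed time.
            DateTime start = DateTime.UtcNow;
            // ... do work ...
            DateTime finish = DateTime.UtcNow;
            Console.WriteLine(finish - start);
        }
    }
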
  • Anonymous
    April 08, 2010
About DateTime - very old thing :) Richter mentioned it in his book; as I remember, it's because of the standard Windows Win32 timer, not because of .NET or DateTime.

  • Anonymous
    April 08, 2010
Something that is really missing is a way to get a precision counter (HPET) reading at the precise moment when the system datetime counter was last incremented (other than looping to check whether the datetime has changed). This would make it far easier to implement a good time synchronization scheme.

  • Anonymous
    April 08, 2010
    "The problem with the StopWatch class is that, while it is extremely precise, it is not guaranteed to be accurate." Thankfully, this isn't quite true. There are computer systems with faulty BIOS for which StopWatch can suffer the problems described. But that's not a normal affair. On a correctly working system, StopWatch is fine within the documented limits of the class.

  • Anonymous
    April 08, 2010
How about this? Let's say I have to do something every 1 minute. Here are two possible ways that I can solve the problem:

    Solution A
      DO it
      WAIT 1 minute
      DO it
      WAIT 1 minute
      ...

    Solution B (assume I start at 12:00)
      DO it
      WAIT until 12:01
      DO it
      WAIT until 12:02
      ...

    But, if what has to be done takes a noticeable amount of time, let's say 30 seconds, then the result from Solution A will be very different from Solution B.

      Time          Solution A      Solution B
      --------      ----------      ----------
      12:00:00      DO it           DO it
      12:00:30      WAIT 1 min      WAIT until 12:01
      12:01:00                      DO it
      12:01:30      DO it           WAIT until 12:02
      12:02:00      WAIT 1 min      DO it
      12:02:30                      WAIT until 12:03
      12:03:00      DO it           DO it
      12:03:30      WAIT 1 min      WAIT until 12:04
      12:04:00                      DO it
      ...           ...             ...
      much later    gets worse      still on schedule

    I've met someone who had a similar problem, used Solution A (or something like it), but was expecting to have results similar to Solution B. When he told me his story, I thought to myself, hey, that's kind of like the problem with dead reckoning. But how else can this problem of drift be minimized? I thought a clock would be a good point of reference to use to get back on course.
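
    A minimal sketch of Solution B, assuming the work always takes less than the interval (DoIt is a placeholder):

    using System;
    using System.Threading;

    class FixedSchedule
    {
        static void Main()
        {
            TimeSpan interval = TimeSpan.FromMinutes(1);
            // Track absolute deadlines rather than sleeping a relative
            // interval, so the run time of DoIt() cannot accumulate as drift.
            DateTime next = DateTime.UtcNow;
            while (true)
            {
                DoIt();
                next += interval;
                TimeSpan wait = next - DateTime.UtcNow;
                if (wait > TimeSpan.Zero)
                    Thread.Sleep(wait);
            }
        }

        static void DoIt()
        {
            // Placeholder for the periodic work.
        }
    }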

  • Anonymous
    April 08, 2010
    "I don't know that I would go so far as to say DateTime supporting more preciscion than is provided by the hosting hardwareplatform is a bad thing. At no extra cost to developers DateTime can support more precise hosts in the future, in my book that's a good thing." Yeah, big ++ to this (and glad that Datetime2 exists now in sql server) It is entirely possible to get better precision from other devices, having the precision in the standard struct is so much nicer, and meant that our use of such devices didn't involve a tedious replacement of DateTime with some other one everywhere... I viewed MS's provision of DateTime with this level of precision in storage as very forward thinking and extremely sensible, I'm surprised you don't think it was a good idea. Note that if MS ever start using the HPET timer in modern systems for the system time (as many linux distributions now do) they will almost instantly start hitting well beyond millisecond level precision. Even when the accuracy is off the offset is likely to remain within this level of precision over the course of a day so it is still useful. In fact given there truly being no one true time in a relativistic sense the ability to have decent local precision right now is really why having it baked into the struct is a good thing. The next issue will be people assuming this level of precision in DateTimes/TimeSpans means that they can request waits/pauses of OS level constructs with that level of granularity. That they can do this already with milliseconds being the precision 'exposed' by Sleep() but the scheduler only giving 10-15ms of actually granularity as it stands suggests that it is these constructs that need either better documentation or more 'obviousness' in the type of their arguments.

  • Anonymous
    April 08, 2010
    > what time does Doctor Who start? Given The Doctor can't distinguish 5 minutes and 10 years, I think there is a whole different accuracy argument when it comes to Time Lords. (And the answer is 2010-04-10T18:15 BST.)

  • Anonymous
    April 09, 2010
    I think most of these issues would go away when they finally start installing atomic clocks on the motherboard. http://en.wikipedia.org/wiki/Atomic_clock

  • Anonymous
    April 12, 2010
Unfortunately, Stopwatch does not exist in Silverlight 3 or 4, and the only way I see to implement it is using DateTime.Now. Is there another way to get a more precise elapsed time measurement on Silverlight? [You'll have to ask someone who is an expert on Silverlight. I wouldn't know. -- Eric]

  • Anonymous
    April 14, 2010
@francis d: You're not considering the time it takes to "DO it"; thus you're accumulating drift each time you perform a "DO it" operation.

  • Anonymous
    April 14, 2010
Hi Eric! What you write is right, but I'd see the accuracy of DateTime from another viewpoint. I've built an app displaying historical data (generated by a SCADA) that records thousands of samples marked by a timestamp. Obviously our hardware can't guarantee tenths of a microsecond either, but I may have to record samples generated by a 10MHz (and over) source, for example. This isn't such a special case... In that case, probably not even the 0.1 us resolution would be enough. Good article, anyway. Cheers

  • Anonymous
    May 28, 2010
francis: it's because your friend's process doesn't start waiting 1 minute from the beginning of the minute, but AFTER the process runs, which is indeterminate; i.e., the process that runs may take 1 second one time (no work performed) then 30 seconds the next time. Your 'drift' is due to the process run time, and using a method like "Sleep(1min)" will cause your process to drift indeterminately because of it.

    Process A
      12:00:00 run.
      12:01:01 first run complete [WAIT 1 MINUTE]
      12:02:01 run.
      12:02:31 second run complete [WAIT 1 MINUTE]
      12:03:31 run.

  • Anonymous
    June 24, 2010
    Do not use DateTime; use DateTimeOffset instead. "DateTimeOffset should be considered the default date and time type for application development" msdn.microsoft.com/.../bb384267.aspx
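
    A minimal sketch of what DateTimeOffset adds over DateTime:

    using System;

    class OffsetDemo
    {
        static void Main()
        {
            // DateTimeOffset records the offset from UTC along with the
            // time, so the instant it names is unambiguous.
            DateTimeOffset now = DateTimeOffset.Now;
            Console.WriteLine(now);             // e.g. 4/8/2010 10:30:00 AM -07:00
            Console.WriteLine(now.UtcDateTime); // the same instant in UTC
        }
    }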