Monday, January 1, 2024

C++ time_point wackiness across platforms

It's a new year, so let's talk about some more time-related shenanigans. This one comes from the world of writing C++ for multiple platforms. A couple of weeks ago, I was looking at some of my code and couldn't remember why it did something goofy-looking. It's a utility function that runs stat() on a target path and returns the mtime as a std::chrono::system_clock::time_point. This is nicer than returning a time_t, since a time_point can carry sub-second precision.

The trick is getting it out of a "struct stat" and into that time_point. The integer part is simple enough: you use from_time_t on the tv_sec field. But then you have to get the nanoseconds (tv_nsec) from that struct into your time_point. What do you do?

The "obvious" answer sounds something like this: add std::chrono::nanoseconds(foo.tv_nsec) to your time_point. It even works in a few places! It just doesn't work everywhere. On a Mac, it'll blow up with a nasty compiler error. Good luck trying to make sense of this the first time you see it:

exp/tp.cc:14:6: error: no viable overloaded '+='
  tp += std::chrono::nanoseconds(2345);
  ~~ ^  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/c++/v1/__chrono/time_point.h:65:73: note: candidate function not viable: no known conversion from 'duration<[...], ratio<[...], 1000000000>>' to 'const duration<[...], ratio<[...], 1000000>>' for 1st argument
    _LIBCPP_INLINE_VISIBILITY _LIBCPP_CONSTEXPR_SINCE_CXX17 time_point& operator+=(const duration& __d) {__d_ += __d; return *this;}

Nice, right? It tells you that something's wrong, but the chances of someone figuring it out quickly are pretty slim. For the benefit of anyone else who encounters this, it boils down to this: a system_clock::time_point on that platform isn't fine-grained enough to represent nanoseconds (libc++'s system_clock ticks in microseconds), and the library is keeping you from silently throwing away precision.
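You can see what your platform's clock can actually represent by printing its tick period. A quick probe (the ratios in the comments are what I'd expect from libc++ and libstdc++, respectively):

#include <stdio.h>

#include <chrono>

int main() {
  using period = std::chrono::system_clock::period;

  // libc++ (Macs): prints 1/1000000, i.e. microsecond ticks.
  // libstdc++ (typical Linux): prints 1/1000000000, i.e. nanosecond ticks.
  printf("system_clock tick: %lld/%lld of a second\n",
         (long long) period::num, (long long) period::den);

  return 0;
}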

To make it happy, you have to jam it through a duration_cast and just accept the loss of precision - you're basically shaving off the last three digits, so instead of something like 0.111222333 seconds, your time will appear as 0.111222 seconds. The nanoseconds are gone.

I assume there are other platforms out there whose clocks don't support microseconds or even milliseconds, in which case you'd hit the same problem trying to "just add" those to a system_clock::time_point.

At any rate, here's a little bit of demo code to show what I'm talking about. As-is, it'll run on Linux boxes and Macs, and it'll show slightly different results.

#include <stdio.h>

#include <chrono>

int main() {
  std::chrono::system_clock::time_point tp =
      std::chrono::system_clock::from_time_t(1234567890);

  // Okay.
  tp += std::chrono::milliseconds(1);

  // No problem here so far.
  tp += std::chrono::microseconds(1);

  // But... this fails on Macs:
  // tp += std::chrono::nanoseconds(123);

  // So you adapt, and this works everywhere.  It slices off some of that
  // precision without any hint as to why or when, and it's ugly too!

  tp += std::chrono::duration_cast<std::chrono::system_clock::duration>(
      std::chrono::nanoseconds(123));

  // Something like this swaps the horizontal verbosity for vertical
  // stretchiness (and still slices off that precision).

  using std::chrono::duration_cast;
  using std::chrono::system_clock;
  using std::chrono::nanoseconds;

  tp += duration_cast<system_clock::duration>(nanoseconds(123));

  // This is what you end up with:

  auto tse = tp.time_since_epoch();

  printf("%lld\n", (long long) duration_cast<nanoseconds>(tse).count());

  // Output meaning when split up:
  //
  //        sec        ms  us  ns
  //
  // macOS: 1234567890 001 001 000  <-- 000 = loss of precision (246 ns)
  //
  // Linux: 1234567890 001 001 246  <-- 246 = 123 + 123 (expected)
  //

  return 0;
}

To bring this full-circle, that's why I have that ugly thing in my code to handle the addition of the tv_nsec field. Without it, the code doesn't even compile on a Mac.
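If you're curious, here's roughly the shape of it as a sketch (the function name, the #ifdef, and the std::optional error handling are illustrative, not what my actual utility looks like):

#include <sys/stat.h>

#include <chrono>
#include <optional>
#include <string>

std::optional<std::chrono::system_clock::time_point> mtime_of(
    const std::string& path) {
  struct stat st;
  if (stat(path.c_str(), &st) != 0) {
    return std::nullopt;  // caller deals with the error
  }

#ifdef __APPLE__
  const struct timespec& mt = st.st_mtimespec;  // the Mac spelling
#else
  const struct timespec& mt = st.st_mtim;       // the POSIX.1-2008 spelling
#endif

  auto tp = std::chrono::system_clock::from_time_t(mt.tv_sec);

  // The ugly part: squeeze tv_nsec down to whatever the platform's
  // system_clock can hold, truncating if it can't do nanoseconds.
  tp += std::chrono::duration_cast<std::chrono::system_clock::duration>(
      std::chrono::nanoseconds(mt.tv_nsec));

  return tp;
}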

Stuff like this is why comments can be very important after the fact.