High-Resolution Timer vs System Clock: When Millisecond Precision Isn’t Enough
What they are
- System clock (absolute clock): provides wall-clock time (e.g., system time / time-of-day). Typical APIs: CLOCK_REALTIME, GetSystemTimeAsFileTime, gettimeofday(). Resolution often microseconds or worse; subject to adjustments (NTP, manual set) and can jump.
- High-resolution timer (performance/difference clock): provides a monotonic counter for measuring intervals. Typical APIs: clock_gettime(CLOCK_MONOTONIC / CLOCK_MONOTONIC_RAW) on Unix, QueryPerformanceCounter on Windows, clock_gettime(CLOCK_HIGHRES) on some systems (e.g., Solaris). Higher resolution (µs–ns range) and monotonic: never stepped by NTP or a manual clock set, though CLOCK_MONOTONIC may still be gently rate-slewed by NTP; CLOCK_MONOTONIC_RAW is not.
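The contrast is easy to see from code. A minimal Python sketch: `time.time()` reads the system clock (a meaningful absolute timestamp that can jump), while `time.perf_counter()` reads a high-resolution monotonic counter whose absolute value is arbitrary and only differences are meaningful.

```python
import time

# Wall-clock (system clock): a meaningful absolute timestamp, but it can
# jump backwards or forwards if NTP steps the clock or someone sets it.
wall = time.time()  # seconds since the Unix epoch (float)

# High-resolution monotonic counter: the absolute value is arbitrary;
# only the difference between two reads is meaningful.
t0 = time.perf_counter()
t1 = time.perf_counter()
elapsed = t1 - t0  # always >= 0, unaffected by NTP steps or manual sets

print(f"wall clock: {wall:.6f} s since epoch")
print(f"perf_counter delta: {elapsed * 1e9:.0f} ns")
```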
Key differences (practical)
- Purpose: system clock = current date/time; high-res timer = precise elapsed intervals. Use high-res for benchmarking, animations, scheduling short timeouts.
- Monotonicity: high-res timers are monotonic (won’t go backwards when system time changes). System clock can jump.
- Resolution vs accuracy vs stability:
  - Resolution = smallest distinguishable unit (ticks). High-res counters often offer nanosecond or sub-microsecond resolution.
  - Accuracy = closeness to true time; system clock accuracy depends on synchronization to an external reference.
  - Stability (drift) can differ: some high-res sources (e.g., QPC) may drift relative to system time or be affected by CPU frequency changes or virtualization quirks.
- Access/overhead: reading a high-res counter has nonzero access time; effective precision = max(resolution, read overhead).
- Power/suspend behavior: some clocks stop during suspend (CLOCK_MONOTONIC) while variants like CLOCK_BOOTTIME include suspend; QueryPerformanceCounter may include time spent in sleep depending on implementation.
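Two of the points above, reported resolution and read overhead, can be checked empirically. A small Python sketch using the standard library: `time.get_clock_info` exposes the advertised resolution, and timing a tight loop of reads approximates the per-call overhead, which bounds the shortest interval you can measure meaningfully.

```python
import time

# Advertised properties of the high-res monotonic clock (may be optimistic).
info = time.get_clock_info("perf_counter")
print("resolution:", info.resolution, "s, monotonic:", info.monotonic)

# Estimate read overhead: average cost of one perf_counter() call.
# Effective precision is roughly max(resolution, this overhead).
N = 100_000
start = time.perf_counter()
for _ in range(N):
    time.perf_counter()
end = time.perf_counter()
overhead_ns = (end - start) / N * 1e9
print(f"approx. read overhead: {overhead_ns:.0f} ns per call")
```

On typical hardware the overhead lands in the tens of nanoseconds, which is why sub-100 ns intervals are hard to measure directly with a single pair of reads.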
Platform notes and pitfalls
- Windows: QueryPerformanceCounter (QPC) offers high resolution and is monotonic, but historical bugs and virtualization or CPU-frequency effects have caused anomalies. QueryPerformanceFrequency gives tick rate; precision limited by access time and hardware. Legacy APIs (GetTickCount/GetTickCount64) have ~15 ms resolution on some systems.
- Linux/Unix: clock_gettime with CLOCK_MONOTONIC/CLOCK_MONOTONIC_RAW or CLOCK_BOOTTIME is recommended for intervals; CLOCK_MONOTONIC_RAW avoids the NTP rate (slew) adjustments that the kernel applies to CLOCK_MONOTONIC. CLOCK_MONOTONIC_COARSE trades precision for a faster read.
- Virtualized environments: hypervisors may virtualize timers with reduced resolution or unexpected leaps; test on target environment.
- APIs and language bindings: prefer language-standard high-resolution monotonic clocks (e.g., time.perf_counter in Python, std::chrono::steady_clock in C++) rather than system-time calls for interval timing.
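On POSIX systems the clock IDs discussed above can be inspected directly. A POSIX-only sketch (the `getattr` guard is there because not every ID exists on every platform, e.g., CLOCK_BOOTTIME is Linux-specific and none of these exist on Windows):

```python
import time

# Compare the POSIX clock IDs discussed above: value and kernel-reported
# resolution. Missing IDs are skipped rather than raising AttributeError.
for name in ("CLOCK_REALTIME", "CLOCK_MONOTONIC",
             "CLOCK_MONOTONIC_RAW", "CLOCK_BOOTTIME"):
    clk = getattr(time, name, None)
    if clk is None:
        continue  # not available on this platform
    res = time.clock_getres(clk)     # reported resolution in seconds
    now = time.clock_gettime(clk)    # current value in seconds
    print(f"{name}: value={now:.6f} s, resolution={res:.0e} s")
```

Note that CLOCK_MONOTONIC and CLOCK_BOOTTIME typically count from boot, so their absolute values differ from CLOCK_REALTIME by design; only CLOCK_REALTIME encodes wall-clock time.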
When to use which
- Use system clock when you need timestamps, logging with human-readable date/time, or synchronization with external time sources.
- Use high-resolution / monotonic timers for:
  - Microbenchmarks and profiling
  - Game loops and animation timing
  - Precise timeouts, low-latency scheduling
  - Measuring short-duration intervals (sub-millisecond)
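The microbenchmark case deserves a sketch, since it combines several points above: use the monotonic high-res clock, amortize read overhead by timing a loop rather than a single call, and take the best of several repeats to suppress scheduler noise. A minimal helper (the `benchmark` function and its parameters are illustrative, not from any library):

```python
import time

def benchmark(fn, repeats=5, loops=10_000):
    """Time fn() with the monotonic high-res clock; report the best run.

    Timing `loops` calls amortizes the clock's read overhead; taking the
    minimum over `repeats` runs filters out scheduler/GC interference.
    """
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        for _ in range(loops):
            fn()
        best = min(best, time.perf_counter() - t0)
    return best / loops  # seconds per call, best-of-repeats

per_call = benchmark(lambda: sum(range(100)))
print(f"~{per_call * 1e9:.0f} ns per call")
```

For serious benchmarking, prefer a dedicated harness (e.g., Python's `timeit`), which applies the same loop-and-minimum strategy with more care.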
Practical recommendations
- Use a monotonic high-resolution clock appropriate to the platform for interval timing: std::chrono::steady_clock (C++), clock_gettime(CLOCK_MONOTONIC[_RAW]) (POSIX), QueryPerformanceCounter on Windows (use timeGetTime/timeBeginPeriod only with care, since raising the timer frequency affects the whole system), time.perf_counter (Python).
- Read frequency/resolution once (if API exposes it) and account for access overhead when measuring very short intervals.
- Prefer CLOCK_MONOTONIC_RAW or QPC for pure interval accuracy; use BOOTTIME variants if you need to include suspend time.
- Test on target hardware and virtualized setups; watch for known platform-specific bugs and document fallback behavior.
- Avoid using system clock for timeout logic; use monotonic timers so time adjustments don’t break deadlines.
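The last recommendation is the one most often violated in practice. A sketch of timeout logic done correctly, computing the deadline from a monotonic clock so that a wall-clock jump cannot stretch or cut short the wait (the `wait_until` helper and its parameters are illustrative):

```python
import time

def wait_until(predicate, timeout_s, poll_s=0.001):
    """Poll predicate() until it returns True or timeout_s elapses.

    The deadline is computed from time.monotonic(), so an NTP step or a
    manual clock set cannot extend or shorten the wait. Computing it from
    time.time() instead could make the timeout fire early or never.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(poll_s)
    return predicate()  # one last check at the deadline

# Example: a condition that never becomes true times out as expected.
ok = wait_until(lambda: False, timeout_s=0.05)
print("result:", ok)
```

The same pattern applies in C++ (deadlines from std::chrono::steady_clock) and in any API that accepts absolute deadlines: always derive them from the monotonic clock.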