Faking Better-Than-Millisecond Resolution with a QTimer

If you’ve ever used a QTimer, you may know that its finest time resolution is 1 millisecond, and that it’s fairly inaccurate even at that scale (yes, even on a Linux box). Here’s how I got a little better performance out of a QTimer while retaining its ease of implementation and Qt-ness.

I was working on a project that needed to send UDP packets over the network at a rate of 114 Hz. That’s one message every ~8.77193 ms. QTimer’s setInterval() method, though, only accepts an integer millisecond parameter. Connecting a SendMessage() slot to the timer’s timeout signal on a 9 ms interval would mean I was only sending messages at a rate of ~111 Hz. Neglecting three messages per second seemed bad.
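For reference, the naive setup looks something like the sketch below. The lambda standing in for a named SendMessage() slot, the dummy payload, and the localhost target are placeholders of mine, not the project's actual code:

```cpp
#include <QCoreApplication>
#include <QTimer>
#include <QUdpSocket>
#include <QHostAddress>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QUdpSocket socket;
    QByteArray payload(64, '\0');      // dummy 64-byte message

    QTimer timer;
    // setInterval() only takes whole milliseconds, so the closest we can get
    // to 8.77193 ms is 9 ms, which works out to ~111 Hz instead of 114 Hz.
    timer.setInterval(9);
    QObject::connect(&timer, &QTimer::timeout, [&]() {
        socket.writeDatagram(payload, QHostAddress::LocalHost, 5000);   // placeholder target/port
    });
    timer.start();

    return app.exec();
}
```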

Being as in love with Qt’s Signals & Slots as I am, I didn’t want to abandon the simple brevity of QTimer’s timeout signal. Nor did I want to delve into possibly subclassing that object. My solution (erm, maybe workaround) is described below.

I created a QTimer with an interval of 8 ms. I connected the timeout signal of this timer to a slot brilliantly named SendMessage(). The timer starts. After about 8 ms, the timeout signal is emitted and the SendMessage() slot is entered. Then! At the beginning of SendMessage() I stop the timer. Before I actually send the UDP packet, though, I enter a tight while loop. In this loop I use Linux’s gettimeofday(2) function to grab a very high-resolution current time, and I compare the delta between it and the exact time I last sent a message against my 8.77193 ms figure. When I’m close, I leave the loop, send the message, record the current high-resolution time, and restart my QTimer. Phew. And then I do that ad infinitum.
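Here’s a minimal sketch of that loop. The 8 ms interval, the stop/busy-wait/send/record/restart sequence, and the gettimeofday(2) delta are taken from the description above; the lambda in place of a named SendMessage() slot, the exact exit condition (spinning until the full 8.77193 ms has elapsed since the last send), and the socket details are assumptions of mine:

```cpp
#include <QCoreApplication>
#include <QTimer>
#include <QUdpSocket>
#include <QHostAddress>
#include <sys/time.h>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QUdpSocket socket;
    QByteArray payload(64, '\0');              // dummy 64-byte message

    const double kTargetUs = 1e6 / 114.0;      // ~8771.93 microseconds between sends

    timeval lastSend;
    gettimeofday(&lastSend, nullptr);          // seed the "last sent" timestamp

    QTimer timer;
    timer.setInterval(8);                      // deliberately fire a little early

    QObject::connect(&timer, &QTimer::timeout, [&]() {
        timer.stop();                          // nothing else fires while we busy-wait

        // Tight loop: spin until ~8.77193 ms have elapsed since the last send.
        timeval now;
        double elapsedUs = 0.0;
        do {
            gettimeofday(&now, nullptr);
            elapsedUs = (now.tv_sec  - lastSend.tv_sec)  * 1e6
                      + (now.tv_usec - lastSend.tv_usec);
        } while (elapsedUs < kTargetUs);

        socket.writeDatagram(payload, QHostAddress::LocalHost, 5000);   // placeholder target/port
        gettimeofday(&lastSend, nullptr);      // record when this message actually went out
        timer.start();                         // arm the next ~8 ms wait
    });
    timer.start();

    return app.exec();
}
```

Stopping the timer before the spin means a pending timeout can’t fire mid-loop, and restarting it only after the send anchors each cycle to the actual send time.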

That while-loop is still holding up the thread, but only for less than a millisecond per call to SendMessage(). The performance I saw was much better than using a QTimer with a 9 ms timeout. Most of the time deltas came out to 8.772 ms; over a large sample, about 2.5% of the deltas were more than 100 microseconds off the mark. I was effectively seeing messages transferred at 113.999 Hz.

But wait: 2.5% of 114 messages per second being sent at the wrong time is pretty close to three per second, which is roughly the number of messages I was neglecting by simply using a QTimer with a 9 ms timeout interval.

A message would only be sent late, though, if the QTimer fired more than 8.772 ms after the previous message was sent (which happens sometimes, a testament to QTimer’s inaccuracy at this scale). So while a message might be late, it wouldn’t typically affect when the next message would be sent.

Consider this: a message is sent late. For ease of explanation I’ll say it’s the first message. It’s sent at 9 ms into the run instead of 8.772 ms. At that moment the QTimer is restarted, and ~8 ms later SendMessage() is entered again. Now we’re at 17 ms into the run. The second message should be sent at 17.544 ms into the run (8.772 + 8.772), so we have about 500 microseconds of buffer time. This means that, on average, about three messages per second are sent at the incorrect time, but they’re still sent.

Our packet headers have indexes, so I think this will be copacetic. If I needed any more accuracy than this, I would probably start looking for a solution without QTimers :,( and would likely have to turn on CPU shielding.