Thursday, October 6, 2011

Throttling via the event queue

Here's a solution to a common problem that has some interesting advantages.

The problem is as follows. In a reactive system, such as a user interface, the incoming stream of events can sometimes be overwhelming. The most common example is mouse-move events. If the OS sends an application a hundred mouse-move events per second, and if the processing of each event takes more than ten milliseconds, then the application will drift further and further behind. To avoid this, the application should discard enough events that it stays caught up. That is, it should throttle the event stream. How should it do so?

The solutions I have run into do one of two things. They either delay the processing of events based on wall-clock time, or they require some sophisticated support from the event queue such as the ability to look ahead in the queue. The solutions that use time have the problem that they often introduce a delay that isn't necessary; the user will stop moving the mouse, but the application won't know it, so it will add in a delay anyway. The solutions using fancy event queues are not always possible, depending on the event queue, and anyway they make the application behavior more difficult to understand and test.

An alternative solution is as follows. Give the application a notion of being paused, and have the application queue Unpause events to itself to get out of the paused state. The first time an event arrives, process it as normal, but also pause the application and queue an Unpause event. If any other events arrive while the application is paused, simply queue them on the side. Once the Unpause event arrives, if there are any events on the side queue, drain the side queue, process the last event, and queue another Unpause event. If an Unpause event arrives before any other events are queued, then simply mark the application unpaused.
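Here is a minimal sketch of that state machine in TypeScript. The Event type and the handleEvent and postUnpause callbacks are placeholders, not part of any particular framework; and instead of a literal side queue it keeps only the most recent pending event, which is equivalent to draining the side queue and processing the last entry.

```typescript
type Event = unknown;

class Throttler {
  private paused = false;
  private pending: Event | null = null;

  constructor(
    // The real (possibly expensive) event processing.
    private handleEvent: (e: Event) => void,
    // Posts a callback onto the same queue the events arrive on.
    private postUnpause: (cb: () => void) => void
  ) {}

  // Called for every incoming event (e.g. mouse moves).
  onEvent(e: Event): void {
    if (this.paused) {
      // While paused, just remember the latest event.
      this.pending = e;
      return;
    }
    // Not paused: process immediately, then pause until the
    // self-posted Unpause event comes back around on the queue.
    this.handleEvent(e);
    this.queueUnpause();
  }

  private queueUnpause(): void {
    this.paused = true;
    this.postUnpause(() => this.onUnpause());
  }

  // Called when the Unpause event is dequeued.
  private onUnpause(): void {
    if (this.pending !== null) {
      const e = this.pending;
      this.pending = null;
      // Process only the last event that arrived while paused,
      // then queue another Unpause so we stay at most one step behind.
      this.handleEvent(e);
      this.queueUnpause();
    } else {
      this.paused = false;
    }
  }
}
```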

This approach has many of the advantages of looking ahead in the event queue, but it doesn't require any direct support for doing so. It also provides about as good responsiveness under system load as appears possible to achieve. If the system is lightly loaded, then every event is processed, and the system is just as responsive as it would be without the throttling. If the system is loaded, then enough events are skipped that the application avoids falling further and further behind. If the load is temporarily high and then stops, the last event will still be processed promptly.

The one tricky part of implementing this kind of solution is posting the Unpause event from the application back to itself. That event needs to go on the same queue that the other events are arriving on, or the approach will not work. How to do this depends on the particular event queue in question. For the case of a web browser, the best technique I know is to use setTimeout with a timeout of one millisecond.
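For example, in a browser the throttler above could be wired up roughly as follows, using setTimeout to post the Unpause callback onto the same queue as the DOM events. The handler body here is only illustrative.

```typescript
const throttler = new Throttler(
  (e) => {
    // Expensive per-event work would go here (e.g. redrawing).
    console.log("processing", e);
  },
  // Post the Unpause callback via the browser's timer queue.
  (cb) => setTimeout(cb, 1)
);

window.addEventListener("mousemove", (e) => throttler.onEvent(e));
```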
