Dreadful Datasets

When developing RIAs, you will eventually meet a dataset so large that synchronous processing freezes the user interface: actions lag, the display turns sluggish, and there's a creeping sense of dread that at any moment the application will lock up completely. This is usually followed by the browser's "Stop Script" dialog. The stop script dialog is basically the Game Over screen for a web programmer: you lose.

Veni, Vidi, Vici Vestri Notitia ("I came, I saw, I conquered your data")

When your enemy has grown too strong, do as the Romans did: divide and conquer. Splitting the dataset into smaller chunks lets you hand control back to the browser between chunks, if only for a moment, so normal UI behavior can resume and the page appears uninterrupted.
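Here's a minimal sketch of the trick in plain JavaScript. This isn't the plugin itself, and the function name, parameters, and timing values are all illustrative:

    function processInChunks(data, handler, chunkSize, deferTime) {
      var index = 0;
      function nextChunk() {
        // Work through one small slice of the dataset...
        var end = Math.min(index + chunkSize, data.length);
        for (; index < end; index++) handler(data[index]);
        // ...then yield to the browser before taking the next slice,
        // giving it a chance to repaint and handle user input.
        if (index < data.length) window.setTimeout(nextChunk, deferTime);
      }
      nextChunk();
    }

    // e.g. render a huge list 50 rows at a time, pausing 25ms between batches:
    // processInChunks(hugeArray, renderRow, 50, 25);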

The Demo

What's happening down there?!

Below is a demonstration I set up of massive data harvesting (from NYT's RSS Feeds) and display. The harvesting goes on seperately, but all data that is harvested via the RSS feeds is sent through AsyncExecutioner to delegate its display as resources allow. You'll notice you're still able to scroll and select text while this is processing.
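Under the hood, the wiring looks roughly like this. Ajax.Request is Prototype's real Ajax class, but the proxy URL, parseFeed, displayItem, and the exact AsyncExecutioner call are assumptions for illustration:

    new Ajax.Request('/proxy/nyt-feed', {    // hypothetical same-origin proxy
      method: 'get',
      onSuccess: function(transport) {
        var items = parseFeed(transport.responseXML);  // placeholder RSS parser
        // Hand the parsed items to AsyncExecutioner so the browser,
        // not the harvester, decides how fast they hit the page.
        new AsyncExecutioner(items, {
          onExecution: function(chunk) {
            chunk.each(displayItem);         // placeholder renderer
          }
        });
      }
    });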

[Live demo readout: Duration in seconds · Number of items displayed]

AsyncExecutioner

AsyncExecutioner is a new Prototype.js plugin designed to address the dilemma described above. It's something of a cross between Enumerable and PeriodicalExecuter. It takes a large dataset and some callback functions, chiefly onExecution, which can be thought of as the primary callback of an Enumerable iteration such as Enumerable.each. It resembles PeriodicalExecuter in that it handles interval-based execution; the big difference is that AsyncExecutioner uses window.setTimeout while PeriodicalExecuter uses window.setInterval.
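To make that difference concrete, here's a rough sketch of the two scheduling styles (the helper names are placeholders, not the plugin's internals):

    // setInterval (PeriodicalExecuter's approach): the timer fires on a
    // fixed clock whether or not the previous chunk has finished, so a
    // slow chunk can cause ticks to pile up behind it.
    var timerId = window.setInterval(function() {
      processNextChunk();   // placeholder for the real work
    }, 100);                // clear with window.clearInterval(timerId) when done

    // setTimeout (AsyncExecutioner's approach): the next tick is only
    // scheduled after the current chunk completes, so the pause between
    // chunks is guaranteed no matter how long each chunk takes.
    function tick() {
      processNextChunk();
      if (moreChunksRemain()) window.setTimeout(tick, 100);
    }
    tick();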

AsyncExecutioner takes the large dataset and sends chunks of it, as small as a single item, to the onExecution method. After each execution it checks the dataset; if there is more to process, it sends off the next batch. By configuring the options you can tune for maximum efficiency: with a large, low-priority dataset and simple per-item processing, you can afford larger chunk sizes and a longer defer time. If the data is critical to display and the processing is intensive, you'll want a small chunk size and a shorter defer time, so items reach the screen as soon as possible.
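A rough sketch of what that tuning might look like. The constructor form and the chunkSize and deferTime option names are assumptions for illustration; only onExecution is actually named above:

    // Low-priority feed with cheap per-item work: big batches, long pauses.
    new AsyncExecutioner(archiveItems, {
      chunkSize: 100,   // assumed option name
      deferTime: 500,   // assumed option name, in milliseconds
      onExecution: function(chunk) { chunk.each(renderItem); }
    });

    // Critical, render-heavy data: tiny batches, short pauses, so fresh
    // items reach the screen almost immediately.
    new AsyncExecutioner(breakingNews, {
      chunkSize: 1,
      deferTime: 10,
      onExecution: function(chunk) { chunk.each(renderItem); }
    });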

Resources and References