IEEE Visualization 2005 Contest

Tasks

The theme of the 2005 IEEE Visualization Contest was Rendering Revolution. With recent advances in non-photorealistic visualization, global illumination models for visualization, and multi-modal depictions of data and visualization parameters, the field stands at the cusp of leveraging these novel methods for more effective visualization. To encourage this development, the contest focused on these new techniques.

There were three tasks for the 2005 IEEE Visualization Contest. Two of the tasks were independent of the given data set and were meant as a starting point for an entry's efforts. Each data set had its own third task to better illustrate how the "rendering revolution" assists in the visualization of that data. Entries were evaluated on their strengths in each category individually; it was possible for an entry that was very strong in one category to win over an entry that was merely adequate across all the categories.

Interactive Exploration

For the first task, the goal was to facilitate visualization exploration via interactive rendering and manipulation of the data. Creative and effective means of controlling the novel visualization parameters were encouraged; rendering and manipulating these parameters was itself an aspect of the contest. Specific sub-tasks were set out for this task. A sketch of one such interactive control appears below.
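As a rough illustration of this kind of control, the sketch below ties a slider to an isovalue and redraws a contour as the slider moves. It is a minimal sketch only: matplotlib, the synthetic scalar field, and the choice of an isovalue as the manipulated parameter are all illustrative assumptions, not part of the contest materials.

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.widgets import Slider

    # Synthetic 2D scalar field standing in for a slice of a contest data set.
    y, x = np.mgrid[-2:2:200j, -2:2:200j]
    field = np.exp(-(x**2 + y**2)) + 0.5 * np.exp(-((x - 1)**2 + (y + 1)**2))

    fig, ax = plt.subplots()
    fig.subplots_adjust(bottom=0.2)

    def draw(isovalue):
        # Redraw the field and the single contour at the current isovalue.
        ax.clear()
        ax.imshow(field, origin="lower", extent=(-2, 2, -2, 2), cmap="viridis")
        ax.contour(x, y, field, levels=[isovalue], colors="white")
        ax.set_title(f"isovalue = {isovalue:.2f}")

    slider_ax = fig.add_axes([0.2, 0.05, 0.6, 0.04])
    iso_slider = Slider(slider_ax, "isovalue", 0.05, 1.0, valinit=0.5)

    def update(value):
        draw(value)
        fig.canvas.draw_idle()

    iso_slider.on_changed(update)
    draw(0.5)
    plt.show()

A real entry would substitute its own data and rendering pipeline; the point is only that every parameter controlling the visualization gets a directly manipulable control.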

Static Presentation

For the second task, it was assumed that the data had been pre-explored and the goal of the visualization was to communicate the data's content in one (or a few) static images. Control of the visualization parameters and interactivity were not paramount in this setting, but compelling depictions were. In fact, without the depth and other cues provided by interaction, communicating the visualization effectively was expected to be a significant challenge. Specific sub-tasks were set out for this task as well. A sketch of such a static rendering pass follows.
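The sketch below illustrates one way such a static pass might differ from the interactive one, under the same assumed matplotlib toolchain as above: a higher-resolution field is rendered offline, with annotations and a color bar supplying some of the context that interaction would otherwise provide. The annotations, resolution, and 300 dpi target are illustrative choices, not contest requirements.

    import numpy as np
    import matplotlib.pyplot as plt

    # The same synthetic field at higher resolution, since interactivity
    # no longer constrains the cost of the rendering.
    y, x = np.mgrid[-2:2:400j, -2:2:400j]
    field = np.exp(-(x**2 + y**2)) + 0.5 * np.exp(-((x - 1)**2 + (y + 1)**2))

    fig, ax = plt.subplots(figsize=(6, 5))
    image = ax.imshow(field, origin="lower", extent=(-2, 2, -2, 2), cmap="magma")
    ax.contour(x, y, field, levels=[0.25, 0.5, 0.75],
               colors="white", linewidths=0.7)
    ax.annotate("secondary peak", xy=(1.0, -1.0), xytext=(0.2, 1.4),
                color="white", arrowprops=dict(color="white", arrowstyle="->"))
    fig.colorbar(image, ax=ax, label="scalar value")
    ax.set_title("Significant features annotated in a single image")

    # Render offline at print resolution rather than to an interactive window.
    fig.savefig("presentation.png", dpi=300, bbox_inches="tight")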

Data-Specific Tasks

Please see the data documentation for information about the data-specific tasks.

Evaluation

There were two metrics for evaluation: the completeness of the supplied visualization and the effectiveness of the visualization. The effectiveness measure carried more weight than the completeness measure: a compelling static rendering of one data set was more valuable than an interactive and static presentation of both data sets using a less compelling suite of techniques.

Note that the various tasks for the contest set up competing requirements. Interactivity is often facilitated by smaller data sets with fewer visualization primitives, while the static presentation is better served by higher-fidelity, more complex techniques.

Evaluating Interactive Exploration

Interactive exploration was evaluated based upon the video submitted as part of the contest submission. The video was required to show the visualization "running through its paces"—i.e., interactively stepping through the data.

A complete solution to the interactive exploration criterion was a visualization capable of rendering at real-time rates (10 frames per second) while depicting the entire data set; all the parameters controlling the visualization had to be demonstrably editable. An effective solution might have had to compromise by sub-sampling the data in space or in time, or by compromising on the rate of interaction or the amount of exploration afforded. One such sub-sampling compromise is sketched below.
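The sketch below illustrates the spatial side of that compromise for an assumed NumPy volume: the stride is coarsened until a stand-in render call fits the 10 frames-per-second budget named above. The stride-search heuristic and the render stub are illustrative, not taken from any entry.

    import time
    import numpy as np

    TARGET_FPS = 10.0  # the contest's real-time threshold

    def render(volume):
        # Stand-in for a real renderer; cost grows with the voxel count.
        return float(volume.sum())

    def choose_stride(volume, max_stride=8):
        # Coarsen the spatial stride until one frame fits the 1/10 s budget.
        for stride in range(1, max_stride + 1):
            sub = volume[::stride, ::stride, ::stride]
            start = time.perf_counter()
            render(sub)
            if time.perf_counter() - start <= 1.0 / TARGET_FPS:
                return stride, sub.shape
        return max_stride, volume[::max_stride, ::max_stride, ::max_stride].shape

    volume = np.random.rand(128, 128, 128).astype(np.float32)
    stride, shape = choose_stride(volume)
    print(f"stride {stride} keeps the frame within budget; shape {shape}")

A fuller version would also consider the temporal side of the compromise, skipping time steps rather than voxels, which the task text allows equally.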

Evaluating Static Presentation

The static presentation was evaluated based upon the submitted image(s)/storyboard(s) and the corresponding description text. A complete solution discussed the significant features of the data and how they were depicted by the visualization. An effective solution provided a compelling image that was both insightful and intriguing.

Evaluating Data-Specific Tasks

This criterion measured how well the novel visualization and interaction techniques addressed the issues of the specific data sets. A complete solution visualized all the properties discussed for each data set, though not all needed to be displayed at once. An effective solution generated results that effectively communicated the variables on display; such a display clearly told the story of what occurred within the data set. This part of the contest was evaluated based upon the submitted video and web page.