IEEE Visualization 2008 Design Contest


A toolmaker succeeds as, and only as, the users of his tool succeed with his aid. However shining the blade, however jeweled the hilt, however perfect the heft, a sword is tested only by cutting. That swordsmith is successful whose clients die of old age. -- Frederick P. Brooks Jr.

The goal of this year's contest is to design a visualization that is effective at answering real domain-science questions on real data sets. Using existing visualization tools and research prototypes, or combinations of such tools, is perfectly acceptable in arriving at an effective design. You will of course want to credit those who built the tools you used, but the focus is on effective visualization.

Domain-Science Questions

We asked the scientists what they want to know about the data -- not how to display it, but what they hope to learn from the visualizations. Here's what they want to know, with each question tagged by its relative importance. The questions are ordered from simpler to more difficult to visualize:

  1. (10 pts) The shadow instability forms in the ionization front when it encounters a spherical bump in the gas that is centered on the X axis. Is the instability symmetric around this axis? If not, how is the symmetry broken?

  2. (10 pts) Over 100 chemical reactions occur in primordial H and He (many of which are driven by radiation in the I-front) but what most interests those studying first structure formation in the universe is H_2. It allowed primeval gas clouds to collapse and form the first stars before galaxies later coalesced. Where is H_2 most prevalent in the simulation?

  3. (10 pts) How thick are the first fingers of radiation that break through cracks in the shock? Radiation is present wherever there is ionized gas (H+ or 20,000 K temperatures), while shocked gas is nearly neutral and at temperatures of only a few thousand K (the ambient undisturbed gas is at 72 K).

  4. (20 pts) Turbulence can be thought of as a cascade of fluid flow from large to small scales that tends to dissipate the bulk motion of flow. What usually triggers the cascade are shear motions in which fluid parcels slide past one another. The magnitude of the vorticity of the flow (defined to be the curl of the velocity field) is a useful measure of shear and indicates where turbulence is likely (the cascades themselves are often so small that numerical simulations do not usually resolve them).

    Turbulence is of great interest to astrophysicists for many reasons, partly because it sets the time scales of star formation in the galaxy today and in the early universe. Stars form when clumps in molecular clouds collapse due to gravity; if gravity alone acted, stars would form so quickly that all the gas in the galaxy would have been converted into stars eons ago, with none forming today. One agent thought to slow star formation to the rates observed in the Milky Way is turbulence, which supports cloud cores against collapse.

    The origin of turbulence in molecular clouds is therefore important. It has recently been suggested that dynamical instabilities in the ionization fronts of massive stars in the clouds stir up turbulence. Is there any evidence of this in the shadow instability?

  5. (25 pts) Turbulent flows by themselves do not form H_2; only the presence of free electrons enables its formation. However, if free electrons are present (as signified by H+ fractions), can turbulence enhance H_2 formation? If so, is it because turbulent eddies create overdensities in which reactions occur more rapidly? If not, is it that, even though free electrons are present, the turbulence disrupts the formation of H- and H_2+ (the key precursors of H_2)?

  6. (25 pts) Question 5 posed a very specific hypothesis about the cause of turbulence. The broader question of interest, and the one for which visualization offers the most promise of displaying something unexpected, is "What is causing the turbulence?" Can you do an open-ended visualization of all variables to try to help answer this question? This is the "seeing the unexpected" question that will hopefully provide new hypotheses.

Notes on questions 1,3: Ambient gas is very cool (72 K). Shocked gas is around 2,000-3,000 K. Ionized gas is much hotter, around 20,000 K. Temperature thus indicates where shock waves and radiation are present.

Notes on questions 4,5: There is not a straightforward "turbulence" calculation, but it is known that areas of high turbulence will have high curl (at the scale of the turbulence). Curl can be computed from the velocity field. A description of how to compute curl magnitude and an example curl-magnitude-computing program can be found on the data description page. Teams are welcome to come up with their own turbulence-estimating data set derived from velocity; be sure to document the calculation being used.
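As a starting point for such a derived data set, the curl magnitude can be computed with finite differences. The sketch below is one minimal way to do this with NumPy, assuming the three velocity components arrive as 3-D arrays on a uniform grid; the axis ordering and grid spacing are assumptions you should adjust to match the actual data layout (and note the contest's own example program on the data description page may differ).

```python
import numpy as np

def curl_magnitude(vx, vy, vz, dx=1.0):
    """Compute |curl v| on a uniform grid via central differences.

    vx, vy, vz: 3-D arrays of velocity components. Index order (z, y, x)
    is assumed here, i.e. axis 0 is z, axis 1 is y, axis 2 is x;
    adjust the axis arguments if your data is laid out differently.
    dx: grid spacing (assumed uniform in all directions).
    """
    # Partial derivatives needed for the three curl components.
    dvz_dy = np.gradient(vz, dx, axis=1)
    dvy_dz = np.gradient(vy, dx, axis=0)
    dvx_dz = np.gradient(vx, dx, axis=0)
    dvz_dx = np.gradient(vz, dx, axis=2)
    dvy_dx = np.gradient(vy, dx, axis=2)
    dvx_dy = np.gradient(vx, dx, axis=1)

    # curl v = (dvz/dy - dvy/dz, dvx/dz - dvz/dx, dvy/dx - dvx/dy)
    cx = dvz_dy - dvy_dz
    cy = dvx_dz - dvz_dx
    cz = dvy_dx - dvx_dy
    return np.sqrt(cx**2 + cy**2 + cz**2)
```

A quick sanity check is a rigid rotation field v = (-y, x, 0), whose curl is (0, 0, 2) everywhere, so the returned magnitude should be 2 throughout the grid.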


There are two metrics for evaluation: the effectiveness of the visualization and the completeness of the visualization. An effective solution clearly communicates the variables under display; such a display clearly tells the story of what occurred within the data set and helps an expert viewer answer the domain questions. A complete solution discusses the significant features of the data and how they are depicted by the visualization. It includes legends and color maps to indicate quantitative measurements to an uninformed viewer. It describes the techniques and software systems used to produce the visualization. The effectiveness measure counts for 80% of the total points and the completeness measure for 20%.

Evaluating Effectiveness (80% of total score)

The judges for this part of the score will be the domain scientists who submitted the data and questions.

Effectiveness on each of the six questions will be evaluated on a five-point scale, with 5 being "I could see the answer immediately and clearly" and 1 being "I know the answer already, but I still can't see it in the visualization." To randomize learning effects, we intend to have each judge view the submissions in a different, randomly-selected order. Each judge will read the PDF file accompanying the submission before judging the video and/or still-image submissions, so that they will be familiar with the techniques and with how the authors believe the visualization is best viewed to answer the questions.

The total effectiveness score will be the sum of the individual scores, weighted by the relative-importance values placed on the questions (the point scores). These point scores reflect the relative importance of the questions to the scientists, not the relative ease with which each can be displayed.

The mean total score from all judges will be used as the effectiveness score.
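The effectiveness arithmetic described above can be sketched as follows. The point weights come from the question list; the per-judge ratings here are purely hypothetical, as is the number of judges.

```python
# Relative-importance points for questions 1-6, from the question list.
WEIGHTS = [10, 10, 10, 20, 25, 25]

def effectiveness(ratings):
    """Weighted sum of one judge's 1-5 ratings across the questions."""
    return sum(w * r for w, r in zip(WEIGHTS, ratings))

# Hypothetical ratings from three judges (one 1-5 value per question):
judges = [
    [5, 4, 3, 4, 5, 2],
    [4, 4, 4, 3, 5, 3],
    [5, 5, 3, 4, 4, 2],
]

# The mean over judges is the team's effectiveness score.
mean_effectiveness = sum(effectiveness(j) for j in judges) / len(judges)
```

Note that a perfect score of 5 on every question yields 500 points, so the weighted total runs from 100 to 500 rather than 1 to 5.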

Evaluating Completeness (20% of total score)

The judges for this part of the score will be practicing visualization researchers.

Completeness will be evaluated on a five-point scale, with 5 being "I could implement this and get these same pictures and know what settings to put on all of the parameters" and 1 being "I have no idea how to make this picture."

The mean total score from all judges will be used as the completeness score.

Determining winning entries

The final score for each team will be determined by adding 80% of the effectiveness score to 20% of the completeness score. The scores will be sorted from highest to lowest.

The highest-scoring entry will be evaluated by a group consisting of the current judges, the conference chair, and judges of past contests to determine if it is of sufficient merit to deserve an IEEE Visualization award. If so, the first-place prize will be awarded to the team that submitted this entry.

The entry with the second-highest score will be similarly evaluated and if it is of sufficient merit the team that submitted it will be awarded second place.

The entry with the third-highest score will be similarly evaluated and if it is of sufficient merit the team that submitted it will be awarded third place.

Breaking Ties: In the case of identical numerical final scores, the team with the higher effectiveness score will be selected. In the case of identical total and effectiveness scores, the team with the higher score on the question with the largest relative-importance score will be selected. In the case of identical scores in all questions, a coin toss will be used.
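The ranking and tie-breaking rules above amount to a lexicographic sort key, which can be sketched as follows. The field names and team records are hypothetical, and since two questions share the largest relative-importance value (25 points), this sketch assumes the first such question is meant.

```python
import random

# Relative-importance points per question, from the question list.
WEIGHTS = [10, 10, 10, 20, 25, 25]
# Questions 5 and 6 tie at 25 points; we assume the first is used.
TOP_Q = WEIGHTS.index(max(WEIGHTS))

def ranking_key(team):
    """Tie-break order from the rules: final score, then effectiveness,
    then the rating on the most important question."""
    return (team["final"], team["effectiveness"], team["ratings"][TOP_Q])

def rank(teams, rng=random):
    """Shuffle first so fully tied entries end up in 'coin toss' order,
    then stable-sort by the tie-break key, highest first."""
    teams = list(teams)
    rng.shuffle(teams)
    return sorted(teams, key=ranking_key, reverse=True)
```

Because Python's sort is stable, the initial shuffle is what decides between entries that are identical on every criterion.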