IEEE Visualization 2006 Design Contest

Tasks
The goal of this year's contest is to design a visualization that is effective at answering real domain-science questions on real data sets. The use of existing visualization tools and research prototypes, and combinations of such tools, is perfectly acceptable in arriving at an effective design. You will of course want to give credit to those who built the tools you used, but the focus is on effective visualization.

Domain-Science Questions

We asked the scientists what they want to know about the data -- not how to display it, but what they hope to learn from the visualizations. Here's what they want to know:
Notes on question 3: What are the possible wave types, and how do we tell them apart? Compressional (P), shear (S), Love (surface), and Rayleigh (surface) waves are the main ones of interest to the scientists. The first two can be deeper waves; the second two occur only at a free surface. A good description of these wave types can be found at http://web.ics.purdue.edu/~braile/edumod/waves/WaveDemo.htm.

Region of particular interest

In the end, the scientists care most about what happens at the surface in highly populated areas (in particular, LA). But they also need to understand how strong waves got there or what was blocking them. Concentrating on the upper 20 km of the simulation makes sense so long as this does not mask reflections coming from deeper in the simulation. Visualizations that depict a region including LA and the fault line may capture most of the important action so long as they don't clip out wave behavior that explains what happens in that area.
Judging

There are two metrics for evaluation: the effectiveness of the visualization and the completeness of the visualization. An effective solution clearly communicates the variables under display; such a display clearly tells the story of what occurred within the data set and helps an expert viewer answer the domain questions. A complete solution discusses the significant features of the data and how they are depicted by the visualization. It includes legends and color maps to indicate quantitative measurements to an uninformed viewer. It describes the techniques and software systems used to produce the visualization. The effectiveness measure counts for 80% of the total points and the completeness measure for 20%.

Evaluating Effectiveness (80% of total score)

The judges for this part of the score will be scientists studying earthquakes. Effectiveness on each of the five questions will be evaluated on a five-point scale, with 5 being "I could see the answer immediately and clearly" and 1 being "I know the answer already, but I still can't see it in the visualization." To randomize learning effects, we intend to have each judge view the submissions in a different, randomly selected order. Each judge will read the PDF file accompanying the submission before judging the video and/or still-image submissions, so that they will be familiar with the techniques and with how the authors believe the visualization is best viewed to answer the questions. The total effectiveness score will be the sum of the individual scores, weighted by the relative-importance values placed on the questions (the point scores). These point scores reflect the relative importance of the questions to the scientists, not the relative ease with which each can be displayed. The mean total score from all judges will be used as the effectiveness score.

Evaluating Completeness (20% of total score)

The judges for this part of the score will be practicing visualization researchers.
Completeness will be evaluated on a five-point scale, with 5 being "I could implement this and get these same pictures and know what settings to put on all of the parameters" and 1 being "I have no idea how to make this picture." The mean total score from all judges will be used as the completeness score.

Determining winning entries

The final score for each team will be determined by adding 80% of the effectiveness score to 20% of the completeness score. The scores will be sorted from highest to lowest. The highest-scoring entry will be evaluated by a group consisting of the current judges, the conference chair, and judges of past contests to determine whether it is of sufficient merit to deserve an IEEE Visualization award. If so, the first-place prize will be awarded to the team that submitted this entry. The entries with the second- and third-highest scores will be evaluated in the same way, and if of sufficient merit their teams will be awarded second and third place, respectively.

Breaking ties: In the case of identical numerical final scores, the team with the higher effectiveness score will be selected. In the case of identical total and effectiveness scores, the team with the higher score on the question with the largest relative-importance score will be selected. In the case of identical scores in all questions, a coin toss will be used.
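The scoring rules above amount to a weighted average, an 80/20 combination, and a lexicographic tie-break. The following Python sketch illustrates one reading of those rules; the question weights (normalized here so both scores share the same 1-5 scale) and the sample entries are hypothetical, not the contest's actual values.

```python
# Illustrative sketch of the contest scoring rules. The weights and
# sample entries below are hypothetical placeholders.

# Hypothetical relative-importance weights for the five questions,
# normalized to sum to 1 so the result stays on the judges' 1-5 scale.
QUESTION_WEIGHTS = [0.30, 0.25, 0.20, 0.15, 0.10]

def effectiveness_score(judge_ratings):
    """Mean over judges of the weighted sum of per-question ratings (1-5)."""
    totals = [sum(w * r for w, r in zip(QUESTION_WEIGHTS, ratings))
              for ratings in judge_ratings]
    return sum(totals) / len(totals)

def final_score(effectiveness, completeness):
    # 80% effectiveness + 20% completeness.
    return 0.8 * effectiveness + 0.2 * completeness

def ranking_key(entry):
    eff, comp, top_question_rating = entry
    # Ties break first on the effectiveness score, then on the rating
    # for the question with the largest relative-importance weight.
    return (final_score(eff, comp), eff, top_question_rating)

# Illustrative entries: (effectiveness, completeness, top-question rating).
entries = [(4.0, 4.0, 5), (4.5, 2.0, 4)]
ranked = sorted(entries, key=ranking_key, reverse=True)
```

Encoding the tie-break as a tuple key lets Python's ordinary tuple comparison apply the criteria in order; only the final coin toss is left out, since it falls outside a deterministic sort.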