Evaluation
Evaluating InfoVis: Been There, Done That?
The usefulness of an InfoVis tool is harder to predict than that of 'classic' software, because human reasoning processes strongly influence whether its application succeeds. Even after participatory design and faithful development, the outcome therefore has to be evaluated thoroughly.
Usability not only matters but can become vital because of the interactive and exploratory nature of many tasks users will perform. On the one hand, this means paying particular attention to usability questions in an iterative design process. On the other hand, a rigorous examination is also essential for assessing the InfoVis technique itself, because technique and usability are closely interdependent.
Ecological validity and external validity: looking for the gemstones.
Conducting 'classic' controlled experiments is necessary, but it is just as essential to break out of the laboratory and carry out some form of field observation. In many cases this is the only way to evaluate usefulness in the 'real world' and ensure ecological validity. The same applies to the question of generalization: sometimes only other user populations can decide for themselves whether a technique makes sense in their setting and for their data and tasks, thereby allowing a sound assessment of external validity.
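As a minimal sketch of the quantitative side of such a controlled experiment, the snippet below analyses task-completion times from a hypothetical within-subjects comparison of two visualization techniques. All data values and the technique labels are illustrative assumptions, not results from any actual study.

```python
# Minimal sketch: analysing task-completion times from a hypothetical
# within-subjects controlled experiment comparing two InfoVis techniques.
# The numbers below are made up for illustration only.
import numpy as np
from scipy import stats

# Completion times (seconds) per participant, one value per technique.
times_technique_a = np.array([42.1, 55.3, 38.7, 61.2, 47.9, 50.4, 44.8, 58.6])
times_technique_b = np.array([36.5, 49.0, 35.2, 54.8, 41.3, 46.7, 39.9, 52.1])

# Paired t-test: is the difference in completion time statistically reliable?
t_stat, p_value = stats.ttest_rel(times_technique_a, times_technique_b)

# Effect size (Cohen's d for paired samples) to judge practical relevance,
# not just statistical significance.
diff = times_technique_a - times_technique_b
cohens_d = diff.mean() / diff.std(ddof=1)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```

A laboratory result like this still says nothing about ecological or external validity on its own; it needs to be complemented by field observation with real users, data, and tasks, as argued above.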
Useful Resources
- Catherine Plaisant: "The challenge of information visualization evaluation". AVI 2004: 109-116
- Enrico Bertini: a list of papers dealing with the evaluation of InfoVis: http://www.dis.uniroma1.it/~beliv06/infovis-eval.html
- Anita Komlodi et al.: "Information Visualization Evaluation Review Bibliography", with over 200 entries.