Evaluation

From InfoVis:Wiki
Revision as of 10:52, 17 February 2006 by Markus (talk | contribs)

Evaluating InfoVis: Been There, Done That?

The usefulness of an InfoVis tool is less predictable than that of ‘classic’ software because success in application depends heavily on human reasoning processes. Even after participative design and faithful development, the outcome must therefore be evaluated thoroughly.

Usability not only matters but can become vital, given the interactive and exploratory nature of many tasks users will perform. On the one hand, particular attention must therefore be paid to usability questions in an iterative design process. On the other hand, a rigorous examination is also essential for assessing the InfoVis technique itself, because technique and usability are interdependent.

Ecological validity and external validity: looking for the gemstones.
Conducting ‘classic’ controlled experiments is necessary, but it is just as essential to break out of the laboratory and undertake some form of field observation. In many cases this is the only way to evaluate usefulness for the ‘real world’ and to ensure ecological validity. The same applies to the question of generalization: sometimes only other populations can decide for themselves whether a technique makes sense in their setting and for their data and tasks, thereby allowing a profound assessment of external validity.
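As a concrete illustration of the ‘classic’ controlled-experiment side, the sketch below compares task-completion times for two hypothetical visualization techniques with a permutation test. All data values, names, and parameters are invented for illustration; a real study would of course involve careful experimental design, counterbalancing, and appropriate statistics.

```python
import random
import statistics

# Hypothetical task-completion times (seconds) from a small controlled
# experiment: the same task performed with two visualization techniques.
# These numbers are illustrative only, not real study data.
technique_a = [41.2, 38.5, 45.0, 39.9, 42.7, 44.1, 37.8, 40.3]
technique_b = [47.9, 51.2, 44.8, 49.5, 52.0, 46.3, 50.7, 48.1]

def permutation_test(a, b, trials=10_000, seed=0):
    """Two-sided permutation test on the difference of group means."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
        # Count permutations at least as extreme as the observed difference.
        if abs(statistics.mean(perm_a) - statistics.mean(perm_b)) >= observed:
            hits += 1
    return hits / trials

p = permutation_test(technique_a, technique_b)
print(f"p-value: {p:.4f}")
```

A small p-value would suggest a genuine performance difference under laboratory conditions; whether that advantage carries over to real users, data, and tasks is exactly the ecological-validity question raised above.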


Related Pages

2005-09-30: Evaluating Visualizations: Do Expert Reviews Work?