Evaluation

From InfoVis:Wiki
'''Evaluating InfoVis: Been There, Done That?'''
{{Quotation|The purpose of visualization is insight. The purpose of visualization evaluation is to determine to what degree visualizations achieve this purpose.|[North, 2006]}}


The usefulness of an InfoVis tool is less predictable than that of 'classic' software, because human reasoning processes strongly influence how successfully such a tool is applied. Even after participatory design and careful development, the outcome therefore has to be evaluated thoroughly.
Usability not only matters but may become vital due to the interactive and explorative nature of many of the tasks users will perform. On the one hand, one therefore has to pay particular attention to usability questions in an iterative design process. On the other hand, a thorough examination is also essential for assessing the InfoVis technique itself, because of its interdependency with usability.


'''[[Ecological validity]] and [[external validity]]''': looking for the gemstones.<br>
Conducting 'classic' controlled experiments is necessary, but it is just as essential to break out of the laboratory and carry out some kind of field observation. In many cases this is the only way to evaluate usefulness in the 'real world' and to ensure ecological validity. The same applies to the question of generalization: sometimes only other user populations can decide for themselves whether a technique makes sense for their setting, data, and tasks, thus allowing a profound assessment of external validity.


== Related Pages ==
*[[2005-09-30: Evaluating Visualizations: Do Expert Reviews Work?]]
*[[BELIV'06]]
*[[Tasks Taxonomy for Graphs]]


== Useful Resources ==
see [[Evaluation and Usability Links]]




[[Category: Glossary]]

Latest revision as of 14:55, 20 September 2007
