<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://infovis-wiki.net/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ares</id>
	<title>InfoVis:Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://infovis-wiki.net/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ares"/>
	<link rel="alternate" type="text/html" href="https://infovis-wiki.net/wiki/Special:Contributions/Ares"/>
	<updated>2026-04-21T17:40:12Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.45.3</generator>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_12_-_Aufgabe_4&amp;diff=23998</id>
		<title>Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe 12 - Aufgabe 4</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_12_-_Aufgabe_4&amp;diff=23998"/>
		<updated>2010-01-07T18:51:41Z</updated>

		<summary type="html">&lt;p&gt;Ares: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Assignment ==&lt;br /&gt;
[http://ieg.ifs.tuwien.ac.at/~gschwand/teaching/infovis_ue_ws09/infovis_ue_aufgabe4.html Description of Assignment 4]&lt;br /&gt;
=== Visualization to Be Created ===&lt;br /&gt;
-------------------------------&lt;br /&gt;
* Family tree of the descendants of Lisa and Bart Simpson*&lt;br /&gt;
 &lt;br /&gt;
...Visualization of the descendants of Lisa Simpson and of Bart Simpson. Two family trees are to be created - one for Bart and one for Lisa - which can then be compared with each other. First come Lisa and Bart, then their children, their grandchildren, etc. (at least 4 generations). Since no descendants exist yet, they may be freely invented.&lt;br /&gt;
 &lt;br /&gt;
The visualization should represent the following information:&lt;br /&gt;
 &lt;br /&gt;
- family relationships (at least parent-child),&lt;br /&gt;
 &lt;br /&gt;
- the distinction between blood relatives and family members related by marriage,&lt;br /&gt;
 &lt;br /&gt;
- dates of birth and death as well as the lifespan of all family members,&lt;br /&gt;
 &lt;br /&gt;
- important events in the life of each family member (e.g., criminal charges, prison stays, schooldays, studies, Nobel prizes, unemployment, etc.)&lt;br /&gt;
 &lt;br /&gt;
- the satisfaction of each family member (scale: very low - low - medium - high - very high); this may change over the course of a life.&lt;br /&gt;
 &lt;br /&gt;
The visualization should enable interactive exploration of the data.&lt;br /&gt;
Mandatory:&lt;br /&gt;
Means for better comparison of individual sections of the family trees, i.e. comparison of excerpts from Lisa&#039;s and Bart&#039;s family trees.&lt;br /&gt;
+ at least 2 further interaction techniques (e.g., details on demand, filter options)&lt;br /&gt;
 &lt;br /&gt;
General:&lt;br /&gt;
 &lt;br /&gt;
- The data should be visualized so that relationships between family circumstances, important events and satisfaction can be analyzed (the analysis of the application area and target audience may be kept brief).&lt;br /&gt;
 &lt;br /&gt;
- The design principles learned so far should be applied, e.g.: optimization of the data-ink ratio (no comics!), and visual attributes (size, color, position, etc.) should be used meaningfully (to represent information).&lt;br /&gt;
 &lt;br /&gt;
- The mockups should show at least 1) an overview of both family trees and 2) a detailed comparison view of 2 partial family trees.&lt;br /&gt;
 &lt;br /&gt;
- All data not specified above may be freely invented.&lt;br /&gt;
&lt;br /&gt;
------------------------------&lt;br /&gt;
&lt;br /&gt;
=== Analysis ===&lt;br /&gt;
==== Field of Application and Analysis of the Data Set ====&lt;br /&gt;
===== Field of Application =====&lt;br /&gt;
Family trees were originally used to show how certain royal families are intertwined and to show whom a person in the tree is heir to. Nowadays they are mostly used for genealogical research. The most common visualization of a family tree is, as the name suggests, a tree which displays persons as nodes and their relationships as edges connecting the respective nodes. The current generation is usually represented by the leaves of the tree. In order to visualize a marriage, either both persons are represented by a node, or a special marriage edge is used that connects the respective persons and any resulting children.&lt;br /&gt;
&lt;br /&gt;
The main drawback of a family tree is its inability to display temporal data. For this reason, special considerations have been made to display the temporal data just as well as the relational data while maintaining the advantages of a conventional family tree.&lt;br /&gt;
===== Analysis of the Data Set=====&lt;br /&gt;
{|cellpadding=&amp;quot;2&amp;quot; border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Variable !! Granularity !! Scale !! Data type&lt;br /&gt;
|-&lt;br /&gt;
|Family relationship || Discrete || Nominal || Tree&lt;br /&gt;
|-&lt;br /&gt;
|Consanguinity || Discrete || Nominal || Tree&lt;br /&gt;
|-&lt;br /&gt;
|Events or periods of life || Discrete || Interval || Temporal&lt;br /&gt;
|-&lt;br /&gt;
|Satisfaction level || Discrete || Ordinal || 1-dimensional&lt;br /&gt;
|}&lt;br /&gt;
The data set has a one-dimensional, multivariate data structure. The family relationships, consanguinities, events, periods of life and satisfaction levels all depend on the temporal dimension and are therefore multivariate.&lt;br /&gt;
==== Analysis of the Target Audience ====&lt;br /&gt;
The target audience mainly consists of genealogical researchers, both professionals and hobbyists. Additionally, the visualization can be of use for psychological, social and medical research due to the additional level of information offered by the time-dependent satisfaction level. Research into heritable diseases, or rather the risk of being affected by them, and genetic research in general also seem conceivable.&lt;br /&gt;
&lt;br /&gt;
The variety of possible application areas demands a visualization designed in such a way that it can be used by inexperienced hobbyists as well as sophisticated professionals from all areas in an intuitive yet powerful manner.&lt;br /&gt;
==== Purpose of the Visualization ====&lt;br /&gt;
The visualization provides a representation of the genealogies of several persons by enriching the conventional family tree with interactive features. Its main purpose is the detailed depiction, as well as the comparison, of family relationships, which can be of use for genealogical research.&lt;br /&gt;
&lt;br /&gt;
The visualization is furthermore capable of displaying important events (such as birthdays, days of death, etc.), life periods (such as schooldays) and the satisfaction level dependent on the period of life, offering an additional depth of information which can be of interest for psychological, medical and social researchers (as mentioned above).&lt;br /&gt;
=== Concept ===&lt;br /&gt;
==== Type of Visualization ====&lt;br /&gt;
The main view of the visualization consists of time-lines, each visualizing the personal history (including important events, periods of life and the satisfaction level) of a person over time. The integration of relational data has been realized with relational events, such as marriages and births. These can be regarded as the edges of the time-lines, comparable to the edges of a conventional tree. Related time-lines (comparable to nodes), and therefore persons, are connected by those relational events, whose placement indicates the temporal occurrence of the event in question.&lt;br /&gt;
&lt;br /&gt;
The visualization can be regarded as a conventional hierarchical family tree rotated by 90 degrees, with its nodes and edges positioned and shaped according to the temporal data of the respective persons and events.&lt;br /&gt;
&lt;br /&gt;
The comparison of multiple family trees is realized in a straightforward manner. It is possible to display multiple trees one below the other, each having its own timescale.&lt;br /&gt;
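The time-line data model described above can be sketched in code. This is a minimal illustration only; all names and values below are invented for the example (as the assignment permits), not part of any given data set:

```python
# Minimal sketch of the time-line data model: one dict per person, plus
# relational events that connect time-lines the way edges connect tree nodes.
lisa = {
    "name": "Lisa Simpson",
    "gender": "female",                      # mapped to the timeline color
    "born": 1981,                            # begin position of the timeline
    "died": None,                            # open end: person still alive
    "satisfaction": [(1981, "medium"), (1997, "high")],  # bar heights over time
    "events": [{"year": 1987, "type": "school", "details": "starts school"}],
}

def relational_event(kind, year, persons):
    """Return an edge-like event linking the given persons at a point in time."""
    return {"kind": kind, "year": year, "persons": persons}

marriage = relational_event("marriage", 2005, ["Lisa Simpson", "Partner A"])
```

A renderer would draw each person dict as a horizontal timeline from "born" to "died" (or to the present), with the satisfaction entries shown as a bar chart along it and relational events drawn as vertical connectors between time-lines.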
==== Visual Mapping ====&lt;br /&gt;
{|cellpadding=&amp;quot;2&amp;quot; border=&amp;quot;1&amp;quot;&lt;br /&gt;
!Variable !! Visual attribute&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Person attributes&lt;br /&gt;
|-&lt;br /&gt;
|Name || Text&lt;br /&gt;
|-&lt;br /&gt;
|Details || Text (when interactively invoked)&lt;br /&gt;
|-&lt;br /&gt;
|Gender || Color of timeline&lt;br /&gt;
|-&lt;br /&gt;
|Date of birth || Begin position of timeline&lt;br /&gt;
|-&lt;br /&gt;
|Date of death || End position of timeline&lt;br /&gt;
|-&lt;br /&gt;
|Lifespan || Length of timeline&lt;br /&gt;
|-&lt;br /&gt;
|Satisfaction level || Height of bar&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Event attributes&lt;br /&gt;
|-&lt;br /&gt;
|Event type || Icon&lt;br /&gt;
|-&lt;br /&gt;
|Relational event || Icon and vertical line (e.g. implying a parent-child relation)&lt;br /&gt;
|-&lt;br /&gt;
|Event Details || Text (when interactively invoked)&lt;br /&gt;
|}&lt;br /&gt;
==== Applied Techniques ====&lt;br /&gt;
===== Non-interactive Techniques =====&lt;br /&gt;
* &#039;&#039;&#039;Time-lines&#039;&#039;&#039; [Aigner, 2009a] representing the personal history of a person.&lt;br /&gt;
* &#039;&#039;&#039;Hierarchical trees&#039;&#039;&#039; [Aigner, 2009b] implied by relational events.&lt;br /&gt;
* &#039;&#039;&#039;Symbols&#039;&#039;&#039; [Miksch, 2009a] for the recognition of major event types.&lt;br /&gt;
* &#039;&#039;&#039;Bar charts&#039;&#039;&#039; representing the satisfaction level of a person dependent on time.&lt;br /&gt;
===== Interactive Techniques =====&lt;br /&gt;
* &#039;&#039;&#039;Zooming and panning&#039;&#039;&#039; [Aigner, 2009c] enabling the user to view the data at a higher resolution.&lt;br /&gt;
* &#039;&#039;&#039;Details on demand&#039;&#039;&#039; [Aigner, 2009d] enable the user to display detailed information about a data case.&lt;br /&gt;
* &#039;&#039;&#039;Semantic depth of field&#039;&#039;&#039; [Miksch, 2009b] for the highlighting of time-lines and events on demand.&lt;br /&gt;
* &#039;&#039;&#039;Filter options&#039;&#039;&#039; [Aigner, 2009e] to show the user only the data he or she is interested in.&lt;br /&gt;
==== Interactivity ====&lt;br /&gt;
{|cellpadding=&amp;quot;2&amp;quot; border=&amp;quot;1&amp;quot;&lt;br /&gt;
!Interaction !! Purpose&lt;br /&gt;
|-&lt;br /&gt;
|Mouse over timeline || Shows detailed information about the respective person&lt;br /&gt;
|-&lt;br /&gt;
|Mouse out timeline || Hides detailed information about the respective person&lt;br /&gt;
|-&lt;br /&gt;
|Mouse over event || Shows detailed information about the respective event&lt;br /&gt;
|-&lt;br /&gt;
|Mouse out event || Hides detailed information about the respective event&lt;br /&gt;
|-&lt;br /&gt;
|Click on timeline || Highlights the person, aligns the birthday of the respective person to the jump mark, enlarges the satisfaction bar chart, highlights the consanguinity path.&lt;br /&gt;
|-&lt;br /&gt;
|Click on event || Shows detailed information about the respective event&lt;br /&gt;
|-&lt;br /&gt;
|Click on relational event || Applies a semantic depth of field on the related persons and the respective event&lt;br /&gt;
|-&lt;br /&gt;
|Clicking somewhere else || Deselects selected persons or events&lt;br /&gt;
|-&lt;br /&gt;
|Drag and drop timeline || Allows the user to manually reorder a timeline vertically&lt;br /&gt;
|-&lt;br /&gt;
|Filter || Allows the selection of relevant data (e.g. certain names, event types, etc.)&lt;br /&gt;
|-&lt;br /&gt;
|Sort || Allows the (vertical) sorting of the time-lines (e.g. by name, birthday, etc.)&lt;br /&gt;
|-&lt;br /&gt;
|Compare || Displays an additional canvas for an additional tree&lt;br /&gt;
|}&lt;br /&gt;
The interactive features above should be regarded as the default behaviour of the visualization. A settings dialog where the users can change this behaviour is conceivable.&lt;br /&gt;
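The default interactions listed above can be summarized as a simple dispatch from user actions to handlers. The handler names below are hypothetical placeholders, not part of the original concept:

```python
# Hypothetical dispatch table mirroring the interaction table above:
# (action, target) pairs map to the name of the default handler.
INTERACTIONS = {
    ("mouseover", "timeline"): "show_person_details",
    ("mouseout", "timeline"): "hide_person_details",
    ("mouseover", "event"): "show_event_details",
    ("mouseout", "event"): "hide_event_details",
    ("click", "timeline"): "highlight_person_and_bloodline",
    ("click", "relational_event"): "apply_semantic_depth_of_field",
    ("click", "canvas"): "deselect_all",
}

def dispatch(action, target):
    """Look up the default handler for an interaction; None if unbound."""
    return INTERACTIONS.get((action, target))
```

A settings dialog, as suggested, would then simply rebind entries of this table.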
==== Mockups ====&lt;br /&gt;
===== Timeline View =====&lt;br /&gt;
&amp;lt;div&amp;gt;&lt;br /&gt;
[[Image:Mockup_overview_lisa.png|thumb|200px|left|Overview Lisa]]&lt;br /&gt;
[[Image:Mockup_overview_bart.png|thumb|200px|left|Overview Bart]]&lt;br /&gt;
[[Image:Mockup_eventdetail.png|thumb|200px|left|Eventdetail]]&lt;br /&gt;
[[Image:Mockup_bloodline.png|thumb|200px|left|Bloodline]]&lt;br /&gt;
[[Image:Mockup_comparison.png|thumb|200px|Comparison view]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;clear:both;&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;width:100%&amp;quot;&amp;gt;&lt;br /&gt;
[[Image:Lisa-blood.png|thumb|200px|left|Bloodline Lisa]]&lt;br /&gt;
[[Image:Bart-blood.png|thumb|200px|left|Bloodline Bart]]&lt;br /&gt;
[[Image:Lisa-info.png|thumb|200px|left|Detail Lisa]]&lt;br /&gt;
[[Image:Bart-info.png|thumb|200px|left|Detail Bart]]&lt;br /&gt;
[[Image:Vergleich.png|thumb|200px|Comparison view]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;clear:both;&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
==== Features ====&lt;br /&gt;
The main view of the visualization is relatively free of clutter, since only essential information is shown. The user nevertheless has the possibility to filter the data according to personal relevance and to improve the visual appeal of the visualization by manually sorting the time-lines vertically. The visualization furthermore assists the user in keeping focus by applying techniques such as semantic depth of field and highlighting.&lt;br /&gt;
==== Advantages ====&lt;br /&gt;
* Implicit and well known representation of temporal data&lt;br /&gt;
* Simple representation of hierarchical data&lt;br /&gt;
* Precise display of satisfaction levels&lt;br /&gt;
* Avoidance of visual clutter by features such as details on demand and filters&lt;br /&gt;
* Relatively easy to comprehend&lt;br /&gt;
==== Possible Improvements and Extensions ====&lt;br /&gt;
* The possibility to attach media files to persons or events to offer an additional depth of information (e.g. photos, video clips, documents, etc.).&lt;br /&gt;
* The possibility to display global events and periods (such as depressions, climate changes, pandemics, etc.) to offer more insights.&lt;br /&gt;
== References ==&lt;br /&gt;
* [Aigner, 2009a] Wolfgang Aigner. Visualization of Time-Oriented Data: Visualization Techniques. Created at: December 15, 2009. Retrieved at: January 4, 2010. http://www.ifs.tuwien.ac.at/~silvia/wien/vu-infovis/PDF-Files/20091214_timevis_techniques_1up.pdf. pages 33-36.&lt;br /&gt;
* [Aigner, 2009b] Wolfgang Aigner. Hierarchical Techniques. Created at: November 30, 2009. Retrieved at: January 4, 2010. http://www.ifs.tuwien.ac.at/~silvia/wien/vu-infovis/PDF-Files/20091130_hierarchical-techniques_1up.pdf. pages 16-17.&lt;br /&gt;
* [Miksch, 2009a] Silvia Miksch. Icon-based Techniques. Created at: November 9, 2009. Retrieved at: January 4, 2010. http://www.ifs.tuwien.ac.at/~silvia/wien/vu-infovis/PDF-Files/InfoVis-3.1up.pdf. page 8.&lt;br /&gt;
* [Miksch, 2009b] Silvia Miksch. Focus+Context &amp;amp; Distortion Techniques. Created at: November 24, 2009. Retrieved at: January 4, 2010. http://www.ifs.tuwien.ac.at/~silvia/wien/vu-infovis/PDF-Files/InfoVis-5_1up.pdf. page 84-110.&lt;br /&gt;
* [Aigner, 2009c] Wolfgang Aigner. Interaction and Visual Analytics. Created at: December 10, 2009. Retrieved at: January 4, 2010. http://www.ifs.tuwien.ac.at/~silvia/wien/vu-infovis/PDF-Files/20091210_interaction-va_1up.pdf. page 28.&lt;br /&gt;
* [Aigner, 2009d] Wolfgang Aigner. Interaction and Visual Analytics. Created at: December 10, 2009. Retrieved at: January 4, 2010. http://www.ifs.tuwien.ac.at/~silvia/wien/vu-infovis/PDF-Files/20091210_interaction-va_1up.pdf. page 37.&lt;br /&gt;
* [Aigner, 2009e] Wolfgang Aigner. Interaction and Visual Analytics. Created at: December 10, 2009. Retrieved at: January 4, 2010. http://www.ifs.tuwien.ac.at/~silvia/wien/vu-infovis/PDF-Files/20091210_interaction-va_1up.pdf. pages 38-47. &lt;br /&gt;
&lt;br /&gt;
------------------------------&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [[Teaching:TUW_-_UE_InfoVis_WS_2009/10|InfoVis:Wiki UE Homepage]]&lt;br /&gt;
&lt;br /&gt;
* [http://ieg.ifs.tuwien.ac.at/~gschwand/teaching/infovis_ue_ws09/ UE InfoVis]&lt;br /&gt;
&lt;br /&gt;
*[[Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe 12|Gruppe 12]]&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=File:Vergleich.png&amp;diff=23996</id>
		<title>File:Vergleich.png</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=File:Vergleich.png&amp;diff=23996"/>
		<updated>2010-01-07T18:38:18Z</updated>

		<summary type="html">&lt;p&gt;Ares: New page: == Beschreibung ==  == Copyright status ==  == Source ==&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Beschreibung ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status ==&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=File:Lisa-info.png&amp;diff=23995</id>
		<title>File:Lisa-info.png</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=File:Lisa-info.png&amp;diff=23995"/>
		<updated>2010-01-07T18:36:48Z</updated>

		<summary type="html">&lt;p&gt;Ares: New page: == Beschreibung ==  == Copyright status ==  == Source ==&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Beschreibung ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status ==&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=File:Lisa-blood.png&amp;diff=23994</id>
		<title>File:Lisa-blood.png</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=File:Lisa-blood.png&amp;diff=23994"/>
		<updated>2010-01-07T18:35:36Z</updated>

		<summary type="html">&lt;p&gt;Ares: New page: == Beschreibung ==  == Copyright status ==  == Source ==&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Beschreibung ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status ==&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=File:Bart-info.png&amp;diff=23992</id>
		<title>File:Bart-info.png</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=File:Bart-info.png&amp;diff=23992"/>
		<updated>2010-01-07T18:34:59Z</updated>

		<summary type="html">&lt;p&gt;Ares: New page: == Beschreibung ==  == Copyright status ==  == Source ==&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Beschreibung ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status ==&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=File:Bart-blood.png&amp;diff=23991</id>
		<title>File:Bart-blood.png</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=File:Bart-blood.png&amp;diff=23991"/>
		<updated>2010-01-07T18:34:47Z</updated>

		<summary type="html">&lt;p&gt;Ares: New page: == Beschreibung ==  == Copyright status ==  == Source ==&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Beschreibung ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status ==&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_12_-_Aufgabe_2&amp;diff=23664</id>
		<title>Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe 12 - Aufgabe 2</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_12_-_Aufgabe_2&amp;diff=23664"/>
		<updated>2009-12-08T16:34:56Z</updated>

		<summary type="html">&lt;p&gt;Ares: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Assignment ==&lt;br /&gt;
[http://ieg.ifs.tuwien.ac.at/~gschwand/teaching/infovis_ue_ws09/infovis_ue_aufgabe2.html Description of the second exercise (German)]&lt;br /&gt;
=== The original table ===&lt;br /&gt;
[[Image:leh50e.gif]]&lt;br /&gt;
==== Critique of the given table ====&lt;br /&gt;
* It is not apparent for which region this statistic was made&lt;br /&gt;
* The table lacks a concise and meaningful title, which would offer the reader a short description of the table&lt;br /&gt;
* The table reflects various contexts:&lt;br /&gt;
** The absolute number of the employed people in various industries&lt;br /&gt;
** The absolute number of the unemployed people and the unemployment rate&lt;br /&gt;
** The labour force participation rate&lt;br /&gt;
** The absolute number of the people who are not in labour force&lt;br /&gt;
* The subgroups aren&#039;t comparable due to the differing contexts mentioned above&lt;br /&gt;
* Mixed percentage and absolute values (row- and column-wise)&lt;br /&gt;
* References to so-called &#039;&#039;&amp;quot;units&amp;quot;&#039;&#039; for percentage values&lt;br /&gt;
* The lack of percent signs for percentage values, which would make the percentage values easier to grasp&lt;br /&gt;
* The alignment of the columns suggests constant time intervals between the time instants&lt;br /&gt;
* The age ranges are inconsistent and therefore not comparable&lt;br /&gt;
* Inconsistent row headers (e.g. &#039;&#039;&amp;quot;Employed, total&amp;quot;&#039;&#039; and &#039;&#039;&amp;quot;Unemployed&amp;quot;&#039;&#039;)&lt;br /&gt;
* Inconsistent declarations for age ranges (&#039;&#039;&amp;quot;years of age&amp;quot;&#039;&#039; and &#039;&#039;&amp;quot;of those aged&amp;quot;&#039;&#039;)&lt;br /&gt;
* Inconsistent date declaration for the column header &#039;&#039;&amp;quot;CHANGE&amp;quot;&#039;&#039;&lt;br /&gt;
* Inconsistent formatting (block letters, various text alignments, italic type, etc.)&lt;br /&gt;
* Too little vertical whitespace between rows&lt;br /&gt;
=== The revised table ===&lt;br /&gt;
[[Image:Table_Main-verbessert.png]]&lt;br /&gt;
==== Description of the undertaken improvements ====&lt;br /&gt;
* A concise and meaningful title has been added, which offers the reader a short description of the table&lt;br /&gt;
* The varying contexts or rather groups have been separated more clearly by applying bold row headers and alternating the fill color&lt;br /&gt;
* The declarations of the rows have been put in a designated column to separate the groups more clearly and to enhance readability&lt;br /&gt;
* Misleading references to &#039;&#039;&amp;quot;units&amp;quot;&#039;&#039; have been removed&lt;br /&gt;
* Percentage signs have been added to denote percentage values&lt;br /&gt;
* Special considerations have been made regarding the consistency of row headers, declarations, date formats and formatting to enable fast processing of information&lt;br /&gt;
* The years in the header have been aggregated to avoid distracting redundancy&lt;br /&gt;
* A different font and more vertical whitespace have been used to increase readability&lt;br /&gt;
* The number of lines has been reduced to avoid distraction from the content&lt;br /&gt;
* A comma has been placed to the left of every three whole-number digits&lt;br /&gt;
* Sums and summarizing values have been placed at the bottom of each group&lt;br /&gt;
* Missing values have been calculated wherever possible&lt;br /&gt;
== Links ==&lt;br /&gt;
* [[Teaching:TUW_-_UE_InfoVis_WS_2009/10|InfoVis:Wiki UE Homepage]]&lt;br /&gt;
* [http://ieg.ifs.tuwien.ac.at/~gschwand/teaching/infovis_ue_ws09/ UE InfoVis]&lt;br /&gt;
* [[Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe 12|Gruppe 12]]&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_12_-_Aufgabe_2&amp;diff=23643</id>
		<title>Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe 12 - Aufgabe 2</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_12_-_Aufgabe_2&amp;diff=23643"/>
		<updated>2009-12-07T17:17:07Z</updated>

		<summary type="html">&lt;p&gt;Ares: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Assignment ==&lt;br /&gt;
[http://ieg.ifs.tuwien.ac.at/~gschwand/teaching/infovis_ue_ws09/infovis_ue_aufgabe2.html Description of the second exercise (German)]&lt;br /&gt;
=== The original table ===&lt;br /&gt;
[[Image:leh50e.gif]]&lt;br /&gt;
==== Critique of the given table ====&lt;br /&gt;
* It is not apparent for which region this statistic was made&lt;br /&gt;
* The table lacks a concise and meaningful title, which would offer the reader a short description of the table&lt;br /&gt;
* The table reflects various contexts:&lt;br /&gt;
** The absolute number of the employed people in various industries&lt;br /&gt;
** The absolute number of the unemployed people and the unemployment rate&lt;br /&gt;
** The labour force participation rate&lt;br /&gt;
** The absolute number of the people who are not in labour force&lt;br /&gt;
* The subgroups aren&#039;t comparable due to the differing contexts mentioned above&lt;br /&gt;
* Mixed percentage and absolute values (row- and column-wise)&lt;br /&gt;
* References to so-called &#039;&#039;&amp;quot;units&amp;quot;&#039;&#039; for percentage values&lt;br /&gt;
* The lack of percent signs for percentage values, which would make the percentage values easier to grasp&lt;br /&gt;
* The alignment of the columns suggests constant time intervals between the time instants&lt;br /&gt;
* The age ranges are inconsistent and therefore not comparable&lt;br /&gt;
* Inconsistent row headers (e.g. &#039;&#039;&amp;quot;Employed, total&amp;quot;&#039;&#039; and &#039;&#039;&amp;quot;Unemployed&amp;quot;&#039;&#039;)&lt;br /&gt;
* Inconsistent declarations for age ranges (&#039;&#039;&amp;quot;years of age&amp;quot;&#039;&#039; and &#039;&#039;&amp;quot;of those aged&amp;quot;&#039;&#039;)&lt;br /&gt;
* Inconsistent date declaration for the column header &#039;&#039;&amp;quot;CHANGE&amp;quot;&#039;&#039;&lt;br /&gt;
* Inconsistent formatting (block letters, various text alignments, italic type, etc.)&lt;br /&gt;
* Too little vertical whitespace between rows&lt;br /&gt;
=== The revised table ===&lt;br /&gt;
[[Image:Table_Main.png]]&amp;lt;br /&amp;gt;&lt;br /&gt;
[[Image:Table_Main-verbessert.png]]&lt;br /&gt;
==== Description of the undertaken improvements ====&lt;br /&gt;
* A concise and meaningful title has been added, which offers the reader a short description of the table&lt;br /&gt;
* The varying contexts or rather groups have been separated more clearly by applying bold row headers and alternating the fill color&lt;br /&gt;
* The declarations of the rows have been put in a designated column to separate the groups more clearly and to enhance readability&lt;br /&gt;
* Misleading references to &#039;&#039;&amp;quot;units&amp;quot;&#039;&#039; have been removed&lt;br /&gt;
* Percentage signs have been added to denote percentage values&lt;br /&gt;
* Special considerations have been made regarding the consistency of row headers, declarations, date formats and formatting to enable fast processing of information&lt;br /&gt;
* The years in the header have been aggregated to avoid distracting redundancy&lt;br /&gt;
* A different font and more vertical whitespace have been used to increase readability&lt;br /&gt;
* The number of lines has been reduced to avoid distraction from the content&lt;br /&gt;
* A comma has been placed to the left of every three whole-number digits&lt;br /&gt;
== Links ==&lt;br /&gt;
* [[Teaching:TUW_-_UE_InfoVis_WS_2009/10|InfoVis:Wiki UE Homepage]]&lt;br /&gt;
* [http://ieg.ifs.tuwien.ac.at/~gschwand/teaching/infovis_ue_ws09/ UE InfoVis]&lt;br /&gt;
* [[Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe 12|Gruppe 12]]&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=File:Table_Main-verbessert.png&amp;diff=23642</id>
		<title>File:Table Main-verbessert.png</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=File:Table_Main-verbessert.png&amp;diff=23642"/>
		<updated>2009-12-07T17:15:44Z</updated>

		<summary type="html">&lt;p&gt;Ares: New page: == Beschreibung ==  == Copyright status ==  == Source ==&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Beschreibung ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status ==&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_12_-_Aufgabe_2&amp;diff=23406</id>
		<title>Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe 12 - Aufgabe 2</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_12_-_Aufgabe_2&amp;diff=23406"/>
		<updated>2009-11-20T19:01:00Z</updated>

		<summary type="html">&lt;p&gt;Ares: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Assignment ==&lt;br /&gt;
[http://ieg.ifs.tuwien.ac.at/~gschwand/teaching/infovis_ue_ws09/infovis_ue_aufgabe2.html Description of Assignment 2]&lt;br /&gt;
=== Table to Be Assessed ===&lt;br /&gt;
[[Image:leh50e.gif]]&lt;br /&gt;
&lt;br /&gt;
[[Image:Table_Main.png]]&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[Teaching:TUW_-_UE_InfoVis_WS_2009/10|InfoVis:Wiki UE Homepage]]&lt;br /&gt;
&lt;br /&gt;
* [http://ieg.ifs.tuwien.ac.at/~gschwand/teaching/infovis_ue_ws09/ UE InfoVis]&lt;br /&gt;
&lt;br /&gt;
*[[Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe 12|Gruppe 12]]&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_12_-_Aufgabe_2&amp;diff=23405</id>
		<title>Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe 12 - Aufgabe 2</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_12_-_Aufgabe_2&amp;diff=23405"/>
		<updated>2009-11-20T18:50:47Z</updated>

		<summary type="html">&lt;p&gt;Ares: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Assignment ==&lt;br /&gt;
[http://ieg.ifs.tuwien.ac.at/~gschwand/teaching/infovis_ue_ws09/infovis_ue_aufgabe2.html Description of Assignment 2]&lt;br /&gt;
=== Table to Be Assessed ===&lt;br /&gt;
[[Image:leh50e.gif]]&lt;br /&gt;
&lt;br /&gt;
[[Image:Table_Main.png]]&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Table_Employment.png]]&amp;lt;br /&amp;gt;&lt;br /&gt;
[[Image:Table_Unemployed.png]]&amp;lt;br /&amp;gt;&lt;br /&gt;
[[Image:Table_Unemployment_rate.png]]&amp;lt;br /&amp;gt;&lt;br /&gt;
[[Image:Table_LFP.png]]&amp;lt;br /&amp;gt;&lt;br /&gt;
[[Image:Table_NILF.png]]&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[Teaching:TUW_-_UE_InfoVis_WS_2009/10|InfoVis:Wiki UE Homepage]]&lt;br /&gt;
&lt;br /&gt;
* [http://ieg.ifs.tuwien.ac.at/~gschwand/teaching/infovis_ue_ws09/ UE InfoVis]&lt;br /&gt;
&lt;br /&gt;
*[[Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe 12|Gruppe 12]]&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_12_-_Aufgabe_2&amp;diff=23404</id>
		<title>Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe 12 - Aufgabe 2</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_12_-_Aufgabe_2&amp;diff=23404"/>
		<updated>2009-11-20T18:48:57Z</updated>

		<summary type="html">&lt;p&gt;Ares: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Aufgabenstellung ==&lt;br /&gt;
[http://ieg.ifs.tuwien.ac.at/~gschwand/teaching/infovis_ue_ws09/infovis_ue_aufgabe2.html Beschreibung der Aufgabe 2]&lt;br /&gt;
=== Zu beurteilende Tabelle ===&lt;br /&gt;
[[Image:leh50e.gif]]&lt;br /&gt;
&lt;br /&gt;
[[Image:Table_Main.png]]&lt;br /&gt;
&lt;br /&gt;
[[Image:Table_Employment.png]]&lt;br /&gt;
[[Image:Table_Unemployed.png]]&lt;br /&gt;
[[Image:Table_Unemployment_rate.png]]&lt;br /&gt;
[[Image:Table_LFP.png]]&lt;br /&gt;
[[Image:Table_NILF.png]]&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[Teaching:TUW_-_UE_InfoVis_WS_2009/10|InfoVis:Wiki UE Homepage]]&lt;br /&gt;
&lt;br /&gt;
* [http://ieg.ifs.tuwien.ac.at/~gschwand/teaching/infovis_ue_ws09/ UE InfoVis]&lt;br /&gt;
&lt;br /&gt;
*[[Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe 12|Gruppe 12]]&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=File:Table_NILF.png&amp;diff=23403</id>
		<title>File:Table NILF.png</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=File:Table_NILF.png&amp;diff=23403"/>
		<updated>2009-11-20T18:48:48Z</updated>

		<summary type="html">&lt;p&gt;Ares: New page: == Beschreibung ==  == Copyright status ==  == Source ==&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Beschreibung ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status ==&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=File:Table_LFP.png&amp;diff=23402</id>
		<title>File:Table LFP.png</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=File:Table_LFP.png&amp;diff=23402"/>
		<updated>2009-11-20T18:48:17Z</updated>

		<summary type="html">&lt;p&gt;Ares: New page: == Beschreibung ==  == Copyright status ==  == Source ==&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Beschreibung ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status ==&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=File:Table_Unemployment_rate.png&amp;diff=23401</id>
		<title>File:Table Unemployment rate.png</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=File:Table_Unemployment_rate.png&amp;diff=23401"/>
		<updated>2009-11-20T18:47:39Z</updated>

		<summary type="html">&lt;p&gt;Ares: New page: == Beschreibung ==  == Copyright status ==  == Source ==&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Beschreibung ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status ==&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=File:Table_Unemployed.png&amp;diff=23400</id>
		<title>File:Table Unemployed.png</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=File:Table_Unemployed.png&amp;diff=23400"/>
		<updated>2009-11-20T18:47:12Z</updated>

		<summary type="html">&lt;p&gt;Ares: New page: == Beschreibung ==  == Copyright status ==  == Source ==&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Beschreibung ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status ==&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=File:Table_Employment.png&amp;diff=23399</id>
		<title>File:Table Employment.png</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=File:Table_Employment.png&amp;diff=23399"/>
		<updated>2009-11-20T18:46:24Z</updated>

		<summary type="html">&lt;p&gt;Ares: Employment&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Beschreibung ==&lt;br /&gt;
Employment&lt;br /&gt;
== Copyright status ==&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_12_-_Aufgabe_2&amp;diff=23398</id>
		<title>Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe 12 - Aufgabe 2</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_12_-_Aufgabe_2&amp;diff=23398"/>
		<updated>2009-11-20T18:45:15Z</updated>

		<summary type="html">&lt;p&gt;Ares: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Aufgabenstellung ==&lt;br /&gt;
[http://ieg.ifs.tuwien.ac.at/~gschwand/teaching/infovis_ue_ws09/infovis_ue_aufgabe2.html Beschreibung der Aufgabe 2]&lt;br /&gt;
=== Zu beurteilende Tabelle ===&lt;br /&gt;
[[Image:leh50e.gif]]&lt;br /&gt;
&lt;br /&gt;
[[Image:Table_Main.png]]&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[Teaching:TUW_-_UE_InfoVis_WS_2009/10|InfoVis:Wiki UE Homepage]]&lt;br /&gt;
&lt;br /&gt;
* [http://ieg.ifs.tuwien.ac.at/~gschwand/teaching/infovis_ue_ws09/ UE InfoVis]&lt;br /&gt;
&lt;br /&gt;
*[[Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe 12|Gruppe 12]]&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=File:Table_Main.png&amp;diff=23397</id>
		<title>File:Table Main.png</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=File:Table_Main.png&amp;diff=23397"/>
		<updated>2009-11-20T18:44:51Z</updated>

		<summary type="html">&lt;p&gt;Ares: Employment and Unemployment Statistics, Feb 1995 - 1996&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Beschreibung ==&lt;br /&gt;
Employment and Unemployment Statistics, Feb 1995 - 1996&lt;br /&gt;
== Copyright status ==&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=23152</id>
		<title>Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=23152"/>
		<updated>2009-11-10T19:16:53Z</updated>

		<summary type="html">&lt;p&gt;Ares: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Preattentive processing =&lt;br /&gt;
{{Quotation|Visualization is so effective and useful because it utilizes one of the channels to our brain that have the highest bandwidths: our eyes. But even this channel can be used more or less efficiently. One special property of our visual system is preattentive processing.|[Kosara et al., 2002]}}&lt;br /&gt;
&lt;br /&gt;
{{Quotation|One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort.|[Healey et al., 1996]}}&lt;br /&gt;
&lt;br /&gt;
== Preattentive features==&lt;br /&gt;
&lt;br /&gt;
{|  border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;3&amp;quot;&lt;br /&gt;
|width=&amp;quot;400&amp;quot;| &#039;&#039;&#039;Feature&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Picture&#039;&#039;&#039;&lt;br /&gt;
|width=&amp;quot;400&amp;quot; | &#039;&#039;&#039;Feature&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Picture&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;orientation&#039;&#039;&#039;&amp;lt;br&amp;gt;A different orientation of a certain object can be used to distinguish it from the other objects preattentively.&lt;br /&gt;
|[[Image:Tg_orient.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;length, width, size&#039;&#039;&#039;&amp;lt;br&amp;gt;Differences in size can be used for the preattentive distinction of various objects.&lt;br /&gt;
|[[Image:Tg_len.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;closure&#039;&#039;&#039;&amp;lt;br&amp;gt;A closed object in a pool of unclosed objects can be detected preattentively.&lt;br /&gt;
|[[Image:Tg_closure.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;curvature&#039;&#039;&#039;&amp;lt;br&amp;gt;The curvature of an object can be used to detect it preattentively.&lt;br /&gt;
|[[Image:Tg_curve.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;density, contrast&#039;&#039;&#039;&amp;lt;br&amp;gt;The difference of the density of certain objects to the density of the surrounding objects can be detected preattentively.&lt;br /&gt;
|[[Image:Tg_den.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;number, estimation&#039;&#039;&#039;&amp;lt;br&amp;gt;A group of objects with a certain feature can be detected preattentively, depending on the number of objects.&lt;br /&gt;
|[[Image:Tg_num.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;colour (hue)&#039;&#039;&#039;&amp;lt;br&amp;gt;The hue of the objects is used to divide the elements into two groups (e.g. a red group and a blue group), even though the form varies randomly from object to object. Tests showed that subjects can easily identify the hue boundary as either vertical or horizontal.&lt;br /&gt;
|[[Image:Tg_hue.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;intensity, binocular lustre&#039;&#039;&#039;&amp;lt;br&amp;gt;The intensity of an attribute (in this case brightness) can be used for the preattentive detection of an object.&lt;br /&gt;
|[[Image:Tg_lum.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;intersection&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_isect.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;terminators&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_term.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;3-D depth cues, stereoscopic depth&#039;&#039;&#039;&amp;lt;br&amp;gt;Attributes used to distinguish between objects in 3D space. In this example the cue is the distance of the shadow: the farther the shadow is from the object, the greater the distance between the object and the plane onto which it casts its shadow.&lt;br /&gt;
|[[Image:Tg_3d_depth.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;flicker&#039;&#039;&#039;&amp;lt;br&amp;gt;Describes the abrupt change between two different states of the same attribute, in this example visibility.&lt;br /&gt;
|[[Image:Tg_flick.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;direction of motion&#039;&#039;&#039;&amp;lt;br&amp;gt;Differences in the direction of motion can be detected preattentively, especially if the motion is directed against the general flow of motion.&lt;br /&gt;
|[[Image:Tg_dir.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;velocity of motion&#039;&#039;&#039;&amp;lt;br&amp;gt;Another motion-related cue for preattentive processing is a difference in motion speed between an object and its environment.&lt;br /&gt;
|[[Image:Tg_vel.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;lighting direction&#039;&#039;&#039;&amp;lt;br&amp;gt;The lighting is normally constant for all objects in a scene, so variations in the lighting of a single object can be used as a preattentive cue.&lt;br /&gt;
|[[Image:Tg_3d_light.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;3D orientation&#039;&#039;&#039;&amp;lt;br&amp;gt;The orientation in a 3D space can also be used as a cue for preattentive processing.&lt;br /&gt;
|[[Image:Tg_orient_3d.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;artistic properties&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_npr.gif|100px]]&lt;br /&gt;
|&amp;amp;nbsp;&lt;br /&gt;
|&amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
Compiled list from [Healey, 2005], [Chipman, 1996]&lt;br /&gt;
&lt;br /&gt;
{{Quotation|It is important to note that some of these features are asymmetric. For example, a sloped line in a sea of vertical lines can be detected preattentively. However, a vertical line in a sea of sloped lines cannot be detected preattentively. Another important consideration is the effect of different types of background distractors on the target feature. These factors must often be addressed when trying to design display techniques that rely on preattentive processing.|[Healey, 2005]}}&lt;br /&gt;
&lt;br /&gt;
== Examples for preattentive processing ==&lt;br /&gt;
&lt;br /&gt;
[[Image:Bild_302.png]]    &lt;br /&gt;
&lt;br /&gt;
Detecting the red object preattentively. [Healey et al., 1996] A single distinguishing visual variable is very easy to find.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Bild_303.png]]&lt;br /&gt;
&lt;br /&gt;
Detecting the circle preattentively. [Chipman, 1996] This is more difficult but still preattentive.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Terget_detection.png]][[Image:Terget_detection1.png]]&lt;br /&gt;
&lt;br /&gt;
Examples of two target detection tasks: (a) target can be detected preattentively because it possesses the feature “filled”; (b) target cannot be detected preattentively because it has no visual feature that is unique from its distractors. [Healey et al., 1996]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Hue_shape_P.gif]][[Image:Shape_hue_P.gif]]&lt;br /&gt;
&lt;br /&gt;
Region segregation by form and hue: (a) hue boundary is identified preattentively, even though form varies randomly in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form. [Healey et al., 1996]&lt;br /&gt;
&lt;br /&gt;
==Conclusion==&lt;br /&gt;
Any visual processing of an item prior to the act of selection can be called preattentive. [Wolfe, Treisman, 2003]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [Healey, 2005]&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
*[Kosara et al., 2002] Robert Kosara, Silvia Miksch, Helwig Hauser. Focus+Context Taken Literally. &#039;&#039;IEEE Computer Graphics &amp;amp; Applications (CG&amp;amp;A), Special Issue on Information Visualization&#039;&#039;, 22(1), pages 22-29, January/February 2002. http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
*[Healey et al., 1996] C. G. Healey, K. S. Booth, J. T. Enns. High-Speed Visual Estimation Using Preattentive Processing. &#039;&#039;ACM Transactions on Computer-Human Interaction&#039;&#039;, 3(2), pages 107-135, 1996. http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
*[Wolfe, Treisman, 2003] Jeremy M. Wolfe, Anne Treisman. What shall we do with the preattentive processing stage: Use it or lose it? Poster (presented by Todd S. Horowitz) at the Third Annual Meeting of the Vision Sciences Society, Sarasota, May 2003. http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
*[Healey, 2005] Christopher G. Healey. Perception in Visualization. Department of Computer Science, North Carolina State University, May 2005. http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
*[Chipman, 1996] Gene Chipman. Review of High Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns). 1996, retrieved October 24, 2005. http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=File:Bild_302.png&amp;diff=23151</id>
		<title>File:Bild 302.png</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=File:Bild_302.png&amp;diff=23151"/>
		<updated>2009-11-10T19:16:01Z</updated>

		<summary type="html">&lt;p&gt;Ares: New page: == Beschreibung ==  == Copyright status ==  == Source == http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Beschreibung ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status ==&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=File:Bild_303.png&amp;diff=23150</id>
		<title>File:Bild 303.png</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=File:Bild_303.png&amp;diff=23150"/>
		<updated>2009-11-10T19:14:41Z</updated>

		<summary type="html">&lt;p&gt;Ares: New page: == Beschreibung ==  == Copyright status ==  == Source == http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Beschreibung ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status ==&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=File:Preatentive1.png&amp;diff=23149</id>
		<title>File:Preatentive1.png</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=File:Preatentive1.png&amp;diff=23149"/>
		<updated>2009-11-10T19:14:30Z</updated>

		<summary type="html">&lt;p&gt;Ares: New page: == Beschreibung ==  == Copyright status ==  == Source == http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Beschreibung ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status ==&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=23148</id>
		<title>Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=23148"/>
		<updated>2009-11-10T19:05:36Z</updated>

		<summary type="html">&lt;p&gt;Ares: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Preattentive processing =&lt;br /&gt;
{{Quotation|Visualization is so effective and useful because it utilizes one of the channels to our brain that have the highest bandwidths: our eyes. But even this channel can be used more or less efficiently. One special property of our visual system is preattentive processing.|[Kosara et al., 2002]}}&lt;br /&gt;
&lt;br /&gt;
{{Quotation|One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort.|[Healey et al., 1996]}}&lt;br /&gt;
&lt;br /&gt;
== Preattentive features==&lt;br /&gt;
&lt;br /&gt;
{|  border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;3&amp;quot;&lt;br /&gt;
|width=&amp;quot;400&amp;quot;| &#039;&#039;&#039;Feature&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Picture&#039;&#039;&#039;&lt;br /&gt;
|width=&amp;quot;400&amp;quot; | &#039;&#039;&#039;Feature&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Picture&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;orientation&#039;&#039;&#039;&amp;lt;br&amp;gt;A different orientation of a certain object can be used to distinguish it from the other objects preattentively.&lt;br /&gt;
|[[Image:Tg_orient.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;length, width, size&#039;&#039;&#039;&amp;lt;br&amp;gt;Differences in size can be used for the preattentive distinction of various objects.&lt;br /&gt;
|[[Image:Tg_len.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;closure&#039;&#039;&#039;&amp;lt;br&amp;gt;A closed object in a pool of unclosed objects can be detected preattentively.&lt;br /&gt;
|[[Image:Tg_closure.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;curvature&#039;&#039;&#039;&amp;lt;br&amp;gt;The curvature of an object can be used to detect it preattentively.&lt;br /&gt;
|[[Image:Tg_curve.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;density, contrast&#039;&#039;&#039;&amp;lt;br&amp;gt;The difference of the density of certain objects to the density of the surrounding objects can be detected preattentively.&lt;br /&gt;
|[[Image:Tg_den.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;number, estimation&#039;&#039;&#039;&amp;lt;br&amp;gt;A group of objects with a certain feature can be detected preattentively, depending on the number of objects.&lt;br /&gt;
|[[Image:Tg_num.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;colour (hue)&#039;&#039;&#039;&amp;lt;br&amp;gt;The hue of the objects is used to divide the elements into two groups (e.g. a red group and a blue group), even though the form varies randomly from object to object. Tests showed that subjects can easily identify the hue boundary as either vertical or horizontal.&lt;br /&gt;
|[[Image:Tg_hue.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;intensity, binocular lustre&#039;&#039;&#039;&amp;lt;br&amp;gt;The intensity of an attribute (in this case brightness) can be used for the preattentive detection of an object.&lt;br /&gt;
|[[Image:Tg_lum.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;intersection&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_isect.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;terminators&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_term.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;3-D depth cues, stereoscopic depth&#039;&#039;&#039;&amp;lt;br&amp;gt;Attributes used to distinguish between objects in 3D space. In this example the cue is the distance of the shadow: the farther the shadow is from the object, the greater the distance between the object and the plane onto which it casts its shadow.&lt;br /&gt;
|[[Image:Tg_3d_depth.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;flicker&#039;&#039;&#039;&amp;lt;br&amp;gt;Describes the abrupt change between two different states of the same attribute, in this example visibility.&lt;br /&gt;
|[[Image:Tg_flick.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;direction of motion&#039;&#039;&#039;&amp;lt;br&amp;gt;Differences in the direction of motion can be detected preattentively, especially if the motion is directed against the general flow of motion.&lt;br /&gt;
|[[Image:Tg_dir.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;velocity of motion&#039;&#039;&#039;&amp;lt;br&amp;gt;Another motion-related cue for preattentive processing is a difference in motion speed between an object and its environment.&lt;br /&gt;
|[[Image:Tg_vel.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;lighting direction&#039;&#039;&#039;&amp;lt;br&amp;gt;The lighting is normally constant for all objects in a scene, so variations in the lighting of a single object can be used as a preattentive cue.&lt;br /&gt;
|[[Image:Tg_3d_light.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;3D orientation&#039;&#039;&#039;&amp;lt;br&amp;gt;The orientation in a 3D space can also be used as a cue for preattentive processing.&lt;br /&gt;
|[[Image:Tg_orient_3d.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;artistic properties&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_npr.gif|100px]]&lt;br /&gt;
|&amp;amp;nbsp;&lt;br /&gt;
|&amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
Compiled list from [Healey, 2005], [Chipman, 1996]&lt;br /&gt;
&lt;br /&gt;
{{Quotation|It is important to note that some of these features are asymmetric. For example, a sloped line in a sea of vertical lines can be detected preattentively. However, a vertical line in a sea of sloped lines cannot be detected preattentively. Another important consideration is the effect of different types of background distractors on the target feature. These factors must often be addressed when trying to design display techniques that rely on preattentive processing.|[Healey, 2005]}}&lt;br /&gt;
&lt;br /&gt;
== Examples for preattentive processing ==&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
Detecting the red object preattentively. [Healey et al., 1996] A single distinguishing visual variable is very easy to find.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
Detecting the circle preattentively. [Chipman, 1996] This is more difficult but still preattentive.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Terget_detection.png]][[Image:Terget_detection1.png]]&lt;br /&gt;
&lt;br /&gt;
Examples of two target detection tasks: (a) target can be detected preattentively because it possesses the feature “filled”; (b) target cannot be detected preattentively because it has no visual feature that is unique from its distractors. [Healey et al., 1996]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Hue_shape_P.gif]][[Image:Shape_hue_P.gif]]&lt;br /&gt;
&lt;br /&gt;
Region segregation by form and hue: (a) hue boundary is identified preattentively, even though form varies randomly in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form. [Healey et al., 1996]&lt;br /&gt;
&lt;br /&gt;
==Conclusion==&lt;br /&gt;
Any visual processing of an item prior to the act of selection can be called preattentive. [Wolfe, Treisman, 2003]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [Healey, 2005]&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
*[Kosara et al., 2002] Robert Kosara, Silvia Miksch, Helwig Hauser. Focus+Context Taken Literally. &#039;&#039;IEEE Computer Graphics &amp;amp; Applications (CG&amp;amp;A), Special Issue on Information Visualization&#039;&#039;, 22(1), pages 22-29, January/February 2002. http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
*[Healey et al., 1996] C. G. Healey, K. S. Booth, J. T. Enns. High-Speed Visual Estimation Using Preattentive Processing. &#039;&#039;ACM Transactions on Computer-Human Interaction&#039;&#039;, 3(2), pages 107-135, 1996. http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
*[Wolfe, Treisman, 2003] Jeremy M. Wolfe, Anne Treisman. What shall we do with the preattentive processing stage: Use it or lose it? Poster (presented by Todd S. Horowitz) at the Third Annual Meeting of the Vision Sciences Society, Sarasota, May 2003. http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
*[Healey, 2005] Christopher G. Healey. Perception in Visualization. Department of Computer Science, North Carolina State University, May 2005. http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
*[Chipman, 1996] Gene Chipman. Review of High Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns). 1996, retrieved October 24, 2005. http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=File:Terget_detection1.png&amp;diff=23147</id>
		<title>File:Terget detection1.png</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=File:Terget_detection1.png&amp;diff=23147"/>
		<updated>2009-11-10T19:04:17Z</updated>

		<summary type="html">&lt;p&gt;Ares: New page: == Beschreibung ==  == Copyright status ==  == Source == http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Beschreibung ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status ==&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=File:Terget_detection.png&amp;diff=23146</id>
		<title>File:Terget detection.png</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=File:Terget_detection.png&amp;diff=23146"/>
		<updated>2009-11-10T19:03:57Z</updated>

		<summary type="html">&lt;p&gt;Ares: New page: == Beschreibung ==  == Copyright status ==  == Source == http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Beschreibung ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status ==&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=23145</id>
		<title>Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=23145"/>
		<updated>2009-11-10T19:02:29Z</updated>

		<summary type="html">&lt;p&gt;Ares: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Preattentive processing =&lt;br /&gt;
{{Quotation|Visualization is so effective and useful because it utilizes one of the channels to our brain that have the highest bandwidths: our eyes. But even this channel can be used more or less efficiently. One special property of our visual system is preattentive processing.|[Kosara et al., 2002]}}&lt;br /&gt;
&lt;br /&gt;
{{Quotation|One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort.|[Healey et al., 1996]}}&lt;br /&gt;
&lt;br /&gt;
== Preattentive features==&lt;br /&gt;
&lt;br /&gt;
{|  border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;3&amp;quot;&lt;br /&gt;
|width=&amp;quot;400&amp;quot;| &#039;&#039;&#039;Feature&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Picture&#039;&#039;&#039;&lt;br /&gt;
|width=&amp;quot;400&amp;quot; | &#039;&#039;&#039;Feature&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Picture&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;orientation&#039;&#039;&#039;&amp;lt;br&amp;gt;A different orientation of a certain object can be used to distinguish it from the other objects preattentively.&lt;br /&gt;
|[[Image:Tg_orient.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;length, width, size&#039;&#039;&#039;&amp;lt;br&amp;gt;Differences in size can be used for the preattentive distinction of various objects.&lt;br /&gt;
|[[Image:Tg_len.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;closure&#039;&#039;&#039;&amp;lt;br&amp;gt;A closed object in a pool of unclosed objects can be detected preattentively.&lt;br /&gt;
|[[Image:Tg_closure.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;curvature&#039;&#039;&#039;&amp;lt;br&amp;gt;The curvature of an object can be used to detect it preattentively.&lt;br /&gt;
|[[Image:Tg_curve.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;density, contrast&#039;&#039;&#039;&amp;lt;br&amp;gt;A difference between the density of certain objects and the density of the surrounding objects can be detected preattentively.&lt;br /&gt;
|[[Image:Tg_den.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;number, estimation&#039;&#039;&#039;&amp;lt;br&amp;gt;A group of objects with a certain feature can be detected preattentively, depending on the number of objects.&lt;br /&gt;
|[[Image:Tg_num.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;colour (hue)&#039;&#039;&#039;&amp;lt;br&amp;gt;The hue of the objects divides the elements into two groups (e.g. a red group and a blue group) even though the form varies randomly from object to object. Tests showed that it is easy for subjects to identify the hue boundary as either vertical or horizontal.&lt;br /&gt;
|[[Image:Tg_hue.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;intensity, binocular lustre&#039;&#039;&#039;&amp;lt;br&amp;gt;The intensity of an attribute (in this case brightness) can be used for the preattentive detection of an object.&lt;br /&gt;
|[[Image:Tg_lum.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;intersection&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_isect.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;terminators&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_term.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;3-D depth cues, stereoscopic depth&#039;&#039;&#039;&amp;lt;br&amp;gt;These cues describe attributes used to distinguish between objects in 3D space. In this example it is the distance of the shadow: the further the shadow is from the object, the greater the distance between the object and the plane it casts its shadow on.&lt;br /&gt;
|[[Image:Tg_3d_depth.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;flicker&#039;&#039;&#039;&amp;lt;br&amp;gt;Describes an abrupt change between two different states of the same attribute, in this example visibility.&lt;br /&gt;
|[[Image:Tg_flick.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;direction of motion&#039;&#039;&#039;&amp;lt;br&amp;gt;Differences in the direction of motion can be detected preattentively, especially if the motion is directed against the flow of the general motion.&lt;br /&gt;
|[[Image:Tg_dir.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;velocity of motion&#039;&#039;&#039;&amp;lt;br&amp;gt;Another motion-related cue for preattentive processing is a difference in motion speed between an object and its environment.&lt;br /&gt;
|[[Image:Tg_vel.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;lighting direction&#039;&#039;&#039;&amp;lt;br&amp;gt;The lighting is normally constant for all objects in a scene, so variations in the lighting of a single object can be used as a preattentive cue.&lt;br /&gt;
|[[Image:Tg_3d_light.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;3D orientation&#039;&#039;&#039;&amp;lt;br&amp;gt;The orientation in 3D space can also be used as a cue for preattentive processing.&lt;br /&gt;
|[[Image:Tg_orient_3d.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;artistic properties&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_npr.gif|100px]]&lt;br /&gt;
|&amp;amp;nbsp;&lt;br /&gt;
|&amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
Compiled list from [Healey, 2005], [Chipman, 1996]&lt;br /&gt;
&lt;br /&gt;
{{Quotation|It is important to note that some of these features are asymmetric. For example, a sloped line in a sea of vertical lines can be detected preattentively. However, a vertical line in a sea of sloped lines cannot be detected preattentively. Another important consideration is the effect of different types of background distractors on the target feature. These factors must often be addressed when trying to design display techniques that rely on preattentive processing.|[Healey, 2005]}}&lt;br /&gt;
&lt;br /&gt;
== Examples for preattentive processing ==&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
Detecting the red object preattentively [Healey et al., 1996]. A single distinguishing visual variable is very easy to find.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
Detecting the circle preattentively [Chipman, 1996]. This is more difficult but still preattentive.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_4.JPG]]&lt;br /&gt;
&lt;br /&gt;
Examples of two target detection tasks: (a) target can be detected preattentively because it possesses the feature “filled”; (b) target cannot be detected preattentively because it has no visual feature that is unique from its distractors. [Healey et al., 1996]&lt;br /&gt;
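The random target-detection displays described above can be sketched in a few lines of Python. This is a minimal illustrative sketch (the function name and element layout are ours, not from [Healey et al., 1996]): it scatters distractors at random positions and adds a single target that differs only in hue, so attention cannot be prefocused on any location.

```python
import random

def make_display(n=50, distractor="blue", target="red", seed=0):
    """Build a random target-detection display: n - 1 distractor
    elements plus one target that differs only in hue. Random
    placement prevents prefocusing attention on any location."""
    rng = random.Random(seed)
    elements = [{"x": rng.random(), "y": rng.random(), "color": distractor}
                for _ in range(n - 1)]
    target_el = {"x": rng.random(), "y": rng.random(), "color": target}
    elements.insert(rng.randrange(n), target_el)  # random slot in the list
    return elements

display = make_display()
targets = [e for e in display if e["color"] == "red"]
print(len(display), len(targets))  # prints: 50 1
```

Rendering such a display and timing the observer's response is how the 200 ms preattentive threshold is measured experimentally.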
&lt;br /&gt;
&lt;br /&gt;
[[Image:Hue_shape_P.gif]][[Image:Shape_hue_P.gif]]&lt;br /&gt;
&lt;br /&gt;
Region segregation by form and hue: (a) hue boundary is identified preattentively, even though form varies randomly in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form. [Healey et al., 1996]&lt;br /&gt;
&lt;br /&gt;
==Conclusion==&lt;br /&gt;
Any visual processing of an item prior to the act of selection can be called preattentive. [Wolfe, Treisman, 2003]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [Healey, 2005]&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
*[Kosara et al., 2002] Robert Kosara, Silvia Miksch, Helwig Hauser. Focus+Context Taken Literally. &#039;&#039;IEEE Computer Graphics &amp;amp; Applications (CG&amp;amp;A), Special Issue on Information Visualization&#039;&#039;, 22(1), pages 22-29, January/February 2002. http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
*[Healey et al., 1996] Healey, C. G., Booth, K. S., and Enns, J. T. High-Speed Visual Estimation Using Preattentive Processing. &#039;&#039;ACM Transactions on Computer-Human Interaction&#039;&#039;, 3(2), pages 107-135, 1996. http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
*[Wolfe, Treisman, 2003] Jeremy M. Wolfe, Anne Treisman, Todd S. Horowitz. What shall we do with the preattentive processing stage: Use it or lose it? Poster presented at the Third Annual Meeting of the Vision Sciences Society, Sarasota, May 2003. http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
*[Healey, 2005] Christopher G. Healey. Perception in Visualization. Department of Computer Science, North Carolina State University, May 2005. http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
*[Chipman, 1996] Gene Chipman. Review of High-Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns), 1996. Retrieved October 24, 2005. http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=23144</id>
		<title>Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=23144"/>
		<updated>2009-11-10T19:00:47Z</updated>

		<summary type="html">&lt;p&gt;Ares: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Preattentive processing =&lt;br /&gt;
{{Quotation|Visualization is so effective and useful because it utilizes one of the channels to our brain that have the highest bandwidths: our eyes. But even this channel can be used more or less efficiently. One special property of our visual system is preattentive processing. |[Kosara et al., 2002]}}&lt;br /&gt;
&lt;br /&gt;
{{Quotation|One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. |[Healey et al., 1996]}}&lt;br /&gt;
&lt;br /&gt;
== Preattentive features==&lt;br /&gt;
&lt;br /&gt;
{|  border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;3&amp;quot;&lt;br /&gt;
|width=&amp;quot;400&amp;quot;| &#039;&#039;&#039;Feature&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Picture&#039;&#039;&#039;&lt;br /&gt;
|width=&amp;quot;400&amp;quot; | &#039;&#039;&#039;Feature&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Picture&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;orientation&#039;&#039;&#039;&amp;lt;br&amp;gt;A different orientation of a certain object can be used to distinguish it from the other objects preattentively.&lt;br /&gt;
|[[Image:Tg_orient.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;length, width, size&#039;&#039;&#039;&amp;lt;br&amp;gt;Differences in size can be used for the preattentive distinction of various objects.&lt;br /&gt;
|[[Image:Tg_len.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;closure&#039;&#039;&#039;&amp;lt;br&amp;gt;A closed object in a pool of unclosed objects can be detected preattentively.&lt;br /&gt;
|[[Image:Tg_closure.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;curvature&#039;&#039;&#039;&amp;lt;br&amp;gt;The curvature of an object can be used to detect it preattentively.&lt;br /&gt;
|[[Image:Tg_curve.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;density, contrast&#039;&#039;&#039;&amp;lt;br&amp;gt;A difference between the density of certain objects and the density of the surrounding objects can be detected preattentively.&lt;br /&gt;
|[[Image:Tg_den.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;number, estimation&#039;&#039;&#039;&amp;lt;br&amp;gt;A group of objects with a certain feature can be detected preattentively, depending on the number of objects.&lt;br /&gt;
|[[Image:Tg_num.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;colour (hue)&#039;&#039;&#039;&amp;lt;br&amp;gt;The hue of the objects divides the elements into two groups (e.g. a red group and a blue group) even though the form varies randomly from object to object. Tests showed that it is easy for subjects to identify the hue boundary as either vertical or horizontal.&lt;br /&gt;
|[[Image:Tg_hue.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;intensity, binocular lustre&#039;&#039;&#039;&amp;lt;br&amp;gt;The intensity of an attribute (in this case brightness) can be used for the preattentive detection of an object.&lt;br /&gt;
|[[Image:Tg_lum.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;intersection&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_isect.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;terminators&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_term.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;3-D depth cues, stereoscopic depth&#039;&#039;&#039;&amp;lt;br&amp;gt;These cues describe attributes used to distinguish between objects in 3D space. In this example it is the distance of the shadow: the further the shadow is from the object, the greater the distance between the object and the plane it casts its shadow on.&lt;br /&gt;
|[[Image:Tg_3d_depth.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;flicker&#039;&#039;&#039;&amp;lt;br&amp;gt;Describes an abrupt change between two different states of the same attribute, in this example visibility.&lt;br /&gt;
|[[Image:Tg_flick.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;direction of motion&#039;&#039;&#039;&amp;lt;br&amp;gt;Differences in the direction of motion can be detected preattentively, especially if the motion is directed against the flow of the general motion.&lt;br /&gt;
|[[Image:Tg_dir.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;velocity of motion&#039;&#039;&#039;&amp;lt;br&amp;gt;Another motion-related cue for preattentive processing is a difference in motion speed between an object and its environment.&lt;br /&gt;
|[[Image:Tg_vel.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;lighting direction&#039;&#039;&#039;&amp;lt;br&amp;gt;The lighting is normally constant for all objects in a scene, so variations in the lighting of a single object can be used as a preattentive cue.&lt;br /&gt;
|[[Image:Tg_3d_light.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;3D orientation&#039;&#039;&#039;&amp;lt;br&amp;gt;The orientation in 3D space can also be used as a cue for preattentive processing.&lt;br /&gt;
|[[Image:Tg_orient_3d.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;artistic properties&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_npr.gif|100px]]&lt;br /&gt;
|&amp;amp;nbsp;&lt;br /&gt;
|&amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
Compiled list from [Healey, 2005], [Chipman, 1996]&lt;br /&gt;
&lt;br /&gt;
{{Quotation|It is important to note that some of these features are asymmetric. For example, a sloped line in a sea of vertical lines can be detected preattentively. However, a vertical line in a sea of sloped lines cannot be detected preattentively. Another important consideration is the effect of different types of background distractors on the target feature. These factors must often be addressed when trying to design display techniques that rely on preattentive processing.|[Healey, 2005]}}&lt;br /&gt;
&lt;br /&gt;
== Examples for preattentive processing ==&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
Detecting the red object preattentively [Healey et al., 1996]. A single distinguishing visual variable is very easy to find.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
Detecting the circle preattentively [Chipman, 1996]. This is more difficult but still preattentive.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_4.JPG]]&lt;br /&gt;
&lt;br /&gt;
Examples of two target detection tasks: (a) target can be detected preattentively because it possesses the feature “filled”; (b) target cannot be detected preattentively because it has no visual feature that is unique from its distractors. [Healey et al., 1996]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Hue_shape.gif]][[Image:Shape_hue.gif]]&lt;br /&gt;
&lt;br /&gt;
Region segregation by form and hue: (a) hue boundary is identified preattentively, even though form varies randomly in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form. [Healey et al., 1996]&lt;br /&gt;
&lt;br /&gt;
==Conclusion==&lt;br /&gt;
Any visual processing of an item prior to the act of selection can be called preattentive. [Wolfe, Treisman, 2003]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [Healey, 2005]&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
*[Kosara et al., 2002] Robert Kosara, Silvia Miksch, Helwig Hauser. Focus+Context Taken Literally. &#039;&#039;IEEE Computer Graphics &amp;amp; Applications (CG&amp;amp;A), Special Issue on Information Visualization&#039;&#039;, 22(1), pages 22-29, January/February 2002. http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
*[Healey et al., 1996] Healey, C. G., Booth, K. S., and Enns, J. T. High-Speed Visual Estimation Using Preattentive Processing. &#039;&#039;ACM Transactions on Computer-Human Interaction&#039;&#039;, 3(2), pages 107-135, 1996. http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
*[Wolfe, Treisman, 2003] Jeremy M. Wolfe, Anne Treisman, Todd S. Horowitz. What shall we do with the preattentive processing stage: Use it or lose it? Poster presented at the Third Annual Meeting of the Vision Sciences Society, Sarasota, May 2003. http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
*[Healey, 2005] Christopher G. Healey. Perception in Visualization. Department of Computer Science, North Carolina State University, May 2005. http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
*[Chipman, 1996] Gene Chipman. Review of High-Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns), 1996. Retrieved October 24, 2005. http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=File:Shape_hue_P.gif&amp;diff=23143</id>
		<title>File:Shape hue P.gif</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=File:Shape_hue_P.gif&amp;diff=23143"/>
		<updated>2009-11-10T18:57:48Z</updated>

		<summary type="html">&lt;p&gt;Ares: New page: == Description ==  == Copyright status ==  == Source == http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status ==&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=File:Hue_shape_P.gif&amp;diff=23142</id>
		<title>File:Hue shape P.gif</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=File:Hue_shape_P.gif&amp;diff=23142"/>
		<updated>2009-11-10T18:57:32Z</updated>

		<summary type="html">&lt;p&gt;Ares: New page: == Description ==  == Copyright status ==  == Source == http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status ==&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_12_-_Aufgabe_1&amp;diff=22961</id>
		<title>Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe 12 - Aufgabe 1</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_12_-_Aufgabe_1&amp;diff=22961"/>
		<updated>2009-11-06T17:06:43Z</updated>

		<summary type="html">&lt;p&gt;Ares: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Task ==&lt;br /&gt;
[http://ieg.ifs.tuwien.ac.at/~gschwand/teaching/infovis_ue_ws09/infovis_ue_aufgabe1.html Description of Task 1]&lt;br /&gt;
=== InfoVis terms to revise ===&lt;br /&gt;
*&#039;&#039;&#039;[[Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing|Preattentive Processing]]&#039;&#039;&#039;&lt;br /&gt;
*&#039;&#039;&#039;[[Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Color_space|Color Space]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[Teaching:TUW_-_UE_InfoVis_WS_2009/10|InfoVis:Wiki UE Homepage]]&lt;br /&gt;
&lt;br /&gt;
* [http://ieg.ifs.tuwien.ac.at/~gschwand/teaching/infovis_ue_ws09/ UE InfoVis]&lt;br /&gt;
&lt;br /&gt;
*[[Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe 12|Gruppe 12]]&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_12_-_Aufgabe_1&amp;diff=22960</id>
		<title>Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe 12 - Aufgabe 1</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_12_-_Aufgabe_1&amp;diff=22960"/>
		<updated>2009-11-06T17:06:21Z</updated>

		<summary type="html">&lt;p&gt;Ares: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Task ==&lt;br /&gt;
[http://ieg.ifs.tuwien.ac.at/~gschwand/teaching/infovis_ue_ws09/infovis_ue_aufgabe1.html Description of Task 1]&lt;br /&gt;
=== InfoVis terms to revise ===&lt;br /&gt;
*&#039;&#039;&#039;[[Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing|Preattentive Processing]]&#039;&#039;&#039;&lt;br /&gt;
*&#039;&#039;&#039;[[Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Color_Space|Color Space]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[Teaching:TUW_-_UE_InfoVis_WS_2009/10|InfoVis:Wiki UE Homepage]]&lt;br /&gt;
&lt;br /&gt;
* [http://ieg.ifs.tuwien.ac.at/~gschwand/teaching/infovis_ue_ws09/ UE InfoVis]&lt;br /&gt;
&lt;br /&gt;
*[[Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe 12|Gruppe 12]]&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching_talk:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Color_spaces&amp;diff=22959</id>
		<title>Teaching talk:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Color spaces</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching_talk:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Color_spaces&amp;diff=22959"/>
		<updated>2009-11-06T17:04:56Z</updated>

		<summary type="html">&lt;p&gt;Ares: Teaching talk:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Color spaces moved to Teaching talk:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Color space&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Teaching talk:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Color space]]&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching_talk:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Color_space&amp;diff=22958</id>
		<title>Teaching talk:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Color space</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching_talk:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Color_space&amp;diff=22958"/>
		<updated>2009-11-06T17:04:56Z</updated>

		<summary type="html">&lt;p&gt;Ares: Teaching talk:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Color spaces moved to Teaching talk:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Color space&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Edits by Group 12 ==&lt;br /&gt;
* New reference [Marko Tkalčič, 2003] for broader coverage of color space definitions.&lt;br /&gt;
&lt;br /&gt;
* Removed the coding definition, because it is irrelevant to the topic of color coding.&lt;br /&gt;
* Renamed the introduction, to make the definitions and explanations of the terms more compact.&lt;br /&gt;
* Moved the definitions into the individual chapters, for a more logical structure.&lt;br /&gt;
* Adjusted headings: Models of Color Coding, to stay closer to the term color models.&lt;br /&gt;
* Corrected the reference formatting, because it did not follow the required style.&lt;br /&gt;
* Added a source, because only a single source had been used as the basis for the write-up.&lt;br /&gt;
&lt;br /&gt;
== Edits by Hiro, 05.11.2009 ==&lt;br /&gt;
* Rewrote RGB in more general terms, since the way shadow mask CRTs work is not very general&lt;br /&gt;
&lt;br /&gt;
== Edits by Immanuel, 05.11.2009 ==&lt;br /&gt;
* Rewrote the CMY(K) part&lt;br /&gt;
* Rewrote the YIQ/YUV part&lt;br /&gt;
* Removed the Sahler quotation and its corresponding reference, as they are not relevant to the new title &amp;quot;color spaces&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Edits by Matthias, 5.11.2009 ==&lt;br /&gt;
&lt;br /&gt;
* Rewrote the HSL and HSV articles&lt;br /&gt;
* Uploaded and inserted the HSL and HSV figure&lt;br /&gt;
* Wrote the introduction&lt;br /&gt;
* CIE horseshoe color diagram&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Color_spaces&amp;diff=22957</id>
		<title>Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Color spaces</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Color_spaces&amp;diff=22957"/>
		<updated>2009-11-06T17:04:56Z</updated>

		<summary type="html">&lt;p&gt;Ares: Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Color spaces moved to Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Color space&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Color space]]&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Color_space&amp;diff=22956</id>
		<title>Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Color space</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Color_space&amp;diff=22956"/>
		<updated>2009-11-06T17:04:56Z</updated>

		<summary type="html">&lt;p&gt;Ares: Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Color spaces moved to Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Color space&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Color space=&lt;br /&gt;
[[Image:Intro.JPG|thumb|400px|none|Light is electromagnetic radiation with wavelengths between 380 nm (blue) and 780 nm (red).&lt;br /&gt;
1 nm = one billionth of a meter.|right]] &lt;br /&gt;
[[Image:Colorspace.png|thumb|200px|none|Color spaces and the horseshoe shape of visible color&lt;br /&gt;
|right]]&lt;br /&gt;
{{Quotation|Color is the perceptual result of light in the visible region of the spectrum, having wavelengths in the region of 400 nm to 700 nm, incident upon the retina. |[Poynton, 1999]}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The complexity of all kinds of different color mixtures was substantially simplified in 1931 by the Commission Internationale de l&#039;Éclairage (CIE), which defined a two-dimensional, horseshoe-shaped color space that allows easy definition and description of color mixtures. The edge of the horseshoe includes all the pure spectral colors; the inside region contains the mixtures. &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Human visual perception is too complex to be quantified in more than an approximate manner. One practical approach is to define 2, 3, or more spectral colors and to create mixed colors by adjusting the relative proportions of these spectral colors and of a colorless (i.e. white/black) component. Alternatively, one first defines the mixed color, quantifies its colorless (brightness/darkness) component, and then codes the color information as a deviation in the direction of 2, 3, or more spectral colors. Typical examples are the RGB and YIQ systems, respectively. [Miszalok and Smolej, 2001]&lt;br /&gt;
&amp;lt;div style=&amp;quot;clear:both;&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;lt;/div&amp;gt;&lt;br /&gt;
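The horseshoe diagram above plots chromaticity coordinates, which project a color's CIE XYZ tristimulus values onto a 2D plane by discarding overall brightness. A minimal sketch of that projection (the function name is ours; the formulas are the standard CIE 1931 definitions):

```python
def xy_chromaticity(X, Y, Z):
    """Project XYZ tristimulus values onto the CIE 1931 chromaticity
    diagram: (x, y) locate the color inside the horseshoe, while the
    overall brightness is normalized away."""
    s = X + Y + Z
    return X / s, Y / s

# Equal-energy white lands near the center of the diagram:
x, y = xy_chromaticity(1.0, 1.0, 1.0)
print(round(x, 4), round(y, 4))  # prints: 0.3333 0.3333
```

Every mixture of two colors lies on the straight line between their (x, y) points, which is what makes the diagram convenient for describing color mixtures.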
==Examples for Color spaces==&lt;br /&gt;
&lt;br /&gt;
===RGB===&lt;br /&gt;
&lt;br /&gt;
[[Image:3DVecModelRGB.JPG|thumb|300px|right|3D-vector space of the RGB-color model]]&lt;br /&gt;
&lt;br /&gt;
The RGB color model is an additive color model, forming its gamut from mixtures of the primary additive colors red, green, and blue. The main idea behind the RGB color model is the human perception of color, in particular the trichromatic theory, which states that there are three types of cones, referred to as L, M, and S cones (long, middle, and short wavelength sensitivity), approximately sensitive to the red, green, and blue regions of the visible spectrum.&lt;br /&gt;
&lt;br /&gt;
The main purpose of the RGB color model is the sensing and reproduction of color on electronic devices such as computers and televisions. Typical RGB input devices are color TV and video cameras, image scanners, and digital cameras.&lt;br /&gt;
&amp;lt;div style=&amp;quot;clear:both;&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;lt;/div&amp;gt;&lt;br /&gt;
===CMY(K)===&lt;br /&gt;
&lt;br /&gt;
[[Image:3DVecModeCMY.JPG|thumb|300px|right|3D-vector model of the CMY color space]]&lt;br /&gt;
[[Image:CMY.JPG|thumb|300px|right|Composed image and its separate channels]]&lt;br /&gt;
&lt;br /&gt;
Unlike RGB, CMY does not add light but removes it, much like the color of reflected light is composed in the real world; hence it is also called a subtractive color space. To achieve this subtractive characteristic, the CMY color space uses the three primary colors cyan, magenta, and yellow and subtracts them from white. The higher the values of the primary colors, the darker the represented color. The CMY color space is basically an inverted RGB color space, and therefore values can be converted very easily.&amp;lt;br&amp;gt;&lt;br /&gt;
The CMY color space is primarily used in printing applications, where usually a fourth primary color K (Key, black) is added; the result is called the CMYK color space. The reason for the additional Key is that printing black with CMY alone does not produce a really deep black, is very costly, and needs more time to dry than a single black ink.&amp;lt;br&amp;gt;&lt;br /&gt;
The conversion between RGB and CMYK is not as trivial as the conversion between RGB and CMY.&amp;lt;br&amp;gt;&lt;br /&gt;
Although the CMY space is closer to how we use colors in dying or painting scenarios it is still non-intuitive since the percepted difference between colors isn&#039;t linear as the CMY values might suggest.&lt;br /&gt;
&amp;lt;div style=&amp;quot;clear:both&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;lt;/div&amp;gt;&lt;br /&gt;
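The RGB/CMY inversion described above can be sketched in a few lines. This is an illustrative sketch only: channel values are assumed to be normalized to [0, 1], and the CMYK step uses the simple maximum-black extraction rather than a real printer profile.

```python
# Illustrative sketch of the RGB/CMY inversion described above.
# Assumption: channel values are normalized to the range [0, 1].

def rgb_to_cmy(r, g, b):
    # CMY is an inverted RGB: subtract each channel from 1.
    return 1.0 - r, 1.0 - g, 1.0 - b

def cmy_to_cmyk(c, m, y):
    # Simple "maximum black" extraction: pull the shared gray
    # component of C, M and Y into the Key (black) channel.
    k = min(c, m, y)
    if k == 1.0:
        return 0.0, 0.0, 0.0, 1.0  # pure black
    return (c - k) / (1.0 - k), (m - k) / (1.0 - k), (y - k) / (1.0 - k), k

print(rgb_to_cmy(1.0, 0.0, 0.0))  # pure red becomes (0.0, 1.0, 1.0)
```

In print work the RGB/CMYK conversion additionally involves device profiles, which is why it is less trivial than the plain RGB/CMY inversion.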
===YIQ and YUV===&lt;br /&gt;
&lt;br /&gt;
The YIQ and YUV color spaces are essentially transformations of the RGB color space in which the three RGB channels are first composed into a single luminance (Y) channel and two color-difference channels derived from R - Y and B - Y.&amp;lt;br&amp;gt;&lt;br /&gt;
These color spaces were developed because of the growing demand for color television and the need to remain backwards compatible with old black&amp;amp;amp;white television sets, which worked with a single luminance channel.&amp;lt;br&amp;gt;&lt;br /&gt;
Though the concept behind YIQ and YUV is the same, the actual conversion of RGB into luminance and weighted difference channels is implemented differently.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;YIQ&#039;&#039;&#039; is used in the NTSC color TV standard, which is used in the Americas and Japan.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;YUV&#039;&#039;&#039; is the color space of the PAL color TV standard used in Europe, Africa, Australia and most of Asia (except Japan). It is also used in digital video.&amp;lt;br&amp;gt;&lt;br /&gt;
Though YIQ and YUV signals are very similar, YUV has a higher bandwidth and correspondingly higher quality.&lt;br /&gt;
&amp;lt;div style=&amp;quot;clear:both;&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;lt;/div&amp;gt;&lt;br /&gt;
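The luminance/color-difference decomposition described above can be sketched as follows. This is an illustrative sketch assuming the classic BT.601 luma weights and the analog PAL scale factors for U and V; broadcast implementations differ in the exact constants.

```python
# Illustrative sketch of the luminance / color-difference split
# described above. Assumptions: BT.601 luma weights and the analog
# PAL scale factors (0.492, 0.877); YIQ uses the same Y but rotates
# the difference plane.

def rgb_to_yuv(r, g, b):
    # Y: weighted sum of R, G, B reflecting the eye's sensitivity.
    y = 0.299 * r + 0.587 * g + 0.114 * b
    # U and V: scaled differences B - Y and R - Y.
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

# White carries all its information in the luminance channel,
# approximately (1.0, 0.0, 0.0):
print(rgb_to_yuv(1.0, 1.0, 1.0))
```

Because all three RGB channels contribute to Y, a black-and-white receiver can display the Y channel alone and simply ignore the difference channels.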
===HSL and HSV===&lt;br /&gt;
[[Image:Hsl-hsv.png|thumb|300px|none|HSV and HSL Colorspaces|right]]&lt;br /&gt;
HSL and HSV are color models which describe color relationships better than RGB does. HSL stands for hue, saturation and lightness, while HSV stands for hue, saturation and value. These models reflect human color vision better than the RGB, CMY, YUV and YIQ models, which are targeted primarily at hardware applications.&lt;br /&gt;
&lt;br /&gt;
The HSL and HSV color spaces can be thought of as cylinders; each point in the cylinder describes a color.&lt;br /&gt;
&lt;br /&gt;
The three coordinates H, S and L of this system can easily be visualized as follows: pure colors are found on the outer border of a horizontal color circle. The hue can be interpreted as the polar angle, going from red (0 degrees) through green (120 degrees) and blue (240 degrees) back to red.&lt;br /&gt;
The closer a point is to the center of the circle, the higher the proportion of white; the center of the circle is colorless white. Below this circle, further color circles are stacked in a cylindrical fashion: the lower they are, the darker they get.&lt;br /&gt;
&amp;lt;div style=&amp;quot;clear:both;&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;lt;/div&amp;gt;&lt;br /&gt;
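The hue-as-polar-angle view described above can be illustrated with Python&#039;s standard colorsys module. Hue is returned in [0, 1] and scaled to degrees here; the 0/120/240 positions for red, green and blue match the color circle described in the text.

```python
import colorsys

# Illustrative sketch of the hue/saturation/value view described
# above, using the standard library's colorsys module. Channel
# values are in [0, 1]; hue 0.0 = red, 1/3 = green, 2/3 = blue.

def describe_hsv(r, g, b):
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return round(h * 360.0), s, v  # hue as a polar angle in degrees

print(describe_hsv(0.0, 1.0, 0.0))  # pure green sits at 120 degrees
```

The saturation returned by colorsys corresponds to the distance from the colorless center of the circle described above.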
==References==&lt;br /&gt;
*[Poynton, 1999] Charles Poynton. Frequently Asked Questions about Color. Created at: Dec 30, 1999. http://www.miszalok.de/Lectures/L11_ColorCoding/ColorFAQ.pdf .&lt;br /&gt;
&lt;br /&gt;
*[Miszalok and Smolej, 2001] V. Miszalok, V. Smolej. Color Coding. Jan 13, 2001. http://www.miszalok.de/Lectures/L11_ColorCoding/ColorCoding_english.htm#a1&lt;br /&gt;
&lt;br /&gt;
*[Tkalčič, 2003] Marko Tkalčič. Colour spaces - perceptual, historical and applicational background. 2003. http://ldos.fe.uni-lj.si/docs/documents/20030929092037_markot.pdf&lt;br /&gt;
&lt;br /&gt;
[[Category:Glossary]]&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Color_space&amp;diff=22955</id>
		<title>Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Color space</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Color_space&amp;diff=22955"/>
		<updated>2009-11-06T17:04:29Z</updated>

		<summary type="html">&lt;p&gt;Ares: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Color space=&lt;br /&gt;
[[Image:Intro.JPG|thumb|400px|none|Light is electromagnetic radiation with wavelength between 380 nm = blue and 780 nm = red.&lt;br /&gt;
Unit 1 nm = 1 billionth of a meter.|right]] &lt;br /&gt;
[[Image:Colorspace.png|thumb|200px|none|Colorspaces and Horseshoe Shape of visible Color&lt;br /&gt;
|right]]&lt;br /&gt;
{{Quotation|Color is the perceptual result of light in the visible region of the spectrum, having wavelengths in the region of 400 nm to 700 nm, incident upon the retina. |[Poynton, 1999]}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The complexity of all kinds of different color mixtures was substantially simplified in 1931 by the Commission Internationale de l&#039;Éclairage (CIE), which defined a two-dimensional, horseshoe-shaped color space that allows easy definition and description of color mixtures. The edge of the horseshoe contains all the pure spectral colors; the inside region contains the mixtures.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Human visual perception is too complex to be quantified in more than an approximate manner. One practical approach is to define 2, 3 or more spectral colors and create mixed colors by adjusting the relative proportions of those spectral colors and a colorless (i.e. white/black) component. Alternatively, one first defines the mixed color, quantifies its colorless (brightness/darkness) component, and then codes the color information as deviations in the direction of 2, 3 or more spectral colors. Typical examples are the RGB and YIQ systems, respectively. [Miszalok and Smolej, 2001]&lt;br /&gt;
&amp;lt;div style=&amp;quot;clear:both;&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;lt;/div&amp;gt;&lt;br /&gt;
==Examples for Color spaces==&lt;br /&gt;
&lt;br /&gt;
===RGB===&lt;br /&gt;
&lt;br /&gt;
[[Image:3DVecModelRGB.JPG|thumb|300px|right|3D-vector space of the RGB-color model]]&lt;br /&gt;
&lt;br /&gt;
The RGB color model is an additive color model, forming its gamut from mixtures of the primary additive colors red, green and blue. It is grounded in human color perception, specifically the trichromatic theory, which states that there are three types of cones, referred to as L, M and S cones (long, middle and short wavelength sensitivity), approximately sensitive to the red, green and blue regions of the visible spectrum.&lt;br /&gt;
&lt;br /&gt;
The main purpose of the RGB color model is the sensing and reproduction of color on electronic devices such as computers and televisions. Typical RGB input devices are color TV and video cameras, image scanners, and digital cameras.&lt;br /&gt;
&amp;lt;div style=&amp;quot;clear:both;&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;lt;/div&amp;gt;&lt;br /&gt;
===CMY(K)===&lt;br /&gt;
&lt;br /&gt;
[[Image:3DVecModeCMY.JPG|thumb|300px|right|3D-vector model of the CMY color space]]&lt;br /&gt;
[[Image:CMY.JPG|thumb|300px|right|Composed image and its separate channels]]&lt;br /&gt;
&lt;br /&gt;
Unlike RGB, CMY does not add light but removes it, much as the color of reflected light is composed in the real world; hence it is also called a subtractive color space. To achieve this subtractive characteristic, the CMY color space uses the three primary colors cyan, magenta and yellow and subtracts them from white. The higher the values of the primary colors, the darker the represented color. The CMY color space is essentially an inverted RGB color space, so values can be converted very easily.&amp;lt;br&amp;gt;&lt;br /&gt;
The CMY color space is primarily used in printing, where usually a fourth primary, K (Key, black), is added; the result is called the CMYK color space. The additional Key is used because printing black with CMY alone does not produce a really deep black, is costly, and takes longer to dry than a single black ink.&amp;lt;br&amp;gt;&lt;br /&gt;
The conversion between RGB and CMYK is not as trivial as the conversion between RGB and CMY.&amp;lt;br&amp;gt;&lt;br /&gt;
Although the CMY space is closer to how colors are used in dyeing or painting, it is still non-intuitive, since the perceived difference between colors is not as linear as the CMY values might suggest.&lt;br /&gt;
&amp;lt;div style=&amp;quot;clear:both&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;lt;/div&amp;gt;&lt;br /&gt;
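The RGB/CMY inversion described above can be sketched in a few lines. This is an illustrative sketch only: channel values are assumed to be normalized to [0, 1], and the CMYK step uses the simple maximum-black extraction rather than a real printer profile.

```python
# Illustrative sketch of the RGB/CMY inversion described above.
# Assumption: channel values are normalized to the range [0, 1].

def rgb_to_cmy(r, g, b):
    # CMY is an inverted RGB: subtract each channel from 1.
    return 1.0 - r, 1.0 - g, 1.0 - b

def cmy_to_cmyk(c, m, y):
    # Simple "maximum black" extraction: pull the shared gray
    # component of C, M and Y into the Key (black) channel.
    k = min(c, m, y)
    if k == 1.0:
        return 0.0, 0.0, 0.0, 1.0  # pure black
    return (c - k) / (1.0 - k), (m - k) / (1.0 - k), (y - k) / (1.0 - k), k

print(rgb_to_cmy(1.0, 0.0, 0.0))  # pure red becomes (0.0, 1.0, 1.0)
```

In print work the RGB/CMYK conversion additionally involves device profiles, which is why it is less trivial than the plain RGB/CMY inversion.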
===YIQ and YUV===&lt;br /&gt;
&lt;br /&gt;
The YIQ and YUV color spaces are essentially transformations of the RGB color space in which the three RGB channels are first composed into a single luminance (Y) channel and two color-difference channels derived from R - Y and B - Y.&amp;lt;br&amp;gt;&lt;br /&gt;
These color spaces were developed because of the growing demand for color television and the need to remain backwards compatible with old black&amp;amp;amp;white television sets, which worked with a single luminance channel.&amp;lt;br&amp;gt;&lt;br /&gt;
Though the concept behind YIQ and YUV is the same, the actual conversion of RGB into luminance and weighted difference channels is implemented differently.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;YIQ&#039;&#039;&#039; is used in the NTSC color TV standard, which is used in the Americas and Japan.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;YUV&#039;&#039;&#039; is the color space of the PAL color TV standard used in Europe, Africa, Australia and most of Asia (except Japan). It is also used in digital video.&amp;lt;br&amp;gt;&lt;br /&gt;
Though YIQ and YUV signals are very similar, YUV has a higher bandwidth and correspondingly higher quality.&lt;br /&gt;
&amp;lt;div style=&amp;quot;clear:both;&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;lt;/div&amp;gt;&lt;br /&gt;
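The luminance/color-difference decomposition described above can be sketched as follows. This is an illustrative sketch assuming the classic BT.601 luma weights and the analog PAL scale factors for U and V; broadcast implementations differ in the exact constants.

```python
# Illustrative sketch of the luminance / color-difference split
# described above. Assumptions: BT.601 luma weights and the analog
# PAL scale factors (0.492, 0.877); YIQ uses the same Y but rotates
# the difference plane.

def rgb_to_yuv(r, g, b):
    # Y: weighted sum of R, G, B reflecting the eye's sensitivity.
    y = 0.299 * r + 0.587 * g + 0.114 * b
    # U and V: scaled differences B - Y and R - Y.
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

# White carries all its information in the luminance channel,
# approximately (1.0, 0.0, 0.0):
print(rgb_to_yuv(1.0, 1.0, 1.0))
```

Because all three RGB channels contribute to Y, a black-and-white receiver can display the Y channel alone and simply ignore the difference channels.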
===HSL and HSV===&lt;br /&gt;
[[Image:Hsl-hsv.png|thumb|300px|none|HSV and HSL Colorspaces|right]]&lt;br /&gt;
HSL and HSV are color models which describe color relationships better than RGB does. HSL stands for hue, saturation and lightness, while HSV stands for hue, saturation and value. These models reflect human color vision better than the RGB, CMY, YUV and YIQ models, which are targeted primarily at hardware applications.&lt;br /&gt;
&lt;br /&gt;
The HSL and HSV color spaces can be thought of as cylinders; each point in the cylinder describes a color.&lt;br /&gt;
&lt;br /&gt;
The three coordinates H, S and L of this system can easily be visualized as follows: pure colors are found on the outer border of a horizontal color circle. The hue can be interpreted as the polar angle, going from red (0 degrees) through green (120 degrees) and blue (240 degrees) back to red.&lt;br /&gt;
The closer a point is to the center of the circle, the higher the proportion of white; the center of the circle is colorless white. Below this circle, further color circles are stacked in a cylindrical fashion: the lower they are, the darker they get.&lt;br /&gt;
&amp;lt;div style=&amp;quot;clear:both;&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;lt;/div&amp;gt;&lt;br /&gt;
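The hue-as-polar-angle view described above can be illustrated with Python&#039;s standard colorsys module. Hue is returned in [0, 1] and scaled to degrees here; the 0/120/240 positions for red, green and blue match the color circle described in the text.

```python
import colorsys

# Illustrative sketch of the hue/saturation/value view described
# above, using the standard library's colorsys module. Channel
# values are in [0, 1]; hue 0.0 = red, 1/3 = green, 2/3 = blue.

def describe_hsv(r, g, b):
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return round(h * 360.0), s, v  # hue as a polar angle in degrees

print(describe_hsv(0.0, 1.0, 0.0))  # pure green sits at 120 degrees
```

The saturation returned by colorsys corresponds to the distance from the colorless center of the circle described above.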
==References==&lt;br /&gt;
*[Poynton, 1999] Charles Poynton. Frequently Asked Questions about Color. Created at: Dec 30, 1999. http://www.miszalok.de/Lectures/L11_ColorCoding/ColorFAQ.pdf .&lt;br /&gt;
&lt;br /&gt;
*[Miszalok and Smolej, 2001] V. Miszalok, V. Smolej. Color Coding. Jan 13, 2001. http://www.miszalok.de/Lectures/L11_ColorCoding/ColorCoding_english.htm#a1&lt;br /&gt;
&lt;br /&gt;
*[Tkalčič, 2003] Marko Tkalčič. Colour spaces - perceptual, historical and applicational background. 2003. http://ldos.fe.uni-lj.si/docs/documents/20030929092037_markot.pdf&lt;br /&gt;
&lt;br /&gt;
[[Category:Glossary]]&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Color_space&amp;diff=22954</id>
		<title>Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Color space</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Color_space&amp;diff=22954"/>
		<updated>2009-11-06T17:03:53Z</updated>

		<summary type="html">&lt;p&gt;Ares: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Color spaces=&lt;br /&gt;
[[Image:Intro.JPG|thumb|400px|none|Light is electromagnetic radiation with wavelength between 380 nm = blue and 780 nm = red.&lt;br /&gt;
Unit 1 nm = 1 billionth of a meter.|right]] &lt;br /&gt;
[[Image:Colorspace.png|thumb|200px|none|Colorspaces and Horseshoe Shape of visible Color&lt;br /&gt;
|right]]&lt;br /&gt;
{{Quotation|Color is the perceptual result of light in the visible region of the spectrum, having wavelengths in the region of 400 nm to 700 nm, incident upon the retina. |[Poynton, 1999]}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The complexity of all kinds of different color mixtures was substantially simplified in 1931 by the Commission Internationale de l&#039;Éclairage (CIE), which defined a two-dimensional, horseshoe-shaped color space that allows easy definition and description of color mixtures. The edge of the horseshoe contains all the pure spectral colors; the inside region contains the mixtures.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Human visual perception is too complex to be quantified in more than an approximate manner. One practical approach is to define 2, 3 or more spectral colors and create mixed colors by adjusting the relative proportions of those spectral colors and a colorless (i.e. white/black) component. Alternatively, one first defines the mixed color, quantifies its colorless (brightness/darkness) component, and then codes the color information as deviations in the direction of 2, 3 or more spectral colors. Typical examples are the RGB and YIQ systems, respectively. [Miszalok and Smolej, 2001]&lt;br /&gt;
&amp;lt;div style=&amp;quot;clear:both;&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;lt;/div&amp;gt;&lt;br /&gt;
==Examples for Color spaces==&lt;br /&gt;
&lt;br /&gt;
===RGB===&lt;br /&gt;
&lt;br /&gt;
[[Image:3DVecModelRGB.JPG|thumb|300px|right|3D-vector space of the RGB-color model]]&lt;br /&gt;
&lt;br /&gt;
The RGB color model is an additive color model, forming its gamut from mixtures of the primary additive colors red, green and blue. It is grounded in human color perception, specifically the trichromatic theory, which states that there are three types of cones, referred to as L, M and S cones (long, middle and short wavelength sensitivity), approximately sensitive to the red, green and blue regions of the visible spectrum.&lt;br /&gt;
&lt;br /&gt;
The main purpose of the RGB color model is the sensing and reproduction of color on electronic devices such as computers and televisions. Typical RGB input devices are color TV and video cameras, image scanners, and digital cameras.&lt;br /&gt;
&amp;lt;div style=&amp;quot;clear:both;&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;lt;/div&amp;gt;&lt;br /&gt;
===CMY(K)===&lt;br /&gt;
&lt;br /&gt;
[[Image:3DVecModeCMY.JPG|thumb|300px|right|3D-vector model of the CMY color space]]&lt;br /&gt;
[[Image:CMY.JPG|thumb|300px|right|Composed image and its separate channels]]&lt;br /&gt;
&lt;br /&gt;
Unlike RGB, CMY does not add light but removes it, much as the color of reflected light is composed in the real world; hence it is also called a subtractive color space. To achieve this subtractive characteristic, the CMY color space uses the three primary colors cyan, magenta and yellow and subtracts them from white. The higher the values of the primary colors, the darker the represented color. The CMY color space is essentially an inverted RGB color space, so values can be converted very easily.&amp;lt;br&amp;gt;&lt;br /&gt;
The CMY color space is primarily used in printing, where usually a fourth primary, K (Key, black), is added; the result is called the CMYK color space. The additional Key is used because printing black with CMY alone does not produce a really deep black, is costly, and takes longer to dry than a single black ink.&amp;lt;br&amp;gt;&lt;br /&gt;
The conversion between RGB and CMYK is not as trivial as the conversion between RGB and CMY.&amp;lt;br&amp;gt;&lt;br /&gt;
Although the CMY space is closer to how colors are used in dyeing or painting, it is still non-intuitive, since the perceived difference between colors is not as linear as the CMY values might suggest.&lt;br /&gt;
&amp;lt;div style=&amp;quot;clear:both&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;lt;/div&amp;gt;&lt;br /&gt;
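The RGB/CMY inversion described above can be sketched in a few lines. This is an illustrative sketch only: channel values are assumed to be normalized to [0, 1], and the CMYK step uses the simple maximum-black extraction rather than a real printer profile.

```python
# Illustrative sketch of the RGB/CMY inversion described above.
# Assumption: channel values are normalized to the range [0, 1].

def rgb_to_cmy(r, g, b):
    # CMY is an inverted RGB: subtract each channel from 1.
    return 1.0 - r, 1.0 - g, 1.0 - b

def cmy_to_cmyk(c, m, y):
    # Simple "maximum black" extraction: pull the shared gray
    # component of C, M and Y into the Key (black) channel.
    k = min(c, m, y)
    if k == 1.0:
        return 0.0, 0.0, 0.0, 1.0  # pure black
    return (c - k) / (1.0 - k), (m - k) / (1.0 - k), (y - k) / (1.0 - k), k

print(rgb_to_cmy(1.0, 0.0, 0.0))  # pure red becomes (0.0, 1.0, 1.0)
```

In print work the RGB/CMYK conversion additionally involves device profiles, which is why it is less trivial than the plain RGB/CMY inversion.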
===YIQ and YUV===&lt;br /&gt;
&lt;br /&gt;
The YIQ and YUV color spaces are essentially transformations of the RGB color space in which the three RGB channels are first composed into a single luminance (Y) channel and two color-difference channels derived from R - Y and B - Y.&amp;lt;br&amp;gt;&lt;br /&gt;
These color spaces were developed because of the growing demand for color television and the need to remain backwards compatible with old black&amp;amp;amp;white television sets, which worked with a single luminance channel.&amp;lt;br&amp;gt;&lt;br /&gt;
Though the concept behind YIQ and YUV is the same, the actual conversion of RGB into luminance and weighted difference channels is implemented differently.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;YIQ&#039;&#039;&#039; is used in the NTSC color TV standard, which is used in the Americas and Japan.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;YUV&#039;&#039;&#039; is the color space of the PAL color TV standard used in Europe, Africa, Australia and most of Asia (except Japan). It is also used in digital video.&amp;lt;br&amp;gt;&lt;br /&gt;
Though YIQ and YUV signals are very similar, YUV has a higher bandwidth and correspondingly higher quality.&lt;br /&gt;
&amp;lt;div style=&amp;quot;clear:both;&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;lt;/div&amp;gt;&lt;br /&gt;
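The luminance/color-difference decomposition described above can be sketched as follows. This is an illustrative sketch assuming the classic BT.601 luma weights and the analog PAL scale factors for U and V; broadcast implementations differ in the exact constants.

```python
# Illustrative sketch of the luminance / color-difference split
# described above. Assumptions: BT.601 luma weights and the analog
# PAL scale factors (0.492, 0.877); YIQ uses the same Y but rotates
# the difference plane.

def rgb_to_yuv(r, g, b):
    # Y: weighted sum of R, G, B reflecting the eye's sensitivity.
    y = 0.299 * r + 0.587 * g + 0.114 * b
    # U and V: scaled differences B - Y and R - Y.
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

# White carries all its information in the luminance channel,
# approximately (1.0, 0.0, 0.0):
print(rgb_to_yuv(1.0, 1.0, 1.0))
```

Because all three RGB channels contribute to Y, a black-and-white receiver can display the Y channel alone and simply ignore the difference channels.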
===HSL and HSV===&lt;br /&gt;
[[Image:Hsl-hsv.png|thumb|300px|none|HSV and HSL Colorspaces|right]]&lt;br /&gt;
HSL and HSV are color models which describe color relationships better than RGB does. HSL stands for hue, saturation and lightness, while HSV stands for hue, saturation and value. These models reflect human color vision better than the RGB, CMY, YUV and YIQ models, which are targeted primarily at hardware applications.&lt;br /&gt;
&lt;br /&gt;
The HSL and HSV color spaces can be thought of as cylinders; each point in the cylinder describes a color.&lt;br /&gt;
&lt;br /&gt;
The three coordinates H, S and L of this system can easily be visualized as follows: pure colors are found on the outer border of a horizontal color circle. The hue can be interpreted as the polar angle, going from red (0 degrees) through green (120 degrees) and blue (240 degrees) back to red.&lt;br /&gt;
The closer a point is to the center of the circle, the higher the proportion of white; the center of the circle is colorless white. Below this circle, further color circles are stacked in a cylindrical fashion: the lower they are, the darker they get.&lt;br /&gt;
&amp;lt;div style=&amp;quot;clear:both;&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;lt;/div&amp;gt;&lt;br /&gt;
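The hue-as-polar-angle view described above can be illustrated with Python&#039;s standard colorsys module. Hue is returned in [0, 1] and scaled to degrees here; the 0/120/240 positions for red, green and blue match the color circle described in the text.

```python
import colorsys

# Illustrative sketch of the hue/saturation/value view described
# above, using the standard library's colorsys module. Channel
# values are in [0, 1]; hue 0.0 = red, 1/3 = green, 2/3 = blue.

def describe_hsv(r, g, b):
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return round(h * 360.0), s, v  # hue as a polar angle in degrees

print(describe_hsv(0.0, 1.0, 0.0))  # pure green sits at 120 degrees
```

The saturation returned by colorsys corresponds to the distance from the colorless center of the circle described above.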
==References==&lt;br /&gt;
*[Poynton, 1999] Charles Poynton. Frequently Asked Questions about Color. Created at: Dec 30, 1999. http://www.miszalok.de/Lectures/L11_ColorCoding/ColorFAQ.pdf .&lt;br /&gt;
&lt;br /&gt;
*[Miszalok and Smolej, 2001] V. Miszalok, V. Smolej. Color Coding. Jan 13, 2001. http://www.miszalok.de/Lectures/L11_ColorCoding/ColorCoding_english.htm#a1&lt;br /&gt;
&lt;br /&gt;
*[Tkalčič, 2003] Marko Tkalčič. Colour spaces - perceptual, historical and applicational background. 2003. http://ldos.fe.uni-lj.si/docs/documents/20030929092037_markot.pdf&lt;br /&gt;
&lt;br /&gt;
[[Category:Glossary]]&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching_talk:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=22946</id>
		<title>Teaching talk:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching_talk:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=22946"/>
		<updated>2009-11-06T16:52:24Z</updated>

		<summary type="html">&lt;p&gt;Ares: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Bearbeitung: 03.11.2009==&lt;br /&gt;
&lt;br /&gt;
* Definition zu Quotes gemacht: weil es Quotes sind.&lt;br /&gt;
* Tablegrafik in Wikitable umgewandelt&lt;br /&gt;
* Referezen formatiert&lt;br /&gt;
&lt;br /&gt;
== Bearbeitung 06.11.2009 ==&lt;br /&gt;
&lt;br /&gt;
* Tabelle erweitert und mit Grafiken und Beschreibungen versehen: verständlicher&lt;br /&gt;
* Referenzen aus Originalliste weggelassen da diese in der Healey-Referenz bereits angegeben sind&lt;br /&gt;
* Quote unter der Tabelle hinzugefügt: für zusätzliche Information zu Preattentive Features&lt;br /&gt;
* Rechtschreibfehler ausgebessert&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=22945</id>
		<title>Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=22945"/>
		<updated>2009-11-06T16:51:50Z</updated>

		<summary type="html">&lt;p&gt;Ares: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Preattentive processing =&lt;br /&gt;
{{Quotation|Visualization is so effective and useful because it utilizes one of the channels to our brain that have the highest bandwidths: our eyes. But even this channel can be used more or less efficiently. One special property of our visual system is preattentive processing. |[Kosara et al., 2002]}}&lt;br /&gt;
&lt;br /&gt;
{{Quotation|One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. |[Healey et al., 1996]}}&lt;br /&gt;
&lt;br /&gt;
== Preattentive features==&lt;br /&gt;
&lt;br /&gt;
{|  border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;3&amp;quot;&lt;br /&gt;
|width=&amp;quot;400&amp;quot;| &#039;&#039;&#039;Feature&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Picture&#039;&#039;&#039;&lt;br /&gt;
|width=&amp;quot;400&amp;quot; | &#039;&#039;&#039;Feature&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Picture&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;orientation&#039;&#039;&#039;&amp;lt;br&amp;gt;A different orientation of a certain object can be used to distinguish it from the other objects preattentively.&lt;br /&gt;
|[[Image:Tg_orient.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;length, width, size&#039;&#039;&#039;&amp;lt;br&amp;gt;Differences in size can be used for the preattentive distinction of various objects.&lt;br /&gt;
|[[Image:Tg_len.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;closure&#039;&#039;&#039;&amp;lt;br&amp;gt;A closed object in a pool of unclosed objects can be detected preattentively.&lt;br /&gt;
|[[Image:Tg_closure.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;curvature&#039;&#039;&#039;&amp;lt;br&amp;gt;The curvature of an object can be considered to detect it preattentively.&lt;br /&gt;
|[[Image:Tg_curve.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;density, contrast&#039;&#039;&#039;&amp;lt;br&amp;gt;The difference of the density of certain objects to the density of the surrounding objects can be detected preattentively.&lt;br /&gt;
|[[Image:Tg_den.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;number, estimation&#039;&#039;&#039;&amp;lt;br&amp;gt;A group of objects with a certain feature can be detected preattentively dependent on the number of objects.&lt;br /&gt;
|[[Image:Tg_num.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;colour (hue)&#039;&#039;&#039;&amp;lt;br&amp;gt;The hue of the objects is used to divide the elements into two groups (i.e. a red group and a blue group), though the form varies randomly from object to object. Tests showed that it is easy for subjects to identify the hue boundary as either vertical or horizontal.&lt;br /&gt;
|[[Image:Tg_hue.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;intensity, binocular lustre&#039;&#039;&#039;&amp;lt;br&amp;gt;The intensity of an attribute (in this case brightness) can be used for the preattentive detection of an object.&lt;br /&gt;
|[[Image:Tg_lum.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;intersection&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_isect.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;terminators&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_term.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;3-D depth cues, stereoscopic depth&#039;&#039;&#039;&amp;lt;br&amp;gt;Attributes of objects used to distinguish between objects in 3D space. In this example, the distance of the shadow: the further the shadow is from the object, the greater the distance between the object and the plane on which it casts its shadow.&lt;br /&gt;
|[[Image:Tg_3d_depth.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;flicker&#039;&#039;&#039;&amp;lt;br&amp;gt;Describes an abrupt change between two different states of the same attribute, in this example visibility.&lt;br /&gt;
|[[Image:Tg_flick.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;direction of motion&#039;&#039;&#039;&amp;lt;br&amp;gt;Differences in the direction of motion can be detected preattentively, especially if the motion is directed against the general flow of motion.&lt;br /&gt;
|[[Image:Tg_dir.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;velocity of motion&#039;&#039;&#039;&amp;lt;br&amp;gt;Another motion-related cue for preattentive processing is a difference in motion speed between an object and its environment.&lt;br /&gt;
|[[Image:Tg_vel.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;lighting direction&#039;&#039;&#039;&amp;lt;br&amp;gt;Lighting is normally constant for all objects in a scene, so variation in the lighting of a single object can be used as a preattentive cue.&lt;br /&gt;
|[[Image:Tg_3d_light.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;3D orientation&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_orient_3d.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;artistic properties&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_npr.gif|100px]]&lt;br /&gt;
|&amp;amp;nbsp;&lt;br /&gt;
|&amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
List compiled from [Healey, 2005] and [Chipman, 1996]&lt;br /&gt;
&lt;br /&gt;
{{Quotation|It is important to note that some of these features are asymmetric. For example, a sloped line in a sea of vertical lines can be detected preattentively. However, a vertical line in a sea of sloped lines cannot be detected preattentively. Another important consideration is the effect of different types of background distractors on the target feature. These factors must often be addressed when trying to design display techniques that rely on preattentive processing.|[Healey, 2005]}}&lt;br /&gt;
&lt;br /&gt;
== Examples for preattentive processing ==&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
Detecting the red object preattentively [Healey et al., 1996]: only one visual variable differs, so the target is very easy to find.&lt;br /&gt;
&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
Detecting the circle preattentively [Chipman, 1996]: this is more difficult, but still preattentive.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_4.JPG]]&lt;br /&gt;
&lt;br /&gt;
Examples of two target detection tasks: (a) the target can be detected preattentively because it possesses the feature “filled”; (b) the target cannot be detected preattentively because it has no visual feature that distinguishes it from its distractors. [Healey et al., 1996]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_3.JPG]]&lt;br /&gt;
&lt;br /&gt;
Region segregation by form and hue: (a) the hue boundary is identified preattentively, even though form varies randomly in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form. [Healey et al., 1996]&lt;br /&gt;
&lt;br /&gt;
==Conclusion==&lt;br /&gt;
Any visual processing of an item prior to the act of selection can be called “preattentive”. [Wolfe, Treisman, 2003]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [Healey, 2005]&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
*[Kosara et al., 2002] Robert Kosara, Silvia Miksch, Helwig Hauser. Focus+Context Taken Literally. &#039;&#039;IEEE Computer Graphics &amp;amp; Applications (CG&amp;amp;A), Special Issue on Information Visualization&#039;&#039;, 22(1), pages 22-29. January/February 2002. http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
*[Healey et al., 1996] Healey, C. G., Booth, K. S., and Enns, J. T.. High-Speed Visual Estimation Using Preattentive Processing. &#039;&#039;ACM Transactions on Human Computer Interaction&#039;&#039; 3(2), pages 107-135, Created at: 1996. http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf .&lt;br /&gt;
&lt;br /&gt;
*[Wolfe, Treisman, 2003] Jeremy M. Wolfe, Anne Treisman. What shall we do with the preattentive processing stage: Use it or lose it? &#039;&#039;Poster (with Todd S. Horowitz) presented at the Third Annual Meeting of the Vision Sciences Society&#039;&#039;, Sarasota. May 2003. http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
*[Healey, 2005] Christopher G. Healey. Perception in Visualization. Department of Computer Science, North Carolina State University. Created at: May, 2005. http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80 .&lt;br /&gt;
&lt;br /&gt;
*[Chipman, 1996] Gene Chipman. Review of High Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns). Created: 1996, retrieved: October 24, 2005. http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=22943</id>
		<title>Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=22943"/>
		<updated>2009-11-06T16:51:13Z</updated>

		<summary type="html">&lt;p&gt;Ares: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Preattentive processing =&lt;br /&gt;
{{Quotation|Visualization is so effective and useful because it utilizes one of the channels to our brain that have the highest bandwidths: our eyes. But even this channel can be used more or less efficiently. One special property of our visual system is preattentive processing. |[Kosara et al., 2002]}}&lt;br /&gt;
&lt;br /&gt;
{{Quotation|One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. |[Healey et al., 1996]}}&lt;br /&gt;
&lt;br /&gt;
== Preattentive features==&lt;br /&gt;
&lt;br /&gt;
{|  border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;3&amp;quot;&lt;br /&gt;
|width=&amp;quot;400&amp;quot;| &#039;&#039;&#039;Feature&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Picture&#039;&#039;&#039;&lt;br /&gt;
|width=&amp;quot;400&amp;quot; | &#039;&#039;&#039;Feature&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Picture&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;orientation&#039;&#039;&#039;&amp;lt;br&amp;gt;A different orientation of a certain object can be used to distinguish it from the other objects preattentively.&lt;br /&gt;
|[[Image:Tg_orient.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;length, width, size&#039;&#039;&#039;&amp;lt;br&amp;gt;Differences in size can be used for the preattentive distinction of various objects.&lt;br /&gt;
|[[Image:Tg_len.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;closure&#039;&#039;&#039;&amp;lt;br&amp;gt;A closed object in a pool of unclosed objects can be detected preattentively.&lt;br /&gt;
|[[Image:Tg_closure.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;curvature&#039;&#039;&#039;&amp;lt;br&amp;gt;The curvature of an object can serve as a cue to detect it preattentively.&lt;br /&gt;
|[[Image:Tg_curve.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;density, contrast&#039;&#039;&#039;&amp;lt;br&amp;gt;A difference between the density of certain objects and the density of the surrounding objects can be detected preattentively.&lt;br /&gt;
|[[Image:Tg_den.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;number, estimation&#039;&#039;&#039;&amp;lt;br&amp;gt;A group of objects with a certain feature can be detected preattentively, depending on the number of objects.&lt;br /&gt;
|[[Image:Tg_num.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;colour (hue)&#039;&#039;&#039;&amp;lt;br&amp;gt;The hue of the objects is used to divide the elements into two groups (i.e. a red group and a blue group) even though the form varies randomly from object to object. Tests showed that subjects can easily identify the hue boundary as either vertical or horizontal.&lt;br /&gt;
|[[Image:Tg_hue.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;intensity, binocular lustre&#039;&#039;&#039;&amp;lt;br&amp;gt;The intensity of an attribute (in this case brightness) can be used for the preattentive detection of an object.&lt;br /&gt;
|[[Image:Tg_lum.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;intersection&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_isect.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;terminators&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_term.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;3-D depth cues, stereoscopic depth&#039;&#039;&#039;&amp;lt;br&amp;gt;Attributes of objects that are used to distinguish between objects in 3D space. In this example the distance of the shadow acts as a depth cue: the further the shadow is from the object, the greater the distance between the object and the plane it casts its shadow on.&lt;br /&gt;
|[[Image:Tg_3d_depth.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;flicker&#039;&#039;&#039;&amp;lt;br&amp;gt;The abrupt change between two different states of the same attribute, in this example visibility.&lt;br /&gt;
|[[Image:Tg_flick.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;direction of motion&#039;&#039;&#039;&amp;lt;br&amp;gt;Differences in the direction of motion can be detected preattentively, especially if the motion is directed against the general flow of motion.&lt;br /&gt;
|[[Image:Tg_dir.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;velocity of motion&#039;&#039;&#039;&amp;lt;br&amp;gt;Another motion-related cue for preattentive processing is a difference in motion speed between an object and its environment.&lt;br /&gt;
|[[Image:Tg_vel.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;lighting direction&#039;&#039;&#039;&amp;lt;br&amp;gt;The lighting is normally constant for all objects in a scene, so variations in the lighting of a single object can be used as a preattentive cue.&lt;br /&gt;
|[[Image:Tg_3d_light.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;3D orientation&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_orient_3d.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;artistic properties&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_npr.gif|100px]]&lt;br /&gt;
|&amp;amp;nbsp;&lt;br /&gt;
|&amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
Compiled list from [Healey, 2005], [Chipman, 1996]&lt;br /&gt;
&lt;br /&gt;
{{Quotation|It is important to note that some of these features are asymmetric. For example, a sloped line in a sea of vertical lines can be detected preattentively. However, a vertical line in a sea of sloped lines cannot be detected preattentively. Another important consideration is the effect of different types of background distractors on the target feature. These factors must often be addressed when trying to design display techniques that rely on preattentive processing.|[Healey, 2005]}}&lt;br /&gt;
&lt;br /&gt;
== Examples for preattentive processing ==&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
Detecting the red object preattentively. [Healey et al., 1996] Only one visual variable differs, making the target very easy to find.&lt;br /&gt;
&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
Detecting the circle preattentively. [Chipman, 1996] It is more difficult but still preattentive.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_4.JPG]]&lt;br /&gt;
&lt;br /&gt;
Examples of two target detection tasks: (a) the target can be detected preattentively because it possesses the feature “filled”; (b) the target cannot be detected preattentively because it has no visual feature that is unique from its distractors. [Healey et al., 1996]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_3.JPG]]&lt;br /&gt;
&lt;br /&gt;
Region segregation by form and hue: (a) hue boundary is identified preattentively, even though form varies randomly in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form.[Healey et al., 1996]&lt;br /&gt;
&lt;br /&gt;
==Conclusion==&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”. [Wolfe, Treisma, 2003]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [Healey, 2005]&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
&lt;br /&gt;
*[Kosara et al., 2002] Robert Kosara, Silvia Miksch, Helwig Hauser. Focus+Context Taken Literally. &#039;&#039;IEEE Computer Graphics &amp;amp; Applications (CG&amp;amp;A), Special Issue on Information Visualization&#039;&#039;, 22(1), pages 22-29. Created at: January/February, 2002. http://www.kosara.net/papers/Kosara_CGA_2002.pdf .&lt;br /&gt;
&lt;br /&gt;
*[Healey et al., 1996] Healey, C. G., Booth, K. S., and Enns, J. T. High-Speed Visual Estimation Using Preattentive Processing. &#039;&#039;ACM Transactions on Computer-Human Interaction&#039;&#039; 3(2), pages 107-135. Created at: 1996. http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf .&lt;br /&gt;
&lt;br /&gt;
*[Wolfe, Treisma, 2003] Jeremy M. Wolfe, Anne Treisman, Todd S. Horowitz. What shall we do with the preattentive processing stage: Use it or lose it? &#039;&#039;Poster presented at the Third Annual Meeting of the Vision Sciences Society&#039;&#039;, Sarasota. Created at: May, 2003. http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf .&lt;br /&gt;
&lt;br /&gt;
*[Healey, 2005] Christopher G. Healey. Perception in Visualization. Department of Computer Science, North Carolina State University. Created at: May, 2005. http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80 .&lt;br /&gt;
&lt;br /&gt;
*[Chipman, 1996] Gene Chipman. Review of High Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns). Created at: 1996, Retrieved at: October 24, 2005. http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267 .&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=22942</id>
		<title>Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=22942"/>
		<updated>2009-11-06T16:50:35Z</updated>

		<summary type="html">&lt;p&gt;Ares: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Preattentive processing =&lt;br /&gt;
{{Quotation|Visualization is so effective and useful because it utilizes one of the channels to our brain that have the highest bandwidths: our eyes. But even this channel can be used more or less efficiently. One special property of our visual system is preattentive processing. |[Kosara et al., 2002]}}&lt;br /&gt;
&lt;br /&gt;
{{Quotation|One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. |[Healey et al., 1996]}}&lt;br /&gt;
&lt;br /&gt;
= Preattentive features=&lt;br /&gt;
&lt;br /&gt;
{|  border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;3&amp;quot;&lt;br /&gt;
|width=&amp;quot;400&amp;quot;| &#039;&#039;&#039;Feature&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Picture&#039;&#039;&#039;&lt;br /&gt;
|width=&amp;quot;400&amp;quot; | &#039;&#039;&#039;Feature&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Picture&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;orientation&#039;&#039;&#039;&amp;lt;br&amp;gt;A different orientation of a certain object can be used to distinguish it from the other objects preattentively.&lt;br /&gt;
|[[Image:Tg_orient.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;length, width, size&#039;&#039;&#039;&amp;lt;br&amp;gt;Differences in size can be used for the preattentive distinction of various objects.&lt;br /&gt;
|[[Image:Tg_len.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;closure&#039;&#039;&#039;&amp;lt;br&amp;gt;A closed object in a pool of unclosed objects can be detected preattentively.&lt;br /&gt;
|[[Image:Tg_closure.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;curvature&#039;&#039;&#039;&amp;lt;br&amp;gt;The curvature of an object can serve as a cue to detect it preattentively.&lt;br /&gt;
|[[Image:Tg_curve.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;density, contrast&#039;&#039;&#039;&amp;lt;br&amp;gt;A difference between the density of certain objects and the density of the surrounding objects can be detected preattentively.&lt;br /&gt;
|[[Image:Tg_den.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;number, estimation&#039;&#039;&#039;&amp;lt;br&amp;gt;A group of objects with a certain feature can be detected preattentively, depending on the number of objects.&lt;br /&gt;
|[[Image:Tg_num.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;colour (hue)&#039;&#039;&#039;&amp;lt;br&amp;gt;The hue of the objects is used to divide the elements into two groups (i.e. a red group and a blue group) even though the form varies randomly from object to object. Tests showed that subjects can easily identify the hue boundary as either vertical or horizontal.&lt;br /&gt;
|[[Image:Tg_hue.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;intensity, binocular lustre&#039;&#039;&#039;&amp;lt;br&amp;gt;The intensity of an attribute (in this case brightness) can be used for the preattentive detection of an object.&lt;br /&gt;
|[[Image:Tg_lum.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;intersection&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_isect.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;terminators&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_term.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;3-D depth cues, stereoscopic depth&#039;&#039;&#039;&amp;lt;br&amp;gt;Attributes of objects that are used to distinguish between objects in 3D space. In this example the distance of the shadow acts as a depth cue: the further the shadow is from the object, the greater the distance between the object and the plane it casts its shadow on.&lt;br /&gt;
|[[Image:Tg_3d_depth.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;flicker&#039;&#039;&#039;&amp;lt;br&amp;gt;The abrupt change between two different states of the same attribute, in this example visibility.&lt;br /&gt;
|[[Image:Tg_flick.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;direction of motion&#039;&#039;&#039;&amp;lt;br&amp;gt;Differences in the direction of motion can be detected preattentively, especially if the motion is directed against the general flow of motion.&lt;br /&gt;
|[[Image:Tg_dir.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;velocity of motion&#039;&#039;&#039;&amp;lt;br&amp;gt;Another motion-related cue for preattentive processing is a difference in motion speed between an object and its environment.&lt;br /&gt;
|[[Image:Tg_vel.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;lighting direction&#039;&#039;&#039;&amp;lt;br&amp;gt;The lighting is normally constant for all objects in a scene, so variations in the lighting of a single object can be used as a preattentive cue.&lt;br /&gt;
|[[Image:Tg_3d_light.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;3D orientation&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_orient_3d.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;artistic properties&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_npr.gif|100px]]&lt;br /&gt;
|&amp;amp;nbsp;&lt;br /&gt;
|&amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
Compiled list from [Healey, 2005], [Chipman, 1996]&lt;br /&gt;
&lt;br /&gt;
{{Quotation|It is important to note that some of these features are asymmetric. For example, a sloped line in a sea of vertical lines can be detected preattentively. However, a vertical line in a sea of sloped lines cannot be detected preattentively. Another important consideration is the effect of different types of background distractors on the target feature. These factors must often be addressed when trying to design display techniques that rely on preattentive processing.|[Healey, 2005]}}&lt;br /&gt;
&lt;br /&gt;
= Examples for preattentive processing =&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
Detecting the red object preattentively. [Healey et al., 1996] Only one visual variable differs, making the target very easy to find.&lt;br /&gt;
&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
Detecting the circle preattentively. [Chipman, 1996] It is more difficult but still preattentive.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_4.JPG]]&lt;br /&gt;
&lt;br /&gt;
Examples of two target detection tasks: (a) the target can be detected preattentively because it possesses the feature “filled”; (b) the target cannot be detected preattentively because it has no visual feature that is unique from its distractors. [Healey et al., 1996]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_3.JPG]]&lt;br /&gt;
&lt;br /&gt;
Region segregation by form and hue: (a) hue boundary is identified preattentively, even though form varies randomly in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form.[Healey et al., 1996]&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”. [Wolfe, Treisma, 2003]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [Healey, 2005]&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
&lt;br /&gt;
*[Kosara et al., 2002] Robert Kosara, Silvia Miksch, Helwig Hauser. Focus+Context Taken Literally. &#039;&#039;IEEE Computer Graphics &amp;amp; Applications (CG&amp;amp;A), Special Issue on Information Visualization&#039;&#039;, 22(1), pages 22-29. Created at: January/February, 2002. http://www.kosara.net/papers/Kosara_CGA_2002.pdf .&lt;br /&gt;
&lt;br /&gt;
*[Healey et al., 1996] Healey, C. G., Booth, K. S., and Enns, J. T. High-Speed Visual Estimation Using Preattentive Processing. &#039;&#039;ACM Transactions on Computer-Human Interaction&#039;&#039; 3(2), pages 107-135. Created at: 1996. http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf .&lt;br /&gt;
&lt;br /&gt;
*[Wolfe, Treisma, 2003] Jeremy M. Wolfe, Anne Treisman, Todd S. Horowitz. What shall we do with the preattentive processing stage: Use it or lose it? &#039;&#039;Poster presented at the Third Annual Meeting of the Vision Sciences Society&#039;&#039;, Sarasota. Created at: May, 2003. http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf .&lt;br /&gt;
&lt;br /&gt;
*[Healey, 2005] Christopher G. Healey. Perception in Visualization. Department of Computer Science, North Carolina State University. Created at: May, 2005. http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80 .&lt;br /&gt;
&lt;br /&gt;
*[Chipman, 1996] Gene Chipman. Review of High Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns). Created at: 1996, Retrieved at: October 24, 2005. http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267 .&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=22941</id>
		<title>Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=22941"/>
		<updated>2009-11-06T16:49:43Z</updated>

		<summary type="html">&lt;p&gt;Ares: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Preattentive Processing =&lt;br /&gt;
{{Quotation|Visualization is so effective and useful because it utilizes one of the channels to our brain that have the highest bandwidths: our eyes. But even this channel can be used more or less efficiently. One special property of our visual system is preattentive processing. |[Kosara et al., 2002]}}&lt;br /&gt;
&lt;br /&gt;
{{Quotation|One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. |[Healey et al., 1996]}}&lt;br /&gt;
&lt;br /&gt;
= Preattentive Features=&lt;br /&gt;
&lt;br /&gt;
{|  border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;3&amp;quot;&lt;br /&gt;
|width=&amp;quot;400&amp;quot;| &#039;&#039;&#039;Feature&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Picture&#039;&#039;&#039;&lt;br /&gt;
|width=&amp;quot;400&amp;quot; | &#039;&#039;&#039;Feature&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Picture&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;orientation&#039;&#039;&#039;&amp;lt;br&amp;gt;A different orientation of a certain object can be used to distinguish it from the other objects preattentively.&lt;br /&gt;
|[[Image:Tg_orient.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;length, width, size&#039;&#039;&#039;&amp;lt;br&amp;gt;Differences in size can be used for the preattentive distinction of various objects.&lt;br /&gt;
|[[Image:Tg_len.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;closure&#039;&#039;&#039;&amp;lt;br&amp;gt;A closed object in a pool of unclosed objects can be detected preattentively.&lt;br /&gt;
|[[Image:Tg_closure.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;curvature&#039;&#039;&#039;&amp;lt;br&amp;gt;The curvature of an object can serve as a cue to detect it preattentively.&lt;br /&gt;
|[[Image:Tg_curve.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;density, contrast&#039;&#039;&#039;&amp;lt;br&amp;gt;A difference between the density of certain objects and the density of the surrounding objects can be detected preattentively.&lt;br /&gt;
|[[Image:Tg_den.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;number, estimation&#039;&#039;&#039;&amp;lt;br&amp;gt;A group of objects with a certain feature can be detected preattentively, depending on the number of objects.&lt;br /&gt;
|[[Image:Tg_num.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;colour (hue)&#039;&#039;&#039;&amp;lt;br&amp;gt;The hue of the objects is used to divide the elements into two groups (i.e. a red group and a blue group) even though the form varies randomly from object to object. Tests showed that subjects can easily identify the hue boundary as either vertical or horizontal.&lt;br /&gt;
|[[Image:Tg_hue.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;intensity, binocular lustre&#039;&#039;&#039;&amp;lt;br&amp;gt;The intensity of an attribute (in this case brightness) can be used for the preattentive detection of an object.&lt;br /&gt;
|[[Image:Tg_lum.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;intersection&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_isect.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;terminators&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_term.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;3-D depth cues, stereoscopic depth&#039;&#039;&#039;&amp;lt;br&amp;gt;Attributes of objects that are used to distinguish between objects in 3D space. In this example the distance of the shadow acts as a depth cue: the further the shadow is from the object, the greater the distance between the object and the plane it casts its shadow on.&lt;br /&gt;
|[[Image:Tg_3d_depth.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;flicker&#039;&#039;&#039;&amp;lt;br&amp;gt;The abrupt change between two different states of the same attribute, in this example visibility.&lt;br /&gt;
|[[Image:Tg_flick.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;direction of motion&#039;&#039;&#039;&amp;lt;br&amp;gt;Differences in the direction of motion can be detected preattentively, especially if the motion is directed against the general flow of motion.&lt;br /&gt;
|[[Image:Tg_dir.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;velocity of motion&#039;&#039;&#039;&amp;lt;br&amp;gt;Another motion-related cue for preattentive processing is a difference in motion speed between an object and its environment.&lt;br /&gt;
|[[Image:Tg_vel.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;lighting direction&#039;&#039;&#039;&amp;lt;br&amp;gt;The lighting is normally constant for all objects in a scene, so variations in the lighting of a single object can be used as a preattentive cue.&lt;br /&gt;
|[[Image:Tg_3d_light.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;3D orientation&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_orient_3d.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;artistic properties&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_npr.gif|100px]]&lt;br /&gt;
|&amp;amp;nbsp;&lt;br /&gt;
|&amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
Compiled list from [Healey, 2005], [Chipman, 1996]&lt;br /&gt;
&lt;br /&gt;
{{Quotation|It is important to note that some of these features are asymmetric. For example, a sloped line in a sea of vertical lines can be detected preattentively. However, a vertical line in a sea of sloped lines cannot be detected preattentively. Another important consideration is the effect of different types of background distractors on the target feature. These factors must often be addressed when trying to design display techniques that rely on preattentive processing.|[Healey, 2005]}}&lt;br /&gt;
&lt;br /&gt;
== Examples for Preattentive Processing ==&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
Detecting the red object preattentively. [Healey et al., 1996] Only one visual variable differs, making the target very easy to find.&lt;br /&gt;
&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
Detecting the circle preattentively. [Chipman, 1996] It is more difficult but still preattentive.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_4.JPG]]&lt;br /&gt;
&lt;br /&gt;
Examples of two target detection tasks: (a) the target can be detected preattentively because it possesses the feature “filled”; (b) the target cannot be detected preattentively because it has no visual feature that is unique from its distractors. [Healey et al., 1996]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_3.JPG]]&lt;br /&gt;
&lt;br /&gt;
Region segregation by form and hue: (a) hue boundary is identified preattentively, even though form varies randomly in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form.[Healey et al., 1996]&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”. [Wolfe, Treisma, 2003]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [Healey, 2005]&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
&lt;br /&gt;
*[Kosara et al., 2002] Robert Kosara, Silvia Miksch, Helwig Hauser. Focus+Context Taken Literally. &#039;&#039;IEEE Computer Graphics &amp;amp; Applications (CG&amp;amp;A), Special Issue on Information Visualization&#039;&#039;, 22(1), pages 22-29. Created at: January/February, 2002. http://www.kosara.net/papers/Kosara_CGA_2002.pdf .&lt;br /&gt;
&lt;br /&gt;
*[Healey et al., 1996] Healey, C. G., Booth, K. S., and Enns, J. T. High-Speed Visual Estimation Using Preattentive Processing. &#039;&#039;ACM Transactions on Computer-Human Interaction&#039;&#039; 3(2), pages 107-135. Created at: 1996. http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf .&lt;br /&gt;
&lt;br /&gt;
*[Wolfe, Treisma, 2003] Jeremy M. Wolfe, Anne Treisman, Todd S. Horowitz. What shall we do with the preattentive processing stage: Use it or lose it? &#039;&#039;Poster presented at the Third Annual Meeting of the Vision Sciences Society&#039;&#039;, Sarasota. Created at: May, 2003. http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf .&lt;br /&gt;
&lt;br /&gt;
*[Healey, 2005] Christopher G. Healey. Perception in Visualization. Department of Computer Science, North Carolina State University. Created at: May, 2005. http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80 .&lt;br /&gt;
&lt;br /&gt;
*[Chipman, 1996] Gene Chipman. Review of High Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns). Created at: 1996, Retrieved at: October 24, 2005. http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267 .&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching_talk:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=22940</id>
		<title>Teaching talk:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching_talk:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=22940"/>
		<updated>2009-11-06T16:45:38Z</updated>

		<summary type="html">&lt;p&gt;Ares: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Bearbeitung 05.11.2009 ==&lt;br /&gt;
* Tabelle durch ein Bild ersetzt&lt;br /&gt;
* Quelle zu Preattentive_1.jpg richtig gestellt&lt;br /&gt;
* 1.Zitat auf richtige Autoren ausgebessert und Quellen hinzugefügt&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching_talk:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=22939</id>
		<title>Teaching talk:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching_talk:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=22939"/>
		<updated>2009-11-06T16:45:22Z</updated>

		<summary type="html">&lt;p&gt;Ares: New page: ==Bearbeitung: 03.11.2009==  * Definition zu Quotes gemacht: weil es Quotes sind. * Tablegrafik in Wikitable umgewandelt * Referezen formatiert  == Bearbeitung 06.11.2009 ==  * Tabelle erw...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Revision: 03.11.2009==&lt;br /&gt;
&lt;br /&gt;
* Turned the definitions into quotes, because they are quotes.&lt;br /&gt;
* Converted the table graphic into a wikitable&lt;br /&gt;
* Formatted the references&lt;br /&gt;
&lt;br /&gt;
== Revision 06.11.2009 ==&lt;br /&gt;
&lt;br /&gt;
* Extended the table with graphics and descriptions to make it easier to understand&lt;br /&gt;
* Omitted references from the original list, since they are already given in the Healey reference&lt;br /&gt;
* Added a quote below the table for additional information on preattentive features&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=22938</id>
		<title>Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=22938"/>
		<updated>2009-11-06T16:41:34Z</updated>

		<summary type="html">&lt;p&gt;Ares: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Preattentive Processing =&lt;br /&gt;
{{Quotation|Visualization is so effective and useful because it utilizes one of the channels to our brain that have the highest bandwidths: our eyes. But even this channel can be used more or less efficiently. One special property of our visual system is preattentive processing. |[Kosara et al., 2002]}}&lt;br /&gt;
&lt;br /&gt;
{{Quotation|One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. |[Healey et al., 1996]}}&lt;br /&gt;
&lt;br /&gt;
= Preattentive Features=&lt;br /&gt;
&lt;br /&gt;
{|  border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;3&amp;quot;&lt;br /&gt;
|width=&amp;quot;400&amp;quot;| &#039;&#039;&#039;Feature&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Picture&#039;&#039;&#039;&lt;br /&gt;
|width=&amp;quot;400&amp;quot; | &#039;&#039;&#039;Feature&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Picture&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;orientation&#039;&#039;&#039;&amp;lt;br&amp;gt;A different orientation of a certain object can be used to distinguish it from the other objects preattentively.&lt;br /&gt;
|[[Image:Tg_orient.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;length, width, size&#039;&#039;&#039;&amp;lt;br&amp;gt;Differences in size can be used for the preattentive distinction of various objects.&lt;br /&gt;
|[[Image:Tg_len.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;closure&#039;&#039;&#039;&amp;lt;br&amp;gt;A closed object in a pool of unclosed objects can be detected preattentively.&lt;br /&gt;
|[[Image:Tg_closure.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;curvature&#039;&#039;&#039;&amp;lt;br&amp;gt;The curvature of an object can serve to detect it preattentively.&lt;br /&gt;
|[[Image:Tg_curve.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;density, contrast&#039;&#039;&#039;&amp;lt;br&amp;gt;The difference of the density of certain objects to the density of the surrounding objects can be detected preattentively.&lt;br /&gt;
|[[Image:Tg_den.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;number, estimation&#039;&#039;&#039;&amp;lt;br&amp;gt;A group of objects with a certain feature can be detected preattentively, depending on the number of objects.&lt;br /&gt;
|[[Image:Tg_num.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;colour (hue)&#039;&#039;&#039;&amp;lt;br&amp;gt;The hue of the objects is used to divide the elements into two groups (i.e. a red group and a blue group), though the form varies randomly from object to object. Tests showed that subjects can easily identify the hue boundary as either vertical or horizontal.&lt;br /&gt;
|[[Image:Tg_hue.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;intensity, binocular lustre&#039;&#039;&#039;&amp;lt;br&amp;gt;The intensity of an attribute (in this case brightness) can be used for the preattentive detection of an object.&lt;br /&gt;
|[[Image:Tg_lum.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;intersection&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_isect.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;terminators&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_term.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;3-D depth cues, stereoscopic depth&#039;&#039;&#039;&amp;lt;br&amp;gt;Attributes of objects that are used to distinguish between objects in 3D space. In this example the distance of the shadow: the further the shadow is from the object, the greater the distance between the object and the plane it casts its shadow on.&lt;br /&gt;
|[[Image:Tg_3d_depth.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;flicker&#039;&#039;&#039;&amp;lt;br&amp;gt;Describes the abrupt change between two different states of the same attribute, in this example visibility.&lt;br /&gt;
|[[Image:Tg_flick.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;direction of motion&#039;&#039;&#039;&amp;lt;br&amp;gt;Differences in the direction of motion can be detected preattentively especially if the motion is directed against the flow of general motion.&lt;br /&gt;
|[[Image:Tg_dir.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;velocity of motion&#039;&#039;&#039;&amp;lt;br&amp;gt;Another motion-related cue for preattentive processing is a difference in motion speed between an object and its environment.&lt;br /&gt;
|[[Image:Tg_vel.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;lighting direction&#039;&#039;&#039;&amp;lt;br&amp;gt;The lighting is normally constant for all objects in a scene, so variations in the lighting of a single object can be used as a preattentive cue.&lt;br /&gt;
|[[Image:Tg_3d_light.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;3D orientation&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_orient_3d.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;artistic properties&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_npr.gif|100px]]&lt;br /&gt;
|&amp;amp;nbsp;&lt;br /&gt;
|&amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
Compiled list from [Healey, 2005], [Chipman, 1996]&lt;br /&gt;
&lt;br /&gt;
{{Quotation|It is important to note that some of these features are asymmetric. For example, a sloped line in a sea of vertical lines can be detected preattentively. However, a vertical line in a sea of sloped lines cannot be detected preattentively. Another important consideration is the effect of different types of background distractors on the target feature. These factors must often be addressed when trying to design display techniques that rely on preattentive processing.|[Healey, 2005]}}&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
Detecting the red object preattentively. [Healey et al., 1996] Only one visual variable differs, so the target is very easy to find.&lt;br /&gt;
&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
Detecting the circle preattentively. [Chipman, 1996] It is more difficult, but still preattentive.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_4.JPG]]&lt;br /&gt;
&lt;br /&gt;
Examples of two target detection tasks: (a) the target can be detected preattentively because it possesses the feature “filled”; (b) the target cannot be detected preattentively because it has no visual feature that is unique from its distractors. [Healey et al., 1996]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_3.JPG]]&lt;br /&gt;
&lt;br /&gt;
Region segregation by form and hue: (a) hue boundary is identified preattentively, even though form varies randomly in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form.[Healey et al., 1996]&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”. [Wolfe, Treisman, 2003]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [Healey, 2005]&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
&lt;br /&gt;
*[Kosara et al., 2002] Robert Kosara, Silvia Miksch, Helwig Hauser. Focus+Context Taken Literally. &#039;&#039;IEEE Computer Graphics &amp;amp; Applications (CG&amp;amp;A), Special Issue on Information Visualization&#039;&#039;, 22(1), pages 22-29. Created at: January/February, 2002. http://www.kosara.net/papers/Kosara_CGA_2002.pdf .&lt;br /&gt;
&lt;br /&gt;
*[Healey et al., 1996] Healey, C. G., Booth, K. S., and Enns, J. T.. High-Speed Visual Estimation Using Preattentive Processing. &#039;&#039;ACM Transactions on Human Computer Interaction&#039;&#039; 3(2), pages 107-135, Created at: 1996. http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf .&lt;br /&gt;
&lt;br /&gt;
*[Wolfe, Treisman, 2003] Jeremy M. Wolfe, Anne Treisman. What shall we do with the preattentive processing stage: Use it or lose it? &#039;&#039;Poster presented with Todd S. Horowitz at the Third Annual Meeting of the Vision Sciences Society&#039;&#039;, Sarasota. Created at: May, 2003. http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf .&lt;br /&gt;
&lt;br /&gt;
*[Healey, 2005] Christopher G. Healey. Perception in Visualization. Department of Computer Science, North Carolina State University. Created at: May, 2005. http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80 .&lt;br /&gt;
&lt;br /&gt;
*[Chipman, 1996] Gene Chipman. Review of High-Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns). Created at: 1996, Retrieved at: October 24, 2005. http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267 .&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=22937</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G12 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=22937"/>
		<updated>2009-11-06T16:40:42Z</updated>

		<summary type="html">&lt;p&gt;Ares: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Preattentive Processing =&lt;br /&gt;
{{Quotation|Visualization is so effective and useful because it utilizes one of the channels to our brain that have the highest bandwidths: our eyes. But even this channel can be used more or less efficiently. One special property of our visual system is preattentive processing. |[Kosara et al., 2002]}}&lt;br /&gt;
&lt;br /&gt;
{{Quotation|One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. |[Healey et al., 1996]}}&lt;br /&gt;
&lt;br /&gt;
= Preattentive Features=&lt;br /&gt;
&lt;br /&gt;
{|  border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;3&amp;quot;&lt;br /&gt;
|width=&amp;quot;400&amp;quot;| &#039;&#039;&#039;Feature&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Picture&#039;&#039;&#039;&lt;br /&gt;
|width=&amp;quot;400&amp;quot; | &#039;&#039;&#039;Feature&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Picture&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;orientation&#039;&#039;&#039;&amp;lt;br&amp;gt;A different orientation of a certain object can be used to distinguish it from the other objects preattentively.&lt;br /&gt;
|[[Image:Tg_orient.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;length, width, size&#039;&#039;&#039;&amp;lt;br&amp;gt;Differences in size can be used for the preattentive distinction of various objects.&lt;br /&gt;
|[[Image:Tg_len.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;closure&#039;&#039;&#039;&amp;lt;br&amp;gt;A closed object in a pool of unclosed objects can be detected preattentively.&lt;br /&gt;
|[[Image:Tg_closure.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;curvature&#039;&#039;&#039;&amp;lt;br&amp;gt;The curvature of an object can serve to detect it preattentively.&lt;br /&gt;
|[[Image:Tg_curve.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;density, contrast&#039;&#039;&#039;&amp;lt;br&amp;gt;The difference of the density of certain objects to the density of the surrounding objects can be detected preattentively.&lt;br /&gt;
|[[Image:Tg_den.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;number, estimation&#039;&#039;&#039;&amp;lt;br&amp;gt;A group of objects with a certain feature can be detected preattentively, depending on the number of objects.&lt;br /&gt;
|[[Image:Tg_num.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;colour (hue)&#039;&#039;&#039;&amp;lt;br&amp;gt;The hue of the objects is used to divide the elements into two groups (i.e. a red group and a blue group), though the form varies randomly from object to object. Tests showed that subjects can easily identify the hue boundary as either vertical or horizontal.&lt;br /&gt;
|[[Image:Tg_hue.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;intensity, binocular lustre&#039;&#039;&#039;&amp;lt;br&amp;gt;The intensity of an attribute (in this case brightness) can be used for the preattentive detection of an object.&lt;br /&gt;
|[[Image:Tg_lum.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;intersection&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_isect.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;terminators&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_term.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;3-D depth cues, stereoscopic depth&#039;&#039;&#039;&amp;lt;br&amp;gt;Attributes of objects that are used to distinguish between objects in 3D space. In this example the distance of the shadow: the further the shadow is from the object, the greater the distance between the object and the plane it casts its shadow on.&lt;br /&gt;
|[[Image:Tg_3d_depth.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;flicker&#039;&#039;&#039;&amp;lt;br&amp;gt;Describes the abrupt change between two different states of the same attribute, in this example visibility.&lt;br /&gt;
|[[Image:Tg_flick.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;direction of motion&#039;&#039;&#039;&amp;lt;br&amp;gt;Differences in the direction of motion can be detected preattentively especially if the motion is directed against the flow of general motion.&lt;br /&gt;
|[[Image:Tg_dir.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;velocity of motion&#039;&#039;&#039;&amp;lt;br&amp;gt;Another motion-related cue for preattentive processing is a difference in motion speed between an object and its environment.&lt;br /&gt;
|[[Image:Tg_vel.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;lighting direction&#039;&#039;&#039;&amp;lt;br&amp;gt;The lighting is normally constant for all objects in a scene, so variations in the lighting of a single object can be used as a preattentive cue.&lt;br /&gt;
|[[Image:Tg_3d_light.gif|100px]]&lt;br /&gt;
|&#039;&#039;&#039;3D orientation&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_orient_3d.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;artistic properties&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
|[[Image:Tg_npr.gif|100px]]&lt;br /&gt;
|&amp;amp;nbsp;&lt;br /&gt;
|&amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
Compiled list from [Healey, 2005], [Chipman, 1996]&lt;br /&gt;
&lt;br /&gt;
{{Quotation|It is important to note that some of these features are asymmetric. For example, a sloped line in a sea of vertical lines can be detected preattentively. However, a vertical line in a sea of sloped lines cannot be detected preattentively. Another important consideration is the effect of different types of background distractors on the target feature. These factors must often be addressed when trying to design display techniques that rely on preattentive processing.|[Healey, 2005]}}&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
Detecting the red object preattentively. [Healey et al., 1996] Only one visual variable differs, so the target is very easy to find.&lt;br /&gt;
&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
Detecting the circle preattentively. [Chipman, 1996] It is more difficult, but still preattentive.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_4.JPG]]&lt;br /&gt;
&lt;br /&gt;
Examples of two target detection tasks: (a) the target can be detected preattentively because it possesses the feature “filled”; (b) the target cannot be detected preattentively because it has no visual feature that is unique from its distractors. [Healey et al., 1996]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_3.JPG]]&lt;br /&gt;
&lt;br /&gt;
Region segregation by form and hue: (a) hue boundary is identified preattentively, even though form varies randomly in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form.[Healey et al., 1996]&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”. [Wolfe, Treisman, 2003]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [Healey, 2005]&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
&lt;br /&gt;
*[Kosara et al., 2002] Robert Kosara, Silvia Miksch, Helwig Hauser. Focus+Context Taken Literally. &#039;&#039;IEEE Computer Graphics &amp;amp; Applications (CG&amp;amp;A), Special Issue on Information Visualization&#039;&#039;, 22(1), pages 22-29. Created at: January/February, 2002. http://www.kosara.net/papers/Kosara_CGA_2002.pdf .&lt;br /&gt;
&lt;br /&gt;
*[Healey et al., 1996] Healey, C. G., Booth, K. S., and Enns, J. T.. High-Speed Visual Estimation Using Preattentive Processing. &#039;&#039;ACM Transactions on Human Computer Interaction&#039;&#039; 3(2), pages 107-135, Created at: 1996. http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf .&lt;br /&gt;
&lt;br /&gt;
*[Wolfe, Treisman, 2003] Jeremy M. Wolfe, Anne Treisman. What shall we do with the preattentive processing stage: Use it or lose it? &#039;&#039;Poster presented with Todd S. Horowitz at the Third Annual Meeting of the Vision Sciences Society&#039;&#039;, Sarasota. Created at: May, 2003. http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf .&lt;br /&gt;
&lt;br /&gt;
*[Healey, 2005] Christopher G. Healey. Perception in Visualization. Department of Computer Science, North Carolina State University. Created at: May, 2005. http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80 .&lt;br /&gt;
&lt;br /&gt;
*[Chipman, 1996] Gene Chipman. Review of High-Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns). Created at: 1996, Retrieved at: October 24, 2005. http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267 .&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=22936</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G12 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=22936"/>
		<updated>2009-11-06T16:40:25Z</updated>

		<summary type="html">&lt;p&gt;Ares: Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G12 - Aufgabe 1 - Preattentive Processing moved to Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Preattentive Processing&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Preattentive Processing]]&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=22935</id>
		<title>Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=22935"/>
		<updated>2009-11-06T16:40:25Z</updated>

		<summary type="html">&lt;p&gt;Ares: Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G12 - Aufgabe 1 - Preattentive Processing moved to Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Preattentive Processing&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Preattentive Processing =&lt;br /&gt;
{{Quotation|Visualization is so effective and useful because it utilizes one of the channels to our brain that have the highest bandwidths: our eyes. But even this channel can be used more or less efficiently. One special property of our visual system is preattentive processing. |[Kosara et al., 2002]}}&lt;br /&gt;
&lt;br /&gt;
{{Quotation|One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. |[Healey et al., 1996]}}&lt;br /&gt;
&lt;br /&gt;
= Preattentive Features=&lt;br /&gt;
&lt;br /&gt;
{|  border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;3&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;Feature&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Publication&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Picture&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|orientation&lt;br /&gt;
|Julesz &amp;amp; Bergen [1983]; Wolfe et al. [1992]&lt;br /&gt;
|[[Image:Tg_orient.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|length, width&lt;br /&gt;
|Treisman &amp;amp; Gormican [1988]; Julesz [1985]&lt;br /&gt;
|[[Image:Tg_len.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|closure&lt;br /&gt;
|Enns [1986]; Treisman &amp;amp; Souther [1985]&lt;br /&gt;
|[[Image:Tg_closure.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|size&lt;br /&gt;
|Treisman &amp;amp; Gelade [1980]&lt;br /&gt;
|[[Image:Tg_size.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|curvature&lt;br /&gt;
|Treisman &amp;amp; Gormican [1988]&lt;br /&gt;
|[[Image:Tg_curve.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|density, contrast&lt;br /&gt;
|Healey [2005]&lt;br /&gt;
|[[Image:Tg_den.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|number, estimation&lt;br /&gt;
|Julesz [1985]; Trick &amp;amp; Pylyshyn [1994]&lt;br /&gt;
|[[Image:Tg_num.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|colour (hue)&lt;br /&gt;
|Nagy &amp;amp; Sanchez [1990]; D&#039;Zmura [1991]; Kawai et al. [1995]; Bauer et al. [1996]&lt;br /&gt;
|[[Image:Tg_hue.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|intensity, binocular lustre&lt;br /&gt;
|Beck [1983]; Treisman &amp;amp; Gormican [1988]; Wolfe &amp;amp; Franzel [1988]&lt;br /&gt;
|[[Image:Tg_lum.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|intersection&lt;br /&gt;
|Julesz &amp;amp; Bergen [1983]&lt;br /&gt;
|[[Image:Tg_isect.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|terminators&lt;br /&gt;
|Julesz &amp;amp; Bergen [1983]&lt;br /&gt;
|[[Image:Tg_term.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|3-D depth cues, stereoscopic depth&lt;br /&gt;
|Enns [1990]; Nakayama &amp;amp; Silverman [1986]&lt;br /&gt;
|[[Image:Tg_3d_depth.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|flicker&lt;br /&gt;
|Julesz [1971]&lt;br /&gt;
|[[Image:Tg_flick.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|direction of motion&lt;br /&gt;
|Nakayama &amp;amp; Silverman [1986]; Driver &amp;amp; McLeod [1992]&lt;br /&gt;
|[[Image:Tg_dir.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|velocity of motion&lt;br /&gt;
|Nakayama &amp;amp; Silverman [1986]; Driver &amp;amp; McLeod [1992]&lt;br /&gt;
|[[Image:Tg_vel.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|lighting direction&lt;br /&gt;
|Enns [1990]&lt;br /&gt;
|[[Image:Tg_3d_light.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|3D orientation&lt;br /&gt;
|Enns &amp;amp; Rensink; Liu et al. [2003]&lt;br /&gt;
|[[Image:Tg_orient_3d.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|artistic properties&lt;br /&gt;
|Healey [2001]; Healey &amp;amp; Enns [2002]; Healey et al. [2004]&lt;br /&gt;
|[[Image:Tg_npr.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
Compiled list from [Healey, 2005], [Chipman, 1996]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A partial list of preattentive visual features, together with references to research that showed they were preattentive. [Chipman, 1996]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
Detecting the red object preattentively. [Healey et al., 1996] Only one visual variable differs, so the target is very easy to find.&lt;br /&gt;
&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
Detecting the circle preattentively. [Chipman, 1996] It is more difficult, but still preattentive.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_4.JPG]]&lt;br /&gt;
&lt;br /&gt;
Examples of two target detection tasks: (a) the target can be detected preattentively because it possesses the feature “filled”; (b) the target cannot be detected preattentively because it has no visual feature that is unique from its distractors. [Healey et al., 1996]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_3.JPG]]&lt;br /&gt;
&lt;br /&gt;
Region segregation by form and hue: (a) hue boundary is identified preattentively, even though form varies randomly in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form.[Healey et al., 1996]&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”. [Wolfe, Treisman, 2003]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [Healey, 2005]&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
&lt;br /&gt;
*[Kosara et al., 2002] Robert Kosara, Silvia Miksch, Helwig Hauser. Focus+Context Taken Literally. &#039;&#039;IEEE Computer Graphics &amp;amp; Applications (CG&amp;amp;A), Special Issue on Information Visualization&#039;&#039;, 22(1), pages 22-29. Created at: January/February, 2002. http://www.kosara.net/papers/Kosara_CGA_2002.pdf .&lt;br /&gt;
&lt;br /&gt;
*[Healey et al., 1996] Healey, C. G., Booth, K. S., and Enns, J. T.. High-Speed Visual Estimation Using Preattentive Processing. &#039;&#039;ACM Transactions on Human Computer Interaction&#039;&#039; 3(2), pages 107-135, Created at: 1996. http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf .&lt;br /&gt;
&lt;br /&gt;
*[Wolfe, Treisman, 2003] Jeremy M. Wolfe, Anne Treisman. What shall we do with the preattentive processing stage: Use it or lose it? &#039;&#039;Poster presented with Todd S. Horowitz at the Third Annual Meeting of the Vision Sciences Society&#039;&#039;, Sarasota. Created at: May, 2003. http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf .&lt;br /&gt;
&lt;br /&gt;
*[Healey, 2005] Christopher G. Healey. Perception in Visualization. Department of Computer Science, North Carolina State University. Created at: May, 2005. http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80 .&lt;br /&gt;
&lt;br /&gt;
*[Chipman, 1996] Gene Chipman. Review of High-Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns). Created at: 1996, Retrieved at: October 24, 2005. http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267 .&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=22933</id>
		<title>Teaching:TUW - UE InfoVis WS 2009/10 - Gruppe G12 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2009/10_-_Gruppe_G12_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=22933"/>
		<updated>2009-11-06T16:10:31Z</updated>

		<summary type="html">&lt;p&gt;Ares: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Preattentive Processing =&lt;br /&gt;
{{Quotation|Visualization is so effective and useful because it utilizes one of the channels to our brain that have the highest bandwidths: our eyes. But even this channel can be used more or less efficiently. One special property of our visual system is preattentive processing. |[Kosara et al., 2002]}}&lt;br /&gt;
&lt;br /&gt;
{{Quotation|One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. |[Healey et al., 1996]}}&lt;br /&gt;
&lt;br /&gt;
= Preattentive Features=&lt;br /&gt;
&lt;br /&gt;
{|  border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;3&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;Feature&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Publication&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;Picture&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|orientation&lt;br /&gt;
|Julesz &amp;amp; Bergen [1983]; Wolfe et al. [1992]&lt;br /&gt;
|[[Image:Tg_orient.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|length, width&lt;br /&gt;
|Treisman &amp;amp; Gormican [1988]; Julesz [1985]&lt;br /&gt;
|[[Image:Tg_len.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|closure&lt;br /&gt;
|Enns [1986]; Treisman &amp;amp; Souther [1985]&lt;br /&gt;
|[[Image:Tg_closure.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|size&lt;br /&gt;
|Treisman &amp;amp; Gelade [1980]&lt;br /&gt;
|[[Image:Tg_size.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|curvature&lt;br /&gt;
|Treisman &amp;amp; Gormican [1988]&lt;br /&gt;
|[[Image:Tg_curve.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|density, contrast&lt;br /&gt;
|Healey [2005]&lt;br /&gt;
|[[Image:Tg_den.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|number, estimation&lt;br /&gt;
|Julesz [1985]; Trick &amp;amp; Pylyshyn [1994]&lt;br /&gt;
|[[Image:Tg_num.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|colour (hue)&lt;br /&gt;
|Nagy &amp;amp; Sanchez [1990]; D&#039;Zmura [1991]; Kawai et al. [1995]; Bauer et al. [1996]&lt;br /&gt;
|[[Image:Tg_hue.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|intensity, binocular lustre&lt;br /&gt;
|Beck [1983]; Treisman &amp;amp; Gormican [1988]; Wolfe &amp;amp; Franzel [1988]&lt;br /&gt;
|[[Image:Tg_lum.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|intersection&lt;br /&gt;
|Julesz &amp;amp; Bergen [1983]&lt;br /&gt;
|[[Image:Tg_isect.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|terminators&lt;br /&gt;
|Julesz &amp;amp; Bergen [1983]&lt;br /&gt;
|[[Image:Tg_term.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|3-D depth cues, stereoscopic depth&lt;br /&gt;
|Enns [1990]; Nakayama &amp;amp; Silverman [1986]&lt;br /&gt;
|[[Image:Tg_3d_depth.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|flicker&lt;br /&gt;
|Julesz [1971]&lt;br /&gt;
|[[Image:Tg_flick.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|direction of motion&lt;br /&gt;
|Nakayama &amp;amp; Silverman [1986]; Driver &amp;amp; McLeod [1992]&lt;br /&gt;
|[[Image:Tg_dir.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|velocity of motion&lt;br /&gt;
|Nakayama &amp;amp; Silverman [1986]; Driver &amp;amp; McLeod [1992]&lt;br /&gt;
|[[Image:Tg_vel.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|lighting direction&lt;br /&gt;
|Enns [1990]&lt;br /&gt;
|[[Image:Tg_3d_light.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|3D orientation&lt;br /&gt;
|Enns &amp;amp; Rensink; Liu et al. [2003]&lt;br /&gt;
|[[Image:Tg_orient_3d.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|artistic properties&lt;br /&gt;
|Healey [2001]; Healey &amp;amp; Enns [2002]; Healey et al. [2004]&lt;br /&gt;
|[[Image:Tg_npr.gif|100px]]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
List compiled from [Healey, 2005] and [Chipman, 1996].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A partial list of preattentive visual features, together with references to the research showing that they are processed preattentively. [Chipman, 1996]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
Detecting the red object preattentively. [Healey et al., 1996] Only a single visual feature (hue) distinguishes the target, so it is very easy to find.&lt;br /&gt;
&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
Detecting the circle preattentively. [Chipman, 1996] This is harder, but still preattentive.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_4.JPG]]&lt;br /&gt;
&lt;br /&gt;
Examples of two target detection tasks: (a) the target can be detected preattentively because it possesses the feature “filled”; (b) the target cannot be detected preattentively because no single visual feature distinguishes it from its distractors. [Healey et al., 1996]&lt;br /&gt;
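The two task types above can be illustrated by construction. The sketch below (illustrative only; `make_display` is a hypothetical helper, not from the cited papers) builds item lists for both kinds of display: in the feature-search display the target is the only red item, so it pops out; in the conjunction-search display red squares and blue circles are present, so only the combination colour+shape identifies the target.

```python
import random

def make_display(n_items, conjunction=False, seed=0):
    """Generate (colour, shape) item specs for a visual-search display.

    conjunction=False: all distractors are blue circles; the lone red
    circle is preattentively detectable by hue alone.
    conjunction=True: distractors are red squares or blue circles, each
    sharing one feature with the target, so no single feature is unique.
    """
    rng = random.Random(seed)
    items = []
    for _ in range(n_items - 1):
        if conjunction:
            items.append(rng.choice([("red", "square"), ("blue", "circle")]))
        else:
            items.append(("blue", "circle"))
    items.append(("red", "circle"))  # the target
    rng.shuffle(items)
    return items

feature_display = make_display(20, conjunction=False)
conjunction_display = make_display(20, conjunction=True)

# Feature search: exactly one red item in the whole display.
print(sum(1 for colour, _ in feature_display if colour == "red"))  # prints 1
# Conjunction search: several red items, but only one red circle.
print(conjunction_display.count(("red", "circle")))  # prints 1
```

Counting unique features in each display mirrors the prediction: the feature-search target can be found in a single glimpse, while the conjunction-search target requires serial inspection.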
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_3.JPG]]&lt;br /&gt;
&lt;br /&gt;
Region segregation by form and hue: (a) the hue boundary is identified preattentively, even though form varies randomly in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form. [Healey et al., 1996]&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
Any visual processing of an item prior to the act of selection can be called “preattentive”. [Wolfe, Treisman, 2003]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [Healey, 2005]&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
&lt;br /&gt;
*[Kosara et al., 2002] Robert Kosara, Silvia Miksch, Helwig Hauser. Focus+Context Taken Literally. &#039;&#039;IEEE Computer Graphics &amp;amp; Applications (CG&amp;amp;A), Special Issue on Information Visualization&#039;&#039;, 22(1), pages 22-29. Created at: January/February, 2002. http://www.kosara.net/papers/Kosara_CGA_2002.pdf .&lt;br /&gt;
&lt;br /&gt;
*[Healey et al., 1996] Healey, C. G., Booth, K. S., and Enns, J. T. High-Speed Visual Estimation Using Preattentive Processing. &#039;&#039;ACM Transactions on Computer-Human Interaction&#039;&#039; 3(2), pages 107-135. Created at: 1996. http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf .&lt;br /&gt;
&lt;br /&gt;
*[Wolfe, Treisman, 2003] Jeremy M. Wolfe, Anne Treisman. What shall we do with the preattentive processing stage: Use it or lose it? &#039;&#039;Poster (with Todd S. Horowitz) presented at the Third Annual Meeting of the Vision Sciences Society&#039;&#039;, Sarasota. Created at: May, 2003. http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf .&lt;br /&gt;
&lt;br /&gt;
*[Healey, 2005] Christopher G. Healey. Perception in Visualization. Department of Computer Science, North Carolina State University. Created at: May, 2005. http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80 .&lt;br /&gt;
&lt;br /&gt;
*[Chipman, 1996] Gene Chipman. Review of High Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns). Created at: 1996, Retrieved at: October 24, 2005. http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267 .&lt;/div&gt;</summary>
		<author><name>Ares</name></author>
	</entry>
</feed>