<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://infovis-wiki.net/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Menace</id>
	<title>InfoVis:Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://infovis-wiki.net/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Menace"/>
	<link rel="alternate" type="text/html" href="https://infovis-wiki.net/wiki/Special:Contributions/Menace"/>
	<updated>2026-04-08T17:08:01Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.6</generator>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_3&amp;diff=8031</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 3</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_3&amp;diff=8031"/>
		<updated>2005-11-21T00:49:04Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Special Interests of Target Groups */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Topic: Webserver Logfile Visualization==&lt;br /&gt;
==Application Area Analysis==&lt;br /&gt;
==Dataset Analysis==&lt;br /&gt;
The dataset we are analysing is a webserver logfile; specifically, an Apache webserver access logfile. Originally there was no commonly accepted standard for logfiles, which made statistics, comparison and visualisation of logfile data very complicated [1][W3C]. Today several different logfile standards exist. Two important formats are described below.&lt;br /&gt;
===The Common Logfile Format===&lt;br /&gt;
According to the World Wide Web Consortium [1][W3C] the Common Logfile Format is as follows:&lt;br /&gt;
    &#039;&#039;remotehost rfc931 authuser [date] &amp;quot;request&amp;quot; status bytes&#039;&#039;&lt;br /&gt;
{|&lt;br /&gt;
|&#039;&#039;remotehost&#039;&#039;: ||Remote hostname (or IP number if DNS hostname is not available, or if DNSLookup is Off).&lt;br /&gt;
|-&lt;br /&gt;
|||remotehost is a one-dimensional discrete datatype. The IP address, however, carries different kinds of information. For a detailed description see [http://en.wikipedia.org/wiki/Ip_address Wikipedia].&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;rfc931&#039;&#039;:||The remote logname of the user. &lt;br /&gt;
|-&lt;br /&gt;
|||rfc931 is a one-dimensional discrete datatype.&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;authuser&#039;&#039;:||The username as which the user has authenticated himself. &lt;br /&gt;
|-&lt;br /&gt;
|||authuser is a one-dimensional discrete datatype.&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;[date]&#039;&#039;:||Date and time of the request. &lt;br /&gt;
|-&lt;br /&gt;
|||date has seven dimensions in the following format: [day/month/year:hour:minute:second zone]&amp;lt;br&amp;gt;&lt;br /&gt;
day: ordinal, 2 digits&amp;lt;br&amp;gt;&lt;br /&gt;
month: nominal, 3 letters&amp;lt;br&amp;gt;&lt;br /&gt;
year: ordinal, 4 digits&amp;lt;br&amp;gt;&lt;br /&gt;
hour: ordinal, 2 digits&amp;lt;br&amp;gt;&lt;br /&gt;
minute: ordinal, 2 digits&amp;lt;br&amp;gt;&lt;br /&gt;
second: ordinal, 2 digits&amp;lt;br&amp;gt;&lt;br /&gt;
zone: nominal, + or - followed by 4 digits&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&amp;quot;request&amp;quot;&#039;&#039;:||The request line exactly as it came from the client.&lt;br /&gt;
|-&lt;br /&gt;
|||request has three dimensions in the following format: &amp;quot;method /filename HTTP/version&amp;quot;&amp;lt;br&amp;gt;&lt;br /&gt;
The request method is nominal; there are eight defined [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 3 - request methods| methods]].&amp;lt;br&amp;gt;&lt;br /&gt;
The filename is discrete&amp;lt;br&amp;gt;&lt;br /&gt;
The HTTP version is theoretically ordinal. However, so far there exist only versions 0.9, 1.0 and the current version 1.1&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;status&#039;&#039;:||The HTTP Status Code returned to the client. &lt;br /&gt;
|-&lt;br /&gt;
|||status is a one-dimensional nominal datatype. [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 3 - HTTP Status Code| Here]] you will find a  description of the [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 3 - HTTP Status Code| HTTP Status Code]] classes.&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;bytes&#039;&#039;:||The content-length of the document transferred.&lt;br /&gt;
|-&lt;br /&gt;
|||bytes is a one-dimensional ordinal datatype.&lt;br /&gt;
|}&lt;br /&gt;
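As an illustration (not part of the original format definition), a minimal Python sketch of how a Common Logfile Format line could be split into the seven fields described above; the regular expression and the parsing approach are our own assumptions based on the W3C definition:

```python
import re
from datetime import datetime

# Common Logfile Format: remotehost rfc931 authuser [date] "request" status bytes
CLF_PATTERN = re.compile(
    r'(\S+) (\S+) (\S+) \[([^\]]+)\] "([^"]*)" (\d{3}) (\d+|-)'
)

def parse_clf(line):
    """Split one Common Logfile Format line into its seven fields."""
    match = CLF_PATTERN.match(line)
    if match is None:
        return None
    remotehost, rfc931, authuser, date, request, status, nbytes = match.groups()
    # The seven-dimensional date field: day/month/year:hour:minute:second zone
    timestamp = datetime.strptime(date, "%d/%b/%Y:%H:%M:%S %z")
    return {
        "remotehost": remotehost, "rfc931": rfc931, "authuser": authuser,
        "date": timestamp, "request": request,
        "status": status, "bytes": nbytes,
    }

entry = parse_clf('128.131.167.8 - - [16/Oct/2005:09:56:22 +0200] '
                  '"GET /skins/monobook/external.png HTTP/1.1" 200 1178')
```

Note that bytes may be "-" when no content was transferred, which is why it is matched as a string rather than converted to a number here.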
&lt;br /&gt;
===Combined Logfile Format===&lt;br /&gt;
The Combined Logfile Format adds two further fields to the Common Logfile Format (see [2][Apache]):&lt;br /&gt;
{|&lt;br /&gt;
|&#039;&#039;referer&#039;&#039;:||This gives the site that the client reports having been referred from.&lt;br /&gt;
|-&lt;br /&gt;
|||referer is a one-dimensional discrete datatype.&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;agent&#039;&#039;:||The User-Agent HTTP request header. This is the identifying information that the client browser reports about itself.&lt;br /&gt;
|-&lt;br /&gt;
|||agent is a one-dimensional discrete datatype.&lt;br /&gt;
|}&lt;br /&gt;
One entry in the Combined Logfile Format looks as follows:&lt;br /&gt;
&lt;br /&gt;
    &#039;&#039;remotehost rfc931 authuser [date] &amp;quot;request&amp;quot; status bytes &amp;quot;referer&amp;quot; &amp;quot;agent&amp;quot;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Example Data===&lt;br /&gt;
The example data we will use for the prototype uses the Combined Logfile Format. One example entry in this file looks as follows:&lt;br /&gt;
&lt;br /&gt;
   128.131.167.8 - - [16/Oct/2005:09:56:22 +0200] &amp;quot;GET /skins/monobook/external.png HTTP/1.1&amp;quot; &amp;lt;br&amp;gt;  200 1178 &amp;quot;http://www.infovis-wiki.net/index.php/Main_Page&amp;quot; &amp;quot;Mozilla/4.0 &amp;lt;br&amp;gt;  (compatible; MSIE 6.0; Windows NT 5.0)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;remotehost&#039;&#039;: 128.131.167.8&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;rfc931&#039;&#039;: -&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;authuser&#039;&#039;: -&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;[date]&#039;&#039;: [16/Oct/2005:09:56:22 +0200]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;request&amp;quot;&#039;&#039;: &amp;quot;GET /skins/monobook/external.png HTTP/1.1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;status&#039;&#039;: 200&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;bytes&#039;&#039;: 1178&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Referer&#039;&#039;: &amp;quot;http://www.infovis-wiki.net/index.php/Main_Page&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Agent&#039;&#039;: &amp;quot;Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The whole example data file can be downloaded [http://asgaard.tuwien.ac.at/%7Eaigner/teaching/infovis_ue/data/infovis-wiki_httpd-logs.tgz here].&lt;br /&gt;
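For illustration, the example entry above can also be decomposed programmatically. This is a hedged sketch (the regular expression is our own construction, not part of the assignment) that extends the Common Logfile Format pattern by the two quoted referer and agent fields:

```python
import re

# Combined Logfile Format:
# remotehost rfc931 authuser [date] "request" status bytes "referer" "agent"
COMBINED_PATTERN = re.compile(
    r'(\S+) (\S+) (\S+) \[([^\]]+)\] "([^"]*)" (\d{3}) (\d+|-) "([^"]*)" "([^"]*)"'
)

# The example entry from above, with the display line breaks removed
line = ('128.131.167.8 - - [16/Oct/2005:09:56:22 +0200] '
        '"GET /skins/monobook/external.png HTTP/1.1" 200 1178 '
        '"http://www.infovis-wiki.net/index.php/Main_Page" '
        '"Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"')

(remotehost, rfc931, authuser, date, request,
 status, nbytes, referer, agent) = COMBINED_PATTERN.match(line).groups()
```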
&lt;br /&gt;
==Target Group Analysis==&lt;br /&gt;
===Target Groups of Visualization===&lt;br /&gt;
The usability of visualization in simulation includes visual processing in both static and dynamic form. The simulation database, the experiment process or its results are represented in a static form, e.g. as tables or diagrams. Interaction with the simulation model and the direct manipulation of the model also take place through a graphical representation. The dynamic illustration of a process by animation makes it easier to understand complex issues. Therefore, the user interface and the visual aspect of our project will be developed and implemented using Flash and XML. We believe that an audio-visual design will make the representations of the given data easier for each user to understand.&lt;br /&gt;
We have identified the following target groups:&lt;br /&gt;
#Administrators&lt;br /&gt;
#Web users&lt;br /&gt;
#Web designers&lt;br /&gt;
#Advertising companies&lt;br /&gt;
#Software companies (who develop browsers and web-based applications)&lt;br /&gt;
#Security centers&lt;br /&gt;
&lt;br /&gt;
===Special Interests of Target Groups===&lt;br /&gt;
Each logfile contains different types of entries, i.e. errors, warnings, information, success audits and failure audits. Therefore the visualization of the logfile differs for each target group. Website administrators are interested in the popularity and/or usability of certain pages or areas of their website. In other cases, the visualization of logfiles provides information for legal proceedings. Logfiles may also be useful for advertising companies, answering questions such as: How many visitors came to the web page in a certain period? Where did the visitors come from? Which search words were found or not found? Which pages have been looked at? What is the IP number of a visitor and from which country is he?&lt;br /&gt;
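A minimal sketch (with made-up illustrative entries, not taken from the real data file) of how two of these questions could be answered once the logfile has been split into fields:

```python
from collections import Counter

# Hypothetical parsed entries: (remotehost, day, requested page)
entries = [
    ("128.131.167.8", "16/Oct/2005", "/index.php/Main_Page"),
    ("128.131.167.8", "16/Oct/2005", "/index.php/Treemap"),
    ("192.0.2.44",    "16/Oct/2005", "/index.php/Main_Page"),
    ("192.0.2.44",    "17/Oct/2005", "/index.php/Main_Page"),
]

# How many distinct visitors came in a certain period (here: per day)?
visitors_per_day = {
    day: len({host for host, d, page in entries if d == day})
    for day in {d for host, d, page in entries}
}

# Which pages have been looked at most often?
page_hits = Counter(page for host, day, page in entries)
```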
&lt;br /&gt;
===Known Solutions / Methods ===&lt;br /&gt;
&lt;br /&gt;
*Webtracer (The Webtracer uses a wide range of protocols and databases to retrieve all information on a resource on the internet, such as a domain name, an e-mail address, an IP address, a server name or a web address (URL). The relations between resources are displayed in a tree, allowing recursive analysis.) &lt;br /&gt;
*Conetree (Cone trees are 3D interactive visualizations of hierarchically structured information. Each sub-tree is associated with a cone; the vertex at the root of the sub-tree is placed at the apex of the cone and its children are arranged around the base of the cone. Text can be added to give more information about a node, i.e. the children of the sub-tree.)&lt;br /&gt;
*Matrix-Visualization (There are several alternative ways for visualizing the links and demand matrices.)&lt;br /&gt;
*Hyperspace-View (A graphical view of the hyperspace emerging from a document, depicted as a tree structure.)&lt;br /&gt;
*The Sugiyama algorithm &amp;amp; layout (The Sugiyama algorithm draws directed acyclic graphs meeting the basic aesthetic criteria, which makes it very suitable for describing hierarchical temporal relationships among workflow entities. It can make visualisation of the workflow cleaner and find the best structure for the hierarchical type of information representation. The Sugiyama layout has more benefits in more complex projects.)&lt;br /&gt;
&lt;br /&gt;
==Aim of the Visualization==&lt;br /&gt;
===The Goals of Visualization===&lt;br /&gt;
Visualization of logfiles is intended to&lt;br /&gt;
*alert you to suspicious activity that requires further investigation   &lt;br /&gt;
*determine the extent of an intruder&#039;s activity (if anything has been added, deleted, modified, lost, or stolen)  &lt;br /&gt;
*help you recover your systems   &lt;br /&gt;
*provide information required for legal proceedings&lt;br /&gt;
*draw conclusions about the popularity and/or usability of certain pages or areas of the site.&lt;br /&gt;
&lt;br /&gt;
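As a hedged sketch of the first goal (the sample data and the threshold are invented for illustration), hosts that produce many client-error responses could be flagged for further investigation:

```python
from collections import Counter

# Hypothetical (remotehost, status) pairs extracted from a logfile
requests = [
    ("198.51.100.7", "404"), ("198.51.100.7", "404"), ("198.51.100.7", "403"),
    ("128.131.167.8", "200"), ("128.131.167.8", "200"),
]

# Count client-error (4xx) responses per host; repeated errors may indicate
# probing for non-existent or forbidden resources.
errors_per_host = Counter(
    host for host, status in requests if status.startswith("4")
)

THRESHOLD = 3  # invented cut-off for this sketch
suspicious = sorted(h for h, n in errors_per_host.items() if n >= THRESHOLD)
```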
==Design Proposal==&lt;br /&gt;
==References==&lt;br /&gt;
[1][W3C] World Wide Web Consortium, &#039;&#039;Logging Control In W3C httpd&#039;&#039;. Created at: July 1995. Retrieved at: November 16, 2005. http://www.w3.org/Daemon/User/Config/Logging.html#common-logfile-format.&lt;br /&gt;
&lt;br /&gt;
[2][Apache] The Apache Software Foundation, &#039;&#039;Apache HTTP Server: Log files&#039;&#039;. Retrieved at: November 16, 2005. http://httpd.apache.org/docs/1.3/logs.html&lt;br /&gt;
&lt;br /&gt;
[3][Gershon et al., 1995] Nahum Gershon, Steve Eick, &#039;&#039;Information Visualization Proceedings, Atlanta&#039;&#039;, First Edition, IEEE Computer Society Press, October 1995.&lt;br /&gt;
&lt;br /&gt;
[4][Kreuseler et al., 1999] Matthias Kreuseler, Heidrun Schumann, David S. Ebert et al., &#039;&#039;Workshop on New Paradigms in Information Visualization and Manipulation&#039;&#039;, First Edition, ACM Press, November 1999.&lt;br /&gt;
&lt;br /&gt;
[5][WebTracer] http://forensic.to/webhome/jsavage/www.forensictracer%5B1%5D&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_3&amp;diff=8006</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 3</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_3&amp;diff=8006"/>
		<updated>2005-11-20T20:01:03Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Known Solutions / Methods */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Topic: Webserver Logfile Visualization==&lt;br /&gt;
==Application Area Analysis==&lt;br /&gt;
==Dataset Analysis==&lt;br /&gt;
===The Common Logfile Format===&lt;br /&gt;
According to the World Wide Web Consortium the Common Logfile Format is as follows:&lt;br /&gt;
    &#039;&#039;remotehost rfc931 authuser [date] &amp;quot;request&amp;quot; status bytes&#039;&#039;&lt;br /&gt;
&#039;&#039;remotehost&#039;&#039;: Remote hostname (or IP number if DNS hostname is not available, or if DNSLookup is Off).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;rfc931&#039;&#039;: The remote logname of the user. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;authuser&#039;&#039;: The username as which the user has authenticated himself. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;[date]&#039;&#039;: Date and time of the request. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;request&amp;quot;&#039;&#039;: The request line exactly as it came from the client. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;status&#039;&#039;: The [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 3 - HTTP Status Code| HTTP Status Code]] returned to the client. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;bytes&#039;&#039;: The content-length of the document transferred. [1][W3C]&lt;br /&gt;
===Combined Logfile Format===&lt;br /&gt;
The example data our group was given uses the Combined Logfile Format, which adds two further fields to the Common Logfile Format (see [2][Apa]):&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Referer&#039;&#039;: This gives the site that the client reports having been referred from&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Agent&#039;&#039;: The User-Agent HTTP request header. This is the identifying information that the client browser reports about itself.&lt;br /&gt;
&lt;br /&gt;
One entry in the logfile looks as follows:&lt;br /&gt;
&lt;br /&gt;
   128.131.167.8 - - [16/Oct/2005:09:56:22 +0200] &amp;quot;GET /skins/monobook/external.png HTTP/1.1&amp;quot; &amp;lt;br&amp;gt;  200 1178 &amp;quot;http://www.infovis-wiki.net/index.php/Main_Page&amp;quot; &amp;quot;Mozilla/4.0 &amp;lt;br&amp;gt;  (compatible; MSIE 6.0; Windows NT 5.0)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;remotehost&#039;&#039;: 128.131.167.8&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;rfc931&#039;&#039;: -&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;authuser&#039;&#039;: -&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;[date]&#039;&#039;: [16/Oct/2005:09:56:22 +0200]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;request&amp;quot;&#039;&#039;: &amp;quot;GET /skins/monobook/external.png HTTP/1.1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;status&#039;&#039;: 200&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;bytes&#039;&#039;: 1178&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Referer&#039;&#039;: &amp;quot;http://www.infovis-wiki.net/index.php/Main_Page&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Agent&#039;&#039;: &amp;quot;Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The whole example data file can be downloaded [http://asgaard.tuwien.ac.at/%7Eaigner/teaching/infovis_ue/data/infovis-wiki_httpd-logs.tgz here].&lt;br /&gt;
&lt;br /&gt;
===Datatypes===&lt;br /&gt;
&lt;br /&gt;
==Target Group Analysis==&lt;br /&gt;
===Target Groups of Visualization===&lt;br /&gt;
The usability of visualization in simulation includes visual processing in both static and dynamic form. The simulation database, the experiment process or its results are represented in a static form, e.g. as tables or diagrams. Interaction with the simulation model and the direct manipulation of the model also take place through a graphical representation. The dynamic illustration of a process by animation makes it easier to understand complex issues. Therefore, the user interface and the visual aspect of our project will be developed and implemented using Flash and XML. We believe that an audio-visual design will make the representations of the given data easier for each user to understand.&lt;br /&gt;
We have identified the following target groups:&lt;br /&gt;
#Administrators&lt;br /&gt;
#Web users&lt;br /&gt;
#Web designers&lt;br /&gt;
#Advertising companies&lt;br /&gt;
#Software companies (who develop browsers and web-based applications)&lt;br /&gt;
#Security centers&lt;br /&gt;
&lt;br /&gt;
===Known Solutions / Methods ===&lt;br /&gt;
&lt;br /&gt;
*Webtracer (The Webtracer uses a wide range of protocols and databases to retrieve all information on a resource on the internet, such as a domain name, an e-mail address, an IP address, a server name or a web address (URL). The relations between resources are displayed in a tree, allowing recursive analysis.) &lt;br /&gt;
*Conetree (Cone trees are 3D interactive visualizations of hierarchically structured information. Each sub-tree is associated with a cone; the vertex at the root of the sub-tree is placed at the apex of the cone and its children are arranged around the base of the cone. Text can be added to give more information about a node, i.e. the children of the sub-tree.)&lt;br /&gt;
*Matrix-Visualization (There are several alternative ways for visualizing the links and demand matrices.)&lt;br /&gt;
*Hyperspace-View (A graphical view of the hyperspace emerging from a document, depicted as a tree structure.)&lt;br /&gt;
*The Sugiyama algorithm &amp;amp; layout (The Sugiyama algorithm draws directed acyclic graphs meeting the basic aesthetic criteria, which makes it very suitable for describing hierarchical temporal relationships among workflow entities. It can make visualisation of the workflow cleaner and find the best structure for the hierarchical type of information representation. The Sugiyama layout has more benefits in more complex projects.)&lt;br /&gt;
&lt;br /&gt;
==Aim of the Visualization==&lt;br /&gt;
===The Goals of Visualization===&lt;br /&gt;
Visualization of logfiles is intended to&lt;br /&gt;
*alert you to suspicious activity that requires further investigation   &lt;br /&gt;
*determine the extent of an intruder&#039;s activity (if anything has been added, deleted, modified, lost, or stolen)  &lt;br /&gt;
*help you recover your systems   &lt;br /&gt;
*provide information required for legal proceedings&lt;br /&gt;
*draw conclusions about the popularity and/or usability of certain pages or areas of the site.&lt;br /&gt;
&lt;br /&gt;
==Design Proposal==&lt;br /&gt;
==References==&lt;br /&gt;
[1][W3C] World Wide Web Consortium, &#039;&#039;Logging Control In W3C httpd&#039;&#039;. Created at: July 1995. Retrieved at: November 16, 2005. http://www.w3.org/Daemon/User/Config/Logging.html#common-logfile-format.&lt;br /&gt;
&lt;br /&gt;
[2][Apa] The Apache Software Foundation, &#039;&#039;Apache HTTP Server: Log files&#039;&#039;. Retrieved at: November 16, 2005. http://httpd.apache.org/docs/1.3/logs.html&lt;br /&gt;
&lt;br /&gt;
[3][Gershon et al., 1995] Nahum Gershon, Steve Eick, &#039;&#039;Information Visualization Proceedings, Atlanta&#039;&#039;, First Edition, IEEE Computer Society Press, October 1995.&lt;br /&gt;
&lt;br /&gt;
[4][Kreuseler et al., 1999] Matthias Kreuseler, Heidrun Schumann, David S. Ebert et al., &#039;&#039;Workshop on New Paradigms in Information Visualization and Manipulation&#039;&#039;, First Edition, ACM Press, November 1999.&lt;br /&gt;
&lt;br /&gt;
[5][WebTracer] http://forensic.to/webhome/jsavage/www.forensictracer%5B1%5D&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_3&amp;diff=8005</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 3</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_3&amp;diff=8005"/>
		<updated>2005-11-20T19:43:35Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Topic: Webserver Logfile Visualization==&lt;br /&gt;
==Application Area Analysis==&lt;br /&gt;
==Dataset Analysis==&lt;br /&gt;
===The Common Logfile Format===&lt;br /&gt;
According to the World Wide Web Consortium the Common Logfile Format is as follows:&lt;br /&gt;
    &#039;&#039;remotehost rfc931 authuser [date] &amp;quot;request&amp;quot; status bytes&#039;&#039;&lt;br /&gt;
&#039;&#039;remotehost&#039;&#039;: Remote hostname (or IP number if DNS hostname is not available, or if DNSLookup is Off).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;rfc931&#039;&#039;: The remote logname of the user. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;authuser&#039;&#039;: The username as which the user has authenticated himself. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;[date]&#039;&#039;: Date and time of the request. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;request&amp;quot;&#039;&#039;: The request line exactly as it came from the client. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;status&#039;&#039;: The [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 3 - HTTP Status Code| HTTP Status Code]] returned to the client. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;bytes&#039;&#039;: The content-length of the document transferred. [1][W3C]&lt;br /&gt;
===Combined Logfile Format===&lt;br /&gt;
The example data our group was given uses the Combined Logfile Format, which adds two further fields to the Common Logfile Format (see [2][Apa]):&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Referer&#039;&#039;: This gives the site that the client reports having been referred from&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Agent&#039;&#039;: The User-Agent HTTP request header. This is the identifying information that the client browser reports about itself.&lt;br /&gt;
&lt;br /&gt;
One entry in the logfile looks as follows:&lt;br /&gt;
&lt;br /&gt;
   128.131.167.8 - - [16/Oct/2005:09:56:22 +0200] &amp;quot;GET /skins/monobook/external.png HTTP/1.1&amp;quot; &amp;lt;br&amp;gt;  200 1178 &amp;quot;http://www.infovis-wiki.net/index.php/Main_Page&amp;quot; &amp;quot;Mozilla/4.0 &amp;lt;br&amp;gt;  (compatible; MSIE 6.0; Windows NT 5.0)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;remotehost&#039;&#039;: 128.131.167.8&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;rfc931&#039;&#039;: -&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;authuser&#039;&#039;: -&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;[date]&#039;&#039;: [16/Oct/2005:09:56:22 +0200]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;request&amp;quot;&#039;&#039;: &amp;quot;GET /skins/monobook/external.png HTTP/1.1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;status&#039;&#039;: 200&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;bytes&#039;&#039;: 1178&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Referer&#039;&#039;: &amp;quot;http://www.infovis-wiki.net/index.php/Main_Page&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Agent&#039;&#039;: &amp;quot;Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The whole example data file can be downloaded [http://asgaard.tuwien.ac.at/%7Eaigner/teaching/infovis_ue/data/infovis-wiki_httpd-logs.tgz here].&lt;br /&gt;
&lt;br /&gt;
===Datatypes===&lt;br /&gt;
&lt;br /&gt;
==Target Group Analysis==&lt;br /&gt;
===Target Groups of Visualization===&lt;br /&gt;
The usability of visualization in simulation includes visual processing in both static and dynamic form. The simulation database, the experiment process or its results are represented in a static form, e.g. as tables or diagrams. Interaction with the simulation model and the direct manipulation of the model also take place through a graphical representation. The dynamic illustration of a process by animation makes it easier to understand complex issues. Therefore, the user interface and the visual aspect of our project will be developed and implemented using Flash and XML. We believe that an audio-visual design will make the representations of the given data easier for each user to understand.&lt;br /&gt;
We have identified the following target groups:&lt;br /&gt;
#Administrators&lt;br /&gt;
#Web users&lt;br /&gt;
#Web designers&lt;br /&gt;
#Advertising companies&lt;br /&gt;
#Software companies (who develop browsers and web-based applications)&lt;br /&gt;
#Security centers&lt;br /&gt;
&lt;br /&gt;
===Known Solutions / Methods ===&lt;br /&gt;
&lt;br /&gt;
*Webtracer (The Webtracer uses a wide range of protocols and databases to retrieve all information on a resource on the internet, such as a domain name, an e-mail address, an IP address, a server name or a web address (URL). The relations between resources are displayed in a tree, allowing recursive analysis.) &lt;br /&gt;
*Conetree (Cone trees are 3D interactive visualizations of hierarchically structured information. Each sub-tree is associated with a cone; the vertex at the root of the sub-tree is placed at the apex of the cone and its children are arranged around the base of the cone. Text can be added to give more information about a node, i.e. the children of the sub-tree.)&lt;br /&gt;
*Matrix-Visualization (There are several alternative ways for visualizing the links and demand matrices.)&lt;br /&gt;
*Hyperspace-View (A graphical view of the hyperspace emerging from a document, depicted as a tree structure.)&lt;br /&gt;
&lt;br /&gt;
==Aim of the Visualization==&lt;br /&gt;
===The Goals of Visualization===&lt;br /&gt;
Visualization of logfiles is intended to&lt;br /&gt;
*alert you to suspicious activity that requires further investigation   &lt;br /&gt;
*determine the extent of an intruder&#039;s activity (if anything has been added, deleted, modified, lost, or stolen)  &lt;br /&gt;
*help you recover your systems   &lt;br /&gt;
*provide information required for legal proceedings&lt;br /&gt;
*draw conclusions about the popularity and/or usability of certain pages or areas of the site.&lt;br /&gt;
&lt;br /&gt;
==Design Proposal==&lt;br /&gt;
==References==&lt;br /&gt;
[1][W3C] World Wide Web Consortium, &#039;&#039;Logging Control In W3C httpd&#039;&#039;. Created at: July 1995. Retrieved at: November 16, 2005. http://www.w3.org/Daemon/User/Config/Logging.html#common-logfile-format.&lt;br /&gt;
&lt;br /&gt;
[2][Apa] The Apache Software Foundation, &#039;&#039;Apache HTTP Server: Log files&#039;&#039;. Retrieved at: November 16, 2005. http://httpd.apache.org/docs/1.3/logs.html&lt;br /&gt;
&lt;br /&gt;
[3][Gershon et al., 1995] Nahum Gershon, Steve Eick, &#039;&#039;Information Visualization Proceedings, Atlanta&#039;&#039;, First Edition, IEEE Computer Society Press, October 1995.&lt;br /&gt;
&lt;br /&gt;
[4][Kreuseler et al., 1999] Matthias Kreuseler, Heidrun Schumann, David S. Ebert et al., &#039;&#039;Workshop on New Paradigms in Information Visualization and Manipulation&#039;&#039;, First Edition, ACM Press, November 1999.&lt;br /&gt;
&lt;br /&gt;
[5][WebTracer] http://forensic.to/webhome/jsavage/www.forensictracer%5B1%5D&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_3&amp;diff=8004</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 3</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_3&amp;diff=8004"/>
		<updated>2005-11-20T19:05:35Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Target Group Analysis */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Topic: Webserver Logfile Visualization==&lt;br /&gt;
==Application Area Analysis==&lt;br /&gt;
==Dataset Analysis==&lt;br /&gt;
===The Common Logfile Format===&lt;br /&gt;
According to the World Wide Web Consortium the Common Logfile Format is as follows:&lt;br /&gt;
    &#039;&#039;remotehost rfc931 authuser [date] &amp;quot;request&amp;quot; status bytes&#039;&#039;&lt;br /&gt;
&#039;&#039;remotehost&#039;&#039;: Remote hostname (or IP number if DNS hostname is not available, or if DNSLookup is Off).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;rfc931&#039;&#039;: The remote logname of the user. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;authuser&#039;&#039;: The username as which the user has authenticated himself. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;[date]&#039;&#039;: Date and time of the request. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;request&amp;quot;&#039;&#039;: The request line exactly as it came from the client. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;status&#039;&#039;: The [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 3 - HTTP Status Code| HTTP Status Code]] returned to the client. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;bytes&#039;&#039;: The content-length of the document transferred. [1][W3C]&lt;br /&gt;
===Combined Logfile Format===&lt;br /&gt;
The example data our group was given uses the Combined Logfile Format, which adds two further fields to the Common Logfile Format (see [2][Apa]):&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Referer&#039;&#039;: This gives the site that the client reports having been referred from&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Agent&#039;&#039;: The User-Agent HTTP request header. This is the identifying information that the client browser reports about itself.&lt;br /&gt;
&lt;br /&gt;
One entry in the logfile looks as follows:&lt;br /&gt;
&lt;br /&gt;
   128.131.167.8 - - [16/Oct/2005:09:56:22 +0200] &amp;quot;GET /skins/monobook/external.png HTTP/1.1&amp;quot; &amp;lt;br&amp;gt;  200 1178 &amp;quot;http://www.infovis-wiki.net/index.php/Main_Page&amp;quot; &amp;quot;Mozilla/4.0 &amp;lt;br&amp;gt;  (compatible; MSIE 6.0; Windows NT 5.0)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;remotehost&#039;&#039;: 128.131.167.8&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;rfc931&#039;&#039;: -&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;authuser&#039;&#039;: -&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;[date]&#039;&#039;: [16/Oct/2005:09:56:22 +0200]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;request&amp;quot;&#039;&#039;: &amp;quot;GET /skins/monobook/external.png HTTP/1.1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;status&#039;&#039;: 200&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;bytes&#039;&#039;: 1178&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Referer&#039;&#039;: &amp;quot;http://www.infovis-wiki.net/index.php/Main_Page&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Agent&#039;&#039;: &amp;quot;Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The whole example data file can be downloaded [http://asgaard.tuwien.ac.at/%7Eaigner/teaching/infovis_ue/data/infovis-wiki_httpd-logs.tgz here].&lt;br /&gt;
&lt;br /&gt;
===Datatypes===&lt;br /&gt;
&lt;br /&gt;
==Target Group Analysis==&lt;br /&gt;
===Target Groups of Visualization===&lt;br /&gt;
The usability of visualization in simulation includes visual processing in both static and dynamic form. The simulation database, the experiment process or its results are represented in a static form, e.g. as tables or diagrams. Interaction with the simulation model and the direct manipulation of the model also take place through a graphical representation. The dynamic illustration of a process by animation makes it easier to understand complex issues. Therefore, the user interface and the visual aspect of our project will be developed and implemented using Flash and XML. We believe that an audio-visual design will make the representations of the given data easier for each user to understand.&lt;br /&gt;
We have identified the following target groups:&lt;br /&gt;
#Administrators&lt;br /&gt;
#Web users&lt;br /&gt;
#Web designers&lt;br /&gt;
#Advertising companies&lt;br /&gt;
#Software companies (who develop browsers and web-based applications)&lt;br /&gt;
#Security centers&lt;br /&gt;
&lt;br /&gt;
===Known Solutions / Methods ===&lt;br /&gt;
&lt;br /&gt;
*Webtracer (Webtracer uses a wide range of protocols and databases to retrieve all information on a resource on the Internet, such as a domain name, an e-mail address, an IP address, a server name, or a web address (URL). The relations between resources are displayed in a tree, allowing recursive analysis.)&lt;br /&gt;
*Cone Tree (Cone trees are 3D interactive visualizations of hierarchically structured information. Each sub-tree is associated with a cone; the vertex at the root of the sub-tree is placed at the apex of the cone, and its children are arranged around the base of the cone. Text can be added to give more information about a node, i.e. the children of the sub-tree.)&lt;br /&gt;
*Matrix Visualization (There are several alternative ways of visualizing the links and demand matrices.)&lt;br /&gt;
*Hyperspace View (A graphical view of the hyperspace emerging from a document, depicted as a tree structure.)&lt;br /&gt;
&lt;br /&gt;
==Aim of the Visualization==&lt;br /&gt;
===The Goals of Visualization===&lt;br /&gt;
Visualization of logfiles is intended to&lt;br /&gt;
*alert you to suspicious activity that requires further investigation   &lt;br /&gt;
*determine the extent of an intruder&#039;s activity (if anything has been added, deleted, modified, lost, or stolen)  &lt;br /&gt;
*help you recover your systems   &lt;br /&gt;
*provide information required for legal proceedings&lt;br /&gt;
*draw conclusions about the popularity and/or usability of certain pages or areas of the site.&lt;br /&gt;
&lt;br /&gt;
==Design Proposal==&lt;br /&gt;
==References==&lt;br /&gt;
[1][W3C] World Wide Web Consortium, &#039;&#039;Logging Control In W3C httpd&#039;&#039;. Created: July 1995. Retrieved: November 16, 2005. http://www.w3.org/Daemon/User/Config/Logging.html#common-logfile-format&lt;br /&gt;
&lt;br /&gt;
[2][Apa] The Apache Software Foundation, &#039;&#039;Apache HTTP Server: Log files&#039;&#039;. Retrieved: November 16, 2005. http://httpd.apache.org/docs/1.3/logs.html&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_3&amp;diff=8003</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 3</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_3&amp;diff=8003"/>
		<updated>2005-11-20T19:04:53Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Target Group Analysis */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Topic: Webserver Logfile Visualization==&lt;br /&gt;
==Application Area Analysis==&lt;br /&gt;
==Dataset Analysis==&lt;br /&gt;
===The Common Logfile Format===&lt;br /&gt;
According to the World Wide Web Consortium the Common Logfile Format is as follows:&lt;br /&gt;
    &#039;&#039;remotehost rfc931 authuser [date] &amp;quot;request&amp;quot; status bytes&#039;&#039;&lt;br /&gt;
&#039;&#039;remotehost&#039;&#039;: Remote hostname (or IP number if DNS hostname is not available, or if DNSLookup is Off).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;rfc931&#039;&#039;: The remote logname of the user. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;authuser&#039;&#039;: The username as which the user has authenticated himself. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;[date]&#039;&#039;: Date and time of the request. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;request&amp;quot;&#039;&#039;: The request line exactly as it came from the client. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;status&#039;&#039;: The [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 3 - HTTP Status Code| HTTP Status Code]] returned to the client. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;bytes&#039;&#039;: The content-length of the document transferred. [1][W3C]&lt;br /&gt;
===Combined Logfile Format===&lt;br /&gt;
The example data our group was given uses the Combined Logfile Format, which adds two further positions to the Common Logfile Format (see [2][Apa]):&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Referer&#039;&#039;: This gives the site that the client reports having been referred from.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Agent&#039;&#039;: The User-Agent HTTP request header. This is the identifying information that the client browser reports about itself.&lt;br /&gt;
&lt;br /&gt;
One entry in the logfile looks as follows:&lt;br /&gt;
&lt;br /&gt;
   128.131.167.8 - - [16/Oct/2005:09:56:22 +0200] &amp;quot;GET /skins/monobook/external.png HTTP/1.1&amp;quot; 200 1178 &amp;quot;http://www.infovis-wiki.net/index.php/Main_Page&amp;quot; &amp;quot;Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;remotehost&#039;&#039;: 128.131.167.8&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;rfc931&#039;&#039;: -&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;authuser&#039;&#039;: -&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;[date]&#039;&#039;: [16/Oct/2005:09:56:22 +0200]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;request&amp;quot;&#039;&#039;: &amp;quot;GET /skins/monobook/external.png HTTP/1.1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;status&#039;&#039;: 200&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;bytes&#039;&#039;: 1178&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Referer&#039;&#039;: &amp;quot;http://www.infovis-wiki.net/index.php/Main_Page&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Agent&#039;&#039;: &amp;quot;Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The whole example data file can be downloaded [http://asgaard.tuwien.ac.at/%7Eaigner/teaching/infovis_ue/data/infovis-wiki_httpd-logs.tgz here].&lt;br /&gt;
&lt;br /&gt;
===Datatypes===&lt;br /&gt;
&lt;br /&gt;
==Target Group Analysis==&lt;br /&gt;
===Target Groups of Visualization===&lt;br /&gt;
The usability of visualization in simulation includes visual processing in both static and dynamic form. The simulation database, the experiment process, or its results are represented in a static form, e.g. as tables or diagrams. Interaction with the simulation model and its direct manipulation also take place through a graphical representation. The dynamic illustration of a process by animation makes complex issues easier to understand. Therefore, the user interface and the visual aspects of our project will be developed and implemented using Flash and XML. We believe that an audio-visual design will make the representations of the given data easier for each user to understand.&lt;br /&gt;
We have identified the following target groups:&lt;br /&gt;
#Administrators&lt;br /&gt;
#Web users&lt;br /&gt;
#Web designers&lt;br /&gt;
#Advertising companies&lt;br /&gt;
#Software companies (who develop browsers and web-based applications)&lt;br /&gt;
#Security centers&lt;br /&gt;
&lt;br /&gt;
===Known Solutions / Methods ===&lt;br /&gt;
&lt;br /&gt;
*Webtracer (Webtracer uses a wide range of protocols and databases to retrieve all information on a resource on the Internet, such as a domain name, an e-mail address, an IP address, a server name, or a web address (URL). The relations between resources are displayed in a tree, allowing recursive analysis.)&lt;br /&gt;
*Cone Tree (Cone trees are 3D interactive visualizations of hierarchically structured information. Each sub-tree is associated with a cone; the vertex at the root of the sub-tree is placed at the apex of the cone, and its children are arranged around the base of the cone. Text can be added to give more information about a node, i.e. the children of the sub-tree.)&lt;br /&gt;
*Matrix Visualization (There are several alternative ways of visualizing the links and demand matrices.)&lt;br /&gt;
*Hyperspace View (A graphical view of the hyperspace emerging from a document, depicted as a tree structure.)&lt;br /&gt;
&lt;br /&gt;
==Aim of the Visualization==&lt;br /&gt;
===The Goals of Visualization===&lt;br /&gt;
Visualization of logfiles is intended to&lt;br /&gt;
*alert you to suspicious activity that requires further investigation   &lt;br /&gt;
*determine the extent of an intruder&#039;s activity (if anything has been added, deleted, modified, lost, or stolen)  &lt;br /&gt;
*help you recover your systems   &lt;br /&gt;
*provide information required for legal proceedings&lt;br /&gt;
*draw conclusions about the popularity and/or usability of certain pages or areas of the site.&lt;br /&gt;
&lt;br /&gt;
==Design Proposal==&lt;br /&gt;
==References==&lt;br /&gt;
[1][W3C] World Wide Web Consortium, &#039;&#039;Logging Control In W3C httpd&#039;&#039;. Created: July 1995. Retrieved: November 16, 2005. http://www.w3.org/Daemon/User/Config/Logging.html#common-logfile-format&lt;br /&gt;
&lt;br /&gt;
[2][Apa] The Apache Software Foundation, &#039;&#039;Apache HTTP Server: Log files&#039;&#039;. Retrieved: November 16, 2005. http://httpd.apache.org/docs/1.3/logs.html&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_3&amp;diff=8002</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 3</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_3&amp;diff=8002"/>
		<updated>2005-11-20T19:02:08Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Target Group of Visualization */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Topic: Webserver Logfile Visualization==&lt;br /&gt;
==Application Area Analysis==&lt;br /&gt;
==Dataset Analysis==&lt;br /&gt;
===The Common Logfile Format===&lt;br /&gt;
According to the World Wide Web Consortium the Common Logfile Format is as follows:&lt;br /&gt;
    &#039;&#039;remotehost rfc931 authuser [date] &amp;quot;request&amp;quot; status bytes&#039;&#039;&lt;br /&gt;
&#039;&#039;remotehost&#039;&#039;: Remote hostname (or IP number if DNS hostname is not available, or if DNSLookup is Off).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;rfc931&#039;&#039;: The remote logname of the user. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;authuser&#039;&#039;: The username as which the user has authenticated himself. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;[date]&#039;&#039;: Date and time of the request. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;request&amp;quot;&#039;&#039;: The request line exactly as it came from the client. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;status&#039;&#039;: The [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 3 - HTTP Status Code| HTTP Status Code]] returned to the client. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;bytes&#039;&#039;: The content-length of the document transferred. [1][W3C]&lt;br /&gt;
===Combined Logfile Format===&lt;br /&gt;
The example data our group was given uses the Combined Logfile Format, which adds two further positions to the Common Logfile Format (see [2][Apa]):&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Referer&#039;&#039;: This gives the site that the client reports having been referred from.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Agent&#039;&#039;: The User-Agent HTTP request header. This is the identifying information that the client browser reports about itself.&lt;br /&gt;
&lt;br /&gt;
One entry in the logfile looks as follows:&lt;br /&gt;
&lt;br /&gt;
   128.131.167.8 - - [16/Oct/2005:09:56:22 +0200] &amp;quot;GET /skins/monobook/external.png HTTP/1.1&amp;quot; 200 1178 &amp;quot;http://www.infovis-wiki.net/index.php/Main_Page&amp;quot; &amp;quot;Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;remotehost&#039;&#039;: 128.131.167.8&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;rfc931&#039;&#039;: -&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;authuser&#039;&#039;: -&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;[date]&#039;&#039;: [16/Oct/2005:09:56:22 +0200]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;request&amp;quot;&#039;&#039;: &amp;quot;GET /skins/monobook/external.png HTTP/1.1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;status&#039;&#039;: 200&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;bytes&#039;&#039;: 1178&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Referer&#039;&#039;: &amp;quot;http://www.infovis-wiki.net/index.php/Main_Page&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Agent&#039;&#039;: &amp;quot;Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The whole example data file can be downloaded [http://asgaard.tuwien.ac.at/%7Eaigner/teaching/infovis_ue/data/infovis-wiki_httpd-logs.tgz here].&lt;br /&gt;
&lt;br /&gt;
===Datatypes===&lt;br /&gt;
&lt;br /&gt;
==Target Group Analysis==&lt;br /&gt;
===Target Groups of Visualization===&lt;br /&gt;
The usability of visualization in simulation includes visual processing in both static and dynamic form. The simulation database, the experiment process, or its results are represented in a static form, e.g. as tables or diagrams. Interaction with the simulation model and its direct manipulation also take place through a graphical representation. The dynamic illustration of a process by animation makes complex issues easier to understand. Therefore, the user interface and the visual aspects of our project will be developed and implemented using Flash and XML. We believe that an audio-visual design will make the representations of the given data easier for each user to understand.&lt;br /&gt;
We have identified the following target groups:&lt;br /&gt;
#Administrators&lt;br /&gt;
#Web users&lt;br /&gt;
#Web designers&lt;br /&gt;
#Advertising companies&lt;br /&gt;
#Software companies (who develop browsers and web-based applications)&lt;br /&gt;
#Security centers&lt;br /&gt;
&lt;br /&gt;
==Aim of the Visualization==&lt;br /&gt;
===The Goals of Visualization===&lt;br /&gt;
Visualization of logfiles is intended to&lt;br /&gt;
*alert you to suspicious activity that requires further investigation   &lt;br /&gt;
*determine the extent of an intruder&#039;s activity (if anything has been added, deleted, modified, lost, or stolen)  &lt;br /&gt;
*help you recover your systems   &lt;br /&gt;
*provide information required for legal proceedings&lt;br /&gt;
*draw conclusions about the popularity and/or usability of certain pages or areas of the site.&lt;br /&gt;
&lt;br /&gt;
==Design Proposal==&lt;br /&gt;
==References==&lt;br /&gt;
[1][W3C] World Wide Web Consortium, &#039;&#039;Logging Control In W3C httpd&#039;&#039;. Created: July 1995. Retrieved: November 16, 2005. http://www.w3.org/Daemon/User/Config/Logging.html#common-logfile-format&lt;br /&gt;
&lt;br /&gt;
[2][Apa] The Apache Software Foundation, &#039;&#039;Apache HTTP Server: Log files&#039;&#039;. Retrieved: November 16, 2005. http://httpd.apache.org/docs/1.3/logs.html&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_3&amp;diff=8001</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 3</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_3&amp;diff=8001"/>
		<updated>2005-11-20T18:58:07Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Target Group of Visualization */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Topic: Webserver Logfile Visualization==&lt;br /&gt;
==Application Area Analysis==&lt;br /&gt;
==Dataset Analysis==&lt;br /&gt;
===The Common Logfile Format===&lt;br /&gt;
According to the World Wide Web Consortium the Common Logfile Format is as follows:&lt;br /&gt;
    &#039;&#039;remotehost rfc931 authuser [date] &amp;quot;request&amp;quot; status bytes&#039;&#039;&lt;br /&gt;
&#039;&#039;remotehost&#039;&#039;: Remote hostname (or IP number if DNS hostname is not available, or if DNSLookup is Off).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;rfc931&#039;&#039;: The remote logname of the user. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;authuser&#039;&#039;: The username as which the user has authenticated himself. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;[date]&#039;&#039;: Date and time of the request. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;request&amp;quot;&#039;&#039;: The request line exactly as it came from the client. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;status&#039;&#039;: The [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 3 - HTTP Status Code| HTTP Status Code]] returned to the client. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;bytes&#039;&#039;: The content-length of the document transferred. [1][W3C]&lt;br /&gt;
===Combined Logfile Format===&lt;br /&gt;
The example data our group was given uses the Combined Logfile Format, which adds two further positions to the Common Logfile Format (see [2][Apa]):&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Referer&#039;&#039;: This gives the site that the client reports having been referred from.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Agent&#039;&#039;: The User-Agent HTTP request header. This is the identifying information that the client browser reports about itself.&lt;br /&gt;
&lt;br /&gt;
One entry in the logfile looks as follows:&lt;br /&gt;
&lt;br /&gt;
   128.131.167.8 - - [16/Oct/2005:09:56:22 +0200] &amp;quot;GET /skins/monobook/external.png HTTP/1.1&amp;quot; 200 1178 &amp;quot;http://www.infovis-wiki.net/index.php/Main_Page&amp;quot; &amp;quot;Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;remotehost&#039;&#039;: 128.131.167.8&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;rfc931&#039;&#039;: -&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;authuser&#039;&#039;: -&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;[date]&#039;&#039;: [16/Oct/2005:09:56:22 +0200]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;request&amp;quot;&#039;&#039;: &amp;quot;GET /skins/monobook/external.png HTTP/1.1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;status&#039;&#039;: 200&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;bytes&#039;&#039;: 1178&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Referer&#039;&#039;: &amp;quot;http://www.infovis-wiki.net/index.php/Main_Page&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Agent&#039;&#039;: &amp;quot;Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The whole example data file can be downloaded [http://asgaard.tuwien.ac.at/%7Eaigner/teaching/infovis_ue/data/infovis-wiki_httpd-logs.tgz here].&lt;br /&gt;
&lt;br /&gt;
===Datatypes===&lt;br /&gt;
&lt;br /&gt;
==Target Group Analysis==&lt;br /&gt;
===Target Group of Visualization===&lt;br /&gt;
The usability of visualization in simulation includes visual processing in both static and dynamic form. The simulation database, the experiment process, or its results are represented in a static form, e.g. as tables or diagrams. Interaction with the simulation model and its direct manipulation also take place through a graphical representation. The dynamic illustration of a process by animation makes complex issues easier to understand. Therefore, the user interface and the visual aspects of our project will be developed and implemented using Flash and XML. We believe that an audio-visual design will make the representations of the given data easier for each user to understand.&lt;br /&gt;
We have identified the following target groups:&lt;br /&gt;
#Administrators&lt;br /&gt;
#Web users&lt;br /&gt;
#Web designers&lt;br /&gt;
#Advertising companies&lt;br /&gt;
#Software companies (who develop browsers and web-based applications)&lt;br /&gt;
#Security centers&lt;br /&gt;
&lt;br /&gt;
==Aim of the Visualization==&lt;br /&gt;
===The Goals of Visualization===&lt;br /&gt;
Visualization of logfiles is intended to&lt;br /&gt;
*alert you to suspicious activity that requires further investigation   &lt;br /&gt;
*determine the extent of an intruder&#039;s activity (if anything has been added, deleted, modified, lost, or stolen)  &lt;br /&gt;
*help you recover your systems   &lt;br /&gt;
*provide information required for legal proceedings&lt;br /&gt;
*draw conclusions about the popularity and/or usability of certain pages or areas of the site.&lt;br /&gt;
&lt;br /&gt;
==Design Proposal==&lt;br /&gt;
==References==&lt;br /&gt;
[1][W3C] World Wide Web Consortium, &#039;&#039;Logging Control In W3C httpd&#039;&#039;. Created: July 1995. Retrieved: November 16, 2005. http://www.w3.org/Daemon/User/Config/Logging.html#common-logfile-format&lt;br /&gt;
&lt;br /&gt;
[2][Apa] The Apache Software Foundation, &#039;&#039;Apache HTTP Server: Log files&#039;&#039;. Retrieved: November 16, 2005. http://httpd.apache.org/docs/1.3/logs.html&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_3&amp;diff=7999</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 3</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_3&amp;diff=7999"/>
		<updated>2005-11-20T18:56:04Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Target Group Analysis */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Topic: Webserver Logfile Visualization==&lt;br /&gt;
==Application Area Analysis==&lt;br /&gt;
==Dataset Analysis==&lt;br /&gt;
===The Common Logfile Format===&lt;br /&gt;
According to the World Wide Web Consortium the Common Logfile Format is as follows:&lt;br /&gt;
    &#039;&#039;remotehost rfc931 authuser [date] &amp;quot;request&amp;quot; status bytes&#039;&#039;&lt;br /&gt;
&#039;&#039;remotehost&#039;&#039;: Remote hostname (or IP number if DNS hostname is not available, or if DNSLookup is Off).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;rfc931&#039;&#039;: The remote logname of the user. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;authuser&#039;&#039;: The username as which the user has authenticated himself. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;[date]&#039;&#039;: Date and time of the request. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;request&amp;quot;&#039;&#039;: The request line exactly as it came from the client. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;status&#039;&#039;: The [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 3 - HTTP Status Code| HTTP Status Code]] returned to the client. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;bytes&#039;&#039;: The content-length of the document transferred. [1][W3C]&lt;br /&gt;
====Combined Logfile Format====&lt;br /&gt;
The example data our group was given uses the Combined Logfile Format, which adds two further positions to the Common Logfile Format (see [2][Apa]):&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Referer&#039;&#039;: This gives the site that the client reports having been referred from.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Agent&#039;&#039;: The User-Agent HTTP request header. This is the identifying information that the client browser reports about itself.&lt;br /&gt;
&lt;br /&gt;
One entry in the logfile looks as follows:&lt;br /&gt;
&lt;br /&gt;
   128.131.167.8 - - [16/Oct/2005:09:56:22 +0200] &amp;quot;GET /skins/monobook/external.png HTTP/1.1&amp;quot; 200 1178 &amp;quot;http://www.infovis-wiki.net/index.php/Main_Page&amp;quot; &amp;quot;Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;remotehost&#039;&#039;: 128.131.167.8&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;rfc931&#039;&#039;: -&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;authuser&#039;&#039;: -&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;[date]&#039;&#039;: [16/Oct/2005:09:56:22 +0200]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;request&amp;quot;&#039;&#039;: &amp;quot;GET /skins/monobook/external.png HTTP/1.1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;status&#039;&#039;: 200&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;bytes&#039;&#039;: 1178&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Referer&#039;&#039;: &amp;quot;http://www.infovis-wiki.net/index.php/Main_Page&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Agent&#039;&#039;: &amp;quot;Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The whole example data file can be downloaded here.&lt;br /&gt;
&lt;br /&gt;
===Datatypes===&lt;br /&gt;
&lt;br /&gt;
==Target Group Analysis==&lt;br /&gt;
===Target Group of Visualization===&lt;br /&gt;
The usability of visualization in simulation includes visual processing in both static and dynamic form. The simulation database, the experiment process, or its results are represented in a static form, e.g. as tables or diagrams. Interaction with the simulation model and its direct manipulation also take place through a graphical representation. The dynamic illustration of a process by animation makes complex issues easier to understand. Therefore, the user interface and the visual aspects of our project will be developed and implemented using Flash and XML. We believe that an audio-visual design will make the representations of the given data easier for each user to understand.&lt;br /&gt;
#Administrators&lt;br /&gt;
#Web users&lt;br /&gt;
#Web designers&lt;br /&gt;
#Advertising companies&lt;br /&gt;
#Software companies (who develop browsers and web-based applications)&lt;br /&gt;
&lt;br /&gt;
==Aim of the Visualization==&lt;br /&gt;
===The Goals of Visualization===&lt;br /&gt;
Visualization of logfiles is intended to&lt;br /&gt;
*alert you to suspicious activity that requires further investigation   &lt;br /&gt;
*determine the extent of an intruder&#039;s activity (if anything has been added, deleted, modified, lost, or stolen)  &lt;br /&gt;
*help you recover your systems   &lt;br /&gt;
*provide information required for legal proceedings&lt;br /&gt;
*draw conclusions about the popularity and/or usability of certain pages or areas of the site.&lt;br /&gt;
&lt;br /&gt;
==Design Proposal==&lt;br /&gt;
==References==&lt;br /&gt;
[1][W3C] World Wide Web Consortium, &#039;&#039;Logging Control In W3C httpd&#039;&#039;. Created: July 1995. Retrieved: November 16, 2005. http://www.w3.org/Daemon/User/Config/Logging.html#common-logfile-format&lt;br /&gt;
&lt;br /&gt;
[2][Apa] The Apache Software Foundation, &#039;&#039;Apache HTTP Server: Log files&#039;&#039;. Retrieved: November 16, 2005. http://httpd.apache.org/docs/1.3/logs.html&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_3&amp;diff=7997</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 3</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_3&amp;diff=7997"/>
		<updated>2005-11-20T18:54:47Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* The Goals of Visualization */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Topic: Webserver Logfile Visualization==&lt;br /&gt;
==Application Area Analysis==&lt;br /&gt;
==Dataset Analysis==&lt;br /&gt;
===The Common Logfile Format===&lt;br /&gt;
According to the World Wide Web Consortium the Common Logfile Format is as follows:&lt;br /&gt;
    &#039;&#039;remotehost rfc931 authuser [date] &amp;quot;request&amp;quot; status bytes&#039;&#039;&lt;br /&gt;
&#039;&#039;remotehost&#039;&#039;: Remote hostname (or IP number if DNS hostname is not available, or if DNSLookup is Off).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;rfc931&#039;&#039;: The remote logname of the user. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;authuser&#039;&#039;: The username as which the user has authenticated himself. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;[date]&#039;&#039;: Date and time of the request. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;request&amp;quot;&#039;&#039;: The request line exactly as it came from the client. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;status&#039;&#039;: The [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 3 - HTTP Status Code| HTTP Status Code]] returned to the client. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;bytes&#039;&#039;: The content-length of the document transferred. [1][W3C]&lt;br /&gt;
====Project Example Data====&lt;br /&gt;
The example data our group was given uses the &#039;&#039;&#039;Combined Logfile Format&#039;&#039;&#039; (description taken from [2][Apa]), which adds two further positions:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Referer&#039;&#039;: This gives the site that the client reports having been referred from.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Agent&#039;&#039;: The User-Agent HTTP request header. This is the identifying information that the client browser reports about itself.&lt;br /&gt;
&lt;br /&gt;
One entry in the logfile looks as follows:&lt;br /&gt;
&lt;br /&gt;
   128.131.167.8 - - [16/Oct/2005:09:56:22 +0200] &amp;quot;GET /skins/monobook/external.png HTTP/1.1&amp;quot; 200 1178 &amp;quot;http://www.infovis-wiki.net/index.php/Main_Page&amp;quot; &amp;quot;Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;remotehost&#039;&#039;: 128.131.167.8&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;rfc931&#039;&#039;: -&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;authuser&#039;&#039;: -&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;[date]&#039;&#039;: [16/Oct/2005:09:56:22 +0200]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;request&amp;quot;&#039;&#039;: &amp;quot;GET /skins/monobook/external.png HTTP/1.1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;status&#039;&#039;: 200&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;bytes&#039;&#039;: 1178&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Referer&#039;&#039;: &amp;quot;http://www.infovis-wiki.net/index.php/Main_Page&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Agent&#039;&#039;: &amp;quot;Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The whole example data file can be downloaded here.&lt;br /&gt;
&lt;br /&gt;
===Datatypes===&lt;br /&gt;
&lt;br /&gt;
==Target Group Analysis==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Aim of the Visualization==&lt;br /&gt;
===The Goals of Visualization===&lt;br /&gt;
Visualization of logfiles is intended to&lt;br /&gt;
*alert you to suspicious activity that requires further investigation   &lt;br /&gt;
*determine the extent of an intruder&#039;s activity (if anything has been added, deleted, modified, lost, or stolen)  &lt;br /&gt;
*help you recover your systems   &lt;br /&gt;
*provide information required for legal proceedings&lt;br /&gt;
*draw conclusions about the popularity and/or usability of certain pages or areas of the site.&lt;br /&gt;
&lt;br /&gt;
==Design Proposal==&lt;br /&gt;
==References==&lt;br /&gt;
[1][W3C] World Wide Web Consortium, &#039;&#039;Logging Control In W3C httpd&#039;&#039;. Created: July 1995. Retrieved: November 16, 2005. http://www.w3.org/Daemon/User/Config/Logging.html#common-logfile-format&lt;br /&gt;
&lt;br /&gt;
[2][Apa] The Apache Software Foundation, &#039;&#039;Apache HTTP Server: Log files&#039;&#039;. Retrieved: November 16, 2005. http://httpd.apache.org/docs/1.3/logs.html&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_3&amp;diff=7996</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 3</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_3&amp;diff=7996"/>
		<updated>2005-11-20T18:53:46Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Aim of the Visualization */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Topic: Webserver Logfile Visualization==&lt;br /&gt;
==Application Area Analysis==&lt;br /&gt;
==Dataset Analysis==&lt;br /&gt;
===The Common Logfile Format===&lt;br /&gt;
According to the World Wide Web Consortium the Common Logfile Format is as follows:&lt;br /&gt;
    &#039;&#039;remotehost rfc931 authuser [date] &amp;quot;request&amp;quot; status bytes&#039;&#039;&lt;br /&gt;
&#039;&#039;remotehost&#039;&#039;: Remote hostname (or IP number if DNS hostname is not available, or if DNSLookup is Off).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;rfc931&#039;&#039;: The remote logname of the user. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;authuser&#039;&#039;: The username as which the user has authenticated himself. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;[date]&#039;&#039;: Date and time of the request. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;request&amp;quot;&#039;&#039;: The request line exactly as it came from the client. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;status&#039;&#039;: The [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 3 - HTTP Status Code| HTTP Status Code]] returned to the client. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;bytes&#039;&#039;: The content-length of the document transferred. [1][W3C]&lt;br /&gt;
====Project Example Data====&lt;br /&gt;
The example data our group is using is in the &#039;&#039;&#039;Combined Logfile Format&#039;&#039;&#039; (description taken from [2][Apa]), which adds two further fields:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Referer&#039;&#039;: This gives the site that the client reports having been referred from&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Agent&#039;&#039;: The User-Agent HTTP request header. This is the identifying information that the client browser reports about itself.&lt;br /&gt;
&lt;br /&gt;
One entry in the logfile looks as follows:&lt;br /&gt;
&lt;br /&gt;
   128.131.167.8 - - [16/Oct/2005:09:56:22 +0200] &amp;quot;GET /skins/monobook/external.png HTTP/1.1&amp;quot; 200 1178 &amp;quot;http://www.infovis-wiki.net/index.php/Main_Page&amp;quot; &amp;quot;Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;remotehost&#039;&#039;: 128.131.167.8&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;rfc931&#039;&#039;: -&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;authuser&#039;&#039;: -&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;[date]&#039;&#039;: [16/Oct/2005:09:56:22 +0200]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;request&amp;quot;&#039;&#039;: &amp;quot;GET /skins/monobook/external.png HTTP/1.1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;status&#039;&#039;: 200&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;bytes&#039;&#039;: 1178&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Referer&#039;&#039;: &amp;quot;http://www.infovis-wiki.net/index.php/Main_Page&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Agent&#039;&#039;: &amp;quot;Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The whole example data file can be downloaded here.&lt;br /&gt;
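As an illustration of how one such entry decomposes into the nine fields described above, a regular expression can split the line apart. This sketch and its field names are our own; it is not part of Apache or the W3C format definition.&lt;br /&gt;

```python
# Sketch: splitting one Combined Log Format entry into its nine fields.
# The regular expression below is our own illustration, not a standard tool.
import re
from datetime import datetime

FIELDS = ("remotehost", "rfc931", "authuser", "date",
          "request", "status", "bytes", "referer", "agent")

COMBINED_RE = re.compile(
    r'(\S+) (\S+) (\S+) '        # remotehost, rfc931, authuser
    r'\[([^\]]+)\] "([^"]*)" '   # [date], "request"
    r'(\d{3}) (\d+|-) '          # status, bytes
    r'"([^"]*)" "([^"]*)"'       # "referer", "agent"
)

def parse_entry(line):
    """Return a dict mapping field names to strings, or None on mismatch."""
    m = COMBINED_RE.match(line)
    if m is None:
        return None
    entry = dict(zip(FIELDS, m.groups()))
    # The Apache timestamp follows the %d/%b/%Y:%H:%M:%S %z layout.
    entry["date"] = datetime.strptime(entry["date"], "%d/%b/%Y:%H:%M:%S %z")
    return entry

# The example entry from above:
line = ('128.131.167.8 - - [16/Oct/2005:09:56:22 +0200] '
        '"GET /skins/monobook/external.png HTTP/1.1" 200 1178 '
        '"http://www.infovis-wiki.net/index.php/Main_Page" '
        '"Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"')
entry = parse_entry(line)
```

Applied to the example line, this yields exactly the field breakdown listed above, with the date parsed into a timezone-aware timestamp.&lt;br /&gt;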
&lt;br /&gt;
===Datatypes===&lt;br /&gt;
&lt;br /&gt;
==Target Group Analysis==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Aim of the Visualization==&lt;br /&gt;
===The Goals of Visualization===&lt;br /&gt;
Visualization of logfiles is intended to &lt;br /&gt;
*alert you to suspicious activity that requires further investigation   &lt;br /&gt;
*determine the extent of an intruder&#039;s activity (if anything has been added, deleted, modified, lost, or stolen)  &lt;br /&gt;
*help you recover your systems   &lt;br /&gt;
*provide information required for legal proceedings&lt;br /&gt;
*draw conclusions about the popularity and/or usability of certain pages or areas of the site.&lt;br /&gt;
&lt;br /&gt;
==Design Proposal==&lt;br /&gt;
==References==&lt;br /&gt;
[1][W3C] World Wide Web Consortium, &amp;lt;i&amp;gt;Logging Control In W3C httpd&amp;lt;/i&amp;gt;. Created at: July, 1995. Retrieved at: November 16, 2005. http://www.w3.org/Daemon/User/Config/Logging.html#common-logfile-format.&lt;br /&gt;
&lt;br /&gt;
[2][Apa] The Apache Software Foundation, &#039;&#039;Apache HTTP Server: Log files&#039;&#039;. Retrieved at: November 16, 2005. http://httpd.apache.org/docs/1.3/logs.html&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_2&amp;diff=7866</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 2</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_2&amp;diff=7866"/>
		<updated>2005-11-15T22:43:57Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Suggestion 2 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Poor Graphic =&lt;br /&gt;
[[Image:Mibi04death-and-taxes.jpg|none|thumb|600px|Death and Taxes: A visual look at where your tax dollars go (click on image for larger version)]]&lt;br /&gt;
&lt;br /&gt;
= Drawbacks / Critical analysis =&lt;br /&gt;
&lt;br /&gt;
Before designing a new visualization for “Death and Taxes”, it is important to find what is wrong with the existing one and to analyze the missing or wrongly applied design principles. Then we will be in a better position to make corrections and come up with an improved visualization.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing|Preattentive Processing]] ==&lt;br /&gt;
&lt;br /&gt;
“Tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive [Healey et al., 2005]”. These tasks can be performed without the need of focused attention. &lt;br /&gt;
&lt;br /&gt;
* From the given visualization displayed on a 17” display what we can perceive instantly is that the budget is allocated to a number of departments and further allocated to various sub-departments within those. However, the display is so cluttered that we are unable to perceive more than that.&lt;br /&gt;
&lt;br /&gt;
* Lengthy description in the biggest circle and as a part of legend can’t be treated with preattentive processing.&lt;br /&gt;
&lt;br /&gt;
* The proportionate sizes of circles help a lot in finding instantly which department has the highest or lowest budget.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Five Hat Racks|Five Hat Racks]]==&lt;br /&gt;
&lt;br /&gt;
“There are five ways to organize information: category (similarity relatedness), time (chronological sequence), location (geographical or spatial references), alphabet (alphabetical sequence), and continuum (magnitude; highest to lowest, best to worse) [Truong, 2005]”.&lt;br /&gt;
&lt;br /&gt;
* Category (similarity relatedness): This is satisfied as far as the depiction of the budget across departments is concerned, but it is violated at the level of the sub-departments.&lt;br /&gt;
&lt;br /&gt;
* Time (chronological sequence): This aspect is not applicable for the given visualization, because this shows the budget for one year only.&lt;br /&gt;
&lt;br /&gt;
* Location (geographical or spatial references): This aspect is also not applicable, because the visualization does not show how the budget is spent in different states. This might be missing information: without it, one has to assume that the depicted spending (expenditures on health, education, etc.) is the same across all states. &lt;br /&gt;
&lt;br /&gt;
* Alphabet (alphabetical sequence): The arrangement in the given picture is not alphabetical. But introducing an alphabetical sequence would not necessarily be an improvement; more meaningful orderings should be exploited instead.&lt;br /&gt;
&lt;br /&gt;
* Continuum (magnitude; highest to lowest, best to worst): This aspect is used successfully in the picture: the sizes of the circles are proportional to the allocated budgets.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe 10 - Aufgabe 1 - Visual Clutter|Visual Clutter]] ==&lt;br /&gt;
&lt;br /&gt;
“Clutter is the state in which excess items, or their representation or organization, lead to a degradation of performance at some task [Rosenholtz et al., 2005]”.&lt;br /&gt;
&lt;br /&gt;
* It is not possible to show all the elements of the given picture on a standard display: there are many circles with associated descriptions. Perhaps the descriptions could be hidden by default and shown only for the circle the user focuses on or hovers the mouse over.&lt;br /&gt;
&lt;br /&gt;
* On the initial screen, the circular sub-departments can be hidden. Those can be displayed under user control when the user focuses on one or a group of the bigger circles.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G7 - Aufgabe 1 - Gestalt Laws|Gestalt Laws]]==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The Gestalt approach emphasizes that we perceive objects as well-organized patterns rather than separate component parts [Pedroza, 2004]”. Often used Gestalt principles are Proximity, Similarity, Closure and Good Continuation.&lt;br /&gt;
&lt;br /&gt;
* Proximity: Elements close to each other tend to form groups. This is evident from the picture: the small circles in the vicinity of bigger ones tend to form one group. This also suggests that the connecting lines may not be necessary and are in fact redundant.&lt;br /&gt;
&lt;br /&gt;
* Similarity: Elements that are similar in some way tend to be grouped together. If we look at the color encodings used, this aspect seems to be violated. For example, blue parts could be perceived as related, but in fact they are not.&lt;br /&gt;
* Closure: Items are grouped together if they tend to complete a pattern. This design aspect does not seem to be used in the given picture; instead the author has used explicit connecting lines, which in turn increase visual clutter.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G4 - Aufgabe 1 - Lie Factor|Lie Factor]] ==&lt;br /&gt;
&lt;br /&gt;
The “Lie Factor” is a value to describe the relation between the size of effect shown in a graphic and the size of effect shown in the data. &amp;quot;The representation of numbers, as physically measured on the surface of the graphic itself, should be directly proportional to the quantities represented [Tufte, 1991]”.&lt;br /&gt;
&lt;br /&gt;
* If you look at the relative sizes of circles and the allocated budgets, this aspect is satisfied.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G8 - Aufgabe 1 - Chart Junk|Chart Junk]] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The interior decoration of graphics generates a lot of ink which does not tell the viewer anything new. The purpose of the decoration varies - to make the graphic appear more scientific, to enliven the display, to give the designer an opportunity to exercise artistic skill. Regardless of the cause, it is all non-data-ink or redundant data-ink, and it is often chart junk [Tufte, 1991]”.&lt;br /&gt;
&lt;br /&gt;
* The connecting lines between circles are chart junk.&lt;br /&gt;
* The descriptions along with meaningful logos are also perhaps chart junk.&lt;br /&gt;
* The black background makes it difficult to focus on the graphic.&lt;br /&gt;
* Percentages might be more helpful for an initial overview than exact dollar amounts.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe 01 - Aufgabe 1 - Data-Ink Ratio|Data-Ink Ratio]] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;A large share of ink on a graphic should present data-information, the ink changing as the data change. Data-ink is the non-erasable core of a graphic, the non-redundant ink arranged in response to variation in the numbers represented [Tufte, 1991].”&lt;br /&gt;
&lt;br /&gt;
* Excessive color is used for the background (all black), which is making it difficult to focus the eyes on the useful data.&lt;br /&gt;
&lt;br /&gt;
* Space is wasted by displaying complete logos bearing long redundant texts, for example “United States of America” on many logos. It could be stated in a single location that the picture is about the USA.&lt;br /&gt;
&lt;br /&gt;
* The description written inside the biggest circle, which does not show any data, should be moved away from the graphic. This would also allow that circle to be smaller.&lt;br /&gt;
&lt;br /&gt;
* The non-data ink which is used to elaborate or decorate the picture is also in excess.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe 09 - Aufgabe 1 - Color Coding / Color|Color Coding / Color]] ==&lt;br /&gt;
&lt;br /&gt;
Colors can be used intelligently to encode information. In the given picture, color encoding is used extensively, but it is inconsistent and confusing. For example, the circular sections for the army, air force, and navy all have different colors even though the sections are interrelated (R&amp;amp;D, Personnel, Operations, etc.). Thus the principle of consistency is violated.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe 01 - Aufgabe 1 - Aesthetic-Usability Effect|Aesthetic-Usability Effect]] ==&lt;br /&gt;
&lt;br /&gt;
“The Aesthetic-Usability Effect is a condition whereby users perceive more aesthetically pleasing designs to be easier to use than less aesthetically pleasing designs [markboulton.co.uk]”. Probably, the other design elements also play their part in making a product aesthetic.&lt;br /&gt;
&lt;br /&gt;
* The rule of the Golden Ratio is apparently violated. This is evident if we look at the proportions of the circles to one another: their sizes are perfectly proportional to the allocated budgets, but their sizes relative to one another do not observe the Golden Ratio. Two quantities are said to be in the golden ratio if &amp;quot;the whole is to the larger as the larger is to the smaller&amp;quot; [Golden ratio].&lt;br /&gt;
&lt;br /&gt;
* If the black background is removed, then the existing picture is not bad as far as aesthetics are concerned.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G4 - Aufgabe 1 - Ockham&#039;s Razor / Occam&#039;s Razor / Principle of Simplicity|Ockham&#039;s Razor / Occam&#039;s Razor / Principle of Simplicity]] ==&lt;br /&gt;
&lt;br /&gt;
According to rules elaborated by William of Ockham in his works [Hoffmann et al., 1997]:&lt;br /&gt;
&lt;br /&gt;
* It is futile to do with more what can be done with fewer. Quite meaningful logos are used, but the descriptions of the departments are also given; one of the two could be avoided.&lt;br /&gt;
&lt;br /&gt;
* When a proposition comes out true for things, if two things suffice for its truth, it is superfluous to assume a third. This relates to insight, which is missing or not easily perceivable in the existing picture.&lt;br /&gt;
&lt;br /&gt;
* Plurality should not be assumed without necessity. ???&lt;br /&gt;
&lt;br /&gt;
* No plurality should be assumed unless it can be proved (a) by reason, or (b) by experience, or (c) by some infallible authority. ???&lt;br /&gt;
&lt;br /&gt;
== Layout ==&lt;br /&gt;
&lt;br /&gt;
An efficient layout can make a complex visualization usable interactively. In the given picture, using for example a grid layout would increase usability: it can provide focus+context at the same time. One layout window can show the whole picture like the existing one, while another shows the focused part magnified. The initial overview of the division into military and non-military spending, however, is still possible instantly.&lt;br /&gt;
&lt;br /&gt;
= Summary =&lt;br /&gt;
&lt;br /&gt;
More than one solution can be adopted: either an altogether new and improved visualization could be designed that addresses the existing deficiencies, or the existing picture could be improved with the following refinements.&lt;br /&gt;
&lt;br /&gt;
* Remove black background&lt;br /&gt;
&lt;br /&gt;
* Remove unnecessary circles&lt;br /&gt;
&lt;br /&gt;
* Remove connecting lines&lt;br /&gt;
&lt;br /&gt;
* Remove redundant descriptions where meaningful logos suffice&lt;br /&gt;
&lt;br /&gt;
* Introduce percentages&lt;br /&gt;
&lt;br /&gt;
* Use consistent color for similar sub-departments&lt;br /&gt;
&lt;br /&gt;
* Make it possible to visualize the information collectively under separate meaningful headings, for example budget allocation for R&amp;amp;D, budget allocation for Maintenance, etc.&lt;br /&gt;
&lt;br /&gt;
= Conclusions and suggestions =&lt;br /&gt;
&lt;br /&gt;
== Suggestion 1 ==&lt;br /&gt;
The visualization itself is not so bad, but it only works in a print version. The diameters of the circles give a good overview of how the budget is dispersed. The problem is that no one can read the names of the different departments; only when you zoom into the picture can you read them, but then you lose the overview. One approach to improve the print version is, e.g., to change the background colour. &lt;br /&gt;
&lt;br /&gt;
But to improve the understanding of the picture we suggest a dynamic visualization! &lt;br /&gt;
The problem of the data set is that there are too many departments and sub-departments to display in a normal diagram. There are up to 200 departments with nearly the same budget, and with a normal diagram, e.g. a scatter plot, there is no chance to distinguish them, and no specific information can be derived from that kind of visualization.&lt;br /&gt;
&lt;br /&gt;
Our approach is to use a SunBurst-like visualization [SunBurst]. SunBurst is good for visualizing hierarchies with a lot of data. The benefit of this technique is that you can easily compare different departments and sub-departments. The sizes of the different parts represent the budget. The parts can be labelled with the dollar amount or with a percentage of the total budget and/or of the budget of the super-department.&lt;br /&gt;
&lt;br /&gt;
The aim of this visualization is to compare different departments. You can go deeper into one sub-department and then compare two or more of them. It is also possible to get a total overview of all departments by expanding all of the sub-departments. The drawback of this visualization is that departments with very little budget are nearly invisible in the circle. This disadvantage can be addressed by dynamically colouring or highlighting different parts of the data. Another improvement could be to add a second or third view to the SunBurst visualization. These views could be a simple tree (like the explorer) or a Gaussian distribution of the budget: the data sets are the departments with their budgets, and this view shows how many departments have which amount of budget. By selecting a range of the Gaussian distribution, the relevant departments are expanded in the SunBurst and highlighted in the explorer tree. It should also be possible to show detail data, e.g. name, amount of budget, etc., when you move the mouse over a part of the SunBurst visualization.&lt;br /&gt;
&lt;br /&gt;
This visualization approach reduces the drawback of the original one where the different departments are not so easy to compare. &lt;br /&gt;
&lt;br /&gt;
[[Image:100 0034.JPG|none|thumb|300px|Multiple view approach (click on image for larger version)]]&lt;br /&gt;
&lt;br /&gt;
[[Image:100 0033.JPG|none|thumb|300px|Multiple view approach with expanded sub-node(click on image for larger version)]]&lt;br /&gt;
&lt;br /&gt;
== Suggestion 2 ==&lt;br /&gt;
&lt;br /&gt;
The big horizontal rectangle encapsulates square/rectangular sub-departments, and each of these in turn shows its share of the budget in billions of dollars, with sizes proportional to the amounts. If the visualization is to remain this detailed, the labels become unreadable, and a drawback is that departments with very little budget are nearly invisible. This disadvantage can be mitigated with short abbreviations, such as F-22R for “F-22 Raptor - 5,170 Billion” and C-17TA for “C-17 Transport Aircraft - 3,686 Billion”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Drawing19.jpg|none|thumb|768px]]&lt;br /&gt;
&lt;br /&gt;
[[Image:Drawing1.jpg|none|thumb|768px|(click on image for larger version)]]&lt;br /&gt;
&lt;br /&gt;
== Suggestion 3 ==&lt;br /&gt;
&lt;br /&gt;
Yet another approach for visualizing &amp;quot;Death and Taxes&amp;quot; is as follows. It presents the idea, not the whole graphic with actual figures. It uses the treemap approach.&lt;br /&gt;
&lt;br /&gt;
* The big vertical rectangle directly encapsulates the section-wise budget allocations (like R&amp;amp;D, Operations, etc.).&lt;br /&gt;
&lt;br /&gt;
* Standard colors (navy blue for Navy, Grey for Air Force, Brown for Army) are used.&lt;br /&gt;
&lt;br /&gt;
* Each horizontal section is further proportionately divided between the three forces.&lt;br /&gt;
&lt;br /&gt;
* A few sections which are common to all the forces are drawn at the top (Others DOD and Defence wide). Since they are assumed to contribute equally to the other departments, the Navy, Army, and Air Force are equally spaced there.&lt;br /&gt;
&lt;br /&gt;
* Within each rectangular block of a force, further small blocks show the smaller spending on the AEGIS destroyer, Super Hornet, etc. Their sizes are also assumed to be proportional to the allocated budgets. &#039;&#039;&#039;Due to space limitations, these are given numbers 1, 2, 3, ... Upon mouse click they can be zoomed in further, or their descriptions can be given in the legend.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Along with the actual budget amounts, different percentages are also shown. People are often interested in these kinds of percentages when looking at a budget.&lt;br /&gt;
&lt;br /&gt;
Based upon this approach, the rest of the graphic for military and non-military spending can also be represented efficiently. Although not perfect, it removes some of the deficiencies of the existing visualization: the user now has a better and quicker overview, with details.&lt;br /&gt;
&lt;br /&gt;
[[Image:Death_and_taxes_visualization_-_Modified1.jpg]]&lt;br /&gt;
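The proportional subdivision described in this suggestion can be sketched as a minimal slice-and-dice treemap layout. The function below and the budget figures in it are illustrative assumptions, not the actual project data or code.&lt;br /&gt;

```python
# Sketch: slice-and-dice treemap layout, alternating split direction per level.
# The budget figures below are illustrative placeholders, not the real data.

def treemap(items, x, y, w, h, horizontal=True):
    """Divide the rectangle (x, y, w, h) among items proportionally to value.

    items: list of (label, value) or (label, value, children) tuples.
    Returns a list of (label, (x, y, w, h)) rectangles, depth-first.
    """
    total = sum(it[1] for it in items)
    rects = []
    offset = 0.0
    for it in items:
        label, value = it[0], it[1]
        share = value / total
        if horizontal:                       # split along the width
            r = (x + offset, y, w * share, h)
            offset += w * share
        else:                                # split along the height
            r = (x, y + offset, w, h * share)
            offset += h * share
        rects.append((label, r))
        children = it[2] if len(it) > 2 else None
        if children:                         # recurse, flipping direction
            rects.extend(treemap(children, *r, horizontal=not horizontal))
    return rects

# Hypothetical top-level split; the Navy block is subdivided vertically,
# so each area stays proportional to its value, as in the suggestion above.
budget = [
    ("Army", 98.5),
    ("Navy", 125.6, [("Ships", 80.0), ("Personnel", 45.6)]),
    ("Air Force", 120.3),
]
layout = treemap(budget, 0, 0, 100, 60)
```

Every rectangle's area is proportional to its budget share, which is exactly the property the suggestion relies on for the force blocks and their sub-blocks.&lt;br /&gt;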
&lt;br /&gt;
= Resources =&lt;br /&gt;
&lt;br /&gt;
[Golden ratio] http://www.absoluteastronomy.com/encyclopedia/g/go/golden_ratio.htm&lt;br /&gt;
&lt;br /&gt;
[SunBurst] http://www.cc.gatech.edu/gvu/ii/sunburst/&lt;br /&gt;
&lt;br /&gt;
[Healey et al., 2005] Christopher G. Healey, Kellog S. Booth and James T. Enns - High-Speed Visual Estimation Using Preattentive Processing - The University of British Columbia, June 1996 - Access Date: 24.October.2005. http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[Hoffmann et al., 1997] Roald Hoffmann, Vladimir I. Minkin, Barry K. Carpenter, Ockham&#039;s Razor and Chemistry, HYLE--International Journal for Philosophy of Chemistry, Vol. 3 (1997), Retrieved at: October 24, 2005, http://www.hyle.org/journal/issues/3/hoffman.htm&lt;br /&gt;
&lt;br /&gt;
[Mark Boulton, March 06, 2005] Journal, Aesthetic-Usability Effect http://www.markboulton.co.uk/journal/comments/aesthetic_usability_effect/&lt;br /&gt;
&lt;br /&gt;
[Pedroza, 2004] Carlos Pedroza, The Encyclopedia of Educational Technology, San Diego State University. Access Date: 21 October 2005, http://coe.sdsu.edu/eet/articles/visualperc1/start.htm&lt;br /&gt;
&lt;br /&gt;
[Rosenholtz et al., 2005] Ruth Rosenholtz, Yuanzhen Li, Jonathan Mansfield, and Zhenlan Jin. Feature Congestion: A Measure of Display Clutter. http://web.mit.edu/rruth/www/Papers/RosenholtzEtAlCHI2005Clutter.pdf &lt;br /&gt;
&lt;br /&gt;
[Truong, 2005] Donny Truong, “Universal Principles of Design”. Access Date: 21 October 2005. http://www.visualgui.com/index.php?p=1&lt;br /&gt;
&lt;br /&gt;
[Tufte, 1991] Edward Tufte, The Visual Display of Quantitative Information, Second Edition, Graphics Press, USA, 1991.&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_2&amp;diff=7704</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 2</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_2&amp;diff=7704"/>
		<updated>2005-11-03T22:27:31Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Conclusion and further suggestions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Poor Graphic =&lt;br /&gt;
[[Image:Mibi04death-and-taxes.jpg|none|thumb|600px|Death and Taxes: A visual look at where your tax dollars go (click on image for larger version)]]&lt;br /&gt;
&lt;br /&gt;
= Drawbacks / Critical analysis =&lt;br /&gt;
&lt;br /&gt;
Before designing a new visualization for “Death and Taxes”, it is important to find what is wrong with the existing one and to analyze the missing or wrongly applied design principles. Then we will be in a better position to make corrections and come up with an improved visualization.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing|Preattentive Processing]] ==&lt;br /&gt;
&lt;br /&gt;
“Tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive [Healey et al., 2005]”. These tasks can be performed without the need of focused attention. &lt;br /&gt;
&lt;br /&gt;
* From the given visualization displayed on a 17” display what we can perceive instantly is that the budget is allocated to a number of departments and further allocated to various sub-departments within those. However, the display is so cluttered that we are unable to perceive more than that.&lt;br /&gt;
&lt;br /&gt;
* Lengthy description in the biggest circle and as a part of legend can’t be treated with preattentive processing.&lt;br /&gt;
&lt;br /&gt;
* The proportionate sizes of circles help a lot in finding instantly which department has the highest or lowest budget.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Five Hat Racks|Five Hat Racks]]==&lt;br /&gt;
&lt;br /&gt;
“There are five ways to organize information: category (similarity relatedness), time (chronological sequence), location (geographical or spatial references), alphabet (alphabetical sequence), and continuum (magnitude; highest to lowest, best to worse) [Truong, 2005]”.&lt;br /&gt;
&lt;br /&gt;
* Category (similarity relatedness): This is satisfied as far as the depiction of the budget across departments is concerned, but it is violated at the level of the sub-departments.&lt;br /&gt;
&lt;br /&gt;
* Time (chronological sequence): This aspect is not applicable for the given visualization, because this shows the budget for one year only.&lt;br /&gt;
&lt;br /&gt;
* Location (geographical or spatial references): This aspect is also not applicable, because the visualization does not show how the budget is spent in different states. This might be missing information: without it, one has to assume that the depicted spending (expenditures on health, education, etc.) is the same across all states. &lt;br /&gt;
&lt;br /&gt;
* Alphabet (alphabetical sequence): The arrangement in the given picture is not alphabetical. But introducing an alphabetical sequence would not necessarily be an improvement; more meaningful orderings should be exploited instead.&lt;br /&gt;
&lt;br /&gt;
* Continuum (magnitude; highest to lowest, best to worst): This aspect is used successfully in the picture: the sizes of the circles are proportional to the allocated budgets.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe 10 - Aufgabe 1 - Visual Clutter|Visual Clutter]] ==&lt;br /&gt;
&lt;br /&gt;
“Clutter is the state in which excess items, or their representation or organization, lead to a degradation of performance at some task [Rosenholtz et al., 2005]”.&lt;br /&gt;
&lt;br /&gt;
* It is not possible to show all the elements of the given picture on a standard display: there are many circles with associated descriptions. Perhaps the descriptions could be hidden by default and shown only for the circle the user focuses on or hovers the mouse over.&lt;br /&gt;
&lt;br /&gt;
* On the initial screen, the circular sub-departments can be hidden. Those can be displayed under user control when the user focuses on one or a group of the bigger circles.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G7 - Aufgabe 1 - Gestalt Laws|Gestalt Laws]]==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The Gestalt approach emphasizes that we perceive objects as well-organized patterns rather than separate component parts [Pedroza, 2004]”. Often used Gestalt principles are Proximity, Similarity, Closure and Good Continuation.&lt;br /&gt;
&lt;br /&gt;
* Proximity: Elements close to each other tend to form groups. This is evident from the picture: the small circles in the vicinity of bigger ones tend to form one group. This also suggests that the connecting lines may not be necessary and are in fact redundant.&lt;br /&gt;
&lt;br /&gt;
* Similarity: Elements that are similar in some way tend to be grouped together. If we look at the color encodings used, this aspect seems to be violated. For example, blue parts could be perceived as related, but in fact they are not.&lt;br /&gt;
* Closure: Items are grouped together if they tend to complete a pattern. This design aspect does not seem to be used in the given picture; instead the author has used explicit connecting lines, which in turn increase visual clutter.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G4 - Aufgabe 1 - Lie Factor|Lie Factor]] ==&lt;br /&gt;
&lt;br /&gt;
The “Lie Factor” is a value to describe the relation between the size of effect shown in a graphic and the size of effect shown in the data. &amp;quot;The representation of numbers, as physically measured on the surface of the graphic itself, should be directly proportional to the quantities represented [Tufte, 1991]”.&lt;br /&gt;
&lt;br /&gt;
* If you look at the relative sizes of circles and the allocated budgets, this aspect is satisfied.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G8 - Aufgabe 1 - Chart Junk|Chart Junk]] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The interior decoration of graphics generates a lot of ink which does not tell the viewer anything new. The purpose of the decoration varies - to make the graphic appear more scientific, to enliven the display, to give the designer an opportunity to exercise artistic skill. Regardless of the cause, it is all non-data-ink or redundant data-ink, and it is often chart junk [Tufte, 1991]”.&lt;br /&gt;
&lt;br /&gt;
* The connecting lines between circles are chart junk.&lt;br /&gt;
* The descriptions along with meaningful logos are also perhaps chart junk.&lt;br /&gt;
* The black background makes it difficult to focus on the graphic.&lt;br /&gt;
* Percentages might be more helpful for an initial overview than exact dollar amounts.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe 01 - Aufgabe 1 - Data-Ink Ratio|Data-Ink Ratio]] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;A large share of ink on a graphic should present data-information, the ink changing as the data change. Data-ink is the non-erasable core of a graphic, the non-redundant ink arranged in response to variation in the numbers represented [Tufte, 1991].&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* Excessive color is used for the background (all black), which makes it difficult to focus the eyes on the useful data.&lt;br /&gt;
&lt;br /&gt;
* Space is wasted by displaying complete logos with long redundant texts, for example “United States of America” on many of them. A single note that the picture is about the USA would suffice.&lt;br /&gt;
&lt;br /&gt;
* The description written inside the biggest circle should be moved out of the graphic. This would also allow that circle, which does not itself show any data, to be drawn smaller.&lt;br /&gt;
&lt;br /&gt;
* The non-data ink used to elaborate or decorate the picture is also excessive.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe 09 - Aufgabe 1 - Color Coding / Color|Color Coding / Color]] ==&lt;br /&gt;
&lt;br /&gt;
Colors can be used intelligently to encode information in a picture. In the given picture color encoding is used extensively, but inconsistently and confusingly. For example, the circular sections for the army, air force and navy all have different colors even though the sections are interrelated (R&amp;amp;D, Personnel, Operations etc.). Thus the principle of consistency is violated.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe 01 - Aufgabe 1 - Aesthetic-Usability Effect|Aesthetic-Usability Effect]] ==&lt;br /&gt;
&lt;br /&gt;
“The Aesthetic-Usability Effect is a condition whereby users perceive more aesthetically pleasing designs to be easier to use than less aesthetically pleasing designs [markboulton.co.uk]”. The other design elements probably also play their part in making a product aesthetic.&lt;br /&gt;
&lt;br /&gt;
* The rule of the Golden Ratio is apparently violated. This is evident from the proportions of the circles to one another: their sizes are perfectly in proportion to the allocated budgets, but their sizes relative to one another do not observe the Golden Ratio. Two quantities are said to be in the golden ratio if &amp;quot;the whole is to the larger as the larger is to the smaller&amp;quot; [Golden ratio].&lt;br /&gt;
&lt;br /&gt;
* If the black background is removed, then the existing picture is not bad as far as aesthetics are concerned.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G4 - Aufgabe 1 - Ockham&#039;s Razor / Occam&#039;s Razor / Principle of Simplicity|Ockham&#039;s Razor / Occam&#039;s Razor / Principle of Simplicity]] ==&lt;br /&gt;
&lt;br /&gt;
According to rules elaborated by William of Ockham in his works [Hoffmann et al., 1997];&lt;br /&gt;
&lt;br /&gt;
* It is futile to do with more what can be done with fewer.  Quite meaningful logos are used, but the descriptions of the departments are also mentioned. One of these could be avoided.&lt;br /&gt;
&lt;br /&gt;
* When a proposition comes out true for things, if two things suffice for its truth, it is superfluous to assume a third. This relates to insight, which is missing or not easily perceivable in the existing picture.&lt;br /&gt;
&lt;br /&gt;
* Plurality should not be assumed without necessity. ???&lt;br /&gt;
&lt;br /&gt;
* No plurality should be assumed unless it can be proved (a) by reason, or (b) by experience, or (c) by some infallible authority. ???&lt;br /&gt;
&lt;br /&gt;
== Layout ==&lt;br /&gt;
&lt;br /&gt;
An efficient layout can make a complex visualization interactively explorable. If, for example, a grid layout were used for the given picture, its usability could be increased: it could provide focus+context at the same time. One layout window could show the whole picture as it is now, while another shows the focused part magnified. The initial overview of the division into military and non-military spending, however, is possible instantly.&lt;br /&gt;
&lt;br /&gt;
= Suggestions =&lt;br /&gt;
&lt;br /&gt;
More than one solution can be adopted. Either an altogether new and improved visualization could be designed that takes care of the existing deficiencies, or the existing picture could be improved by means of the following refinements.&lt;br /&gt;
&lt;br /&gt;
* Remove black background&lt;br /&gt;
&lt;br /&gt;
* Remove unnecessary circles&lt;br /&gt;
&lt;br /&gt;
* Remove connecting lines&lt;br /&gt;
&lt;br /&gt;
* Remove redundant descriptions where meaningful logos suffice&lt;br /&gt;
&lt;br /&gt;
* Introduce percentages&lt;br /&gt;
&lt;br /&gt;
* Use consistent color for similar sub-departments&lt;br /&gt;
&lt;br /&gt;
* Make it possible to visualize the information collectively under separate meaningful headings, for example budget allocation for R&amp;amp;D, budget allocation for maintenance etc.&lt;br /&gt;
&lt;br /&gt;
= Conclusion and further suggestions =&lt;br /&gt;
&lt;br /&gt;
The visualization itself is not so bad, but it is only suited to a print version. The diameters of the different circles give a good overview of the dispersion of the budget. The problem is that no one can read the names of the different departments. Only when you zoom into the picture can you read them, but then you lose the overview. One approach to improving the print version is, for example, to change the background colour. &lt;br /&gt;
&lt;br /&gt;
But to improve the understanding of the picture we suggest a dynamic visualization! &lt;br /&gt;
The problem with the data set is that there are too many different departments and sub-departments. They cannot be displayed in a normal diagram. There are up to 200 departments with nearly the same budget; with a normal diagram such as a scatter plot there is no chance to distinguish the different departments, and no specific information can be derived from that kind of visualization.&lt;br /&gt;
&lt;br /&gt;
Our approach is to use a SunBurst-like visualization [SunBurst]. SunBurst is well suited to visualizing hierarchies with a lot of data. The benefit of this technique is that you can easily compare different departments and sub-departments. The sizes of the different parts represent the budget. The parts can be labelled with the $ amount, or with the percentage of the total budget and/or of the super-department&#039;s budget.&lt;br /&gt;
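The sizing rule described here, where each part's extent is proportional to its budget and sub-departments subdivide their department's extent, can be sketched as a small layout computation. The department names and figures below are invented for illustration, not taken from the poster.

```python
# Minimal SunBurst-style layout: assign each node an angular span
# proportional to its budget; children subdivide their parent's span.
def layout(node, start=0.0, end=360.0, depth=0, out=None):
    if out is None:
        out = []
    out.append((node["name"], depth, start, end))
    children = node.get("children", [])
    total = sum(c["budget"] for c in children)
    angle = start
    for c in children:
        span = (end - start) * c["budget"] / total
        layout(c, angle, angle + span, depth + 1, out)
        angle += span
    return out

# Invented budget hierarchy, for illustration only.
budget = {"name": "Total", "budget": 100, "children": [
    {"name": "Military", "budget": 60, "children": [
        {"name": "Personnel", "budget": 30},
        {"name": "R&D", "budget": 30}]},
    {"name": "Non-military", "budget": 40}]}

for name, depth, a0, a1 in layout(budget):
    print(f"{'  ' * depth}{name}: {a0:.0f}-{a1:.0f} degrees")
```

With these figures, "Military" (60% of the total) spans 216 of the 360 degrees, and its two sub-departments split that span equally — exactly the comparison-by-size behaviour described above.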
&lt;br /&gt;
The aim of this visualization is to compare different departments. You can go deeper into one sub-department and then compare two or more of them. It is also possible to get a total overview of all departments by expanding all of the sub-departments. The drawback of this visualization is that departments with a very small budget are nearly invisible in the circle. This disadvantage can be addressed by dynamically colouring or highlighting different parts of the data. Another improvement could be to add a second or third view to the SunBurst visualization. These views could be a simple tree (like the explorer) or a Gaussian distribution of the budget. The data sets are the departments with their budgets. With this visualization you can see how many departments have which amount of budget. By selecting a range of the Gaussian distribution, the relevant departments are expanded in the SunBurst and highlighted in the explorer tree. It should also be possible to show detail data, e.g. name, amount of budget etc., when you move your mouse over a part of the SunBurst visualization.&lt;br /&gt;
&lt;br /&gt;
This visualization approach reduces the drawback of the original one, in which the different departments are not so easy to compare. &lt;br /&gt;
&lt;br /&gt;
[[Image:100 0034.JPG|none|thumb|300px|Multiple view approach (click on image for larger version)]]&lt;br /&gt;
&lt;br /&gt;
[[Image:100 0033.JPG|none|thumb|300px|Multiple view approach with expanded sub-node (click on image for larger version)]]&lt;br /&gt;
&lt;br /&gt;
[[Image:Drawing19.jpg|none|thumb|768px]]&lt;br /&gt;
&lt;br /&gt;
[[Image:Drawing1.jpg|none|thumb|768px|(click on image for larger version)]]&lt;br /&gt;
&lt;br /&gt;
= Resources =&lt;br /&gt;
&lt;br /&gt;
[Golden ratio] http://www.absoluteastronomy.com/encyclopedia/g/go/golden_ratio.htm&lt;br /&gt;
&lt;br /&gt;
[SunBurst] http://www.cc.gatech.edu/gvu/ii/sunburst/&lt;br /&gt;
&lt;br /&gt;
[Healey et al., 2005] Christopher G. Healey, Kellog S. Booth and James T. Enns - High-Speed Visual Estimation Using Preattentive Processing - The University of British Columbia, June 1996 - Access Date: 24.October.2005. http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[Hoffmann et al., 1997] Roald Hoffmann, Vladimir I. Minkin, Barry K. Carpenter, Ockham&#039;s Razor and Chemistry, HYLE--International Journal for Philosophy of Chemistry, Vol. 3 (1997), Retrieved at: October 24, 2005, http://www.hyle.org/journal/issues/3/hoffman.htm&lt;br /&gt;
&lt;br /&gt;
[Mark Boulton, March 06, 2005] Journal, Aesthetic-Usability Effect http://www.markboulton.co.uk/journal/comments/aesthetic_usability_effect/&lt;br /&gt;
&lt;br /&gt;
[Pedroza, 2004] Carlos Pedroza, The Encyclopedia of Educational Technology, San Diego State University. Access Date: 21 October 2005, http://coe.sdsu.edu/eet/articles/visualperc1/start.htm&lt;br /&gt;
&lt;br /&gt;
[Rosenholtz et al., 2005] Ruth Rosenholtz, Yuanzhen Li, Jonathan Mansfield, and Zhenlan Jin. Feature Congestion: A Measure of Display Clutter. http://web.mit.edu/rruth/www/Papers/RosenholtzEtAlCHI2005Clutter.pdf &lt;br /&gt;
&lt;br /&gt;
[Truong, 2005] Donny Truong, “Universal Principles of design” Access Date: 21. Oktober 2005 http://www.visualgui.com/index.php?p=1&lt;br /&gt;
&lt;br /&gt;
[Tufte, 1991] Edward Tufte, The Visual Display of Quantitative Information, Second Edition, Graphics Press, USA, 1991.&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=File:Drawing1.jpg&amp;diff=7703</id>
		<title>File:Drawing1.jpg</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=File:Drawing1.jpg&amp;diff=7703"/>
		<updated>2005-11-03T22:05:06Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Summary */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Updated graphic&lt;br /&gt;
&lt;br /&gt;
== Copyright status ==&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;br /&gt;
Ali Akcaglayan&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_2&amp;diff=7702</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 2</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_2&amp;diff=7702"/>
		<updated>2005-11-03T21:59:53Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Conclusion and further suggestions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Poor Graphic =&lt;br /&gt;
[[Image:Mibi04death-and-taxes.jpg|none|thumb|600px|Death and Taxes: A visual look at where your tax dollars go (click on image for larger version)]]&lt;br /&gt;
&lt;br /&gt;
= Drawbacks / Critical analysis =&lt;br /&gt;
&lt;br /&gt;
Before designing a visualization for “Death and Taxes”, it is important to find out what is wrong with the existing one and to analyze which essential design principles are missing or wrongly applied. Then we will be in a better position to make corrections and come up with an improved visualization.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing|Preattentive Processing]] ==&lt;br /&gt;
&lt;br /&gt;
“Tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive [Healey et al., 2005]”. These tasks can be performed without the need for focused attention. &lt;br /&gt;
&lt;br /&gt;
* From the given visualization displayed on a 17” display what we can perceive instantly is that the budget is allocated to a number of departments and further allocated to various sub-departments within those. However, the display is so cluttered that we are unable to perceive more than that.&lt;br /&gt;
&lt;br /&gt;
* The lengthy descriptions in the biggest circle and in the legend cannot be processed preattentively.&lt;br /&gt;
&lt;br /&gt;
* The proportionate sizes of circles help a lot in finding instantly which department has the highest or lowest budget.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Five Hat Racks|Five Hat Racks]]==&lt;br /&gt;
&lt;br /&gt;
“There are five ways to organize information: category (similarity relatedness), time (chronological sequence), location (geographical or spatial references), alphabet (alphabetical sequence), and continuum (magnitude; highest to lowest, best to worse) [Truong, 2005]”.&lt;br /&gt;
&lt;br /&gt;
* Category (similarity relatedness): This is satisfied as far as the depiction of the budgets of the different departments is concerned. But if you look at the sub-departments, this aspect is violated.&lt;br /&gt;
&lt;br /&gt;
* Time (chronological sequence): This aspect is not applicable for the given visualization, because this shows the budget for one year only.&lt;br /&gt;
&lt;br /&gt;
* Location (geographical or spatial references): This aspect is also not applicable to the given visualization, because it does not show the spending of the budget in the different states. This might be missing information; without it, it is assumed that the depicted spending (expenditures on health, education etc.) is the same across all the states.&lt;br /&gt;
&lt;br /&gt;
* Alphabet (alphabetical sequence): The arrangement in the given picture is not alphabetical. However, introducing an alphabetical sequence would not necessarily be an improvement; rather, more meaningful orderings should be exploited.&lt;br /&gt;
&lt;br /&gt;
* Continuum (magnitude; highest to lowest, best to worse): This aspect is used successfully in the picture: the sizes of the circles are proportional to the allocated budgets.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe 10 - Aufgabe 1 - Visual Clutter|Visual Clutter]] ==&lt;br /&gt;
&lt;br /&gt;
“Clutter is the state in which excess items, or their representation or organization, lead to a degradation of performance at some task [Rosenholtz et al., 2005]”.&lt;br /&gt;
&lt;br /&gt;
* It is not possible to visualize all the elements of the given picture on a standard display. There are many circles and associated descriptions. Perhaps the descriptions could be hidden by default; when the user focuses on or hovers the mouse over a circle, it could be displayed with its associated description.&lt;br /&gt;
&lt;br /&gt;
* On the initial screen, the circular sub-departments could be hidden. They could be displayed under user control when the user focuses on one or a group of the bigger circles.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G7 - Aufgabe 1 - Gestalt Laws|Gestalt Laws]]==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The Gestalt approach emphasizes that we perceive objects as well-organized patterns rather than separate component parts [Pedroza, 2004]&amp;quot;. Frequently used Gestalt principles are Proximity, Similarity, Closure and Good Continuation.&lt;br /&gt;
&lt;br /&gt;
* Proximity: Elements close to each other tend to form groups. This is evident from the picture: the small circles in the vicinity of bigger ones tend to form one group. This also suggests that the connecting lines may not be necessary and are in fact redundant.&lt;br /&gt;
&lt;br /&gt;
* Similarity: Elements that are similar in some way tend to be grouped together. Looking at the color encodings used, this aspect seems to be violated: for example, blue parts could be perceived as related, but in fact they are not.&lt;br /&gt;
* Closure: Items are grouped together if they tend to complete a pattern. This design aspect does not seem to be used in the given picture; instead the author has used explicit connecting lines, which in turn increase visual clutter.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G4 - Aufgabe 1 - Lie Factor|Lie Factor]] ==&lt;br /&gt;
&lt;br /&gt;
The “Lie Factor” is a value describing the relation between the size of the effect shown in a graphic and the size of the effect shown in the data. &amp;quot;The representation of numbers, as physically measured on the surface of the graphic itself, should be directly proportional to the quantities represented [Tufte, 1991]&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
* If you look at the relative sizes of circles and the allocated budgets, this aspect is satisfied.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G8 - Aufgabe 1 - Chart Junk|Chart Junk]] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The interior decoration of graphics generates a lot of ink which does not tell the viewer anything new. The purpose of the decoration varies - to make the graphic appear more scientific, to enliven the display, to give the designer an opportunity to exercise artistic skill. Regardless of the cause, it is all non-data-ink or redundant data-ink, and it is often chart junk [Tufte, 1991]&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
* The connecting lines between circles are chart junk.&lt;br /&gt;
* Since the logos are already meaningful, the descriptions accompanying them are arguably chart junk as well.&lt;br /&gt;
* The black background makes it difficult to focus the eyes on the graphic.&lt;br /&gt;
* Percentages might be more helpful for an initial overview than exact amounts in dollars.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe 01 - Aufgabe 1 - Data-Ink Ratio|Data-Ink Ratio]] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;A large share of ink on a graphic should present data-information, the ink changing as the data change. Data-ink is the non-erasable core of a graphic, the non-redundant ink arranged in response to variation in the numbers represented [Tufte, 1991].&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* Excessive color is used for the background (all black), which makes it difficult to focus the eyes on the useful data.&lt;br /&gt;
&lt;br /&gt;
* Space is wasted by displaying complete logos with long redundant texts, for example “United States of America” on many of them. A single note that the picture is about the USA would suffice.&lt;br /&gt;
&lt;br /&gt;
* The description written inside the biggest circle should be moved out of the graphic. This would also allow that circle, which does not itself show any data, to be drawn smaller.&lt;br /&gt;
&lt;br /&gt;
* The non-data ink used to elaborate or decorate the picture is also excessive.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe 09 - Aufgabe 1 - Color Coding / Color|Color Coding / Color]] ==&lt;br /&gt;
&lt;br /&gt;
Colors can be used intelligently to encode information in a picture. In the given picture color encoding is used extensively, but inconsistently and confusingly. For example, the circular sections for the army, air force and navy all have different colors even though the sections are interrelated (R&amp;amp;D, Personnel, Operations etc.). Thus the principle of consistency is violated.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe 01 - Aufgabe 1 - Aesthetic-Usability Effect|Aesthetic-Usability Effect]] ==&lt;br /&gt;
&lt;br /&gt;
“The Aesthetic-Usability Effect is a condition whereby users perceive more aesthetically pleasing designs to be easier to use than less aesthetically pleasing designs [markboulton.co.uk]”. The other design elements probably also play their part in making a product aesthetic.&lt;br /&gt;
&lt;br /&gt;
* The rule of the Golden Ratio is apparently violated. This is evident from the proportions of the circles to one another: their sizes are perfectly in proportion to the allocated budgets, but their sizes relative to one another do not observe the Golden Ratio. Two quantities are said to be in the golden ratio if &amp;quot;the whole is to the larger as the larger is to the smaller&amp;quot; [Golden ratio].&lt;br /&gt;
&lt;br /&gt;
* If the black background is removed, then the existing picture is not bad as far as aesthetics are concerned.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G4 - Aufgabe 1 - Ockham&#039;s Razor / Occam&#039;s Razor / Principle of Simplicity|Ockham&#039;s Razor / Occam&#039;s Razor / Principle of Simplicity]] ==&lt;br /&gt;
&lt;br /&gt;
According to rules elaborated by William of Ockham in his works [Hoffmann et al., 1997];&lt;br /&gt;
&lt;br /&gt;
* It is futile to do with more what can be done with fewer.  Quite meaningful logos are used, but the descriptions of the departments are also mentioned. One of these could be avoided.&lt;br /&gt;
&lt;br /&gt;
* When a proposition comes out true for things, if two things suffice for its truth, it is superfluous to assume a third. This relates to insight, which is missing or not easily perceivable in the existing picture.&lt;br /&gt;
&lt;br /&gt;
* Plurality should not be assumed without necessity. ???&lt;br /&gt;
&lt;br /&gt;
* No plurality should be assumed unless it can be proved (a) by reason, or (b) by experience, or (c) by some infallible authority. ???&lt;br /&gt;
&lt;br /&gt;
== Layout ==&lt;br /&gt;
&lt;br /&gt;
An efficient layout can make a complex visualization interactively explorable. If, for example, a grid layout were used for the given picture, its usability could be increased: it could provide focus+context at the same time. One layout window could show the whole picture as it is now, while another shows the focused part magnified. The initial overview of the division into military and non-military spending, however, is possible instantly.&lt;br /&gt;
&lt;br /&gt;
= Suggestions =&lt;br /&gt;
&lt;br /&gt;
More than one solution can be adopted. Either an altogether new and improved visualization could be designed that takes care of the existing deficiencies, or the existing picture could be improved by means of the following refinements.&lt;br /&gt;
&lt;br /&gt;
* Remove black background&lt;br /&gt;
&lt;br /&gt;
* Remove unnecessary circles&lt;br /&gt;
&lt;br /&gt;
* Remove connecting lines&lt;br /&gt;
&lt;br /&gt;
* Remove redundant descriptions where meaningful logos suffice&lt;br /&gt;
&lt;br /&gt;
* Introduce percentages&lt;br /&gt;
&lt;br /&gt;
* Use consistent color for similar sub-departments&lt;br /&gt;
&lt;br /&gt;
* Make it possible to visualize the information collectively under separate meaningful headings, for example budget allocation for R&amp;amp;D, budget allocation for maintenance etc.&lt;br /&gt;
&lt;br /&gt;
= Conclusion and further suggestions =&lt;br /&gt;
&lt;br /&gt;
The visualization itself is not so bad, but it is only suited to a print version. The diameters of the different circles give a good overview of the dispersion of the budget. The problem is that no one can read the names of the different departments. Only when you zoom into the picture can you read them, but then you lose the overview. One approach to improving the print version is, for example, to change the background colour. &lt;br /&gt;
&lt;br /&gt;
But to improve the understanding of the picture we suggest a dynamic visualization! &lt;br /&gt;
The problem with the data set is that there are too many different departments and sub-departments. They cannot be displayed in a normal diagram. There are up to 200 departments with nearly the same budget; with a normal diagram such as a scatter plot there is no chance to distinguish the different departments, and no specific information can be derived from that kind of visualization.&lt;br /&gt;
&lt;br /&gt;
Our approach is to use a SunBurst-like visualization [SunBurst]. SunBurst is well suited to visualizing hierarchies with a lot of data. The benefit of this technique is that you can easily compare different departments and sub-departments. The sizes of the different parts represent the budget. The parts can be labelled with the $ amount, or with the percentage of the total budget and/or of the super-department&#039;s budget.&lt;br /&gt;
&lt;br /&gt;
The aim of this visualization is to compare different departments. You can go deeper into one sub-department and then compare two or more of them. It is also possible to get a total overview of all departments by expanding all of the sub-departments. The drawback of this visualization is that departments with a very small budget are nearly invisible in the circle. This disadvantage can be addressed by dynamically colouring or highlighting different parts of the data. Another improvement could be to add a second or third view to the SunBurst visualization. These views could be a simple tree (like the explorer) or a Gaussian distribution of the budget. The data sets are the departments with their budgets. With this visualization you can see how many departments have which amount of budget. By selecting a range of the Gaussian distribution, the relevant departments are expanded in the SunBurst and highlighted in the explorer tree. It should also be possible to show detail data, e.g. name, amount of budget etc., when you move your mouse over a part of the SunBurst visualization.&lt;br /&gt;
&lt;br /&gt;
This visualization approach reduces the drawback of the original one, in which the different departments are not so easy to compare. &lt;br /&gt;
&lt;br /&gt;
[[Image:100 0034.JPG|none|thumb|300px|Multiple view approach (click on image for larger version)]]&lt;br /&gt;
&lt;br /&gt;
[[Image:100 0033.JPG|none|thumb|300px|Multiple view approach with expanded sub-node (click on image for larger version)]]&lt;br /&gt;
&lt;br /&gt;
[[Image:Drawing19.jpg|none|thumb|800px]]&lt;br /&gt;
&lt;br /&gt;
[[Image:Drawing1.jpg|none|thumb|800px|(click on image for larger version)]]&lt;br /&gt;
&lt;br /&gt;
= Resources =&lt;br /&gt;
&lt;br /&gt;
[Golden ratio] http://www.absoluteastronomy.com/encyclopedia/g/go/golden_ratio.htm&lt;br /&gt;
&lt;br /&gt;
[SunBurst] http://www.cc.gatech.edu/gvu/ii/sunburst/&lt;br /&gt;
&lt;br /&gt;
[Healey et al., 2005] Christopher G. Healey, Kellog S. Booth and James T. Enns - High-Speed Visual Estimation Using Preattentive Processing - The University of British Columbia, June 1996 - Access Date: 24.October.2005. http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[Hoffmann et al., 1997] Roald Hoffmann, Vladimir I. Minkin, Barry K. Carpenter, Ockham&#039;s Razor and Chemistry, HYLE--International Journal for Philosophy of Chemistry, Vol. 3 (1997), Retrieved at: October 24, 2005, http://www.hyle.org/journal/issues/3/hoffman.htm&lt;br /&gt;
&lt;br /&gt;
[Mark Boulton, March 06, 2005] Journal, Aesthetic-Usability Effect http://www.markboulton.co.uk/journal/comments/aesthetic_usability_effect/&lt;br /&gt;
&lt;br /&gt;
[Pedroza, 2004] Carlos Pedroza, The Encyclopedia of Educational Technology, San Diego State University. Access Date: 21 October 2005, http://coe.sdsu.edu/eet/articles/visualperc1/start.htm&lt;br /&gt;
&lt;br /&gt;
[Rosenholtz et al., 2005] Ruth Rosenholtz, Yuanzhen Li, Jonathan Mansfield, and Zhenlan Jin. Feature Congestion: A Measure of Display Clutter. http://web.mit.edu/rruth/www/Papers/RosenholtzEtAlCHI2005Clutter.pdf &lt;br /&gt;
&lt;br /&gt;
[Truong, 2005] Donny Truong, “Universal Principles of design” Access Date: 21. Oktober 2005 http://www.visualgui.com/index.php?p=1&lt;br /&gt;
&lt;br /&gt;
[Tufte, 1991] Edward Tufte, The Visual Display of Quantitative Information, Second Edition, Graphics Press, USA, 1991.&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_2&amp;diff=7701</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 2</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_2&amp;diff=7701"/>
		<updated>2005-11-03T21:59:11Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Conclusion and further suggestions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Poor Graphic =&lt;br /&gt;
[[Image:Mibi04death-and-taxes.jpg|none|thumb|600px|Death and Taxes: A visual look at where your tax dollars go (click on image for larger version)]]&lt;br /&gt;
&lt;br /&gt;
= Drawbacks / Critical analysis =&lt;br /&gt;
&lt;br /&gt;
Before designing a visualization for “Death and Taxes”, it is important to find out what is wrong with the existing one and to analyze which essential design principles are missing or wrongly applied. Then we will be in a better position to make corrections and come up with an improved visualization.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing|Preattentive Processing]] ==&lt;br /&gt;
&lt;br /&gt;
“Tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive [Healey et al., 2005]”. These tasks can be performed without the need for focused attention. &lt;br /&gt;
&lt;br /&gt;
* From the given visualization displayed on a 17” display what we can perceive instantly is that the budget is allocated to a number of departments and further allocated to various sub-departments within those. However, the display is so cluttered that we are unable to perceive more than that.&lt;br /&gt;
&lt;br /&gt;
* The lengthy descriptions in the biggest circle and in the legend cannot be processed preattentively.&lt;br /&gt;
&lt;br /&gt;
* The proportionate sizes of circles help a lot in finding instantly which department has the highest or lowest budget.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Five Hat Racks|Five Hat Racks]]==&lt;br /&gt;
&lt;br /&gt;
“There are five ways to organize information: category (similarity relatedness), time (chronological sequence), location (geographical or spatial references), alphabet (alphabetical sequence), and continuum (magnitude; highest to lowest, best to worse) [Truong, 2005]”.&lt;br /&gt;
&lt;br /&gt;
* Category (similarity relatedness): This is satisfied as far as the depiction of the budgets of the different departments is concerned. But if you look at the sub-departments, this aspect is violated.&lt;br /&gt;
&lt;br /&gt;
* Time (chronological sequence): This aspect is not applicable for the given visualization, because this shows the budget for one year only.&lt;br /&gt;
&lt;br /&gt;
* Location (geographical or spatial references): This aspect is also not applicable to the given visualization, because it does not show the spending of the budget in the different states. This might be missing information; without it, it is assumed that the depicted spending (expenditures on health, education etc.) is the same across all the states.&lt;br /&gt;
&lt;br /&gt;
* Alphabet (alphabetical sequence): The arrangement in the given picture is not alphabetical. However, introducing an alphabetical sequence would not necessarily be an improvement; rather, more meaningful orderings should be exploited.&lt;br /&gt;
&lt;br /&gt;
* Continuum (magnitude; highest to lowest, best to worse): This aspect is used successfully in the picture: the sizes of the circles are proportional to the allocated budgets.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe 10 - Aufgabe 1 - Visual Clutter|Visual Clutter]] ==&lt;br /&gt;
&lt;br /&gt;
“Clutter is the state in which excess items, or their representation or organization, lead to a degradation of performance at some task [Rosenholtz et al., 2005]”.&lt;br /&gt;
&lt;br /&gt;
* It is not possible to display all the elements of the given picture on a standard screen; there are too many circles and associated descriptions. The descriptions could perhaps be hidden by default and shown only for the circle the user focuses on or hovers over with the mouse.&lt;br /&gt;
&lt;br /&gt;
* On the initial screen, the sub-department circles could be hidden and then displayed under user control when the user focuses on one bigger circle or a group of them.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G7 - Aufgabe 1 - Gestalt Laws|Gestalt Laws]]==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The Gestalt approach emphasizes that we perceive objects as well-organized patterns rather than separate component parts [Pedroza, 2004]”. Often used Gestalt principles are Proximity, Similarity, Closure and Good Continuation.&lt;br /&gt;
&lt;br /&gt;
* Proximity: Elements close to each other tend to form groups. This is evident from the picture: the small circles in the vicinity of the bigger ones tend to form one group. This also suggests that the connecting lines may be unnecessary and are in fact redundant.&lt;br /&gt;
&lt;br /&gt;
* Similarity: Elements that are similar in some way tend to be grouped together. Looking at the color encoding used, this principle seems to be violated: for example, the blue parts could be perceived as related, but in fact they are not.&lt;br /&gt;
* Closure: Items tend to be grouped together if they complete a pattern. This design principle does not appear to be used in the given picture; instead, the author has added explicit connecting lines, which in turn increase visual clutter.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G4 - Aufgabe 1 - Lie Factor|Lie Factor]] ==&lt;br /&gt;
&lt;br /&gt;
The “Lie Factor” is a value to describe the relation between the size of effect shown in a graphic and the size of effect shown in the data. &amp;quot;The representation of numbers, as physically measured on the surface of the graphic itself, should be directly proportional to the quantities represented [Tufte, 1991]”.&lt;br /&gt;
&lt;br /&gt;
* If you look at the relative sizes of circles and the allocated budgets, this aspect is satisfied.&lt;br /&gt;
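&lt;br /&gt;
The Lie Factor can be stated as a simple ratio: the size of the effect shown in the graphic divided by the size of the effect in the data, with 1.0 meaning a truthful display. The following sketch uses hypothetical budget figures (not taken from the actual poster): circle areas proportional to the budgets give a factor of 1.0, whereas scaling the radii by the budgets would exaggerate the differences.&lt;br /&gt;

```python
import math

# Tufte's Lie Factor: (size of effect shown in the graphic) divided by
# (size of effect shown in the data); a value of 1.0 means a truthful display.
# All numbers below are hypothetical, not taken from the actual poster.

def effect_size(v1, v2):
    """Relative change from v1 to v2."""
    return (v2 - v1) / v1

def lie_factor(data1, data2, graphic1, graphic2):
    """Ratio of the graphical effect to the data effect."""
    return effect_size(graphic1, graphic2) / effect_size(data1, data2)

# Two hypothetical budgets encoded as circle AREAS proportional to the values:
b1, b2 = 100.0, 400.0            # budgets (a 4x difference)
a1, a2 = 50.0, 200.0             # circle areas (also a 4x difference)
print(round(lie_factor(b1, b2, a1, a2), 6))    # 1.0 -> truthful encoding

# If instead the RADII were made proportional to the budgets, the areas would
# grow quadratically and visually exaggerate the difference:
area = lambda r: math.pi * r * r
print(round(lie_factor(b1, b2, area(b1), area(b2)), 6))    # 5.0 -> exaggeration
```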
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G8 - Aufgabe 1 - Chart Junk|Chart Junk]] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The interior decoration of graphics generates a lot of ink which does not tell the viewer anything new. The purpose of the decoration varies - to make the graphic appear more scientific, to enliven the display, to give the designer an opportunity to exercise artistic skill. Regardless of the cause, it is all non-data-ink or redundant data-ink, and it is often chart junk [Tufte, 1991]”.&lt;br /&gt;
&lt;br /&gt;
* The connecting lines between circles are chart junk.&lt;br /&gt;
* Where meaningful logos are present, the accompanying textual descriptions are arguably chart junk as well.&lt;br /&gt;
* The black background makes it difficult to focus the eyes on the graphic.&lt;br /&gt;
* For an initial overview, percentages might be more helpful than exact dollar amounts.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe 01 - Aufgabe 1 - Data-Ink Ratio|Data-Ink Ratio]] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;A large share of ink on a graphic should present data-information, the ink changing as the data change. Data-ink is the non-erasable core of a graphic, the non-redundant ink arranged in response to variation in the numbers represented [Tufte, 1991].”&lt;br /&gt;
&lt;br /&gt;
* Excessive ink is used for the background (all black), which makes it difficult to focus the eyes on the useful data.&lt;br /&gt;
&lt;br /&gt;
* Space is wasted by displaying complete logos containing long redundant text, for example “United States of America” on many of them. It would suffice to state in a single place that the picture is about the USA.&lt;br /&gt;
&lt;br /&gt;
* The description written inside the biggest circle should be moved out of the graphic. This would also allow that circle, which does not encode any data, to be made smaller.&lt;br /&gt;
&lt;br /&gt;
* The non-data ink used to elaborate and decorate the picture is also excessive.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe 09 - Aufgabe 1 - Color Coding / Color|Color Coding / Color]] ==&lt;br /&gt;
&lt;br /&gt;
Colors can be used intelligently to encode information. In the given picture, color encoding is used extensively, but it is inconsistent and confusing. For example, the circular sections for the army, air force and navy all have different colors even though the sections are interrelated (e.g. R&amp;amp;D, Personnel, Operations). Thus the principle of consistency is violated.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe 01 - Aufgabe 1 - Aesthetic-Usability Effect|Aesthetic-Usability Effect]] ==&lt;br /&gt;
&lt;br /&gt;
“The Aesthetic-Usability Effect is a condition whereby users perceive more aesthetically pleasing designs to be easier to use than less aesthetically pleasing designs [markboulton.co.uk]”. Probably, the other design elements also play their part in making a product aesthetic.&lt;br /&gt;
&lt;br /&gt;
* The rule of the Golden Ratio is apparently violated. This is evident if we look at the proportions of the circles to one another: their sizes are perfectly proportional to the allocated budgets, but their sizes relative to one another do not observe the Golden Ratio. Two quantities are said to be in the golden ratio if &amp;quot;the whole is to the larger as the larger is to the smaller&amp;quot; [Golden ratio].&lt;br /&gt;
&lt;br /&gt;
* If the black background were removed, the existing picture would not be bad as far as aesthetics are concerned.&lt;br /&gt;
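&lt;br /&gt;
The quoted definition can be written as (a + b) / a = a / b, whose positive solution is the golden ratio phi = (1 + sqrt(5)) / 2, approximately 1.618. A quick numerical sketch (not part of the original analysis):&lt;br /&gt;

```python
# The definition "the whole is to the larger as the larger is to the smaller"
# means (a + b) / a = a / b, solved by phi = (1 + sqrt(5)) / 2.
import math

phi = (1 + math.sqrt(5)) / 2

a, b = phi, 1.0                      # larger and smaller quantity
whole_to_larger = (a + b) / a
larger_to_smaller = a / b

print(round(phi, 6))                 # 1.618034
print(round(whole_to_larger, 6))     # 1.618034
print(round(larger_to_smaller, 6))   # 1.618034
```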
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G4 - Aufgabe 1 - Ockham&#039;s Razor / Occam&#039;s Razor / Principle of Simplicity|Ockham&#039;s Razor / Occam&#039;s Razor / Principle of Simplicity]] ==&lt;br /&gt;
&lt;br /&gt;
According to the rules elaborated by William of Ockham in his works [Hoffmann et al., 1997]:&lt;br /&gt;
&lt;br /&gt;
* It is futile to do with more what can be done with fewer. Quite meaningful logos are used, but the descriptions of the departments are given as well; one of the two could be omitted.&lt;br /&gt;
&lt;br /&gt;
* When a proposition comes out true for things, if two things suffice for its truth, it is superfluous to assume a third. This relates to insight, which is missing or not easily gained from the existing picture.&lt;br /&gt;
&lt;br /&gt;
* Plurality should not be assumed without necessity. ???&lt;br /&gt;
&lt;br /&gt;
* No plurality should be assumed unless it can be proved (a) by reason, or (b) by experience, or (c) by some infallible authority. ???&lt;br /&gt;
&lt;br /&gt;
== Layout ==&lt;br /&gt;
&lt;br /&gt;
An efficient layout can make even a complex visualization usable interactively. If, for example, a grid layout were used for the given picture, its usability could be increased: it can provide focus+context at the same time, with one window showing the whole picture as it exists now and another showing the focused part magnified. The initial overview of the division into military and non-military spending, however, is already perceivable instantly.&lt;br /&gt;
&lt;br /&gt;
= Suggestions =&lt;br /&gt;
&lt;br /&gt;
More than one solution can be adopted: either an altogether new and improved visualization could be designed that avoids the existing deficiencies, or the existing picture could be improved by the following refinements.&lt;br /&gt;
&lt;br /&gt;
* Remove black background&lt;br /&gt;
&lt;br /&gt;
* Remove unnecessary circles&lt;br /&gt;
&lt;br /&gt;
* Remove connecting lines&lt;br /&gt;
&lt;br /&gt;
* Remove redundant descriptions where meaningful logos suffice&lt;br /&gt;
&lt;br /&gt;
* Introduce percentages&lt;br /&gt;
&lt;br /&gt;
* Use consistent color for similar sub-departments&lt;br /&gt;
&lt;br /&gt;
* Make it possible to visualize the information collectively under separate meaningful headings, for example budget allocation for R&amp;amp;D, budget allocation for maintenance, etc.&lt;br /&gt;
&lt;br /&gt;
= Conclusion and further suggestions =&lt;br /&gt;
&lt;br /&gt;
The visualization itself is not bad, but it only works as a print version. The diameters of the different circles give a good overview of the dispersion of the budget. The problem is that no one can read the names of the different departments: only when you zoom into the picture can you read them, but then you lose the overview. One approach to improving the print version is, e.g., to change the background colour.&lt;br /&gt;
&lt;br /&gt;
But to improve the understanding of the picture we suggest a dynamic visualization! &lt;br /&gt;
The problem with the data set is that there are too many different departments and sub-departments; they cannot be displayed in a conventional diagram. There are up to 200 departments with nearly the same budget, and with a conventional diagram, e.g. a scatter plot, there is no chance to distinguish the different departments, and no specific information can be derived from that kind of visualization.&lt;br /&gt;
&lt;br /&gt;
Our approach is to use a SunBurst-like visualization [SunBurst]. SunBurst is well suited to visualizing hierarchies with a lot of data. The benefit of this technique is that different departments and sub-departments can be compared easily. The sizes of the different parts represent the budget, and the parts can be labelled with the dollar amount or with the percentage of the total budget and/or of the parent department&#039;s budget.&lt;br /&gt;
&lt;br /&gt;
The aim of this visualization is to compare different departments. You can go deeper into one sub-department and then compare two or more of them. It is also possible to get a total overview of all departments by expanding all of the sub-departments. The drawback of this visualization is that departments with a very small budget are nearly invisible in the circle. This disadvantage can be addressed by dynamically colouring or highlighting different parts of the data. Another improvement could be to add a second or third view to the SunBurst visualization; these views could be a simple tree (like the explorer) or a Gaussian distribution of the budget. The data sets are the departments with their budgets, and with this view you can see how many departments have which amount of budget. By selecting a range of the Gaussian distribution, the relevant departments are expanded in the SunBurst and highlighted in the explorer tree. It should also be possible to show detail data, e.g. name, amount of budget, etc., when you move the mouse over a part of the SunBurst visualization.&lt;br /&gt;
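&lt;br /&gt;
The core of the SunBurst idea described above (angular extents proportional to the budget, nested per hierarchy level) can be sketched as follows; the department names and budget figures are hypothetical and serve only to illustrate the layout computation.&lt;br /&gt;

```python
# Minimal sketch of the SunBurst layout idea: each department gets an angular
# extent proportional to its budget, and sub-departments subdivide their
# parent's extent. The budgets below are hypothetical, not the real data.

def sunburst_angles(node, start=0.0, end=360.0, depth=0, out=None):
    """Recursively assign (name, depth, start_angle, end_angle) to every node."""
    if out is None:
        out = []
    name, budget, children = node
    out.append((name, depth, round(start, 1), round(end, 1)))
    total = sum(child[1] for child in children)
    angle = start
    for child in children:
        span = (end - start) * child[1] / total if total else 0.0
        sunburst_angles(child, angle, angle + span, depth + 1, out)
        angle += span
    return out

# Each node is (name, budget, children):
tree = ("Budget", 100, [
    ("Military", 60, [("Army", 30, []), ("Navy", 20, []), ("Air Force", 10, [])]),
    ("Non-military", 40, [("Health", 25, []), ("Education", 15, [])]),
])

for name, depth, a0, a1 in sunburst_angles(tree):
    print("  " * depth + f"{name}: {a0} to {a1} deg")
```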
&lt;br /&gt;
This visualization approach reduces the drawback of the original one where the different departments are not so easy to compare. &lt;br /&gt;
&lt;br /&gt;
[[Image:100 0034.JPG|none|thumb|300px|Multiple view approach (click on image for larger version)]]&lt;br /&gt;
&lt;br /&gt;
[[Image:100 0033.JPG|none|thumb|300px|Multiple view approach with expanded sub-node (click on image for larger version)]]&lt;br /&gt;
&lt;br /&gt;
[[Image:Drawing19.jpg|none|thumb|800px]]&lt;br /&gt;
&lt;br /&gt;
[[Image:Drawing1.jpg|none|thumb|800px|Multiple view approach with expanded sub-node (click on image for larger version)]]&lt;br /&gt;
&lt;br /&gt;
= Resources =&lt;br /&gt;
&lt;br /&gt;
[Golden ratio] http://www.absoluteastronomy.com/encyclopedia/g/go/golden_ratio.htm&lt;br /&gt;
&lt;br /&gt;
[SunBurst] http://www.cc.gatech.edu/gvu/ii/sunburst/&lt;br /&gt;
&lt;br /&gt;
[Healey et al., 2005] Christopher G. Healey, Kellogg S. Booth and James T. Enns - High-Speed Visual Estimation Using Preattentive Processing - The University of British Columbia, June 1996 - Access Date: 24.October.2005. http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[Hoffmann et al., 1997] Roald Hoffmann, Vladimir I. Minkin, Barry K. Carpenter, Ockham&#039;s Razor and Chemistry, HYLE--International Journal for Philosophy of Chemistry, Vol. 3 (1997), Retrieved at: October 24, 2005, http://www.hyle.org/journal/issues/3/hoffman.htm&lt;br /&gt;
&lt;br /&gt;
[Mark Boulton, March 06, 2005] Journal, Aesthetic-Usability Effect http://www.markboulton.co.uk/journal/comments/aesthetic_usability_effect/&lt;br /&gt;
&lt;br /&gt;
[Pedroza, 2004] Carlos Pedroza, The Encyclopedia of Educational Technology, San Diego State University. Access Date: 21 October 2005, http://coe.sdsu.edu/eet/articles/visualperc1/start.htm&lt;br /&gt;
&lt;br /&gt;
[Rosenholtz et al., 2005] Ruth Rosenholtz, Yuanzhen Li, Jonathan Mansfield, and Zhenlan Jin. Feature Congestion: A Measure of Display Clutter. http://web.mit.edu/rruth/www/Papers/RosenholtzEtAlCHI2005Clutter.pdf &lt;br /&gt;
&lt;br /&gt;
[Truong, 2005] Donny Truong, “Universal Principles of design” Access Date: 21. Oktober 2005 http://www.visualgui.com/index.php?p=1&lt;br /&gt;
&lt;br /&gt;
[Tufte, 1991] Edward Tufte, The Visual Display of Quantitative Information, Second Edition, Graphics Press, USA, 1991.&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_2&amp;diff=7700</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 2</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_2&amp;diff=7700"/>
		<updated>2005-11-03T21:57:35Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Conclusion and further suggestions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Poor Graphic =&lt;br /&gt;
[[Image:Mibi04death-and-taxes.jpg|none|thumb|600px|Death and Taxes: A visual look at where your tax dollars go (click on image for larger version)]]&lt;br /&gt;
&lt;br /&gt;
= Drawbacks / Critical analysis =&lt;br /&gt;
&lt;br /&gt;
Before designing a new visualization for “Death and Taxes”, it is important to find out what is wrong with the existing one and to analyze which essential design principles are missing or wrongly applied. Then we will be in a better position to make corrections and come up with an improved visualization.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing|Preattentive Processing]] ==&lt;br /&gt;
&lt;br /&gt;
“Tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive [Healey et al., 2005]”. These tasks can be performed without the need for focused attention.&lt;br /&gt;
&lt;br /&gt;
* From the given visualization, displayed on a 17” screen, what we perceive instantly is that the budget is allocated to a number of departments and further subdivided among various sub-departments. However, the display is so cluttered that we cannot perceive more than that.&lt;br /&gt;
&lt;br /&gt;
* The lengthy descriptions in the biggest circle and in the legend cannot be processed preattentively.&lt;br /&gt;
&lt;br /&gt;
* The proportional sizes of the circles make it easy to see instantly which department has the highest or lowest budget.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Five Hat Racks|Five Hat Racks]]==&lt;br /&gt;
&lt;br /&gt;
“There are five ways to organize information: category (similarity relatedness), time (chronological sequence), location (geographical or spatial references), alphabet (alphabetical sequence), and continuum (magnitude; highest to lowest, best to worse) [Truong, 2005]”.&lt;br /&gt;
&lt;br /&gt;
* Category (similarity relatedness): This principle is satisfied as far as the depiction of the budget across different departments is concerned, but it is violated at the level of the sub-departments.&lt;br /&gt;
&lt;br /&gt;
* Time (chronological sequence): This aspect is not applicable to the given visualization, because it shows the budget for one year only.&lt;br /&gt;
&lt;br /&gt;
* Location (geographical or spatial references): This aspect is also not applicable, because the visualization does not show how the budget is spent across the different states. This may be missing information; without it, one has to assume that the depicted spending (on health, education, etc.) is the same across all states.&lt;br /&gt;
&lt;br /&gt;
* Alphabet (alphabetical sequence): The arrangement in the given picture is not alphabetical. However, introducing an alphabetical sequence would not necessarily be an improvement; more meaningful orderings should be exploited instead.&lt;br /&gt;
&lt;br /&gt;
* Continuum (magnitude; highest to lowest, best to worse): This aspect is used successfully: the sizes of the circles are proportional to the allocated budgets.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe 10 - Aufgabe 1 - Visual Clutter|Visual Clutter]] ==&lt;br /&gt;
&lt;br /&gt;
“Clutter is the state in which excess items, or their representation or organization, lead to a degradation of performance at some task [Rosenholtz et al., 2005]”.&lt;br /&gt;
&lt;br /&gt;
* It is not possible to display all the elements of the given picture on a standard screen; there are too many circles and associated descriptions. The descriptions could perhaps be hidden by default and shown only for the circle the user focuses on or hovers over with the mouse.&lt;br /&gt;
&lt;br /&gt;
* On the initial screen, the sub-department circles could be hidden and then displayed under user control when the user focuses on one bigger circle or a group of them.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G7 - Aufgabe 1 - Gestalt Laws|Gestalt Laws]]==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The Gestalt approach emphasizes that we perceive objects as well-organized patterns rather than separate component parts [Pedroza, 2004]”. Often used Gestalt principles are Proximity, Similarity, Closure and Good Continuation.&lt;br /&gt;
&lt;br /&gt;
* Proximity: Elements close to each other tend to form groups. This is evident from the picture: the small circles in the vicinity of the bigger ones tend to form one group. This also suggests that the connecting lines may be unnecessary and are in fact redundant.&lt;br /&gt;
&lt;br /&gt;
* Similarity: Elements that are similar in some way tend to be grouped together. Looking at the color encoding used, this principle seems to be violated: for example, the blue parts could be perceived as related, but in fact they are not.&lt;br /&gt;
* Closure: Items tend to be grouped together if they complete a pattern. This design principle does not appear to be used in the given picture; instead, the author has added explicit connecting lines, which in turn increase visual clutter.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G4 - Aufgabe 1 - Lie Factor|Lie Factor]] ==&lt;br /&gt;
&lt;br /&gt;
The “Lie Factor” is a value to describe the relation between the size of effect shown in a graphic and the size of effect shown in the data. &amp;quot;The representation of numbers, as physically measured on the surface of the graphic itself, should be directly proportional to the quantities represented [Tufte, 1991]”.&lt;br /&gt;
&lt;br /&gt;
* If you look at the relative sizes of circles and the allocated budgets, this aspect is satisfied.&lt;br /&gt;
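&lt;br /&gt;
The Lie Factor can be stated as a simple ratio: the size of the effect shown in the graphic divided by the size of the effect in the data, with 1.0 meaning a truthful display. The following sketch uses hypothetical budget figures (not taken from the actual poster): circle areas proportional to the budgets give a factor of 1.0, whereas scaling the radii by the budgets would exaggerate the differences.&lt;br /&gt;

```python
import math

# Tufte's Lie Factor: (size of effect shown in the graphic) divided by
# (size of effect shown in the data); a value of 1.0 means a truthful display.
# All numbers below are hypothetical, not taken from the actual poster.

def effect_size(v1, v2):
    """Relative change from v1 to v2."""
    return (v2 - v1) / v1

def lie_factor(data1, data2, graphic1, graphic2):
    """Ratio of the graphical effect to the data effect."""
    return effect_size(graphic1, graphic2) / effect_size(data1, data2)

# Two hypothetical budgets encoded as circle AREAS proportional to the values:
b1, b2 = 100.0, 400.0            # budgets (a 4x difference)
a1, a2 = 50.0, 200.0             # circle areas (also a 4x difference)
print(round(lie_factor(b1, b2, a1, a2), 6))    # 1.0 -> truthful encoding

# If instead the RADII were made proportional to the budgets, the areas would
# grow quadratically and visually exaggerate the difference:
area = lambda r: math.pi * r * r
print(round(lie_factor(b1, b2, area(b1), area(b2)), 6))    # 5.0 -> exaggeration
```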
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G8 - Aufgabe 1 - Chart Junk|Chart Junk]] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The interior decoration of graphics generates a lot of ink which does not tell the viewer anything new. The purpose of the decoration varies - to make the graphic appear more scientific, to enliven the display, to give the designer an opportunity to exercise artistic skill. Regardless of the cause, it is all non-data-ink or redundant data-ink, and it is often chart junk [Tufte, 1991]”.&lt;br /&gt;
&lt;br /&gt;
* The connecting lines between circles are chart junk.&lt;br /&gt;
* Where meaningful logos are present, the accompanying textual descriptions are arguably chart junk as well.&lt;br /&gt;
* The black background makes it difficult to focus the eyes on the graphic.&lt;br /&gt;
* For an initial overview, percentages might be more helpful than exact dollar amounts.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe 01 - Aufgabe 1 - Data-Ink Ratio|Data-Ink Ratio]] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;A large share of ink on a graphic should present data-information, the ink changing as the data change. Data-ink is the non-erasable core of a graphic, the non-redundant ink arranged in response to variation in the numbers represented [Tufte, 1991].”&lt;br /&gt;
&lt;br /&gt;
* Excessive ink is used for the background (all black), which makes it difficult to focus the eyes on the useful data.&lt;br /&gt;
&lt;br /&gt;
* Space is wasted by displaying complete logos containing long redundant text, for example “United States of America” on many of them. It would suffice to state in a single place that the picture is about the USA.&lt;br /&gt;
&lt;br /&gt;
* The description written inside the biggest circle should be moved out of the graphic. This would also allow that circle, which does not encode any data, to be made smaller.&lt;br /&gt;
&lt;br /&gt;
* The non-data ink used to elaborate and decorate the picture is also excessive.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe 09 - Aufgabe 1 - Color Coding / Color|Color Coding / Color]] ==&lt;br /&gt;
&lt;br /&gt;
Colors can be used intelligently to encode information. In the given picture, color encoding is used extensively, but it is inconsistent and confusing. For example, the circular sections for the army, air force and navy all have different colors even though the sections are interrelated (e.g. R&amp;amp;D, Personnel, Operations). Thus the principle of consistency is violated.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe 01 - Aufgabe 1 - Aesthetic-Usability Effect|Aesthetic-Usability Effect]] ==&lt;br /&gt;
&lt;br /&gt;
“The Aesthetic-Usability Effect is a condition whereby users perceive more aesthetically pleasing designs to be easier to use than less aesthetically pleasing designs [markboulton.co.uk]”. Probably, the other design elements also play their part in making a product aesthetic.&lt;br /&gt;
&lt;br /&gt;
* The rule of the Golden Ratio is apparently violated. This is evident if we look at the proportions of the circles to one another: their sizes are perfectly proportional to the allocated budgets, but their sizes relative to one another do not observe the Golden Ratio. Two quantities are said to be in the golden ratio if &amp;quot;the whole is to the larger as the larger is to the smaller&amp;quot; [Golden ratio].&lt;br /&gt;
&lt;br /&gt;
* If the black background were removed, the existing picture would not be bad as far as aesthetics are concerned.&lt;br /&gt;
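&lt;br /&gt;
The quoted definition can be written as (a + b) / a = a / b, whose positive solution is the golden ratio phi = (1 + sqrt(5)) / 2, approximately 1.618. A quick numerical sketch (not part of the original analysis):&lt;br /&gt;

```python
# The definition "the whole is to the larger as the larger is to the smaller"
# means (a + b) / a = a / b, solved by phi = (1 + sqrt(5)) / 2.
import math

phi = (1 + math.sqrt(5)) / 2

a, b = phi, 1.0                      # larger and smaller quantity
whole_to_larger = (a + b) / a
larger_to_smaller = a / b

print(round(phi, 6))                 # 1.618034
print(round(whole_to_larger, 6))     # 1.618034
print(round(larger_to_smaller, 6))   # 1.618034
```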
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G4 - Aufgabe 1 - Ockham&#039;s Razor / Occam&#039;s Razor / Principle of Simplicity|Ockham&#039;s Razor / Occam&#039;s Razor / Principle of Simplicity]] ==&lt;br /&gt;
&lt;br /&gt;
According to the rules elaborated by William of Ockham in his works [Hoffmann et al., 1997]:&lt;br /&gt;
&lt;br /&gt;
* It is futile to do with more what can be done with fewer. Quite meaningful logos are used, but the descriptions of the departments are given as well; one of the two could be omitted.&lt;br /&gt;
&lt;br /&gt;
* When a proposition comes out true for things, if two things suffice for its truth, it is superfluous to assume a third. This relates to insight, which is missing or not easily gained from the existing picture.&lt;br /&gt;
&lt;br /&gt;
* Plurality should not be assumed without necessity. ???&lt;br /&gt;
&lt;br /&gt;
* No plurality should be assumed unless it can be proved (a) by reason, or (b) by experience, or (c) by some infallible authority. ???&lt;br /&gt;
&lt;br /&gt;
== Layout ==&lt;br /&gt;
&lt;br /&gt;
An efficient layout can make even a complex visualization usable interactively. If, for example, a grid layout were used for the given picture, its usability could be increased: it can provide focus+context at the same time, with one window showing the whole picture as it exists now and another showing the focused part magnified. The initial overview of the division into military and non-military spending, however, is already perceivable instantly.&lt;br /&gt;
&lt;br /&gt;
= Suggestions =&lt;br /&gt;
&lt;br /&gt;
More than one solution can be adopted: either an altogether new and improved visualization could be designed that avoids the existing deficiencies, or the existing picture could be improved by the following refinements.&lt;br /&gt;
&lt;br /&gt;
* Remove black background&lt;br /&gt;
&lt;br /&gt;
* Remove unnecessary circles&lt;br /&gt;
&lt;br /&gt;
* Remove connecting lines&lt;br /&gt;
&lt;br /&gt;
* Remove redundant descriptions where meaningful logos suffice&lt;br /&gt;
&lt;br /&gt;
* Introduce percentages&lt;br /&gt;
&lt;br /&gt;
* Use consistent color for similar sub-departments&lt;br /&gt;
&lt;br /&gt;
* Make it possible to visualize the information collectively under separate meaningful headings, for example budget allocation for R&amp;amp;D, budget allocation for maintenance, etc.&lt;br /&gt;
&lt;br /&gt;
= Conclusion and further suggestions =&lt;br /&gt;
&lt;br /&gt;
The visualization itself is not bad, but it only works as a print version. The diameters of the different circles give a good overview of the dispersion of the budget. The problem is that no one can read the names of the different departments: only when you zoom into the picture can you read them, but then you lose the overview. One approach to improving the print version is, e.g., to change the background colour.&lt;br /&gt;
&lt;br /&gt;
But to improve the understanding of the picture we suggest a dynamic visualization! &lt;br /&gt;
The problem with the data set is that there are too many different departments and sub-departments; they cannot be displayed in a conventional diagram. There are up to 200 departments with nearly the same budget, and with a conventional diagram, e.g. a scatter plot, there is no chance to distinguish the different departments, and no specific information can be derived from that kind of visualization.&lt;br /&gt;
&lt;br /&gt;
Our approach is to use a SunBurst-like visualization [SunBurst]. SunBurst is well suited to visualizing hierarchies with a lot of data. The benefit of this technique is that different departments and sub-departments can be compared easily. The sizes of the different parts represent the budget, and the parts can be labelled with the dollar amount or with the percentage of the total budget and/or of the parent department&#039;s budget.&lt;br /&gt;
&lt;br /&gt;
The aim of this visualization is to compare different departments. You can go deeper into one sub-department and then compare two or more of them. It is also possible to get a total overview of all departments by expanding all of the sub-departments. The drawback of this visualization is that departments with a very small budget are nearly invisible in the circle. This disadvantage can be addressed by dynamically colouring or highlighting different parts of the data. Another improvement could be to add a second or third view to the SunBurst visualization; these views could be a simple tree (like the explorer) or a Gaussian distribution of the budget. The data sets are the departments with their budgets, and with this view you can see how many departments have which amount of budget. By selecting a range of the Gaussian distribution, the relevant departments are expanded in the SunBurst and highlighted in the explorer tree. It should also be possible to show detail data, e.g. name, amount of budget, etc., when you move the mouse over a part of the SunBurst visualization.&lt;br /&gt;
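&lt;br /&gt;
The core of the SunBurst idea described above (angular extents proportional to the budget, nested per hierarchy level) can be sketched as follows; the department names and budget figures are hypothetical and serve only to illustrate the layout computation.&lt;br /&gt;

```python
# Minimal sketch of the SunBurst layout idea: each department gets an angular
# extent proportional to its budget, and sub-departments subdivide their
# parent's extent. The budgets below are hypothetical, not the real data.

def sunburst_angles(node, start=0.0, end=360.0, depth=0, out=None):
    """Recursively assign (name, depth, start_angle, end_angle) to every node."""
    if out is None:
        out = []
    name, budget, children = node
    out.append((name, depth, round(start, 1), round(end, 1)))
    total = sum(child[1] for child in children)
    angle = start
    for child in children:
        span = (end - start) * child[1] / total if total else 0.0
        sunburst_angles(child, angle, angle + span, depth + 1, out)
        angle += span
    return out

# Each node is (name, budget, children):
tree = ("Budget", 100, [
    ("Military", 60, [("Army", 30, []), ("Navy", 20, []), ("Air Force", 10, [])]),
    ("Non-military", 40, [("Health", 25, []), ("Education", 15, [])]),
])

for name, depth, a0, a1 in sunburst_angles(tree):
    print("  " * depth + f"{name}: {a0} to {a1} deg")
```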
&lt;br /&gt;
This visualization approach reduces the drawback of the original one where the different departments are not so easy to compare. &lt;br /&gt;
&lt;br /&gt;
[[Image:100 0034.JPG|none|thumb|300px|Multiple view approach (click on image for larger version)]]&lt;br /&gt;
&lt;br /&gt;
[[Image:100 0033.JPG|none|thumb|300px|Multiple view approach with expanded sub-node (click on image for larger version)]]&lt;br /&gt;
&lt;br /&gt;
[[Image:100 Drawing19.jpg|none|thumb|800px]]&lt;br /&gt;
&lt;br /&gt;
[[Image:100 Drawing1.jpg|none|thumb|800px|Multiple view approach with expanded sub-node (click on image for larger version)]]&lt;br /&gt;
&lt;br /&gt;
= Resources =&lt;br /&gt;
&lt;br /&gt;
[Golden ratio] http://www.absoluteastronomy.com/encyclopedia/g/go/golden_ratio.htm&lt;br /&gt;
&lt;br /&gt;
[SunBurst] http://www.cc.gatech.edu/gvu/ii/sunburst/&lt;br /&gt;
&lt;br /&gt;
[Healey et al., 2005] Christopher G. Healey, Kellogg S. Booth and James T. Enns - High-Speed Visual Estimation Using Preattentive Processing - The University of British Columbia, June 1996 - Access Date: 24.October.2005. http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[Hoffmann et al., 1997] Roald Hoffmann, Vladimir I. Minkin, Barry K. Carpenter, Ockham&#039;s Razor and Chemistry, HYLE--International Journal for Philosophy of Chemistry, Vol. 3 (1997), Retrieved at: October 24, 2005, http://www.hyle.org/journal/issues/3/hoffman.htm&lt;br /&gt;
&lt;br /&gt;
[Mark Boulton, March 06, 2005] Journal, Aesthetic-Usability Effect http://www.markboulton.co.uk/journal/comments/aesthetic_usability_effect/&lt;br /&gt;
&lt;br /&gt;
[Pedroza, 2004] Carlos Pedroza, The Encyclopedia of Educational Technology, San Diego State University. Access Date: 21 October 2005, http://coe.sdsu.edu/eet/articles/visualperc1/start.htm&lt;br /&gt;
&lt;br /&gt;
[Rosenholtz et al., 2005] Ruth Rosenholtz, Yuanzhen Li, Jonathan Mansfield, and Zhenlan Jin. Feature Congestion: A Measure of Display Clutter. http://web.mit.edu/rruth/www/Papers/RosenholtzEtAlCHI2005Clutter.pdf &lt;br /&gt;
&lt;br /&gt;
[Truong, 2005] Donny Truong, “Universal Principles of design” Access Date: 21. Oktober 2005 http://www.visualgui.com/index.php?p=1&lt;br /&gt;
&lt;br /&gt;
[Tufte, 1991] Edward Tufte, The Visual Display of Quantitative Information, Second Edition, Graphics Press, USA, 1991.&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=File:Drawing1.jpg&amp;diff=7699</id>
		<title>File:Drawing1.jpg</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=File:Drawing1.jpg&amp;diff=7699"/>
		<updated>2005-11-03T21:56:16Z</updated>

		<summary type="html">&lt;p&gt;Menace: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status ==&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;br /&gt;
Ali Akcaglayan&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=File:Drawing19.jpg&amp;diff=7698</id>
		<title>File:Drawing19.jpg</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=File:Drawing19.jpg&amp;diff=7698"/>
		<updated>2005-11-03T21:54:31Z</updated>

		<summary type="html">&lt;p&gt;Menace: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
== Copyright status ==&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;br /&gt;
Ali Akcaglayan&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_2&amp;diff=7697</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 2</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_2&amp;diff=7697"/>
		<updated>2005-11-03T21:54:12Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Conclusion and further suggestions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Poor Graphic =&lt;br /&gt;
[[Image:Mibi04death-and-taxes.jpg|none|thumb|600px|Death and Taxes: A visual look at where your tax dollars go (click on image for larger version)]]&lt;br /&gt;
&lt;br /&gt;
= Drawbacks / Critical analysis =&lt;br /&gt;
&lt;br /&gt;
Before designing a new visualization for “Death and Taxes”, it is important to identify what is wrong with the existing one and to analyze which essential design principles are missing or wrongly applied. Then we will be in a better position to make corrections and come up with an improved visualization.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing|Preattentive Processing]] ==&lt;br /&gt;
&lt;br /&gt;
“Tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive [Healey et al., 2005]”. These tasks can be performed without the need for focused attention. &lt;br /&gt;
&lt;br /&gt;
* When the given visualization is shown on a 17” display, what we can perceive instantly is that the budget is allocated to a number of departments and further allocated to various sub-departments within those. However, the display is so cluttered that we are unable to perceive more than that.&lt;br /&gt;
&lt;br /&gt;
* The lengthy descriptions in the biggest circle and in the legend cannot be processed preattentively.&lt;br /&gt;
&lt;br /&gt;
* The proportional sizes of the circles help a lot in instantly finding which department has the highest or lowest budget.&lt;br /&gt;
&lt;br /&gt;
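The size encoding just mentioned only works preattentively if areas, not radii, are made proportional to the budgets. A minimal Python sketch with invented numbers (not taken from the poster) illustrates why:&lt;br /&gt;

```python
import math

# If a circle's AREA is to be proportional to a budget (as in the poster),
# the radius must grow with the square root of the value; scaling the radius
# linearly would exaggerate large budgets. All numbers are invented.
def radius_for(value, value_ref, radius_ref):
    return radius_ref * math.sqrt(value / value_ref)

r = radius_for(400.0, 100.0, 10.0)  # 4x the budget of the reference circle
print(r)                            # 20.0 -> only 2x the radius
print((math.pi * r ** 2) / (math.pi * 10.0 ** 2))  # 4.0 -> area ratio matches
```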
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Five Hat Racks|Five Hat Racks]]==&lt;br /&gt;
&lt;br /&gt;
“There are five ways to organize information: category (similarity relatedness), time (chronological sequence), location (geographical or spatial references), alphabet (alphabetical sequence), and continuum (magnitude; highest to lowest, best to worst) [Truong, 2005]”.&lt;br /&gt;
&lt;br /&gt;
* Category (similarity relatedness): This is satisfied as far as the depiction of the budget across departments is concerned, but at the level of the sub-departments this aspect is violated.&lt;br /&gt;
&lt;br /&gt;
* Time (chronological sequence): This aspect is not applicable to the given visualization, because it shows the budget for one year only.&lt;br /&gt;
&lt;br /&gt;
* Location (geographical or spatial references): This aspect is also not applicable, because the visualization does not show how the budget is spent in the different states. This might be missing information: without it, one has to assume that the depicted spending (on health, education, etc.) is the same across all states.&lt;br /&gt;
&lt;br /&gt;
* Alphabet (alphabetical sequence): The arrangement in the given picture is not alphabetical. However, introducing an alphabetical sequence would not necessarily be an improvement; more meaningful orderings should be exploited instead.&lt;br /&gt;
&lt;br /&gt;
* Continuum (magnitude; highest to lowest, best to worst): This aspect is used successfully in the picture; see the sizes of the circles, which are proportional to the allocated budgets.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe 10 - Aufgabe 1 - Visual Clutter|Visual Clutter]] ==&lt;br /&gt;
&lt;br /&gt;
“Clutter is the state in which excess items, or their representation or organization, lead to a degradation of performance at some task [Rosenholtz et al., 2005]”.&lt;br /&gt;
&lt;br /&gt;
* It is not possible to display all the elements of the given picture on a standard display: there are many circles and associated descriptions. Perhaps the descriptions could be hidden by default and shown only for the circle the user focuses on or hovers over with the mouse.&lt;br /&gt;
&lt;br /&gt;
* On the initial screen, the circular sub-departments could be hidden and then displayed under user control when the user focuses on one of the bigger circles or on a group of them.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G7 - Aufgabe 1 - Gestalt Laws|Gestalt Laws]]==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The Gestalt approach emphasizes that we perceive objects as well-organized patterns rather than separate component parts&amp;quot; [Pedroza, 2004]. Frequently used Gestalt principles are Proximity, Similarity, Closure, and Good Continuation.&lt;br /&gt;
&lt;br /&gt;
* Proximity: Elements close to each other tend to form groups. This is evident in the picture: the small circles in the vicinity of bigger ones tend to form one group. This also suggests that the connecting lines may not be necessary and are in fact redundant.&lt;br /&gt;
&lt;br /&gt;
* Similarity: Elements that are similar in some way tend to be grouped together. Looking at the color encodings used, this aspect seems to be violated; for example, the blue parts could be perceived as related, although in fact they are not.&lt;br /&gt;
* Closure: Items tend to be grouped together if they complete a pattern. This design aspect does not seem to be used in the given picture; instead, the author has used explicit connecting lines, which in turn increase the visual clutter.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G4 - Aufgabe 1 - Lie Factor|Lie Factor]] ==&lt;br /&gt;
&lt;br /&gt;
The “Lie Factor” is a value describing the relation between the size of an effect shown in a graphic and the size of that effect in the data. &amp;quot;The representation of numbers, as physically measured on the surface of the graphic itself, should be directly proportional to the quantities represented&amp;quot; [Tufte, 1991].&lt;br /&gt;
&lt;br /&gt;
* If you look at the relative sizes of circles and the allocated budgets, this aspect is satisfied.&lt;br /&gt;
&lt;br /&gt;
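Tufte's Lie Factor can be made concrete with a small Python sketch; the budgets and drawn areas below are invented for illustration and are not taken from the poster:&lt;br /&gt;

```python
# Tufte's Lie Factor: (size of effect shown in the graphic) divided by
# (size of effect in the data); values near 1 indicate an honest graphic.
def effect_size(v1, v2):
    return (v2 - v1) / v1

def lie_factor(graphic_v1, graphic_v2, data_v1, data_v2):
    return effect_size(graphic_v1, graphic_v2) / effect_size(data_v1, data_v2)

budget_a, budget_b = 100.0, 400.0   # hypothetical budgets
area_a, area_b = 20.0, 80.0         # circle areas drawn proportionally
print(lie_factor(area_a, area_b, budget_a, budget_b))  # 1.0 -> no lie
```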
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G8 - Aufgabe 1 - Chart Junk|Chart Junk]] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The interior decoration of graphics generates a lot of ink which does not tell the viewer anything new. The purpose of the decoration varies - to make the graphic appear more scientific, to enliven the display, to give the designer an opportunity to exercise artistic skill. Regardless of the cause, it is all non-data-ink or redundant data-ink, and it is often chart junk&amp;quot; [Tufte, 1991].&lt;br /&gt;
&lt;br /&gt;
* The connecting lines between circles are chart junk.&lt;br /&gt;
* The descriptions alongside the already meaningful logos are perhaps chart junk as well.&lt;br /&gt;
* The black background makes it difficult to focus the eyes on the graphic.&lt;br /&gt;
* Percentages might be more helpful for an initial overview than exact dollar amounts.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe 01 - Aufgabe 1 - Data-Ink Ratio|Data-Ink Ratio]] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;A large share of ink on a graphic should present data-information, the ink changing as the data change. Data-ink is the non-erasable core of a graphic, the non-redundant ink arranged in response to variation in the numbers represented [Tufte, 1991].”&lt;br /&gt;
&lt;br /&gt;
* Excessive ink is spent on the background (all black), which makes it difficult to focus the eyes on the useful data.&lt;br /&gt;
&lt;br /&gt;
* Space is wasted by displaying complete logos with long redundant texts, for example “United States of America” on many logos. It could be stated in a single location that the picture is about the USA.&lt;br /&gt;
&lt;br /&gt;
* The description written inside the biggest circle should be moved out of the graphic. This would also allow that circle, which does not show any data, to be made smaller.&lt;br /&gt;
&lt;br /&gt;
* The non-data ink used to elaborate or decorate the picture is also excessive.&lt;br /&gt;
&lt;br /&gt;
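The data-ink idea can be summarized as a ratio of data-ink to total ink. A minimal Python sketch, with purely hypothetical pixel counts, shows how removing the black background would raise the ratio:&lt;br /&gt;

```python
# Tufte's data-ink ratio: data-ink divided by the total ink used to print the
# graphic. The pixel counts below are invented to make the arithmetic concrete.
def data_ink_ratio(data_ink, non_data_ink):
    total_ink = data_ink + non_data_ink
    return data_ink / total_ink

# A solid black background spends enormous non-data ink:
print(data_ink_ratio(10_000, 90_000))  # 0.1
# Erasing the background and redundant logos raises the ratio:
print(data_ink_ratio(10_000, 15_000))  # 0.4
```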
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe 09 - Aufgabe 1 - Color Coding / Color|Color Coding / Color]] ==&lt;br /&gt;
&lt;br /&gt;
Colors can be used intelligently to encode information in a picture. In the given picture color encoding is used extensively, but it is intermixed and confusing: the circular sections for the army, air force, and navy all have different colors, even though the sections are interrelated (e.g. R&amp;amp;D, Personnel, Operations). Thus the principle of consistency is violated.&lt;br /&gt;
&lt;br /&gt;
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe 01 - Aufgabe 1 - Aesthetic-Usability Effect|Aesthetic-Usability Effect]] ==&lt;br /&gt;
&lt;br /&gt;
“The Aesthetic-Usability Effect is a condition whereby users perceive more aesthetically pleasing designs to be easier to use than less aesthetically pleasing designs [Mark Boulton, 2005]”. Probably the other design elements also play their part in making a product aesthetically pleasing.&lt;br /&gt;
&lt;br /&gt;
* The rule of the Golden Ratio is apparently violated. This is evident from the proportions of the circles to one another: their sizes are perfectly proportional to the allocated budgets, but relative to one another the rule of the Golden Ratio is not observed. Two quantities are said to be in the golden ratio if &amp;quot;the whole is to the larger as the larger is to the smaller&amp;quot; [Golden ratio].&lt;br /&gt;
&lt;br /&gt;
* If the black background is removed, then the existing picture is not bad as far as aesthetics are concerned.&lt;br /&gt;
&lt;br /&gt;
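The golden-ratio definition quoted above can be checked numerically. The following Python sketch is only an illustration of that definition, not a measurement of the actual poster:&lt;br /&gt;

```python
import math

# Two quantities a and b (a larger) are in the golden ratio when
# (a + b) / a equals a / b; the common value is phi = (1 + sqrt(5)) / 2.
phi = (1 + math.sqrt(5)) / 2

def is_golden(a, b):
    return math.isclose((a + b) / a, a / b, rel_tol=1e-9)

print(round(phi, 3))        # 1.618
print(is_golden(phi, 1.0))  # True
print(is_golden(3.0, 2.0))  # False
```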
== [[Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G4 - Aufgabe 1 - Ockham&#039;s Razor / Occam&#039;s Razor / Principle of Simplicity|Ockham&#039;s Razor / Occam&#039;s Razor / Principle of Simplicity]] ==&lt;br /&gt;
&lt;br /&gt;
According to the rules elaborated by William of Ockham in his works [Hoffmann et al., 1997]:&lt;br /&gt;
&lt;br /&gt;
* It is futile to do with more what can be done with fewer.  Quite meaningful logos are used, but the descriptions of the departments are mentioned as well; one of the two could be avoided.&lt;br /&gt;
&lt;br /&gt;
* When a proposition comes out true for things, if two things suffice for its truth, it is superfluous to assume a third.  This relates to insight, which is missing or not easily perceivable from the existing picture.&lt;br /&gt;
&lt;br /&gt;
* Plurality should not be assumed without necessity. ???&lt;br /&gt;
&lt;br /&gt;
* No plurality should be assumed unless it can be proved (a) by reason, or (b) by experience, or (c) by some infallible authority. ???&lt;br /&gt;
&lt;br /&gt;
== Layout ==&lt;br /&gt;
&lt;br /&gt;
An efficient layout can help to visualize a complex data set interactively. If, for example, a grid layout were used for the given picture, its usability could be increased: it could provide focus+context at the same time. One layout window could show the whole picture like the existing one, while another shows the focused part in a magnified way. The initial overview of the division into military and non-military spending, however, is instantly possible.&lt;br /&gt;
&lt;br /&gt;
= Suggestions =&lt;br /&gt;
&lt;br /&gt;
More than one solution can be adopted: either an altogether new and improved visualization could be designed that takes care of the existing deficiencies, or the existing picture could be improved by applying the following refinements.&lt;br /&gt;
&lt;br /&gt;
* Remove black background&lt;br /&gt;
&lt;br /&gt;
* Remove unnecessary circles&lt;br /&gt;
&lt;br /&gt;
* Remove connecting lines&lt;br /&gt;
&lt;br /&gt;
* Remove redundant descriptions where meaningful logos suffice&lt;br /&gt;
&lt;br /&gt;
* Introduce percentages&lt;br /&gt;
&lt;br /&gt;
* Use consistent color for similar sub-departments&lt;br /&gt;
&lt;br /&gt;
* Make it possible to visualize the information collectively under separate meaningful headings, for example budget allocation for R&amp;amp;D, budget allocation for Maintenance, etc.&lt;br /&gt;
&lt;br /&gt;
= Conclusion and further suggestions =&lt;br /&gt;
&lt;br /&gt;
The visualization itself is not so bad, but it is only suitable as a print version. The diameters of the different circles give a good overview of the dispersion of the budget. The problem is that no one can read the names of the different departments: only when you zoom into the picture can you read them, but then you lose the overview. One approach to improving the print version is, e.g., to change the background colour. &lt;br /&gt;
&lt;br /&gt;
But to improve the understanding of the picture we suggest a dynamic visualization! &lt;br /&gt;
The problem with the data set is that there are too many different departments and sub-departments; they cannot be displayed in a normal diagram. There are up to 200 departments with nearly the same budget, and in a normal diagram, e.g. a scatter plot, there is no chance to distinguish the different departments, nor is there any specific information that can be derived from that kind of visualization.&lt;br /&gt;
&lt;br /&gt;
Our approach is to use a SunBurst-like visualization [SunBurst]. A SunBurst is well suited to visualizing hierarchies with a lot of data. The benefit of this technique is that you can easily compare different departments and sub-departments. The sizes of the different parts represent the budget, and the parts can be labelled with the $ amount and/or the percentage of the total budget and/or of the budget of the super-department.&lt;br /&gt;
&lt;br /&gt;
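The core of the SunBurst layout is simple: each department gets an angular extent proportional to its budget, and children subdivide the parent's arc. A minimal Python sketch, with invented department names and budgets, shows the idea:&lt;br /&gt;

```python
# Sketch of the SunBurst layout idea. Department names and budgets are
# invented for illustration; they are not taken from the actual data set.
def sunburst_angles(budgets, start=0.0, sweep=360.0):
    """Map {name: budget} to {name: (start_angle, end_angle)} in degrees."""
    total = sum(budgets.values())
    angles, cursor = {}, start
    for name, value in budgets.items():
        extent = sweep * value / total
        angles[name] = (cursor, cursor + extent)
        cursor += extent
    return angles

top = sunburst_angles({"Military": 500.0, "Health": 250.0, "Education": 250.0})
print(top["Military"])  # (0.0, 180.0)

# A sub-department ring subdivides its parent's arc:
mil_start, mil_end = top["Military"]
sub = sunburst_angles({"Army": 300.0, "Navy": 200.0}, mil_start, mil_end - mil_start)
print(sub["Navy"])      # (108.0, 180.0)
```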
The aim of this visualization is to compare different departments. You can drill down into one sub-department and then compare two or more of them. It is also possible to get a total overview of all departments by expanding all of the sub-departments. The drawback of this visualization is that departments with a very small budget are nearly invisible in the circle. This disadvantage can be mitigated by dynamically colouring or highlighting different parts of the data. Another improvement could be to add a second or third view to the SunBurst visualization. These views could be a simple tree (like the explorer) or a Gaussian distribution of the budget, where the data points are the departments with their budgets. With this view you can see how many departments have which amount of budget. By selecting a range of the distribution, the relevant departments are expanded in the SunBurst and highlighted in the explorer tree. It should also be possible to show the detail data, e.g. name, amount of budget, etc., when you move your mouse over a part of the SunBurst visualization.&lt;br /&gt;
&lt;br /&gt;
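The brushing step described above (selecting a budget range in the distribution view to drive the other views) can be sketched in a few lines of Python; the department names and budgets are invented examples:&lt;br /&gt;

```python
# Selecting a budget range yields the departments to expand in the SunBurst
# and highlight in the tree view. All data below is invented for illustration.
departments = {
    "Army": 300.0, "Navy": 200.0, "Health": 250.0,
    "Education": 240.0, "Parks": 10.0,
}

def select_range(budgets, low, high):
    """Return the names whose budget lies inside the selected range."""
    return sorted(name for name, v in budgets.items() if v >= low and high >= v)

print(select_range(departments, 200.0, 260.0))  # ['Education', 'Health', 'Navy']
```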
This visualization approach reduces the drawback of the original one, where the different departments are not so easy to compare. &lt;br /&gt;
&lt;br /&gt;
[[Image:100 0034.JPG|none|thumb|300px|Multiple view approach (click on image for larger version)]]&lt;br /&gt;
&lt;br /&gt;
[[Image:100 0033.JPG|none|thumb|300px|Multiple view approach with expanded sub-node (click on image for larger version)]]&lt;br /&gt;
&lt;br /&gt;
[[Image:100 Drawing19.JPG|none|thumb|800px]]&lt;br /&gt;
&lt;br /&gt;
[[Image:100 Drawing1.JPG|none|thumb|800px|Multiple view approach with expanded sub-node (click on image for larger version)]]&lt;br /&gt;
&lt;br /&gt;
= Resources =&lt;br /&gt;
&lt;br /&gt;
[Golden ratio] http://www.absoluteastronomy.com/encyclopedia/g/go/golden_ratio.htm&lt;br /&gt;
&lt;br /&gt;
[SunBurst] http://www.cc.gatech.edu/gvu/ii/sunburst/&lt;br /&gt;
&lt;br /&gt;
[Healey et al., 2005] Christopher G. Healey, Kellogg S. Booth and James T. Enns - High-Speed Visual Estimation Using Preattentive Processing - The University of British Columbia, June 1996 - Access Date: 24 October 2005. http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[Hoffmann et al., 1997] Roald Hoffmann, Vladimir I. Minkin, Barry K. Carpenter, Ockham&#039;s Razor and Chemistry, HYLE--International Journal for Philosophy of Chemistry, Vol. 3 (1997), Retrieved at: October 24, 2005, http://www.hyle.org/journal/issues/3/hoffman.htm&lt;br /&gt;
&lt;br /&gt;
[Mark Boulton, 2005] Mark Boulton, Journal: Aesthetic-Usability Effect, March 06, 2005, http://www.markboulton.co.uk/journal/comments/aesthetic_usability_effect/&lt;br /&gt;
&lt;br /&gt;
[Pedroza, 2004] Carlos Pedroza, The Encyclopedia of Educational Technology, San Diego State University. Access Date: 21 October 2005, http://coe.sdsu.edu/eet/articles/visualperc1/start.htm&lt;br /&gt;
&lt;br /&gt;
[Rosenholtz et al., 2005] Ruth Rosenholtz, Yuanzhen Li, Jonathan Mansfield, and Zhenlan Jin. Feature Congestion: A Measure of Display Clutter. http://web.mit.edu/rruth/www/Papers/RosenholtzEtAlCHI2005Clutter.pdf &lt;br /&gt;
&lt;br /&gt;
[Truong, 2005] Donny Truong, “Universal Principles of Design”, Access Date: 21 October 2005, http://www.visualgui.com/index.php?p=1&lt;br /&gt;
&lt;br /&gt;
[Tufte, 1991] Edward Tufte, The Visual Display of Quantitative Information, Second Edition, Graphics Press, USA, 1991.&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=7284</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=7284"/>
		<updated>2005-11-01T10:38:19Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Ressources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Preattentive Processing =&lt;br /&gt;
{{Definition|Visualization is so effective and useful because it utilizes one of the channels to our brain that have the highest bandwidths: our eyes.  But even this channel can be used more  or less efficiently. One special property of our visual system is preattentive processing.[Kosara, 2002]}}  &lt;br /&gt;
&lt;br /&gt;
{{Definition|One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. [Healey-Booth-Enns, 1996]}}&lt;br /&gt;
&lt;br /&gt;
= Preattentive Features=&lt;br /&gt;
&lt;br /&gt;
[[Image:Features.jpg]]&lt;br /&gt;
&lt;br /&gt;
A partial list of preattentive visual features, together with references to the research that showed they are preattentive [Chipman, 1996].&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
Detecting the red object preattentively [Healey-Booth-Enns, 1996]: only one visual variable is used, and the target is very easy to find.&lt;br /&gt;
&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
Detecting the circle preattentively [Chipman, 1996]: this is more difficult, but still preattentive.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_4.JPG]]&lt;br /&gt;
&lt;br /&gt;
Examples of two target detection tasks: (a) the target can be detected preattentively because it possesses the feature “filled”; (b) the target cannot be detected preattentively because it has no visual feature that is unique from its distractors [Healey-Booth-Enns, 1996].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_3.JPG]]&lt;br /&gt;
&lt;br /&gt;
Region segregation by form and hue: (a) hue boundary is identified preattentively, even though form varies randomly in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form.[Healey-Booth-Enns, 1996]&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”.[Wolfe-Treisma, 2003]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [Healey, 2005]&lt;br /&gt;
&lt;br /&gt;
=Ressources=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[Kosara, 2002] Robert Kosara, Silvia Miksch, Helwig Hauser, Focus+Context Taken Literally, Vienna University of Technology, VRVis Research Center, Austria, Created at: 2002, Access Date: 24 October 2005.&lt;br /&gt;
http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
[Healey-Booth-Enns, 1996] Christopher G. Healey, Kellogg S. Booth and James T. Enns, High-Speed Visual Estimation Using Preattentive Processing, The University of British Columbia, Created at: June 1996, Access Date: 24 October 2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[Wolfe-Treisma, 2003] Jeremy M. Wolfe, Anne Treisman, Todd S. Horowitz, What shall we do with the preattentive processing stage:&lt;br /&gt;
Use it or lose it? Poster presented at the Third Annual Meeting of the Vision Sciences Society, Sarasota, Created at: May 2003, Access Date: 24 October 2005&lt;br /&gt;
http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
[Healey, 2005], Christopher G. Healey, Perception in Visualization, Department of Computer Science, North Carolina State University, Created at: May,2005, Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
[Chipman, 1996] Gene Chipman, Review of High-Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns), Created at: 1996, Access Date: 24 October 2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=7283</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=7283"/>
		<updated>2005-11-01T10:38:05Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Preattentive Processing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Preattentive Processing =&lt;br /&gt;
{{Definition|Visualization is so effective and useful because it utilizes one of the channels to our brain that have the highest bandwidths: our eyes.  But even this channel can be used more  or less efficiently. One special property of our visual system is preattentive processing.[Kosara-Miksch-Hauser, 2002]}}  &lt;br /&gt;
&lt;br /&gt;
{{Definition|One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. [Healey-Booth-Enns, 1996]}}&lt;br /&gt;
&lt;br /&gt;
= Preattentive Features=&lt;br /&gt;
&lt;br /&gt;
[[Image:Features.jpg]]&lt;br /&gt;
&lt;br /&gt;
A partial list of preattentive visual features, together with references to the research that showed they are preattentive [Chipman, 1996].&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
Detecting the red object preattentively [Healey-Booth-Enns, 1996]: only one visual variable is used, and the target is very easy to find.&lt;br /&gt;
&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
Detecting the circle preattentively [Chipman, 1996]: this is more difficult, but still preattentive.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_4.JPG]]&lt;br /&gt;
&lt;br /&gt;
Examples of two target detection tasks: (a) the target can be detected preattentively because it possesses the feature “filled”; (b) the target cannot be detected preattentively because it has no visual feature that is unique from its distractors [Healey-Booth-Enns, 1996].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_3.JPG]]&lt;br /&gt;
&lt;br /&gt;
Region segregation by form and hue: (a) hue boundary is identified preattentively, even though form varies randomly in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form.[Healey-Booth-Enns, 1996]&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”.[Wolfe-Treisma, 2003]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [Healey, 2005]&lt;br /&gt;
&lt;br /&gt;
=Ressources=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[Kosara-Miksch-Hauser, 2002] Robert Kosara, Silvia Miksch, Helwig Hauser, Focus+Context Taken Literally, Vienna University of Technology, VRVis Research Center, Austria, Created at: 2002, Access Date: 24 October 2005.&lt;br /&gt;
http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
[Healey-Booth-Enns, 1996] Christopher G. Healey, Kellogg S. Booth and James T. Enns, High-Speed Visual Estimation Using Preattentive Processing, The University of British Columbia, Created at: June 1996, Access Date: 24 October 2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[Wolfe-Treisma, 2003] Jeremy M. Wolfe, Anne Treisman, Todd S. Horowitz, What shall we do with the preattentive processing stage:&lt;br /&gt;
Use it or lose it? Poster presented at the Third Annual Meeting of the Vision Sciences Society, Sarasota, Created at: May 2003, Access Date: 24 October 2005&lt;br /&gt;
http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
[Healey, 2005], Christopher G. Healey, Perception in Visualization, Department of Computer Science, North Carolina State University, Created at: May,2005, Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
[Chipman, 1996] Gene Chipman, Review of High-Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns), Created at: 1996, Access Date: 24 October 2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=7282</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=7282"/>
		<updated>2005-11-01T10:30:39Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Preattentive Features */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Preattentive Processing =&lt;br /&gt;
{{Definition|Visualization is so effective and useful because it utilizes one of the channels to our brain that have the highest bandwidths: our eyes.  But even this channel can be used more  or less efficiently. One special property of our visual system is preattentive processing.[Kosara-Miksch-Hauser, 2002]}}  &lt;br /&gt;
&lt;br /&gt;
{{Definition|One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. [Healey-Booth-Enns, 1996]}}&lt;br /&gt;
&lt;br /&gt;
= Preattentive Features=&lt;br /&gt;
&lt;br /&gt;
[[Image:Features.jpg]]&lt;br /&gt;
&lt;br /&gt;
A partial list of preattentive visual features, together with references to the research that showed they are preattentive [Chipman, 1996].&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
Detecting the red object preattentively [Healey-Booth-Enns, 1996]: only one visual variable is used, and the target is very easy to find.&lt;br /&gt;
&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
Detecting the circle preattentively [Chipman, 1996]: this is more difficult, but still preattentive.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_4.JPG]]&lt;br /&gt;
&lt;br /&gt;
Examples of two target detection tasks: (a) the target can be detected preattentively because it possesses the feature “filled”; (b) the target cannot be detected preattentively because it has no visual feature that is unique from its distractors [Healey-Booth-Enns, 1996].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_3.JPG]]&lt;br /&gt;
&lt;br /&gt;
Region segregation by form and hue: (a) hue boundary is identified preattentively, even though form varies randomly in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form.[Healey-Booth-Enns, 1996]&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”.[Wolfe-Treisma, 2003]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [Healey, 2005]&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[Kosara-Miksch-Hauser, 2002], Robert Kosara, Silvia Miksch, Helwig Hauser, Focus+Context Taken Literally, Vienna University of Technology, VRVis Research Center, Austria, Created at: 2002, Access Date: 24.October.2005.&lt;br /&gt;
http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
[Healey-Booth-Enns, 1996], Christopher G. Healey, Kellogg S. Booth and James T. Enns, High-Speed Visual Estimation Using Preattentive Processing, The University of British Columbia, Created at: June 1996, Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[Wolfe-Treisma, 2003], Jeremy M. Wolfe, Anne Treisman, What shall we do with the preattentive processing stage: Use it or lose it?, Todd S. Horowitz, poster presented at the Third Annual Meeting of the Vision Sciences Society, Sarasota, Created at: May 2003, Access Date: 24.October.2005.&lt;br /&gt;
http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
[Healey, 2005], Christopher G. Healey, Perception in Visualization, Department of Computer Science, North Carolina State University, Created at: May 2005, Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
[Chipman, 1996], Gene Chipman, Review of High Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns), Created at: 1996, Access Date: 24.October.2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=7281</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=7281"/>
		<updated>2005-11-01T10:04:17Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Preattentive Features */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Preattentive Processing =&lt;br /&gt;
{{Definition|Visualization is so effective and useful because it utilizes one of the channels to our brain that have the highest bandwidths: our eyes.  But even this channel can be used more  or less efficiently. One special property of our visual system is preattentive processing.[Kosara-Miksch-Hauser, 2002]}}  &lt;br /&gt;
&lt;br /&gt;
{{Definition|One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. [Healey-Booth-Enns, 1996]}}&lt;br /&gt;
&lt;br /&gt;
= Preattentive Features=&lt;br /&gt;
&lt;br /&gt;
[[Image:Features.jpg]]&lt;br /&gt;
&lt;br /&gt;
A partial list of preattentive visual features, together with references to research that showed they were preattentive. [Chipman, 1996]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
Detecting the red object preattentively. [Healey-Booth-Enns, 1996] Only one visual feature distinguishes the target, so it is very easy to find.&lt;br /&gt;
&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
Detecting the Circle preattentively.[Chipman, 1996]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_4.JPG]]&lt;br /&gt;
&lt;br /&gt;
Examples of two target detection tasks: (a) the target can be detected preattentively because it possesses the feature “filled”; (b) the target cannot be detected preattentively because no single visual feature distinguishes it from its distractors. [Healey-Booth-Enns, 1996]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_3.JPG]]&lt;br /&gt;
&lt;br /&gt;
Region segregation by form and hue: (a) hue boundary is identified preattentively, even though form varies randomly in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form.[Healey-Booth-Enns, 1996]&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”.[Wolfe-Treisma, 2003]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [Healey, 2005]&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[Kosara-Miksch-Hauser, 2002], Robert Kosara, Silvia Miksch, Helwig Hauser, Focus+Context Taken Literally, Vienna University of Technology, VRVis Research Center, Austria, Created at: 2002, Access Date: 24.October.2005.&lt;br /&gt;
http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
[Healey-Booth-Enns, 1996], Christopher G. Healey, Kellogg S. Booth and James T. Enns, High-Speed Visual Estimation Using Preattentive Processing, The University of British Columbia, Created at: June 1996, Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[Wolfe-Treisma, 2003], Jeremy M. Wolfe, Anne Treisman, What shall we do with the preattentive processing stage: Use it or lose it?, Todd S. Horowitz, poster presented at the Third Annual Meeting of the Vision Sciences Society, Sarasota, Created at: May 2003, Access Date: 24.October.2005.&lt;br /&gt;
http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
[Healey, 2005], Christopher G. Healey, Perception in Visualization, Department of Computer Science, North Carolina State University, Created at: May 2005, Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
[Chipman, 1996], Gene Chipman, Review of High Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns), Created at: 1996, Access Date: 24.October.2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=7280</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=7280"/>
		<updated>2005-11-01T10:02:13Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Preattentive Processing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Preattentive Processing =&lt;br /&gt;
{{Definition|Visualization is so effective and useful because it utilizes one of the channels to our brain that have the highest bandwidths: our eyes.  But even this channel can be used more  or less efficiently. One special property of our visual system is preattentive processing.[Kosara-Miksch-Hauser, 2002]}}  &lt;br /&gt;
&lt;br /&gt;
{{Definition|One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. [Healey-Booth-Enns, 1996]}}&lt;br /&gt;
&lt;br /&gt;
= Preattentive Features=&lt;br /&gt;
&lt;br /&gt;
[[Image:Features.jpg]]&lt;br /&gt;
&lt;br /&gt;
A partial list of preattentive visual features, together with references to research that showed they were preattentive. [Chipman, 1996]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
Detecting the Red Object preattentively.[Healey-Booth-Enns, 1996]    &lt;br /&gt;
&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
Detecting the Circle preattentively.[Chipman, 1996]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_4.JPG]]&lt;br /&gt;
&lt;br /&gt;
Examples of two target detection tasks: (a) the target can be detected preattentively because it possesses the feature “filled”; (b) the target cannot be detected preattentively because no single visual feature distinguishes it from its distractors. [Healey-Booth-Enns, 1996]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_3.JPG]]&lt;br /&gt;
&lt;br /&gt;
Region segregation by form and hue: (a) hue boundary is identified preattentively, even though form varies randomly in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form.[Healey-Booth-Enns, 1996]&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”.[Wolfe-Treisma, 2003]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [Healey, 2005]&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[Kosara-Miksch-Hauser, 2002], Robert Kosara, Silvia Miksch, Helwig Hauser, Focus+Context Taken Literally, Vienna University of Technology, VRVis Research Center, Austria, Created at: 2002, Access Date: 24.October.2005.&lt;br /&gt;
http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
[Healey-Booth-Enns, 1996], Christopher G. Healey, Kellogg S. Booth and James T. Enns, High-Speed Visual Estimation Using Preattentive Processing, The University of British Columbia, Created at: June 1996, Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[Wolfe-Treisma, 2003], Jeremy M. Wolfe, Anne Treisman, What shall we do with the preattentive processing stage: Use it or lose it?, Todd S. Horowitz, poster presented at the Third Annual Meeting of the Vision Sciences Society, Sarasota, Created at: May 2003, Access Date: 24.October.2005.&lt;br /&gt;
http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
[Healey, 2005], Christopher G. Healey, Perception in Visualization, Department of Computer Science, North Carolina State University, Created at: May 2005, Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
[Chipman, 1996], Gene Chipman, Review of High Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns), Created at: 1996, Access Date: 24.October.2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=7240</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=7240"/>
		<updated>2005-10-31T14:30:07Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Ressources: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Preattentive Processing =&lt;br /&gt;
== Definition:==&lt;br /&gt;
{{Definition|Visualization is so effective and useful because it utilizes one of the channels to our brain that have the highest bandwidths: our eyes.  But even this channel can be used more  or less efficiently. One special property of our visual system is preattentive processing.[Kosara-Miksch-Hauser, 2002]}}  &lt;br /&gt;
&lt;br /&gt;
{{Definition|One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. [Healey-Booth-Enns, 1996]}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Preattentive Features==&lt;br /&gt;
&lt;br /&gt;
[[Image:Features.jpg]]&lt;br /&gt;
&lt;br /&gt;
A partial list of preattentive visual features, together with references to research that showed they were preattentive. [Chipman, 1996]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
Detecting the Red Object preattentively.[Healey-Booth-Enns, 1996]    &lt;br /&gt;
&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
Detecting the Circle preattentively.[Chipman, 1996]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_4.JPG]]&lt;br /&gt;
&lt;br /&gt;
Examples of two target detection tasks: (a) the target can be detected preattentively because it possesses the feature “filled”; (b) the target cannot be detected preattentively because no single visual feature distinguishes it from its distractors. [Healey-Booth-Enns, 1996]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_3.JPG]]&lt;br /&gt;
&lt;br /&gt;
Region segregation by form and hue: (a) hue boundary is identified preattentively, even though form varies randomly in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form.[Healey-Booth-Enns, 1996]&lt;br /&gt;
&lt;br /&gt;
==Conclusion:==&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”.[Wolfe-Treisma, 2003]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [Healey, 2005]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Resources:==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[Kosara-Miksch-Hauser, 2002], Robert Kosara, Silvia Miksch, Helwig Hauser, Focus+Context Taken Literally, Vienna University of Technology, VRVis Research Center, Austria, Created at: 2002, Access Date: 24.October.2005.&lt;br /&gt;
http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
[Healey-Booth-Enns, 1996], Christopher G. Healey, Kellogg S. Booth and James T. Enns, High-Speed Visual Estimation Using Preattentive Processing, The University of British Columbia, Created at: June 1996, Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[Wolfe-Treisma, 2003], Jeremy M. Wolfe, Anne Treisman, What shall we do with the preattentive processing stage: Use it or lose it?, Todd S. Horowitz, poster presented at the Third Annual Meeting of the Vision Sciences Society, Sarasota, Created at: May 2003, Access Date: 24.October.2005.&lt;br /&gt;
http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
[Healey, 2005], Christopher G. Healey, Perception in Visualization, Department of Computer Science, North Carolina State University, Created at: May 2005, Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
[Chipman, 1996], Gene Chipman, Review of High Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns), Created at: 1996, Access Date: 24.October.2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=File:Preattantive_2.jpg&amp;diff=7232</id>
		<title>File:Preattantive 2.jpg</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=File:Preattantive_2.jpg&amp;diff=7232"/>
		<updated>2005-10-31T13:55:08Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Source */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Detecting the Circle preattentively&lt;br /&gt;
== Copyright status ==&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;br /&gt;
[Chipman, 1996], Gene Chipman, Review of High Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns), Created at: 1996, Access Date: 24.October.2005. http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6996</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6996"/>
		<updated>2005-10-25T20:56:14Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Preattentive Processing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Preattentive Processing ==&lt;br /&gt;
;Definition: &lt;br /&gt;
{{Definition|Visualization is so effective and useful because it utilizes one of the channels to our brain that have the highest bandwidths: our eyes.  But even this channel can be used more  or less efficiently. One special property of our visual system is preattentive processing.[1]}}  &lt;br /&gt;
&lt;br /&gt;
{{Definition|One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. [2]}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preattentive Features&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Image:Features.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;A partial list of preattentive visual features, together with references to research that showed they were preattentive.&#039;&#039;[5]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Detecting the Red Object preattentively.&#039;&#039;[2]    &lt;br /&gt;
&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Detecting the Circle preattentively.&#039;&#039;[6]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_4.JPG]]&lt;br /&gt;
&lt;br /&gt;
Examples of two target detection tasks: (a) target can be detected preattentively because it possesses the feature “filled”; &amp;lt;br&amp;gt;&lt;br /&gt;
(b) target cannot be detected preattentively because it has no visual feature that is unique from its distractors.[2]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_3.JPG]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Region segregation by form and hue: (a) hue boundary is identified preattentively, even though form varies randomly&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form.&#039;&#039;[2]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Conclusion:&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”.[3]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [4]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Resources&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[1]. Robert Kosara, Silvia Miksch, Helwig Hauser - Focus+Context Taken Literally - Vienna University of Technology, VRVis Research Center, Austria - Access Date: 24.October.2005.&lt;br /&gt;
http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
[2]. Christopher G. Healey, Kellogg S. Booth and James T. Enns - High-Speed Visual Estimation Using Preattentive Processing - The University of British Columbia, June 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[3]. Jeremy M. Wolfe, Anne Treisman - What shall we do with the preattentive processing stage: Use it or lose it? - Todd S. Horowitz, poster presented at the Third Annual Meeting of the Vision Sciences Society, Sarasota, FL, May 2003 - Access Date: 24.October.2005.&lt;br /&gt;
http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
[4]. Christopher G. Healey - Perception in Visualization - Department of Computer Science, North Carolina State University - May 2005 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
[5]. Gene Chipman - Review of High Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns) - 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267,8,Preattentive Features&lt;br /&gt;
&lt;br /&gt;
[6]. Gene Chipman - Review of High Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns) - 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#262,6,Detecting&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=File:Preattentive_4.JPG&amp;diff=6994</id>
		<title>File:Preattentive 4.JPG</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=File:Preattentive_4.JPG&amp;diff=6994"/>
		<updated>2005-10-25T20:55:15Z</updated>

		<summary type="html">&lt;p&gt;Menace: Examples of two target detection tasks: (a) target can be detected preattentively because it possess the feature “filled”; &amp;lt;br&amp;gt;
(b) target cannot be detected preattentively because it has no visual feature that is unique from its distractors.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Examples of two target detection tasks: (a) target can be detected preattentively because it possesses the feature “filled”; &amp;lt;br&amp;gt;&lt;br /&gt;
(b) target cannot be detected preattentively because it has no visual feature that is unique from its distractors.&lt;br /&gt;
== Copyright status ==&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6991</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6991"/>
		<updated>2005-10-25T20:54:51Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Preattentive Processing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Preattentive Processing ==&lt;br /&gt;
;Definition: &lt;br /&gt;
{{Definition|Visualization is so effective and useful because it utilizes one of the channels to our brain that have the highest bandwidths: our eyes.  But even this channel can be used more  or less efficiently. One special property of our visual system is preattentive processing.[1]}}  &lt;br /&gt;
&lt;br /&gt;
{{Definition|One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. [2]}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preattentive Features&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Image:Features.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;A partial list of preattentive visual features, together with references to research that showed they were preattentive.&#039;&#039;[5]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Detecting the Red Object preattentively.&#039;&#039;[2]    &lt;br /&gt;
&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Detecting the Circle preattentively.&#039;&#039;[6]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_4.JPG]]&lt;br /&gt;
&lt;br /&gt;
Examples of two target detection tasks: (a) target can be detected preattentively because it possesses the feature “filled”; &amp;lt;br&amp;gt;&lt;br /&gt;
(b) target cannot be detected preattentively because it has no visual feature that is unique from its distractors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_3.JPG]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Region segregation by form and hue: (a) hue boundary is identified preattentively, even though form varies randomly&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form.&#039;&#039;[2]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Conclusion:&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”.[3]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [4]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Resources&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[1]. Robert Kosara, Silvia Miksch, Helwig Hauser - Focus+Context Taken Literally - Vienna University of Technology, VRVis Research Center, Austria - Access Date: 24.October.2005.&lt;br /&gt;
http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
[2]. Christopher G. Healey, Kellogg S. Booth and James T. Enns - High-Speed Visual Estimation Using Preattentive Processing - The University of British Columbia, June 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[3]. Jeremy M. Wolfe, Anne Treisman - What shall we do with the preattentive processing stage: Use it or lose it? - Todd S. Horowitz, poster presented at the Third Annual Meeting of the Vision Sciences Society, Sarasota, FL, May 2003 - Access Date: 24.October.2005.&lt;br /&gt;
http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
[4]. Christopher G. Healey - Perception in Visualization - Department of Computer Science, North Carolina State University - May 2005 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
[5]. Gene Chipman - Review of High Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns) - 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267,8,Preattentive Features&lt;br /&gt;
&lt;br /&gt;
[6]. Gene Chipman - Review of High Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns) - 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#262,6,Detecting&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6967</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6967"/>
		<updated>2005-10-25T20:46:50Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Preattentive Processing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Preattentive Processing ==&lt;br /&gt;
;Definition: &lt;br /&gt;
{{Definition|Visualization is so effective and useful because it utilizes one of the channels to our brain that have the highest bandwidths: our eyes.  But even this channel can be used more  or less efficiently. One special property of our visual system is preattentive processing.[1]}}  &lt;br /&gt;
&lt;br /&gt;
{{Definition|One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort.[2]}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preattentive Features&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Image:Features.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;A partial list of preattentive visual features, together with references to research that showed they were preattentive.&#039;&#039;[5]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Detecting the Red Object preattentively.&#039;&#039;[2]    &lt;br /&gt;
&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Detecting the Circle preattentively.&#039;&#039;[6]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_3.JPG]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Region segregation by form and hue: (a) hue boundary is identified preattentively, even though form varies randomly&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form.&#039;&#039;[2]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Conclusion:&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”.[3]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [4]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Resources&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[1]. Robert Kosara, Silvia Miksch, Helwig Hauser - Focus+Context Taken Literally - Vienna University of Technology, VRVis Research Center, Austria - Access Date: 24.October.2005.&lt;br /&gt;
http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
[2]. Christopher G. Healey, Kellogg S. Booth and James T. Enns - High-Speed Visual Estimation Using Preattentive Processing - The University of British Columbia, June 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[3]. Jeremy M. Wolfe, Anne Treisman - What shall we do with the preattentive processing stage:&lt;br /&gt;
Use it or lose it? - Poster presented by Todd S. Horowitz at the Third Annual Meeting of the Vision Sciences Society, Sarasota, FL, May 2003 - Access Date: 24.October.2005.&lt;br /&gt;
http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
[4]. Christopher G. Healey - Perception in Visualization - Department of Computer Science, North Carolina State University - May 2005 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
[5]. Gene Chipman - Review of High-Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns) - 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267,8,Preattentive Features&lt;br /&gt;
&lt;br /&gt;
[6]. Gene Chipman - Review of High-Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns) - 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#262,6,Detecting&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6966</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6966"/>
		<updated>2005-10-25T20:46:37Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Preattentive Processing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Preattentive Processing ==&lt;br /&gt;
;Definition: &lt;br /&gt;
{{Definition|Visualization is so effective and useful because it utilizes one of the channels to our brain that have the highest bandwidths: our eyes.  But even this channel can be used more  or less efficiently. One special property of our visual system is preattentive processing.[1]}}  &lt;br /&gt;
&lt;br /&gt;
{{Definition|One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort.[2]}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preattentive Features&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Image:Features.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;A partial list of preattentive visual features, together with references to research that showed they were preattentive.&#039;&#039;[5]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Detecting the Red Object preattentively.&#039;&#039;[2]    &lt;br /&gt;
&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Detecting the Circle preattentively.&#039;&#039;[6]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_3.JPG]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Region segregation by form and hue: (a) hue boundary is identified preattentively, even though form varies randomly&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form.&#039;&#039;[2]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Conclusion:&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”.[3]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [4]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Resources&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[1]. Robert Kosara, Silvia Miksch, Helwig Hauser - Focus+Context Taken Literally - Vienna University of Technology, VRVis Research Center, Austria - Access Date: 24.October.2005.&lt;br /&gt;
http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
[2]. Christopher G. Healey, Kellogg S. Booth and James T. Enns - High-Speed Visual Estimation Using Preattentive Processing - The University of British Columbia, June 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[3]. Jeremy M. Wolfe, Anne Treisman - What shall we do with the preattentive processing stage:&lt;br /&gt;
Use it or lose it? - Poster presented by Todd S. Horowitz at the Third Annual Meeting of the Vision Sciences Society, Sarasota, FL, May 2003 - Access Date: 24.October.2005.&lt;br /&gt;
http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
[4]. Christopher G. Healey - Perception in Visualization - Department of Computer Science, North Carolina State University - May 2005 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
[5]. Gene Chipman - Review of High-Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns) - 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267,8,Preattentive Features&lt;br /&gt;
&lt;br /&gt;
[6]. Gene Chipman - Review of High-Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns) - 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#262,6,Detecting&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6965</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6965"/>
		<updated>2005-10-25T20:46:22Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Preattentive Processing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Preattentive Processing ==&lt;br /&gt;
;Definition: &lt;br /&gt;
{{Definition|Visualization is so effective and useful because it utilizes one of the channels to our brain that have the highest bandwidths: our eyes.  But even this channel can be used more  or less efficiently. One special property of our visual system is preattentive processing.[1]}}  &lt;br /&gt;
&lt;br /&gt;
{{Definition|One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort.[2]}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preattentive Features&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Image:Features.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;A partial list of preattentive visual features, together with references to research that showed they were preattentive.&#039;&#039;[5]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Detecting the Red Object preattentively.&#039;&#039;[2]    &lt;br /&gt;
&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Detecting the Circle preattentively.&#039;&#039;[6]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_3.JPG]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Region segregation by form and hue: (a) hue boundary is identified preattentively, even though form varies randomly&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form.&#039;&#039;[2]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Conclusion:&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”.[3]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [4]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Resources&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[1]. Robert Kosara, Silvia Miksch, Helwig Hauser - Focus+Context Taken Literally - Vienna University of Technology, VRVis Research Center, Austria - Access Date: 24.October.2005.&lt;br /&gt;
http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
[2]. Christopher G. Healey, Kellogg S. Booth and James T. Enns - High-Speed Visual Estimation Using Preattentive Processing - The University of British Columbia, June 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[3]. Jeremy M. Wolfe, Anne Treisman - What shall we do with the preattentive processing stage:&lt;br /&gt;
Use it or lose it? - Poster presented by Todd S. Horowitz at the Third Annual Meeting of the Vision Sciences Society, Sarasota, FL, May 2003 - Access Date: 24.October.2005.&lt;br /&gt;
http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
[4]. Christopher G. Healey - Perception in Visualization - Department of Computer Science, North Carolina State University - May 2005 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
[5]. Gene Chipman - Review of High-Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns) - 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267,8,Preattentive Features&lt;br /&gt;
&lt;br /&gt;
[6]. Gene Chipman - Review of High-Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns) - 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#262,6,Detecting&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6553</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6553"/>
		<updated>2005-10-25T06:35:24Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Preattentive Processing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Preattentive Processing ==&lt;br /&gt;
;Definition: &lt;br /&gt;
  Visualization is so effective and useful because it utilizes one of the channels to our brain that have the highest bandwidths: our eyes. But even this channel can be used more or less efficiently. One special property of our visual system is preattentive processing. [1] &lt;br /&gt;
&lt;br /&gt;
  One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. [2] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preattentive Features&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Image:Features.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;A partial list of preattentive visual features, together with references to research that showed they were preattentive.[5]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Detecting the Red Object preattentively&#039;&#039;    &lt;br /&gt;
&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Detecting the Circle preattentively&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_3.JPG]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Region segregation by form and hue: (a) hue boundary is identified preattentively, even though form varies randomly in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Conclusion:&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”.[3]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [4]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Resources&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[1]. Robert Kosara, Silvia Miksch, Helwig Hauser - Focus+Context Taken Literally - Vienna University of Technology, VRVis Research Center, Austria - Access Date: 24.October.2005.&lt;br /&gt;
http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
[2]. Christopher G. Healey, Kellogg S. Booth and James T. Enns - High-Speed Visual Estimation Using Preattentive Processing - The University of British Columbia, June 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[3]. Jeremy M. Wolfe, Anne Treisman - What shall we do with the preattentive processing stage:&lt;br /&gt;
Use it or lose it? - Poster presented by Todd S. Horowitz at the Third Annual Meeting of the Vision Sciences Society, Sarasota, FL, May 2003 - Access Date: 24.October.2005.&lt;br /&gt;
http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
[4]. Christopher G. Healey - Perception in Visualization - Department of Computer Science, North Carolina State University - May 2005 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
[5]. Gene Chipman - Review of High-Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns) - 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267,8,Preattentive Features&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6552</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6552"/>
		<updated>2005-10-25T06:35:00Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Preattentive Processing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Preattentive Processing ==&lt;br /&gt;
;Definition: &lt;br /&gt;
  Visualization is so effective and useful because it utilizes one of the channels to our brain that have the highest bandwidths: our eyes. But even this channel can be used more or less efficiently. One special property of our visual system is preattentive processing. [1] &lt;br /&gt;
&lt;br /&gt;
  One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. [2] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preattentive Features&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Image:Features.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;A partial list of preattentive visual features, together with references to research that showed they were preattentive.[5]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Detecting the Red Object preattentively&#039;&#039;    &lt;br /&gt;
&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Detecting the Circle preattentively&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_3.JPG]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Region segregation by form and hue: (a) hue boundary is identified preattentively, even though form varies randomly in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Conclusion:&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”.[3]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [4]&lt;br /&gt;
&lt;br /&gt;
;Resources&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[1]. Robert Kosara, Silvia Miksch, Helwig Hauser - Focus+Context Taken Literally - Vienna University of Technology, VRVis Research Center, Austria - Access Date: 24.October.2005.&lt;br /&gt;
http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
[2]. Christopher G. Healey, Kellogg S. Booth and James T. Enns - High-Speed Visual Estimation Using Preattentive Processing - The University of British Columbia, June 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[3]. Jeremy M. Wolfe, Anne Treisman - What shall we do with the preattentive processing stage:&lt;br /&gt;
Use it or lose it? - Poster presented by Todd S. Horowitz at the Third Annual Meeting of the Vision Sciences Society, Sarasota, FL, May 2003 - Access Date: 24.October.2005.&lt;br /&gt;
http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
[4]. Christopher G. Healey - Perception in Visualization - Department of Computer Science, North Carolina State University - May 2005 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
[5]. Gene Chipman - Review of High-Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns) - 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267,8,Preattentive Features&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6551</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6551"/>
		<updated>2005-10-25T06:31:59Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Preattentive Processing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Preattentive Processing ==&lt;br /&gt;
;Definition: &lt;br /&gt;
  Visualization is so effective and useful because it utilizes one of the channels to our brain that have the highest bandwidths: our eyes. But even this channel can be used more or less efficiently. One special property of our visual system is preattentive processing. [1] &lt;br /&gt;
&lt;br /&gt;
  One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. [2] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preattentive Features&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Image:Features.JPG]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;A partial list of preattentive visual features, together with references to research that showed they were preattentive.[5]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Detecting the Red Object preattentively&#039;&#039;    &lt;br /&gt;
&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Detecting the Circle preattentively&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_3.JPG]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Region segregation by form and hue: (a) hue boundary is identified preattentively, even though form varies randomly in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Conclusion:&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”.[3]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [4]&lt;br /&gt;
&lt;br /&gt;
;Resources&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[1]. Robert Kosara, Silvia Miksch, Helwig Hauser - Focus+Context Taken Literally - Vienna University of Technology, VRVis Research Center, Austria - Access Date: 24.October.2005.&lt;br /&gt;
http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
[2]. Christopher G. Healey, Kellogg S. Booth and James T. Enns - High-Speed Visual Estimation Using Preattentive Processing - The University of British Columbia, June 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[3]. Jeremy M. Wolfe, Anne Treisman - What shall we do with the preattentive processing stage:&lt;br /&gt;
Use it or lose it? - Poster presented by Todd S. Horowitz at the Third Annual Meeting of the Vision Sciences Society, Sarasota, FL, May 2003 - Access Date: 24.October.2005.&lt;br /&gt;
http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
[4]. Christopher G. Healey - Perception in Visualization - Department of Computer Science, North Carolina State University - May 2005 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
[5]. Gene Chipman - Review of High-Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns) - 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267,8,Preattentive Features&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6550</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6550"/>
		<updated>2005-10-25T06:31:29Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Preattentive Processing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Preattentive Processing ==&lt;br /&gt;
;Definition: &lt;br /&gt;
  Visualization is so effective and useful because it utilizes one of the channels to our brain that have the highest bandwidths: our eyes. But even this channel can be used more or less efficiently. One special property of our visual system is preattentive processing. [1] &lt;br /&gt;
&lt;br /&gt;
  One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. [2] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preattentive Features&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Image:Features.JPG]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;A partial list of preattentive visual features, together with references to research that showed they were preattentive.[5]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Detecting the Red Object preattentively&#039;&#039;    &lt;br /&gt;
&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Detecting the Circle preattentively&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_3.JPG]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Region segregation by form and hue: (a) hue boundary is identified preattentively, even though form varies randomly in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Conclusion:&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”.[3]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [4]&lt;br /&gt;
&lt;br /&gt;
;Resources&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[1]. Robert Kosara, Silvia Miksch, Helwig Hauser - Focus+Context Taken Literally - Vienna University of Technology, VRVis Research Center, Austria - Access Date: 24.October.2005.&lt;br /&gt;
http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
[2]. Christopher G. Healey, Kellogg S. Booth and James T. Enns - High-Speed Visual Estimation Using Preattentive Processing - The University of British Columbia, June 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[3]. Jeremy M. Wolfe, Anne Treisman, Todd S. Horowitz - What Shall We Do with the Preattentive Processing Stage: Use It or Lose It? - Poster presented at the Third Annual Meeting of the Vision Sciences Society, Sarasota, FL, May 2003 - Access Date: 24.October.2005.&lt;br /&gt;
http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
[4]. Christopher G. Healey - Perception in Visualization - Department of Computer Science, North Carolina State University - May 2005 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
[5]. Gene Chipman - Review of High-Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns) - 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267,8,Preattentive Features&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6549</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6549"/>
		<updated>2005-10-25T06:30:19Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Preattentive Processing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Preattentive Processing ==&lt;br /&gt;
;Definition: &lt;br /&gt;
  Visualization is so effective and useful because it utilizes one of the highest-bandwidth channels to our brain: our eyes. But even this channel can be used more or less efficiently. One special property of our visual system is preattentive processing. [1] &lt;br /&gt;
&lt;br /&gt;
  One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e., without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. [2] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preattentive Features&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Image:Features.JPG]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;A partial list of preattentive visual features, together with references to research that showed they were preattentive.[5]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Detecting the Red Object preattentively&#039;&#039;    &lt;br /&gt;
&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Detecting the Circle preattentively&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattentive_3.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Region segregation by form and hue: (a) hue boundary is identified preattentively, even though form varies randomly in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Conclusion:&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”.[3]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [4]&lt;br /&gt;
&lt;br /&gt;
;Resources&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[1]. Robert Kosara, Silvia Miksch, Helwig Hauser - Focus+Context Taken Literally - Vienna University of Technology, VRVis Research Center, Austria - Access Date: 24.October.2005.&lt;br /&gt;
http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
[2]. Christopher G. Healey, Kellogg S. Booth and James T. Enns - High-Speed Visual Estimation Using Preattentive Processing - The University of British Columbia, June 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[3]. Jeremy M. Wolfe, Anne Treisman, Todd S. Horowitz - What Shall We Do with the Preattentive Processing Stage: Use It or Lose It? - Poster presented at the Third Annual Meeting of the Vision Sciences Society, Sarasota, FL, May 2003 - Access Date: 24.October.2005.&lt;br /&gt;
http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
[4]. Christopher G. Healey - Perception in Visualization - Department of Computer Science, North Carolina State University - May 2005 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
[5]. Gene Chipman - Review of High-Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns) - 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267,8,Preattentive Features&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=File:Preattentive_3.JPG&amp;diff=6548</id>
		<title>File:Preattentive 3.JPG</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=File:Preattentive_3.JPG&amp;diff=6548"/>
		<updated>2005-10-25T06:28:45Z</updated>

		<summary type="html">&lt;p&gt;Menace: Region segregation by form and hue: (a) hue boundary is identified preattentively, even
though form varies randomly in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Region segregation by form and hue: (a) hue boundary is identified preattentively, even&lt;br /&gt;
though form varies randomly in the two regions; (b) random hue variations interfere with the identification of a region boundary based on form.&lt;br /&gt;
== Copyright status ==&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6545</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6545"/>
		<updated>2005-10-25T05:55:09Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Preattentive Processing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Preattentive Processing ==&lt;br /&gt;
;Definition: &lt;br /&gt;
  Visualization is so effective and useful because it utilizes one of the highest-bandwidth channels to our brain: our eyes. But even this channel can be used more or less efficiently. One special property of our visual system is preattentive processing. [2] &lt;br /&gt;
&lt;br /&gt;
  One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e., without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. [3] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preattentive Features&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Image:Features.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;A partial list of preattentive visual features, together with references to research that showed they were preattentive.[5]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Detecting the Red Object preattentively&#039;&#039;    &lt;br /&gt;
&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Detecting the Circle preattentively&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Conclusion:&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”.[1]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [4]&lt;br /&gt;
&lt;br /&gt;
;Resources&lt;br /&gt;
&lt;br /&gt;
[1]. Jeremy M. Wolfe, Anne Treisman, Todd S. Horowitz - What Shall We Do with the Preattentive Processing Stage: Use It or Lose It? - Poster presented at the Third Annual Meeting of the Vision Sciences Society, Sarasota, FL, May 2003 - Access Date: 24.October.2005.&lt;br /&gt;
http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
[2]. Robert Kosara, Silvia Miksch, Helwig Hauser - Focus+Context Taken Literally - Vienna University of Technology, VRVis Research Center, Austria - Access Date: 24.October.2005.&lt;br /&gt;
http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
[3]. Christopher G. Healey, Kellogg S. Booth and James T. Enns - High-Speed Visual Estimation Using Preattentive Processing - The University of British Columbia, June 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[4]. Christopher G. Healey - Perception in Visualization - Department of Computer Science, North Carolina State University - May 2005 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
[5]. Gene Chipman - Review of High-Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns) - 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267,8,Preattentive Features&lt;br /&gt;
&lt;br /&gt;
[6]. Colin Ware and Christopher G. Healey - Human Cognition Process &amp;amp; Perception in Visualization - 12.August.2005 - Access Date: 24.October.2005.&lt;br /&gt;
http://www-staff.it.uts.edu.au/~maolin/32146_DIV/lec2/lecture2_1.ppt#895,14,Example:  Color Selection&lt;br /&gt;
&lt;br /&gt;
[7]. Colin Ware and Christopher G. Healey - Human Cognition Process &amp;amp; Perception in Visualization - 12.August.2005 - Access Date: 24.October.2005.&lt;br /&gt;
http://www-staff.it.uts.edu.au/~maolin/32146_DIV/lec2/lecture2_1.ppt#896,15,Example: Shape Selection&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6544</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6544"/>
		<updated>2005-10-25T05:50:17Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Preattentive Processing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Preattentive Processing ==&lt;br /&gt;
;Definition: &lt;br /&gt;
  Visualization is so effective and useful because it utilizes one of the highest-bandwidth channels to our brain: our eyes. But even this channel can be used more or less efficiently. One special property of our visual system is preattentive processing. [2] &lt;br /&gt;
&lt;br /&gt;
  One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e., without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. [3]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preattentive Features&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Image:Features.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;A partial list of preattentive visual features, together with references to research that showed they were preattentive.[5]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]    &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Detecting the Red Object preattentively&#039;&#039;    &lt;br /&gt;
&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Detecting the Circle preattentively&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Conclusion:&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”.[1]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [4]&lt;br /&gt;
&lt;br /&gt;
;Resources&lt;br /&gt;
&lt;br /&gt;
[1]. Jeremy M. Wolfe, Anne Treisman, Todd S. Horowitz - What Shall We Do with the Preattentive Processing Stage: Use It or Lose It? - Poster presented at the Third Annual Meeting of the Vision Sciences Society, Sarasota, FL, May 2003 - Access Date: 24.October.2005.&lt;br /&gt;
http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
[2]. Robert Kosara, Silvia Miksch, Helwig Hauser - Focus+Context Taken Literally - Vienna University of Technology, VRVis Research Center, Austria - Access Date: 24.October.2005.&lt;br /&gt;
http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
[3]. Christopher G. Healey, Kellogg S. Booth and James T. Enns - High-Speed Visual Estimation Using Preattentive Processing - The University of British Columbia, June 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[4]. Christopher G. Healey - Perception in Visualization - Department of Computer Science, North Carolina State University - May 2005 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
[5]. Gene Chipman - Review of High-Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns) - 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267,8,Preattentive Features&lt;br /&gt;
&lt;br /&gt;
[6]. Colin Ware and Christopher G. Healey - Human Cognition Process &amp;amp; Perception in Visualization - 12.August.2005 - Access Date: 24.October.2005.&lt;br /&gt;
http://www-staff.it.uts.edu.au/~maolin/32146_DIV/lec2/lecture2_1.ppt#895,14,Example:  Color Selection&lt;br /&gt;
&lt;br /&gt;
[7]. Colin Ware and Christopher G. Healey - Human Cognition Process &amp;amp; Perception in Visualization - 12.August.2005 - Access Date: 24.October.2005.&lt;br /&gt;
http://www-staff.it.uts.edu.au/~maolin/32146_DIV/lec2/lecture2_1.ppt#896,15,Example: Shape Selection&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6543</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6543"/>
		<updated>2005-10-25T05:41:25Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Preattentive Processing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Preattentive Processing ==&lt;br /&gt;
;Definition: &lt;br /&gt;
  Visualization is so effective and useful because it utilizes one of the highest-bandwidth channels to our brain: our eyes. But even this channel can be used more or less efficiently. One special property of our visual system is preattentive processing. [2]&lt;br /&gt;
&lt;br /&gt;
  One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e., without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. [3]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preattentive Features&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Image:Features.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;A partial list of preattentive visual features, together with references to research that showed they were preattentive.[5]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Detecting the Red Object preattentively&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Detecting the Circle preattentively&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
;Conclusion:&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”.[1]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [4]&lt;br /&gt;
&lt;br /&gt;
;Resources&lt;br /&gt;
&lt;br /&gt;
[1]. Jeremy M. Wolfe, Anne Treisman, Todd S. Horowitz - What Shall We Do with the Preattentive Processing Stage: Use It or Lose It? - Poster presented at the Third Annual Meeting of the Vision Sciences Society, Sarasota, FL, May 2003 - Access Date: 24.October.2005.&lt;br /&gt;
http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
[2]. Robert Kosara, Silvia Miksch, Helwig Hauser - Focus+Context Taken Literally - Vienna University of Technology, VRVis Research Center, Austria - Access Date: 24.October.2005.&lt;br /&gt;
http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
[3]. Christopher G. Healey, Kellogg S. Booth and James T. Enns - High-Speed Visual Estimation Using Preattentive Processing - The University of British Columbia, June 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[4]. Christopher G. Healey - Perception in Visualization - Department of Computer Science, North Carolina State University - May 2005 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
[5]. Gene Chipman - Review of High-Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns) - 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267,8,Preattentive Features&lt;br /&gt;
&lt;br /&gt;
[6]. Colin Ware and Christopher G. Healey - Human Cognition Process &amp;amp; Perception in Visualization - 12.August.2005 - Access Date: 24.October.2005.&lt;br /&gt;
http://www-staff.it.uts.edu.au/~maolin/32146_DIV/lec2/lecture2_1.ppt#895,14,Example:  Color Selection&lt;br /&gt;
&lt;br /&gt;
[7]. Colin Ware and Christopher G. Healey - Human Cognition Process &amp;amp; Perception in Visualization - 12.August.2005 - Access Date: 24.October.2005.&lt;br /&gt;
http://www-staff.it.uts.edu.au/~maolin/32146_DIV/lec2/lecture2_1.ppt#896,15,Example: Shape Selection&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=File:Preattantive_2.jpg&amp;diff=6542</id>
		<title>File:Preattantive 2.jpg</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=File:Preattantive_2.jpg&amp;diff=6542"/>
		<updated>2005-10-25T05:40:24Z</updated>

		<summary type="html">&lt;p&gt;Menace: Detecting the Circle preattentively&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Detecting the Circle preattentively&lt;br /&gt;
== Copyright status ==&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#262,6,Detecting the Circle preattentively&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=File:Preattantive_2.jpg.jpg&amp;diff=6541</id>
		<title>File:Preattantive 2.jpg.jpg</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=File:Preattantive_2.jpg.jpg&amp;diff=6541"/>
		<updated>2005-10-25T05:38:44Z</updated>

		<summary type="html">&lt;p&gt;Menace: Detecting the Circle preattentively&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Detecting the Circle preattentively&lt;br /&gt;
== Copyright status ==&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#262,6,Detecting the Circle preattentively&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6540</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6540"/>
		<updated>2005-10-25T05:38:03Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Preattentive Processing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Preattentive Processing ==&lt;br /&gt;
;Definition: &lt;br /&gt;
  Visualization is so effective and useful because it utilizes one of the highest-bandwidth channels to our brain: our eyes. But even this channel can be used more or less efficiently. One special property of our visual system is preattentive processing. [2]&lt;br /&gt;
&lt;br /&gt;
  One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e., without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. [3]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preattentive Features&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Image:Features.jpg]]&lt;br /&gt;
&lt;br /&gt;
A partial list of preattentive visual features, together with references to research that showed they were preattentive.[5]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]&lt;br /&gt;
&lt;br /&gt;
Detecting the Red Object preattentively&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_2.jpg]]&lt;br /&gt;
&lt;br /&gt;
Detecting the Circle preattentively&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Conclusion:&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”.[1]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [4]&lt;br /&gt;
&lt;br /&gt;
;Resources&lt;br /&gt;
&lt;br /&gt;
[1]. Jeremy M. Wolfe, Anne Treisman, Todd S. Horowitz - What Shall We Do with the Preattentive Processing Stage: Use It or Lose It? - Poster presented at the Third Annual Meeting of the Vision Sciences Society, Sarasota, FL, May 2003 - Access Date: 24.October.2005.&lt;br /&gt;
http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
[2]. Robert Kosara, Silvia Miksch, Helwig Hauser - Focus+Context Taken Literally - Vienna University of Technology, VRVis Research Center, Austria - Access Date: 24.October.2005.&lt;br /&gt;
http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
[3]. Christopher G. Healey, Kellogg S. Booth and James T. Enns - High-Speed Visual Estimation Using Preattentive Processing - The University of British Columbia, June 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[4]. Christopher G. Healey - Perception in Visualization - Department of Computer Science, North Carolina State University - May 2005 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
[5]. Gene Chipman - Review of High-Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns) - 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267,8,Preattentive Features&lt;br /&gt;
&lt;br /&gt;
[6]. Colin Ware and Christopher G. Healey - Human Cognition Process &amp;amp; Perception in Visualization - 12.August.2005 - Access Date: 24.October.2005.&lt;br /&gt;
http://www-staff.it.uts.edu.au/~maolin/32146_DIV/lec2/lecture2_1.ppt#895,14,Example:  Color Selection&lt;br /&gt;
&lt;br /&gt;
[7]. Colin Ware and Christopher G. Healey - Human Cognition Process &amp;amp; Perception in Visualization - 12.August.2005 - Access Date: 24.October.2005.&lt;br /&gt;
http://www-staff.it.uts.edu.au/~maolin/32146_DIV/lec2/lecture2_1.ppt#896,15,Example: Shape Selection&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6539</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6539"/>
		<updated>2005-10-25T05:36:57Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Preattentive Processing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Preattentive Processing ==&lt;br /&gt;
;Definition: &lt;br /&gt;
  Visualization is so effective and useful because it utilizes one of the highest-bandwidth channels to our brain: our eyes. But even this channel can be used more or less efficiently. One special property of our visual system is preattentive processing. [2]&lt;br /&gt;
&lt;br /&gt;
  One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e., without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. [3]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preattentive Features&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Image:Features.jpg]]&lt;br /&gt;
&lt;br /&gt;
A partial list of preattentive visual features, together with references to research that showed they were preattentive.[5]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]&lt;br /&gt;
&lt;br /&gt;
Detecting the Red Object preattentively&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Conclusion:&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”.[1]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [4]&lt;br /&gt;
&lt;br /&gt;
;Resources&lt;br /&gt;
&lt;br /&gt;
[1]. Jeremy M. Wolfe, Anne Treisman, Todd S. Horowitz - What shall we do with the preattentive processing stage:&lt;br /&gt;
Use it or lose it? - Poster presented at the Third Annual Meeting of the Vision Sciences Society, Sarasota, FL, May 2003 - Access Date: 24.October.2005&lt;br /&gt;
http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
[2]. Robert Kosara, Silvia Miksch, Helwig Hauser - Focus+Context Taken Literally - Vienna University of Technology, VRVis Research Center, Austria - Access Date: 24.October.2005.&lt;br /&gt;
http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
[3]. Christopher G. Healey, Kellogg S. Booth and James T. Enns - High-Speed Visual Estimation Using Preattentive Processing - The University of British Columbia, June 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[4]. Christopher G. Healey - Perception in Visualization - Department of Computer Science, North Carolina State University - May 2005 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
[5]. Gene Chipman - Review of High-Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns) - 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267,8,Preattentive Features&lt;br /&gt;
&lt;br /&gt;
[6]. Colin Ware and Christopher G. Healey - Human Cognition Process &amp;amp; Perception in Visualization - 12.August.2005 - Access Date: 24.October.2005&lt;br /&gt;
&lt;br /&gt;
[7]. Colin Ware and Christopher G. Healey - Human Cognition Process &amp;amp; Perception in Visualization - 12.August.2005 - Access Date: 24.October.2005&lt;br /&gt;
http://www-staff.it.uts.edu.au/~maolin/32146_DIV/lec2/lecture2_1.ppt#896,15,Example: Shape Selection&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6538</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6538"/>
		<updated>2005-10-25T05:29:40Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Preattentive Processing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Preattentive Processing ==&lt;br /&gt;
;Definition: &lt;br /&gt;
  Visualization is so effective and useful because it utilizes one of the highest-bandwidth channels to our brain: our eyes. But even this channel can be used more or less efficiently. One special property of our visual system is preattentive processing. [2]&lt;br /&gt;
&lt;br /&gt;
  One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. [3]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preattentive Features&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Image:Features.jpg]]&lt;br /&gt;
&lt;br /&gt;
A partial list of preattentive visual features, together with references to research that showed they were preattentive.[5]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]&lt;br /&gt;
Detecting the Red Object preattentively&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Conclusion:&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”.[1]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [4]&lt;br /&gt;
&lt;br /&gt;
;Resources&lt;br /&gt;
&lt;br /&gt;
[1]. Jeremy M. Wolfe, Anne Treisman, Todd S. Horowitz - What shall we do with the preattentive processing stage:&lt;br /&gt;
Use it or lose it? - Poster presented at the Third Annual Meeting of the Vision Sciences Society, Sarasota, FL, May 2003 - Access Date: 24.October.2005&lt;br /&gt;
http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
[2]. Robert Kosara, Silvia Miksch, Helwig Hauser - Focus+Context Taken Literally - Vienna University of Technology, VRVis Research Center, Austria - Access Date: 24.October.2005.&lt;br /&gt;
http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
[3]. Christopher G. Healey, Kellogg S. Booth and James T. Enns - High-Speed Visual Estimation Using Preattentive Processing - The University of British Columbia, June 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[4]. Christopher G. Healey - Perception in Visualization - Department of Computer Science, North Carolina State University - May 2005 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
[5]. Gene Chipman - Review of High-Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns) - 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267,8,Preattentive Features&lt;br /&gt;
&lt;br /&gt;
[6]. Colin Ware and Christopher G. Healey - Human Cognition Process &amp;amp; Perception in Visualization - 12.August.2005 - Access Date: 24.October.2005&lt;br /&gt;
&lt;br /&gt;
[7]. Colin Ware and Christopher G. Healey - Human Cognition Process &amp;amp; Perception in Visualization - 12.August.2005 - Access Date: 24.October.2005&lt;br /&gt;
http://www-staff.it.uts.edu.au/~maolin/32146_DIV/lec2/lecture2_1.ppt#896,15,Example: Shape Selection&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6537</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6537"/>
		<updated>2005-10-25T05:28:57Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Preattentive Processing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Preattentive Processing ==&lt;br /&gt;
;Definition: &lt;br /&gt;
  Visualization is so effective and useful because it utilizes one of the highest-bandwidth channels to our brain: our eyes. But even this channel can be used more or less efficiently. One special property of our visual system is preattentive processing. [2]&lt;br /&gt;
&lt;br /&gt;
  One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. [3]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preattentive Features&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Image:Features.jpg]]&lt;br /&gt;
&lt;br /&gt;
A partial list of preattentive visual features, together with references to research that showed they were preattentive.[5]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Conclusion:&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”.[1]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [4]&lt;br /&gt;
&lt;br /&gt;
;Resources&lt;br /&gt;
&lt;br /&gt;
[1]. Jeremy M. Wolfe, Anne Treisman, Todd S. Horowitz - What shall we do with the preattentive processing stage:&lt;br /&gt;
Use it or lose it? - Poster presented at the Third Annual Meeting of the Vision Sciences Society, Sarasota, FL, May 2003 - Access Date: 24.October.2005&lt;br /&gt;
http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
[2]. Robert Kosara, Silvia Miksch, Helwig Hauser - Focus+Context Taken Literally - Vienna University of Technology, VRVis Research Center, Austria - Access Date: 24.October.2005.&lt;br /&gt;
http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
[3]. Christopher G. Healey, Kellogg S. Booth and James T. Enns - High-Speed Visual Estimation Using Preattentive Processing - The University of British Columbia, June 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[4]. Christopher G. Healey - Perception in Visualization - Department of Computer Science, North Carolina State University - May 2005 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
[5]. Gene Chipman - Review of High-Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns) - 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267,8,Preattentive Features&lt;br /&gt;
&lt;br /&gt;
[6]. Colin Ware and Christopher G. Healey - Human Cognition Process &amp;amp; Perception in Visualization - 12.August.2005 - Access Date: 24.October.2005&lt;br /&gt;
&lt;br /&gt;
[7]. Colin Ware and Christopher G. Healey - Human Cognition Process &amp;amp; Perception in Visualization - 12.August.2005 - Access Date: 24.October.2005&lt;br /&gt;
http://www-staff.it.uts.edu.au/~maolin/32146_DIV/lec2/lecture2_1.ppt#896,15,Example: Shape Selection&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=File:Preattantive_1.jpg&amp;diff=6536</id>
		<title>File:Preattantive 1.jpg</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=File:Preattantive_1.jpg&amp;diff=6536"/>
		<updated>2005-10-25T05:28:34Z</updated>

		<summary type="html">&lt;p&gt;Menace: Detecting the Red Object preattentively&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Detecting the Red Object preattentively&lt;br /&gt;
== Copyright status ==&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#261,5,Detecting the Red Object preattentively&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6535</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6535"/>
		<updated>2005-10-25T05:26:42Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Preattentive Processing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Preattentive Processing ==&lt;br /&gt;
;Definition: &lt;br /&gt;
  Visualization is so effective and useful because it utilizes one of the highest-bandwidth channels to our brain: our eyes. But even this channel can be used more or less efficiently. One special property of our visual system is preattentive processing. [2]&lt;br /&gt;
&lt;br /&gt;
  One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. [3]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preattentive Features&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Image:Features.jpg]]&lt;br /&gt;
&lt;br /&gt;
A partial list of preattentive visual features, together with references to research that showed they were preattentive.[5]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Image:Preattantive_1.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Conclusion:&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”.[1]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [4]&lt;br /&gt;
&lt;br /&gt;
;Resources&lt;br /&gt;
&lt;br /&gt;
[1]. Jeremy M. Wolfe, Anne Treisman, Todd S. Horowitz - What shall we do with the preattentive processing stage:&lt;br /&gt;
Use it or lose it? - Poster presented at the Third Annual Meeting of the Vision Sciences Society, Sarasota, FL, May 2003 - Access Date: 24.October.2005&lt;br /&gt;
http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
[2]. Robert Kosara, Silvia Miksch, Helwig Hauser - Focus+Context Taken Literally - Vienna University of Technology, VRVis Research Center, Austria - Access Date: 24.October.2005.&lt;br /&gt;
http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
[3]. Christopher G. Healey, Kellogg S. Booth and James T. Enns - High-Speed Visual Estimation Using Preattentive Processing - The University of British Columbia, June 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[4]. Christopher G. Healey - Perception in Visualization - Department of Computer Science, North Carolina State University - May 2005 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
[5]. Gene Chipman - Review of High-Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns) - 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267,8,Preattentive Features&lt;br /&gt;
&lt;br /&gt;
[6]. Colin Ware and Christopher G. Healey - Human Cognition Process &amp;amp; Perception in Visualization - 12.August.2005 - Access Date: 24.October.2005&lt;br /&gt;
&lt;br /&gt;
[7]. Colin Ware and Christopher G. Healey - Human Cognition Process &amp;amp; Perception in Visualization - 12.August.2005 - Access Date: 24.October.2005&lt;br /&gt;
http://www-staff.it.uts.edu.au/~maolin/32146_DIV/lec2/lecture2_1.ppt#896,15,Example: Shape Selection&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
	<entry>
		<id>https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6534</id>
		<title>Teaching:TUW - UE InfoVis WS 2005/06 - Gruppe G3 - Aufgabe 1 - Preattentive Processing</title>
		<link rel="alternate" type="text/html" href="https://infovis-wiki.net/w/index.php?title=Teaching:TUW_-_UE_InfoVis_WS_2005/06_-_Gruppe_G3_-_Aufgabe_1_-_Preattentive_Processing&amp;diff=6534"/>
		<updated>2005-10-25T05:05:04Z</updated>

		<summary type="html">&lt;p&gt;Menace: /* Preattentive Processing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Preattentive Processing ==&lt;br /&gt;
;Definition: &lt;br /&gt;
  Visualization is so effective and useful because it utilizes one of the highest-bandwidth channels to our brain: our eyes. But even this channel can be used more or less efficiently. One special property of our visual system is preattentive processing. [2]&lt;br /&gt;
&lt;br /&gt;
  One very interesting result of vision research over the past 20 years has been the discovery of a limited set of visual properties that are processed preattentively (i.e. without the need for focused attention). Typically, tasks that can be performed on large multi-element displays in 200 milliseconds or less are considered preattentive. This is because eye movements take at least 200 milliseconds to initiate. Any perception that is possible within this time frame involves only the information available in a single glimpse. Random placement of the elements in the displays ensures that attention cannot be prefocused on any particular location. Observers report that these tasks can be completed with very little effort. [3]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preattentive Features&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Image:Features.jpg]]&lt;br /&gt;
&lt;br /&gt;
A partial list of preattentive visual features, together with references to research that showed they were preattentive.[5]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
;Conclusion:&lt;br /&gt;
Any visual processing of that item prior to the act of selection can be called “preattentive”.[1]&lt;br /&gt;
&lt;br /&gt;
Preattentive processing can help to rapidly draw the focus of attention to a target with a unique visual feature (i.e., little or no searching is required in the preattentive case). [4]&lt;br /&gt;
&lt;br /&gt;
;Resources&lt;br /&gt;
&lt;br /&gt;
[1]. Jeremy M. Wolfe, Anne Treisman, Todd S. Horowitz - What shall we do with the preattentive processing stage:&lt;br /&gt;
Use it or lose it? - Poster presented at the Third Annual Meeting of the Vision Sciences Society, Sarasota, FL, May 2003 - Access Date: 24.October.2005&lt;br /&gt;
http://search.bwh.harvard.edu/links/talks/VSS03-JMW.pdf&lt;br /&gt;
&lt;br /&gt;
[2]. Robert Kosara, Silvia Miksch, Helwig Hauser - Focus+Context Taken Literally - Vienna University of Technology, VRVis Research Center, Austria - Access Date: 24.October.2005.&lt;br /&gt;
http://www.kosara.net/papers/Kosara_CGA_2002.pdf&lt;br /&gt;
&lt;br /&gt;
[3]. Christopher G. Healey, Kellogg S. Booth and James T. Enns - High-Speed Visual Estimation Using Preattentive Processing - The University of British Columbia, June 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/download/tochi.96.pdf&lt;br /&gt;
&lt;br /&gt;
[4]. Christopher G. Healey - Perception in Visualization - Department of Computer Science, North Carolina State University - May 2005 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.csc.ncsu.edu/faculty/healey/PP/index.html#Tri_Cog_Psych:80&lt;br /&gt;
&lt;br /&gt;
[5]. Gene Chipman - Review of High-Speed Visual Estimation Using Preattentive Processing (Healey, Booth and Enns) - 1996 - Access Date: 24.October.2005.&lt;br /&gt;
http://www.cs.umd.edu/class/spring2002/cmsc838f/preattentive.ppt#267,8,Preattentive Features&lt;br /&gt;
&lt;br /&gt;
[6]. Colin Ware and Christopher G. Healey - Human Cognition Process &amp;amp; Perception in Visualization - 12.August.2005 - Access Date: 24.October.2005&lt;br /&gt;
&lt;br /&gt;
[7]. Colin Ware and Christopher G. Healey - Human Cognition Process &amp;amp; Perception in Visualization - 12.August.2005 - Access Date: 24.October.2005&lt;br /&gt;
http://www-staff.it.uts.edu.au/~maolin/32146_DIV/lec2/lecture2_1.ppt#896,15,Example: Shape Selection&lt;/div&gt;</summary>
		<author><name>Menace</name></author>
	</entry>
</feed>