2018-06-27: CFP Deadline Extension: IEEE VIS 2018 Workshop on Machine Learning from User Interaction for Visualization and Analytics


Please note below that the paper submission deadline has been extended until July 15.

IEEE VIS 2018 Workshop: MACHINE LEARNING FROM USER INTERACTION FOR VISUALIZATION AND ANALYTICS

CALL FOR PAPERS

  • Date and Location: October 21 or 22, 2018 in Berlin, Germany

Website

The goal of this workshop is to bring together researchers from across the VIS community (SciVis, InfoVis, and VAST) to share their knowledge and build collaborations at the intersection of the Machine Learning and Visualization fields, with a focus on learning from user interaction. Our intention is to draw on expertise from across all fields of VIS to generate open discussion about how we currently learn from user interaction and where future research in this area can go. We hope to foster discussion of systems, interaction models, and interaction techniques across the VIS community, rather than keeping these discussions contained within the individual SciVis, InfoVis, and VAST fields, as is currently the case. Further, we hope to collaboratively create a research agenda, based on the discussion during the workshop, that explores the future of machine learning with user interaction.

  • WORKSHOP TOPICS

The workshop will focus on issues and opportunities related to using machine learning to learn from user interaction in the course of data visualization and analysis. Specifically, we will address research questions including:

  - How are machine learning algorithms currently learning from user interaction, and what other possibilities exist?
  - What kinds of interactions can provide feedback to machine learning algorithms?
  - What can machine learning algorithms learn from interactions?
  - Which machine learning algorithms are most applicable in this domain?
  - How can machine learning algorithms be designed to enable user interaction and feedback?
  - How can visualizations and interactions be designed to exploit machine learning algorithms?
  - How can visualization system architectures be designed to support machine learning?
  - How should we manage conflicts between the user's intent and the data or machine learning algorithm capabilities?
  - How can we evaluate systems that incorporate both machine learning algorithms and user interaction together?
  - How can machine learning and user interaction together make both computation and user cognition more efficient?
  - How can we support the sensemaking process by learning from user interaction?
  • SUBMISSIONS

We have two submission tracks: papers and posters.

  • PAPERS

We invite research and position papers between 5 and 10 pages in length (NOT including references). All submissions must be formatted according to the VGTC conference style template (i.e., NOT the journal style template that full papers use). Papers are to be submitted online through the Precision Conference System *at the Machine Learning from User Interaction for Visualization and Analytics track*. All papers accepted for presentation at the workshop will be published on IEEE Xplore and linked from the workshop website. All papers should contain full author names and affiliations. If applicable, a link to a short video (up to 5 min. in length) may also be submitted. The papers will be juried by the organizers and selected external reviewers and will be chosen according to relevance, quality, and likelihood that they will stimulate and contribute to the discussion. At least one author of each accepted paper needs to register for the conference (even if only for the workshop). Registration information will be available on the [http://ieeevis.org/year/2018/welcome IEEE VIS website].

Important Dates

  - Submission deadline: July 15, 2018 (previously June 30, 2018)
  - Author notification: August 6, 2018 (previously July 31, 2018)
  - Camera-ready deadline: August 20, 2018
  - Speaker Schedule Available: September 15, 2018
  - Workshop: October 21 or 22, 2018
  • POSTERS

We invite late-breaking work, as well as contributions to this area from other research domains, in the form of extended abstracts between 2 and 4 pages in length (NOT including references). All submissions must be formatted according to the VGTC conference style template (i.e., *NOT the journal style template that full papers use*). Extended abstracts are to be submitted online through the Precision Conference System (additional details TBA; do NOT use the PCS link above to submit extended abstracts for posters). All abstracts accepted for presentation at the workshop will be published on IEEE Xplore and linked from the workshop website. All abstracts should contain full author names and affiliations. If applicable, a link to a short video (up to 5 min. in length) may also be submitted. The abstracts will be juried by the organizers and selected external reviewers and will be chosen according to relevance, quality, and likelihood that they will stimulate and contribute to the discussion. At least one author of each accepted poster needs to register for the conference (even if only for the workshop). Registration information will be available on the [http://ieeevis.org/year/2018/welcome IEEE VIS website].

Important Dates

  - Submission deadline: August 15, 2018
  - Author notification: September 1, 2018
  - Camera-ready deadline: October 1, 2018
  - Workshop: October 21 or 22, 2018
  • ORGANIZERS
  - John Wenskovitch, Virginia Tech (jw87@vt.edu)
  - Michelle Dowling, Virginia Tech (dowlingm@vt.edu)
  - Chris North, Virginia Tech
  - Remco Chang, Tufts University
  - Alex Endert, Georgia Tech
  - David Rogers, Los Alamos National Lab