MOG 2008

April 3-4, 2008, Aberdeen, Scotland, UK
As part of: The AISB 2008 Convention

Endorsed by:

SIGGEN, the ACL Special Interest Group on Generation

SIGMedia, the ACL Special Interest Group on Multimedia Language Processing

Sponsored by:

Research project of Tilburg University & University of Twente, The Netherlands

Email contact: mog2008<at> (Substitute @ sign for <at>)


An important aspect of the new generation of intelligent systems is the ability to employ more than one output modality when interacting with the user. A quick and successful interaction is expected when, for instance, the system's output is presented to the user via multimedia/hypermedia in which text and graphics are merged, or by a conversational agent that combines speech and gesture. In such multimodal systems, sophisticated specifications are needed to combine the different output modalities in such a way that each piece of information is presented in the most appropriate manner (i.e., the system should select the most suitable modalities and modality combinations to convey information to the user).

The AISB symposium MOG 2008 aims to bring work on multimodal output generation from different disciplines together to establish common ground and discuss possible future collaborations. Besides contributions from research fields such as multimodal language generation and embodied conversational agents, we would like to bring in an additional angle by investigating how research on multimodal output generation can benefit from a non-engineering perspective on multimodality. For example, how can research done in psychology and cognitive sciences, related to understanding how humans perceive and process multimodal information, be properly formalized for the purposes of intelligent multimodal output generation? And to what extent is it possible to formalize existing theories about how meaning is made in multimodal communication and use that for generating more meaningful multimodal output in the context of intelligent systems?

Thus, we invite technically oriented contributions as well as work in the area of human communication, such as cognitive models of multimodal communication and interaction. In this way we hope to combine an AI/engineering perspective with input from other disciplines such as linguistics and psychology, providing a forum where international researchers from different disciplinary backgrounds can exchange ideas on multimodal output generation and engage in scientific research collaboration.

MOG 2008 is a follow-up to MOG 2007, the workshop on Multimodal Output Generation held on January 25-26, 2007, at the University of Aberdeen.


The symposium will take place over two consecutive days. Note that this schedule is provisional and depends on the number of submissions. Apart from the talks in which participants present their work, there will also be plenary discussions that should result in useful strategies for future work. All accepted papers will be allocated poster space on the same day as their paper presentation.


We welcome submissions on issues such as modality choice, integration of output modalities, and meaning representation for multimodal output generation, where natural language (in the form of either text or speech) is one of the modalities. We aim to have a varied programme that reflects the different research fields involved in multimodal output generation. Possible topics are listed below, but this list is not intended to be exhaustive.
  • task-based modality choice (domain and data dependencies)
  • user-based modality choice (constraints, preferences and expertise)
  • cross-references between modalities (e.g., text and graphics)
  • dependencies between modalities (e.g., speech, facial expressions and gestures)
  • relation between input and output modalities
  • integration of modalities (models, levels, dependencies)
  • cognitive models for processing multimodal information
  • computational models for multimodal output generation
  • models of modality integration based on multimodal discourse analysis
  • usability and evaluation of existing models
  • knowledge representation for multimodal output generation
  • evaluation of (methods for generating) multimodal output
  • development of multimodal corpora from a generation perspective


Invited speakers

Justine Cassell (Northwestern University)
Michelle Zhou (IBM T. J. Watson Research Center)


We invite both long papers (max. 8 pages) describing mature research and short papers (max. 4 pages) describing plans, ideas and demos that could provoke discussion and raise questions to be addressed in the future.

Accepted papers will be published in the AISB proceedings, with an ISBN. Authors must sign a non-exclusive copyright declaration that gives AISB the right to publish the paper but does not prevent the authors from also publishing it in other venues afterwards.

We intend to publish a special journal issue or an edited collection (e.g., LNCS or LNAI) based on a selection of the best papers from MOG 2007 and MOG 2008.

Paper formatting instructions will be provided via the AISB website.

Note that there is a best student paper award, and that student scholarships are available.

Papers can be submitted by e-mail to: mog2008<at> (Substitute @ sign for <at>)


Important dates

  • Submission of papers: January 18, 2008
  • Notification: February 15, 2008
  • Final paper deadline: March 10, 2008
  • Symposium dates: April 3-4, 2008

(Note that the symposium dates are provisional, subject to the number of accepted papers.)



Registration

To attend the MOG 2008 symposium, you need to register via the AISB website.


Accommodation

Information about accommodation in Aberdeen can be found on the AISB website.

Getting there

Directions to the symposium venue can be found on the AISB site's pages about Aberdeen and tourism.


Organizers

  • Mariet Theune, University of Twente, The Netherlands
  • Yulia Bachvarova, University of Twente, The Netherlands
  • Elisabeth André, University of Augsburg, Germany
  • Ielka van der Sluis, University of Aberdeen, UK


Programme committee

  • Adrian Bangerter, University of Neuchâtel, Switzerland
  • Ellen Gurman Bard, University of Edinburgh, UK
  • John Bateman, University of Bremen, Germany
  • Harry Bunt, Tilburg University, The Netherlands
  • Stephan Kopp, University of Bielefeld, Germany
  • Emiel Krahmer, Tilburg University, The Netherlands
  • Theo van Leeuwen, University of Technology Sydney, Australia
  • Anton Nijholt, University of Twente, The Netherlands
  • Jon Oberlander, University of Edinburgh, UK
  • Niels Ole Bernsen, University of Southern Denmark
  • Paul Piwek, Open University, Milton Keynes
  • Ehud Reiter, University of Aberdeen, UK
  • Jan Peter de Ruiter, MPI, The Netherlands
  • Jacques Terken, Eindhoven University, The Netherlands
  • Eija Ventola, University of Helsinki, Finland
  • Ipke Wachsmuth, University of Bielefeld, Germany
  • Marilyn Walker, University of Sheffield, UK
We thank SIGGEN, IMOGEN and SIGMedia for their support.


The proceedings are for sale for 15 euros.
To order a copy, contact Mariet Theune: M dot Theune at ewi dot utwente dot nl

AISB call for demos for high school visit on April 3rd.

Two groups of school children (15-16 years old) will visit Aberdeen on April 3rd: the first group will be present from 12:00 to 12:30 and the second from 13:30 to 14:00, during the lunch break/poster session of the convention. Each group will have approximately 10 students. We wish to solicit proposals for interactive AI-related demos. We intend to run a small number of exciting demos at a few stations, so that the kids can spend some time moving from station to station; this means just a couple of kids will be visiting any one station at a time. The aim is simply to make the students enthusiastic about AI. Demos will take place in the main hall of the convention and will also be open to the convention delegates. Power will be available (UK-style sockets), and laptops can be made available by arrangement.

Please send demo proposals by February 29th.