Multimodal Social Interaction Dataset Survey
To date, multimodal datasets have focused largely on single individuals; social interaction, a major form of human experience, has been neglected. With community input, we propose to construct a large-scale multimodal data warehouse of human social interaction for human-centered research in AI and related fields. The dataset will include 1,000 diverse participants recorded in 3- to 4-person social interaction tasks. Multimodal measures of all participants will include visual data for facial expression, eye gaze, body and hand gestures, body pose, and body dynamics; voice and textual data for speech and natural language; and autonomic physiology, thermal, and brain signals. Extensive annotation will cover eye gaze, facial expression, body language, body gestures, verbal speech, communication patterns, and group dynamics.

We would appreciate your thoughts on, and interest in contributing to, the design, implementation, and use of the proposed dataset. Please send your responses by June 15th so we can analyze them before the funding submission deadline for this project.

Thank you,

Lijun Yin, State University of New York at Binghamton
Jeffrey Cohn, University of Pittsburgh
Malihe Alikhani, University of Pittsburgh
Cynthia K. Maupin, State University of New York at Binghamton
Qiang Ji, Rensselaer Polytechnic Institute


Would you be interested in using such a dataset in your research?
Would you like to contribute to the design, implementation, or curation of the proposed dataset?
What aspects of the dataset would be of interest (mark all that apply)?
Which field or fields are you in (mark all that apply)?