Communicating ideas and information to and from humans is an important subject. In daily life, humans interact with a variety of entities, such as other humans, machines, and media. Good communication requires constructive interactions, which lead to successful outcomes such as answering a query, learning a new skill, getting a service done, or conveying emotions. Each of these entities emits a set of signals. Current research has focused on analyzing one entity's signals in a unidirectional manner, without regard to the other entities. The computer vision community has focused on detecting, classifying, and recognizing humans and their poses and gestures, progressing to actions, activities, and events, but does not go beyond that. The signal processing community has focused on emotion recognition from facial expressions, audio, or both combined. The HCI community has focused on building easier interfaces that make machines simpler to use. The goal of this workshop is to bring multiple disciplines together to process human-directed signals holistically and bidirectionally, rather than in isolation. The workshop is positioned to showcase this rich domain of applications, which will provide the next boost for these technologies. At the same time, it seeks to ground computational models in theory that helps achieve these technology goals. This would allow us to leverage decades of research in different fields and to spur interdisciplinary research, thereby opening new problem domains for the multimedia community. For more details about the program, please check the program page.
Call For Papers
Modeling human-centric computing is anticipated to be a rewarding and challenging area of research. It requires modeling a variety of contexts and interactions, each involving two or more entities with their context-dependent goals, emotions, affects, and behavioral modes, as well as modeling the interaction between them. The scope of the workshop covers the spectrum of Human-Human, Human-Machine, and Human-Media Interactions.

Human-Human Interactions: humans form a multitude of social groups throughout their lives and regularly interact with other humans in these groups, producing social behavior. Social behavior is behavior that is socially relevant or situated in an identifiable social context. Interacting or observing humans sense, interpret, and understand these behaviors mostly through aural and visual sensory stimuli.

Human-Machine Interactions: this area aims to improve the interactions between users and computers by making interfaces more usable and receptive to users' needs, through descriptive and predictive models and theories of interaction.

Human-Media Interactions: propaganda evokes strong emotions for or against a cause, and modern multimodal content is known to evoke much stronger emotions than textual content. The focus of human-media interaction is therefore on analyzing the audience's affective response to social media content, where fine-grained assessment is essential to understand the affective impact of multimedia content.

These areas pose many unaddressed challenges for the signal processing, computer vision, machine learning, and human-computer interaction research communities. New systems and approaches will be needed for real-time media analysis, interface adaptation, and the detection of gaze, gesture, body affect, face affect, engagement, and joint attention. This workshop will address all aspects of the above research, including (but not limited to) the following topics of interest:
- Novel human-computer interaction models for accessing media content, including multimodal and emotionally sensitive interfaces.
- Multimodal approaches to audio and video analysis, indexing, search, and retrieval.
- Modeling, prediction and forecasting collective behavior in social media.
- Human social, emotional, and/or affective cue extraction.
- Socially interactive and/or emotional multimedia content tagging.
- Multimedia tools for affective or interactive social behavior analysis.
- Dyadic or small-group interaction analysis in multimedia.
- Multimodal approaches to temporal or structural analysis of multimedia data.
- Machine learning for multimodal human behavior analysis.
We call for submissions of high-quality papers in standard ACMMM length and format in the above and related areas. For more details, check the important dates & paper submission page.