BEAT: The 1st Workshop on Behavioral and Emotion Analysis through wearable Technology

Description

Wearable movement and physiology sensors such as smartwatches, smart glasses, and earbuds offer lightweight, non-invasive, and ecologically valid means of monitoring human activity, affective state, and social behavior. With the rise of commercially deployed devices and new wearable foundation models, opportunities for scalable human behavior analysis continue to grow. However, challenges such as personalized modeling, on-device integration, and multimodal fusion persist, limiting the in-the-wild deployment of wearable devices.

Recent research has advanced both the modeling and the sensing capabilities of wearable systems. On the modeling side, the emergence of wearable foundation models gives researchers expressive representations that scale to large populations and capture complex signals. On the sensing side, researchers are developing increasingly sophisticated techniques to capture physiological signals non-invasively, e.g., remote photoplethysmography (rPPG), which measures heart rate from videos of the face. These advances have enabled the widespread adoption of commercial wearables such as smartwatches, allowing individuals to monitor their sleep, mood, and health. They also support a variety of applications, from tracking affective states and social behaviors to supporting human-robot interaction, mobile health, and behavioral research in both laboratory and real-world contexts.
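To give a concrete flavor of the signal processing behind rPPG, below is a minimal, hypothetical Python (NumPy/SciPy) sketch of the classic frequency-domain approach. It assumes a per-frame green-channel trace has already been extracted from a face region (replaced here by synthetic data) and assumes a 30 Hz camera frame rate; a real pipeline would additionally need face detection, ROI tracking, and motion-artifact handling.

```python
# Minimal sketch of the core rPPG processing step: estimate heart rate
# from a band-passed face-region intensity trace. Illustrative only;
# the frame rate and the pre-extracted trace are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 30.0  # assumed camera frame rate in Hz (not specified in the text)

# Synthetic 10-second trace standing in for the mean green-channel value
# of a face region per frame: a 1.2 Hz (72 BPM) pulse buried in noise.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / FS)
trace = 0.05 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 0.02, t.size)

# Band-pass to the plausible human heart-rate range (~0.7-4 Hz,
# i.e. ~42-240 BPM) to suppress drift and high-frequency noise.
b, a = butter(3, [0.7 / (FS / 2), 4.0 / (FS / 2)], btype="band")
filtered = filtfilt(b, a, trace - trace.mean())

# The dominant spectral peak gives the heart-rate estimate in BPM.
freqs = np.fft.rfftfreq(filtered.size, d=1 / FS)
power = np.abs(np.fft.rfft(filtered)) ** 2
print(f"Estimated heart rate: {60.0 * freqs[np.argmax(power)]:.1f} BPM")
```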

However, significant challenges remain in both computational modeling and sensor design. Human behavior is inherently complex, context-dependent, and individual-specific, making its analysis through wearable sensing particularly challenging. Even in controlled environments where task complexity is limited, learning personalized and/or generalized models remains difficult due to the high variability across individuals and our incomplete understanding of the underlying physiological mechanisms. This challenge is compounded by the multimodal nature of human behavior, with different physiological and movement signals conveying distinct yet complementary information. Finally, as wearables are designed to monitor people in real-world, uncontrolled settings, they raise additional concerns related to privacy, ethical use, and data integrity.

The 1st Workshop on Behavioral and Emotion Analysis through wearable Technology (BEAT) aims to foster collaboration among ML researchers from various backgrounds (Gesture & Face Analysis, Affective Computing, HRI) and researchers in biomedical engineering. The main focus is on lightweight wearable movement and physiological sensors for computational human behavior analysis. While contributions are expected to center on real-world, ecologically valid settings, we also welcome controlled laboratory studies that introduce novel sensing approaches, benchmark datasets, or innovative applications.

Topics of interest include, but are not limited to:

  • Machine Learning and computational models for movement and physiological wearables
  • Resource-efficient and lightweight models
  • Multimodal fusion and synchronization strategies
  • Methods for irregularly sampled or missing data
  • Individual differences, personalization, and context-awareness
  • Ethical and privacy-preserving AI in wearable systems
  • Novel wearables and applications
  • Experimental methods for validation of wearable systems
  • Lab-controlled experiments and in-the-wild deployment
  • Datasets and Benchmarks
  • Responsible data management and user consent
  • Applications in Affective Computing / Mobile Health / Action Recognition / Social Interaction / HRI

Keynotes

Aaqib Saeed

Aaqib Saeed is a tenured Assistant Professor at Eindhoven University of Technology (TU/e), the Netherlands. He holds a Ph.D. cum laude from TU/e, where he conducted research in the Department of Mathematics and Computer Science on self-supervised learning for sensory data (ECG, EEG, IMU, PPG, and audio). He studies Artificial Intelligence at the intersection of Self-Supervised Learning, Sensing Systems, and Decentralized Computing. His work explores intelligent systems that can learn from raw sensory signals (ECG, EEG, audio, speech, IMU) without extensive human supervision, enabling scalable AI solutions for healthcare and beyond.

Zilu Liang

Zilu Liang is an Associate Professor at Kyoto University of Advanced Science (KUAS), Japan, where she leads the Ubiquitous and Personal Computing Lab. She received her Ph.D. and M.Sc. in Electrical Engineering and Information Systems from the University of Tokyo. Before joining KUAS, she held research and academic positions at the University of Tokyo, the University of Melbourne, the University of Oxford, and Imperial College London. Her research focuses on human-centered computing, wearable and ubiquitous technologies, and AI-driven methods for understanding and supporting human behavior.

Kai Kunze

Kai Kunze is a Professor at the Graduate School of Media Design (KMD), Keio University, Yokohama, Japan. Before that, he held an Assistant Professorship at Osaka Prefecture University. With over 254 papers published at high-profile conferences and journals (e.g., CHI, TOCHI, UIST, IEEE Computer), he is a pioneering researcher in the HCI field, focusing on augmenting humans through technology. His most significant research contributions are in Eyewear Computing and Placement-Robust Activity Recognition. His current research also includes Digitalizing Human Emotions and Amplifying Human Senses.

Organizers

Louis Simon

Sorbonne University, France.

Arianna De Vecchi

Politecnico di Milano and EssilorLuxottica, Italy.

Cristina Palmero

King’s College London, UK.

Felix Dollack

Nara Institute of Science and Technology, Japan.

Ting Dang

University of Melbourne, Australia.

Mohamed Chetouani

Sorbonne University, France.