Theme
Generative AI for Human–Robot Interaction: Perception, Reasoning, and Collaborative Intelligence
Abstract
The recent convergence of Generative AI, vision-language models, and embodied robotics has opened new possibilities for natural, collaborative, and adaptive Human–Robot Interaction (HRI). This workshop brings together researchers from robotics, machine learning, cognitive science, and HRI to explore how generative models—large language models (LLMs), large vision-language models (LVLMs), diffusion models, and world models—can enhance perception, task reasoning, joint decision-making, and safety in human–robot collaboration. Discussion topics include generative scene understanding, Gaussian-splatting visual representations, LLM-based task planning, multimodal grounding, interactive dialogue for shared autonomy, industrial and assistive applications, and safety and alignment challenges in generative robotic systems. The workshop aims to identify foundational principles, emerging challenges, and future research directions toward building trustworthy, capable, and human-aware generative robots.