
HCI Deep Dives

By: Kai Kunze

About this listen

HCI Deep Dives is your go-to podcast for exploring the latest trends, research, and innovations in Human Computer Interaction (HCI). AI-generated using the latest publications in the field, each episode dives into in-depth discussions on topics like wearable computing, augmented perception, cognitive augmentation, and digitalized emotions. Whether you’re a researcher, practitioner, or just curious about the intersection of technology and human senses, this podcast offers thought-provoking insights and ideas to keep you at the forefront of HCI.

Copyright 2024 All rights reserved.
Episodes
  • TEI 2025: Ambient Display Utilizing Anisotropy of Tatami
    Mar 30 2025
    Riku Kitamura, Kenji Yamada, Takumi Yamamoto, and Yuta Sugiura. 2025. Ambient Display Utilizing Anisotropy of Tatami. In Proceedings of the Nineteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '25). Association for Computing Machinery, New York, NY, USA, Article 3, 1–15. https://doi.org/10.1145/3689050.3704924

    Recently, digital displays such as liquid crystal displays and projectors have enabled high-resolution and high-speed information transmission. However, their artificial appearance can sometimes detract from natural environments and landscapes. In contrast, ambient displays, which transfer information to the entire physical environment, have gained attention for their ability to blend seamlessly into living spaces. This study aims to develop an ambient display that harmonizes with traditional Japanese tatami rooms by proposing an information presentation method using tatami mats. By leveraging the anisotropic properties of tatami, which change their reflective characteristics according to viewing angles and light source positions, various images and animations can be represented. We quantitatively evaluated the color change of tatami using color difference. Additionally, we created both static and dynamic displays as information presentation methods using tatami.

    https://doi.org/10.1145/3689050.3704924

    25 mins
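    The episode's mention of quantifying tatami color change refers to a color-difference measure. A minimal sketch, assuming the common CIE76 ΔE*ab metric in CIELAB space (the paper's exact metric and measured values are not given in the abstract):

    import math

    def delta_e_cie76(lab1, lab2):
        # CIE76 color difference: Euclidean distance between two CIELAB colors.
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

    # Hypothetical L*a*b* readings of the same tatami patch viewed along and
    # against the weave direction (placeholder values, not from the paper).
    along_weave = (62.0, 3.5, 28.0)
    against_weave = (48.0, 5.0, 22.0)

    print(f"Delta E*ab: {delta_e_cie76(along_weave, against_weave):.1f}")

    A ΔE*ab above roughly 2–3 is generally perceptible, which is what allows angle-dependent reflection off the tatami weave to act as visible "pixels" in such a display.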
  • DIS 2025 ELEGNT: Expressive and Functional Movement Design for Non-anthropomorphic Robot
    Feb 20 2025

    Hu, Yuhan, Peide Huang, Mouli Sivapurapu, and Jian Zhang. "ELEGNT: Expressive and Functional Movement Design for Non-anthropomorphic Robot." arXiv preprint arXiv:2501.12493 (2025).

    https://arxiv.org/abs/2501.12493

    Nonverbal behaviors such as posture, gestures, and gaze are essential for conveying internal states, both consciously and unconsciously, in human interaction. For robots to interact more naturally with humans, robot movement design should likewise integrate expressive qualities—such as intention, attention, and emotions—alongside traditional functional considerations like task fulfillment, spatial constraints, and time efficiency. In this paper, we present the design and prototyping of a lamp-like robot that explores the interplay between functional and expressive objectives in movement design. Using a research-through-design methodology, we document the hardware design process, define expressive movement primitives, and outline a set of interaction scenario storyboards. We propose a framework that incorporates both functional and expressive utilities during movement generation, and implement the robot behavior sequences in different function- and social-oriented tasks. Through a user study comparing expression-driven versus function-driven movements across six task scenarios, our findings indicate that expression-driven movements significantly enhance user engagement and perceived robot qualities. This effect is especially pronounced in social-oriented tasks.

    12 mins
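    The abstract's framework that weighs functional against expressive utilities during movement generation can be illustrated with a minimal sketch; the weighting scheme, utility functions, and motion names below are illustrative assumptions, not the paper's actual formulation:

    def select_motion(candidates, functional_utility, expressive_utility, w_expressive=0.5):
        # Score each candidate motion as a weighted sum of a functional utility
        # (e.g., task progress) and an expressive utility (e.g., legibility of
        # intention or emotion), then pick the best-scoring candidate.
        def score(motion):
            return (1 - w_expressive) * functional_utility(motion) + w_expressive * expressive_utility(motion)
        return max(candidates, key=score)

    # Toy usage with hypothetical motion primitives and hand-assigned utilities.
    candidates = ["direct_point", "nod_then_point", "gaze_then_point"]
    functional = {"direct_point": 1.0, "nod_then_point": 0.7, "gaze_then_point": 0.8}
    expressive = {"direct_point": 0.2, "nod_then_point": 0.9, "gaze_then_point": 0.8}

    print(select_motion(candidates, functional.get, expressive.get, w_expressive=0.6))

    Raising w_expressive trades some task efficiency for legibility, mirroring the expression-driven versus function-driven conditions compared in the paper's user study.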
  • ISMAR 2024 Do you read me? (E)motion Legibility of Virtual Reality Character Representations
    Feb 7 2025

    K. Brandstätter, B. J. Congdon and A. Steed, "Do you read me? (E)motion Legibility of Virtual Reality Character Representations," 2024 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Bellevue, WA, USA, 2024, pp. 299-308, doi: 10.1109/ISMAR62088.2024.00044.

    We compared the body movements of five virtual reality (VR) avatar representations in a user study (N=53) to ascertain how well these representations could convey body motions associated with different emotions: one head-and-hands representation using only tracking data, one upper-body representation using inverse kinematics (IK), and three full-body representations using IK, motion capture, and the state-of-the-art deep-learning model AGRoL. Participants’ emotion detection accuracies were similar for the IK and AGRoL representations, highest for the full-body motion-capture representation and lowest for the head-and-hands representation. Our findings suggest that from the perspective of emotion expressivity, connected upper-body parts that provide visual continuity improve clarity, and that current techniques for algorithmically animating the lower body are ineffective. In particular, the deep-learning technique studied did not produce more expressive results, suggesting the need for training data specifically made for social VR applications.

    https://ieeexplore.ieee.org/document/10765392

    11 mins