
We at CHI '24

Meet us at the CHI conference in Honolulu!

Research

Our publications at CHI '24

[CHI '24] 'We Do Not Have the Capacity to Monitor All Media': A Design Case Study on Cyber Situational Awareness in Computer Emergency Response Teams

M. Kaufhold, T. Riebe, M. Bayer, C. Reuter

ABSTRACT - Computer Emergency Response Teams (CERTs) have been established in the public sector globally to provide advisory, preventive and reactive cybersecurity services for government agencies, citizens, and businesses. Nevertheless, their responsibility of monitoring, analyzing, and communicating cyber threats and security vulnerabilities has become increasingly challenging due to the growing volume and varying quality of information disseminated through public and social channels. Based on a design case study conducted from 2021 to 2023, this paper combines three iterations of expert interviews (N=25), design workshops (N=4) and cognitive walkthroughs (N=25) to design an automated, cross-platform and real-time cybersecurity dashboard. By adopting the notion of cyber situational awareness, the study further extracts user requirements and design heuristics for enhanced threat intelligence and mission awareness in CERTs, discussing the aspects of source integration, data management, customizable visualization, relationship awareness, information assessment, software integration, (inter-)organizational collaboration, and communication of stakeholder warnings.

In Proceedings of the Conference on Human Factors in Computing Systems (CHI)
10.1145/3613904.3642368    PDF   
@inproceedings{kaufhold_cybersituationalawareness_2024,
	title        = {'We Do Not Have the Capacity to Monitor All Media': A Design Case Study on Cyber Situational Awareness in Computer Emergency Response Teams},
	author       = {Kaufhold, Marc-André and Riebe, Thea and Bayer, Markus and Reuter, Christian},
	year         = 2024,
	month        = {may},
	booktitle    = {Proceedings of the Conference on Human Factors in Computing Systems (CHI)},
	publisher    = {ACM},
	series       = {CHI '24},
	doi          = {10.1145/3613904.3642368},
	url          = {https://doi.org/10.1145/3613904.3642368},
	abstract     = {Computer Emergency Response Teams (CERTs) have been established in the public sector globally to provide advisory, preventive and reactive cybersecurity services for government agencies, citizens, and businesses. Nevertheless, their responsibility of monitoring, analyzing, and communicating cyber threats and security vulnerabilities has become increasingly challenging due to the growing volume and varying quality of information disseminated through public and social channels. Based on a design case study conducted from 2021 to 2023, this paper combines three iterations of expert interviews (N=25), design workshops (N=4) and cognitive walkthroughs (N=25) to design an automated, cross-platform and real-time cybersecurity dashboard. By adopting the notion of cyber situational awareness, the study further extracts user requirements and design heuristics for enhanced threat intelligence and mission awareness in CERTs, discussing the aspects of source integration, data management, customizable visualization, relationship awareness, information assessment, software integration, (inter-)organizational collaboration, and communication of stakeholder warnings.},
	file         = {https://peasec.de/wp-content/uploads/2024/03/2024_KaufholdRiebeBayerReuter_CertDesignCaseStudy_CHI.pdf},
	award        = {Best Paper},
	note         = {Best Paper Award}
}

[CHI '24] From Adolescents' Eyes: Assessing an Indicator-Based Intervention to Combat Misinformation on TikTok

K. Hartwig, T. Biselli, F. Schneider, C. Reuter

ABSTRACT - Misinformation poses a recurrent challenge for video-sharing platforms (VSPs) like TikTok. Obtaining user perspectives on digital interventions addressing the need for transparency (e.g., through indicators) is essential. This article offers a thorough examination of the comprehensibility, usefulness, and limitations of an indicator-based intervention from adolescents’ perspectives. This study (N = 39; aged 13-16 years) comprised two qualitative steps: (1) focus group discussions and (2) think-aloud sessions, where participants engaged with a smartphone app for TikTok. The results offer new insights into how video-based indicators can assist adolescents' assessments. The intervention received positive feedback, especially for its transparency, and could be applicable to new content. This paper sheds light on how adolescents are expected to be experts while also being prone to video-based misinformation, with limited understanding of an intervention's limitations. By adopting teenagers' perspectives, we contribute to HCI research and provide new insights into the chances and limitations of interventions for VSPs.

In Proceedings of the Conference on Human Factors in Computing Systems (CHI)
10.1145/3613904.3642264    PDF   
@InProceedings{hartwig_adolescents_2024,
  author    = {Hartwig, Katrin and Biselli, Tom and Schneider, Franziska and Reuter, Christian},
  booktitle = {Proceedings of the Conference on Human Factors in Computing Systems (CHI)},
  series      = {CHI '24},
  title     = {From Adolescents' Eyes: Assessing an Indicator-Based Intervention to Combat Misinformation on TikTok},
  year      = {2024},
  address   = {New York, NY, USA},
  month     = {may},
  publisher = {ACM},
  doi       = {10.1145/3613904.3642264},
  url         = {https://doi.org/10.1145/3613904.3642264},
  file        = {https://peasec.de/wp-content/uploads/2024/02/2024_AdolescentsTikTok_CHI.pdf},
  abstract    = {Misinformation poses a recurrent challenge for video-sharing platforms (VSPs) like TikTok. Obtaining user perspectives on digital interventions addressing the need for transparency (e.g., through indicators) is essential. This article offers a thorough examination of the comprehensibility, usefulness, and limitations of an indicator-based intervention from adolescents’ perspectives. This study (N = 39; aged 13-16 years) comprised two qualitative steps: (1) focus group discussions and (2) think-aloud sessions, where participants engaged with a smartphone app for TikTok. The results offer new insights into how video-based indicators can assist adolescents' assessments. The intervention received positive feedback, especially for its transparency, and could be applicable to new content. This paper sheds light on how adolescents are expected to be experts while also being prone to video-based misinformation, with limited understanding of an intervention's limitations. By adopting teenagers' perspectives, we contribute to HCI research and provide new insights into the chances and limitations of interventions for VSPs.}
}

[CHI '24] Keyboard Fighters: The Use of ICTs by Activists in Times of Military Coup in Myanmar

L. Guntrum

ABSTRACT - Amidst the ongoing anti-military protests in Myanmar since 2021, there is a noticeable research gap on ICT-supported activism. Generally, ICTs play an important role during political crises in conjunction with activists’ practices on the ground. Inspired by Resource Mobilization Theory, I conducted qualitative interviews (N=16) and a qualitative online survey (N=34), which demonstrate the intersection between analog and digital domains, showcasing the ingenuity of the activists, and the rapid adoption of ICTs in a country that has experienced a digital revolution within the last few years. As not all people were able to protest on the ground, they acted as keyboard fighters to organize protests, to share information, and to support the civil disobedience movement in Myanmar. The study identifies, inter alia, the need for better offline applications with wider coverage in times of internet shutdowns, applications that cannot be easily identified during physical controls, and the provision of free and secure VPN access.

In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24)
10.1145/3613904.3642279   
@inproceedings{guntrum_keyboard_2024,
  author    = {Guntrum, Laura},
  booktitle = {Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24)},
  series      = {CHI '24},
  title     = {Keyboard Fighters: The Use of ICTs by Activists in Times of Military Coup in Myanmar},
  year      = {2024},
  address   = {New York, NY, USA},
  month     = {May},
  publisher = {ACM},
  doi       = {10.1145/3613904.3642279},
  abstract  = {Amidst the ongoing anti-military protests in Myanmar since 2021, there is a noticeable research gap on ICT-supported activism. Generally, ICTs play an important role during political crises in conjunction with activists’ practices on the ground. Inspired by Resource Mobilization Theory, I conducted qualitative interviews (N=16) and a qualitative online survey (N=34), which demonstrate the intersection between analog and digital domains, showcasing the ingenuity of the activists, and the rapid adoption of ICTs in a country that has experienced a digital revolution within the last few years. As not all people were able to protest on the ground, they acted as keyboard fighters to organize protests, to share information, and to support the civil disobedience movement in Myanmar. The study identifies, inter alia, the need for better offline applications with wider coverage in times of internet shutdowns, applications that cannot be easily identified during physical controls, and the provision of free and secure VPN access.}
}

[CHI '24] Assessing the Influence of Visual Cues in Virtual Reality on the Spatial Perception of Physical Thermal Stimuli

S. Günther, A. Skogseide, R. Buhlmann, M. Mühlhäuser

ABSTRACT - Advancements in haptics for Virtual Reality (VR) increased the quality of immersive content. Particularly, recent efforts to provide realistic temperature sensations have gained traction, but most often require very specialized or large complex devices to create precise thermal actuations. However, being largely detached from the real world, such a precise correspondence between the physical location of thermal stimuli and the shown visuals in VR might not be necessary for an authentic experience. In this work, we contribute the findings of a controlled experiment with 20 participants, investigating the spatial localization accuracy of thermal stimuli while having matching and non-matching visual cues of a virtual heat source in VR. Although participants were highly confident in their localization decisions, their ability to accurately pinpoint thermal stimuli was notably deficient.

In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24)
10.1145/3613904.3642154    PDF   
@InProceedings{Guenther2024thermomap,
  author    = {Günther, Sebastian and Skogseide, Alexandra and Buhlmann, Robin and Mühlhäuser, Max},
  booktitle = {Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24)},
  title     = {Assessing the Influence of Visual Cues in Virtual Reality on the Spatial Perception of Physical Thermal Stimuli},
  year      = {2024},
  address   = {New York, NY, USA},
  month     = {May},
  publisher = {ACM},
  series    = {CHI '24},
  doi       = {10.1145/3613904.3642154},
  abstract  = {Advancements in haptics for Virtual Reality (VR) increased the quality of immersive content. Particularly, recent efforts to provide realistic temperature sensations have gained traction, but most often require very specialized or large complex devices to create precise thermal actuations. However, being largely detached from the real world, such a precise correspondence between the physical location of thermal stimuli and the shown visuals in VR might not be necessary for an authentic experience. In this work, we contribute the findings of a controlled experiment with 20 participants, investigating the spatial localization accuracy of thermal stimuli while having matching and non-matching visual cues of a virtual heat source in VR. Although participants were highly confident in their localization decisions, their ability to accurately pinpoint thermal stimuli was notably deficient.},
  file      = {https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2024/guenther2024thermomap.pdf}
}

[CHI '24] Was it Real or Virtual? Confirming the Occurrence and Explaining Causes of Memory Source Confusion between Reality and Virtual Reality

E. Bonnail, J. Frommel, E. Lecolinet, S. Huron, J. Gugenheimer

ABSTRACT - Source confusion occurs when individuals attribute a memory to the wrong source (e.g., confusing a picture with an experienced event). Virtual Reality (VR) represents a new source of memories particularly prone to being confused with reality. While previous research identified causes of source confusion between reality and other sources (e.g., imagination, pictures), there is currently no understanding of what characteristics specific to VR (e.g., immersion, presence) could influence source confusion. Through a laboratory study (n=29), we 1) confirm the existence of VR source confusion with current technology, and 2) present a quantitative and qualitative exploration of factors influencing VR source confusion. Building on the Source Monitoring Framework, we identify VR characteristics and assumptions about VR capabilities (e.g., poor rendering) that are used to distinguish virtual from real memories. From these insights, we reflect on how the increasing realism of VR could leave users vulnerable to memory errors and perceptual manipulations.

In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems
10.1145/3613904.3641992   
@inproceedings{bonnail2024wasitreal,
  title={Was it Real or Virtual? Confirming the Occurrence and Explaining Causes of Memory Source Confusion between Reality and Virtual Reality},
  author={Bonnail, Elise and Frommel, Julian and Lecolinet, Eric and Huron, Samuel and Gugenheimer, Jan},
  series = {CHI '24}, 
  booktitle={Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems},
  pages={1--17},
  year={2024},
  publisher = {Association for Computing Machinery (ACM)},
  doi = {10.1145/3613904.3641992},
  url = {https://dl.acm.org/doi/10.1145/3613904.3641992},
  abstract = {Source confusion occurs when individuals attribute a memory to the wrong source (e.g., confusing a picture with an experienced event). Virtual Reality (VR) represents a new source of memories particularly prone to being confused with reality. While previous research identified causes of source confusion between reality and other sources (e.g., imagination, pictures), there is currently no understanding of what characteristics specific to VR (e.g., immersion, presence) could influence source confusion. Through a laboratory study (n=29), we 1) confirm the existence of VR source confusion with current technology, and 2) present a quantitative and qualitative exploration of factors influencing VR source confusion. Building on the Source Monitoring Framework, we identify VR characteristics and assumptions about VR capabilities (e.g., poor rendering) that are used to distinguish virtual from real memories. From these insights, we reflect on how the increasing realism of VR could leave users vulnerable to memory errors and perceptual manipulations.}
}

[CHI '24] pARam: Leveraging Parametric Design in Extended Reality to Support the Personalization of Artifacts for Personal Fabrication

E. Stemasov, S. Demharter, M. Rädler, J. Gugenheimer, E. Rukzio

ABSTRACT - Extended Reality (XR) allows in-situ previewing of designs to be manufactured through Personal Fabrication (PF). These in-situ interactions exhibit advantages for PF, like incorporating the environment into the design process. However, design-for-fabrication in XR often happens through either highly complex 3D-modeling or is reduced to rudimentary adaptations of crowd-sourced models. We present pARam, a tool combining parametric designs (PDs) and XR, enabling in-situ configuration of artifacts for PF. In contrast to modeling- or search-focused approaches, pARam supports customization through embodied and practical inputs (e.g., gestures, recommendations) and evaluation (e.g., lighting estimation) without demanding complex 3D-modeling skills. We implemented pARam for HoloLens 2 and evaluated it (n=20), comparing XR and desktop conditions. Users succeeded in choosing context-related parameters and took their environment into account for their configuration using pARam. We reflect on the prospects and challenges of PDs in XR to streamline complex design methods for PF while retaining suitable expressivity.

In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems
10.1145/3613904.3642083    PDF    Teaser Video    Full Video   
@inproceedings{Stemasov2024param,
author      = {Stemasov, Evgeny and Demharter, Simon and Rädler, Max and Gugenheimer, Jan and Rukzio, Enrico},  
booktitle   = {Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems},
series      = {CHI '24},
title       = {pARam: Leveraging Parametric Design in Extended Reality to Support the Personalization of Artifacts for Personal Fabrication},
year        = {2024},
publisher   = {Association for Computing Machinery (ACM)},
address     = {New York, NY, USA},
month       = {may},
pages       = {1--23},
doi         = {10.1145/3613904.3642083},
url         = {https://dl.acm.org/doi/10.1145/3613904.3642083},
teaservideo = {https://youtu.be/_mj40ft96tY},
video       = {https://youtu.be/yZcv58nkeVE},
file        = {https://stemasov.dev/papers/stemasov-acm_chi_2024-param.pdf},
abstract    = {Extended Reality (XR) allows in-situ previewing of designs to be manufactured through Personal Fabrication (PF). These in-situ interactions exhibit advantages for PF, like incorporating the environment into the design process. However, design-for-fabrication in XR often happens through either highly complex 3D-modeling or is reduced to rudimentary adaptations of crowd-sourced models. We present pARam, a tool combining parametric designs (PDs) and XR, enabling in-situ configuration of artifacts for PF. In contrast to modeling- or search-focused approaches, pARam supports customization through embodied and practical inputs (e.g., gestures, recommendations) and evaluation (e.g., lighting estimation) without demanding complex 3D-modeling skills. We implemented pARam for HoloLens 2 and evaluated it (n=20), comparing XR and desktop conditions. Users succeeded in choosing context-related parameters and took their environment into account for their configuration using pARam. We reflect on the prospects and challenges of PDs in XR to streamline complex design methods for PF while retaining suitable expressivity.}
}
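
For readers unfamiliar with parametric design: instead of modeling raw geometry, the artifact is described by a handful of constrained parameters, and configuring it means choosing values inside those constraints. A minimal TypeScript sketch of that idea (the shelf example, names, and ranges are ours for illustration, not pARam's implementation):

// Hypothetical parametric design: the artifact is a function of a few
// constrained parameters instead of hand-modeled geometry.
interface Param {
  name: string;
  min: number;
  max: number;
  value: number;
}

// Example: a parametric shelf defined by three parameters.
const shelf: Param[] = [
  { name: "width",   min: 200, max: 1200, value: 600 }, // mm
  { name: "depth",   min: 100, max: 400,  value: 250 }, // mm
  { name: "shelves", min: 1,   max: 6,    value: 3 },   // count
];

// Configuration (e.g., via XR gestures) only ever moves values inside
// their ranges, so every configuration stays a fabricable artifact.
function setParam(params: Param[], name: string, value: number): void {
  const p = params.find((q) => q.name === name);
  if (!p) throw new Error("unknown parameter: " + name);
  p.value = Math.min(p.max, Math.max(p.min, value)); // clamp to the valid range
}

setParam(shelf, "width", 1500); // clamps to 1200, the widest valid shelf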

[CHI '24] DungeonMaker: Embedding Tangible Creation and Destruction in Hybrid Board Games through Personal Fabrication Technology

E. Stemasov, T. Wagner, A. Askari, J. Janek, O. Rajabi, A. Schikorr, J. Frommel, J. Gugenheimer, E. Rukzio

ABSTRACT - Hybrid board games (HBGs) augment their analog origins digitally (e.g., through apps) and are an increasingly popular pastime activity. Continuous world and character development and customization, known to facilitate engagement in video games, remain rare in HBGs. If present, they happen digitally or imaginarily, often leaving physical aspects generic. We developed DungeonMaker, a fabrication-augmented HBG bridging physical and digital game elements: 1) the setup narrates a story and projects a digital game board onto a laser cutter; 2) DungeonMaker assesses player-crafted artifacts; 3) DungeonMaker's modified laser head senses and moves player- and non-player figures, and 4) can physically damage figures. An evaluation (n=4x3) indicated that DungeonMaker provides an engaging experience, may support players' connection to their figures, and potentially spark novices' interest in fabrication. DungeonMaker provides a rich constellation to play HBGs by blending aspects of craft and automation to couple the physical and digital elements of an HBG tightly.

In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems
10.1145/3613904.3642243    PDF    Teaser Video    Full Video   
@inproceedings{Stemasov2024dungeonmaker,
author      = {Stemasov, Evgeny and Wagner, Tobias and Askari, Ali and Janek, Jessica and Rajabi, Omid and Schikorr, Anja and Frommel, Julian and Gugenheimer, Jan and Rukzio, Enrico},
booktitle   = {Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems},
series      = {CHI '24},
title       = {DungeonMaker: Embedding Tangible Creation and Destruction in Hybrid Board Games through Personal Fabrication Technology},
year        = {2024},
publisher   = {Association for Computing Machinery (ACM)},
address     = {New York, NY, USA},
month       = {may},
pages       = {1--17},
doi         = {10.1145/3613904.3642243},
url         = {https://dl.acm.org/doi/10.1145/3613904.3642243},
teaservideo = {https://youtu.be/kKJD8Nv33qI},
video       = {https://youtu.be/NbIc-sOfT5Y},
file        = {https://stemasov.dev/papers/stemasov-acm_chi_2024-dungeonmaker.pdf},
abstract    = {Hybrid board games (HBGs) augment their analog origins digitally (e.g., through apps) and are an increasingly popular pastime activity. Continuous world and character development and customization, known to facilitate engagement in video games, remain rare in HBGs. If present, they happen digitally or imaginarily, often leaving physical aspects generic. We developed DungeonMaker, a fabrication-augmented HBG bridging physical and digital game elements: 1) the setup narrates a story and projects a digital game board onto a laser cutter; 2) DungeonMaker assesses player-crafted artifacts; 3) DungeonMaker's modified laser head senses and moves player- and non-player figures, and 4) can physically damage figures. An evaluation (n=4x3) indicated that DungeonMaker provides an engaging experience, may support players' connection to their figures, and potentially spark novices' interest in fabrication. DungeonMaker provides a rich constellation to play HBGs by blending aspects of craft and automation to couple the physical and digital elements of an HBG tightly.}
}

[CHI '24] Don’t Accept All and Continue: Exploring Nudges for More Deliberate Interaction with Tracking Consent Notices

N. Gerber, A. Stöver, J. Peschke, V. Zimmermann

ABSTRACT - Legal frameworks rely on users to make an informed decision about data collection, e.g., by accepting or declining the use of tracking technologies. In practice, however, users hardly interact with tracking consent notices on a deliberate, website-per-website level, but usually accept or decline optional tracking technologies altogether as a habituated behavior. We explored the potential of three different nudge types (color highlighting, social cue, timer) and default settings to interrupt this auto-response in an experimental between-subject design with 167 participants. We did not find statistically significant differences regarding the buttons clicked. Our results showed that opt-in default settings significantly decrease tracking technology use acceptance rates. These results are a first step towards understanding the effects of different nudging concepts on users’ interaction with tracking consent notices.

In ACM Trans. Comput.-Hum. Interact.
10.1145/3617363   
@article{Gerber2024exploringnudges,
author = {Gerber, Nina and St\"{o}ver, Alina and Peschke, Justin and Zimmermann, Verena},
title = {Don’t Accept All and Continue: Exploring Nudges for More Deliberate Interaction with Tracking Consent Notices},
year = {2024},
issue_date = {February 2024},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {31},
number = {1},
issn = {1073-0516},
url = {https://doi.org/10.1145/3617363},
doi = {10.1145/3617363},
abstract = {Legal frameworks rely on users to make an informed decision about data collection, e.g., by accepting or declining the use of tracking technologies. In practice, however, users hardly interact with tracking consent notices on a deliberate, website-per-website level, but usually accept or decline optional tracking technologies altogether as a habituated behavior. We explored the potential of three different nudge types (color highlighting, social cue, timer) and default settings to interrupt this auto-response in an experimental between-subject design with 167 participants. We did not find statistically significant differences regarding the buttons clicked. Our results showed that opt-in default settings significantly decrease tracking technology use acceptance rates. These results are a first step towards understanding the effects of different nudging concepts on users’ interaction with tracking consent notices.},
journal = {ACM Trans. Comput.-Hum. Interact.},
month = {nov},
articleno = {1},
numpages = {36},
keywords = {Nudges, cookie consent, privacy protection, informed decision},
series = {CHI '24}
}
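
To make the opt-in/opt-out manipulation concrete: the two default settings differ only in the initial state of the optional toggles when the notice first appears. A toy TypeScript sketch (category names and structure are hypothetical, not the study's materials):

// Hypothetical consent-notice model: the opt-in and opt-out conditions
// differ only in the default state of the optional toggles.
interface ConsentCategory {
  id: string;
  required: boolean; // essential categories cannot be declined
  enabled: boolean;  // current toggle state shown to the user
}

function buildNotice(optIn: boolean): ConsentCategory[] {
  return [
    { id: "essential", required: true,  enabled: true },
    // Optional tracking defaults to off under opt-in, on under opt-out.
    { id: "analytics", required: false, enabled: !optIn },
    { id: "marketing", required: false, enabled: !optIn },
  ];
}

// "Accept all and continue" persists whatever is currently enabled, which
// is why defaults dominate outcomes for habituated users.
const accepted = buildNotice(true).filter((c) => c.enabled).map((c) => c.id);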

[CHI '24] Decide Yourself or Delegate - User Preferences Regarding the Autonomy of Personal Privacy Assistants in Private IoT-Equipped Environments

K. Marky, A. Stöver, S. Prange, K. Bleck, P. Gerber, V. Zimmermann, F. Müller, F. Alt, M. Mühlhäuser

ABSTRACT - Personalized privacy assistants (PPAs) communicate privacy-related decisions of their users to Internet of Things (IoT) devices. There are different ways to implement PPAs by varying the degree of autonomy or decision model. This paper investigates user perceptions of PPA autonomy models and privacy profiles, archetypes of individual privacy needs, as a basis for PPA decisions in private environments (e.g., a friend's home). We first explore how privacy profiles can be assigned to users and propose an assignment method. Next, we investigate user perceptions in 18 usage scenarios with varying contexts, data types and number of decisions in a study with 1126 participants. We found considerable differences between the profiles in settings with few decisions. If the number of decisions gets high (> 1/h), participants exclusively preferred fully autonomous PPAs. Finally, we discuss implications and recommendations for designing scalable PPAs that serve as privacy interfaces for future IoT devices.

In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems
10.1145/3613904.3642591    PDF    Full Video   
@InProceedings{marky2024chi,
author = {Marky, Karola and Stöver, Alina and Prange, Sarah and Bleck, Kira and Gerber, Paul and Zimmermann, Verena and Müller, Florian and Alt, Florian and Mühlhäuser, Max},
booktitle = {Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems},
title = {Decide Yourself or Delegate - User Preferences Regarding the Autonomy of Personal Privacy Assistants in Private IoT-Equipped Environments},
year = {2024},
address = {New York, NY, USA},
publisher = {Association for Computing Machinery},
series = {CHI '24},
abstract = {Personalized privacy assistants (PPAs) communicate privacy-related decisions of their users to Internet of Things (IoT) devices. There are different ways to implement PPAs by varying the degree of autonomy or decision model. This paper investigates user perceptions of PPA autonomy models and privacy profiles -- archetypes of individual privacy needs -- as a basis for PPA decisions in private environments (e.g., a friend's home). We first explore how privacy profiles can be assigned to users and propose an assignment method. Next, we investigate user perceptions in 18 usage scenarios with varying contexts, data types and number of decisions in a study with 1126 participants. We found considerable differences between the profiles in settings with few decisions. If the number of decisions gets high ($>$ 1/h), participants exclusively preferred fully autonomous PPAs. Finally, we discuss implications and recommendations for designing scalable PPAs that serve as privacy interfaces for future IoT devices.},
doi = {10.1145/3613904.3642591},
isbn = {979-8-4007-0330-0/24/05},
location = {Honolulu, HI, USA},
timestamp = {2024.05.16},
url = {http://florian-alt.org/unibw/wp-content/publications/marky2024chi.pdf},
file = {https://www.unibw.de/usable-security-and-privacy/publikationen/pdf/marky2024chi.pdf},
}
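
Read as a design rule, the study's central quantitative finding maps decision rate to a delegation level. A deliberately simplified TypeScript sketch (the autonomy labels are our shorthand for the paper's autonomy models, not its terminology):

type Autonomy = "notify-only" | "semi-autonomous" | "fully-autonomous";

// Toy decision rule distilled from the findings: below roughly one
// privacy decision per hour, the assigned privacy profile still matters;
// above that rate, participants preferred full delegation.
function suggestAutonomy(decisionsPerHour: number, prefersControl: boolean): Autonomy {
  if (decisionsPerHour > 1) return "fully-autonomous"; // high decision load
  return prefersControl ? "notify-only" : "semi-autonomous";
}

suggestAutonomy(0.2, true); // "notify-only": rare decisions, control-seeking profile
suggestAutonomy(4, false);  // "fully-autonomous": high decision load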

[CHI EA '24] 3DA: Assessing 3D-Printed Electrodes for Measuring Electrodermal Activity

M. Schmitz, D. Schön, H. Klagemann, T. Kosch

ABSTRACT - Electrodermal activity (EDA) reflects changes in skin conductance, closely tied to human psychological states. EDA sensors can assess stress, cognitive workload, arousal, and activity related to the parasympathetic nervous system used in various human-computer interaction applications. Yet, current limitations involve the complex attachment and proper skin contact with EDA sensors. This paper explores the concept of 3D printing electrodes for EDA measurements, potentially integrating sensors into arbitrary 3D printed objects, alleviating the need for complex assembly and attachment. We examine the adaptation of conventional EDA circuits for 3D-printed electrodes, assessing different electrode shapes and their impact on the sensing accuracy. A user study (N=6) revealed that 3D-printed electrodes can measure EDA with similar accuracy while recommending larger contact areas for improved precision. We discuss design implications to facilitate EDA sensor integration into 3D-printed devices, fostering a diverse integration into everyday items using consumer-grade 3D printers for physiological interface prototyping.

In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems
10.1145/3613905.3650938   
@InProceedings{Schmitz20243da,
  author    = {Schmitz, Martin and Schön, Dominik and Klagemann, Henning and Kosch, Thomas},
  booktitle = {Extended Abstracts of the CHI Conference on Human Factors in Computing Systems},
  series      = {CHI EA '24},
  title     = {3DA: Assessing 3D-Printed Electrodes for Measuring Electrodermal Activity},
  year      = {2024},
  address   = {New York, NY, USA},
  month     = {may},
  publisher = {ACM},
  doi       = {10.1145/3613905.3650938},
  url         = {https://doi.org/10.1145/3613905.3650938},
  abstract    = {Electrodermal activity (EDA) reflects changes in skin conductance, closely tied to human psychological states. EDA sensors can assess stress, cognitive workload, arousal, and activity related to the parasympathetic nervous system used in various human-computer interaction applications. Yet, current limitations involve the complex attachment and proper skin contact with EDA sensors. This paper explores the concept of 3D printing electrodes for EDA measurements, potentially integrating sensors into arbitrary 3D printed objects, alleviating the need for complex assembly and attachment. We examine the adaptation of conventional EDA circuits for 3D-printed electrodes, assessing different electrode shapes and their impact on the sensing accuracy. A user study (N=6) revealed that 3D-printed electrodes can measure EDA with similar accuracy while recommending larger contact areas for improved precision. We discuss design implications to facilitate EDA sensor integration into 3D-printed devices, fostering a diverse integration into everyday items using consumer-grade 3D printers for physiological interface prototyping.}
}
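
Conventional EDA circuits of the kind the paper adapts typically infer skin conductance from a known excitation voltage and a reference resistor. Only that voltage-divider arithmetic is assumed in the TypeScript sketch below; the wiring, names, and component values are illustrative, not taken from the paper:

// Hypothetical read-out: the skin (via the electrodes) forms the lower leg
// of a voltage divider (VCC -- R_REF -- measurement node -- skin -- GND).
const VCC = 3.3;         // excitation voltage in volts (assumed)
const R_REF = 1_000_000; // reference resistor in ohms (assumed)

// Convert the measured node voltage to skin conductance in microsiemens.
function skinConductanceMicroSiemens(vOut: number): number {
  const rSkin = (R_REF * vOut) / (VCC - vOut); // divider equation solved for R_skin
  return 1e6 / rSkin;                          // G = 1/R, scaled to µS
}

skinConductanceMicroSiemens(1.1); // ≈ 2 µS (i.e., 0.5 MΩ of skin resistance)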

Our publications from last year at CHI '23

[CHI '23] FingerMapper: Mapping Finger Motions onto Virtual Arms to Enable Safe Virtual Reality Interaction in Confined Spaces

W. Tseng, S. Huron, E. Lecolinet, J. Gugenheimer

ABSTRACT - Whole-body movements enhance the presence and enjoyment of Virtual Reality (VR) experiences. However, using large gestures is often uncomfortable and impossible in confined spaces (e.g., public transport). We introduce FingerMapper, mapping small-scale finger motions onto virtual arms and hands to enable whole-body virtual movements in VR. In a first target selection study (n=13) comparing FingerMapper to hand tracking and ray-casting, we found that FingerMapper can significantly reduce physical motions and fatigue while having a similar degree of precision. In a consecutive study (n=13), we compared FingerMapper to hand tracking inside a confined space (the front passenger seat of a car). The results showed participants had significantly higher perceived safety and fewer collisions with FingerMapper while preserving a similar degree of presence and enjoyment as hand tracking. Finally, we present three example applications demonstrating how FingerMapper could be applied for locomotion and interaction for VR in confined spaces.

In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
10.1145/3544548.3580736    PDF    Teaser Video    Full Video   
@inproceedings{tseng2023fingermapper,
  title={FingerMapper: Mapping Finger Motions onto Virtual Arms to Enable Safe Virtual Reality Interaction in Confined Spaces},
  author={Tseng, Wen-Jie and Huron, Samuel and Lecolinet, Eric and Gugenheimer, Jan},
  booktitle={Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems},
  year={2023},
  month={apr},
  series={CHI '23},
  publisher={ACM},
  url={https://doi.org/10.1145/3544548.3580736},
  doi = {10.1145/3544548.3580736},
  teaservideo={https://www.youtube.com/watch?v=KomrhEYGBDw},
  video={https://www.youtube.com/watch?v=7Kfq7Ej1krw},
  file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2023/tseng2023fingermapper.pdf},
  abstract={Whole-body movements enhance the presence and enjoyment of Virtual Reality (VR) experiences. However, using large gestures is often uncomfortable and impossible in confined spaces (e.g., public transport). We introduce FingerMapper, mapping small-scale finger motions onto virtual arms and hands to enable whole-body virtual movements in VR. In a first target selection study (n=13) comparing FingerMapper to hand tracking and ray-casting, we found that FingerMapper can significantly reduce physical motions and fatigue while having a similar degree of precision. In a consecutive study (n=13), we compared FingerMapper to hand tracking inside a confined space (the front passenger seat of a car). The results showed participants had significantly higher perceived safety and fewer collisions with FingerMapper while preserving a similar degree of presence and enjoyment as hand tracking. Finally, we present three example applications demonstrating how FingerMapper could be applied for locomotion and interaction for VR in confined spaces.}
}

[CHI '23] Tailor Twist: Assessing Rotational Mid-Air Interactions for Augmented Reality

D. Schön, T. Kosch, F. Müller, M. Schmitz, S. Günther, L. Bommhardt, M. Mühlhäuser

ABSTRACT - Mid-air gestures, widely used in today's Augmented Reality applications, are prone to the "gorilla arm" effect, leading to discomfort with prolonged interactions. While prior work has proposed metrics to quantify this effect and means to improve comfort and ergonomics, these works usually only consider simplistic, one-dimensional AR interactions, like reaching for a point or pushing a button. However, interacting with AR environments also involves far more complex tasks, such as rotational knobs, potentially impacting ergonomics. This paper advances the understanding of the ergonomics of rotational mid-air interactions in AR. For this, we contribute the results of a controlled experiment exposing the participants to a rotational task in the interaction space defined by their arms' reach. Based on the results, we discuss how novel future mid-air gesture modalities benefit from our findings concerning ergonomic-aware rotational interaction.

In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
10.1145/3544548.3581461    PDF    Teaser Video    Full Video   
@inproceedings{schoen2023tailortwist,
  title={Tailor Twist: Assessing Rotational Mid-Air Interactions for Augmented Reality},
  author={Sch\"{o}n, Dominik and Kosch, Thomas and M\"{u}ller, Florian and Schmitz, Martin and G\"{u}nther, Sebastian and Bommhardt, Lukas and M\"{u}hlh\"{a}user, Max},
  booktitle={Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems},
  year={2023},
  month={apr},
  series={CHI '23},
  publisher={ACM},
  url={https://doi.org/10.1145/3544548.3581461},
  doi = {10.1145/3544548.3581461},
  teaservideo={https://www.youtube.com/watch?v=FqFr_Eeh1dY},
  video={https://www.youtube.com/watch?v=K3q7uDyGu2o},
  file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2023/schoen2023tailortwist.pdf},
  abstract={Mid-air gestures, widely used in today's Augmented Reality applications, are prone to the "gorilla arm" effect, leading to discomfort with prolonged interactions. While prior work has proposed metrics to quantify this effect and means to improve comfort and ergonomics, these works usually only consider simplistic, one-dimensional AR interactions, like reaching for a point or pushing a button. However, interacting with AR environments also involves far more complex tasks, such as rotational knobs, potentially impacting ergonomics. This paper advances the understanding of the ergonomics of rotational mid-air interactions in AR. For this, we contribute the results of a controlled experiment exposing the participants to a rotational task in the interaction space defined by their arms' reach. Based on the results, we discuss how novel future mid-air gesture modalities benefit from our findings concerning ergonomic-aware rotational interaction.}
}

[CHI '23] FIDO2 the Rescue? Platform vs. Roaming Authentication on Smartphones

L. Würsching, F. Putz, S. Haesler, M. Hollick

ABSTRACT - Modern smartphones support FIDO2 passwordless authentication using either external security keys or internal biometric authentication, but it is unclear whether users appreciate and accept these new forms of web authentication for their own accounts. We present the first lab study (N=87) comparing platform and roaming authentication on smartphones, determining the practical strengths and weaknesses of FIDO2 as perceived by users in a mobile scenario. Most participants were willing to adopt passwordless authentication during our in-person user study, but closer analysis shows that participants prioritize usability, security, and availability differently depending on the account type. We identify remaining adoption barriers that prevent FIDO2 from succeeding password authentication, such as missing support for contemporary usage patterns, including account delegation and usage on multiple clients.

In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
10.1145/3544548.3580993    PDF    Full Video   
@inproceedings{wuersching2023fido2,
  title={FIDO2 the Rescue? Platform vs. Roaming Authentication on Smartphones},
  author={W\"{u}rsching, Leon and Putz, Florentin and Haesler, Steffen and Hollick, Matthias},
  booktitle={Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems},
  year={2023},
  month={apr},
  series={CHI '23},
  publisher={ACM},
  url={https://doi.org/10.1145/3544548.3580993},
  doi = {10.1145/3544548.3580993},
  file={https://arxiv.org/abs/2302.07777},
  abstract={Modern smartphones support FIDO2 passwordless authentication using either external security keys or internal biometric authentication, but it is unclear whether users appreciate and accept these new forms of web authentication for their own accounts. We present the first lab study (N=87) comparing platform and roaming authentication on smartphones, determining the practical strengths and weaknesses of FIDO2 as perceived by users in a mobile scenario. Most participants were willing to adopt passwordless authentication during our in-person user study, but closer analysis shows that participants prioritize usability, security, and availability differently depending on the account type. We identify remaining adoption barriers that prevent FIDO2 from succeeding password authentication, such as missing support for contemporary usage patterns, including account delegation and usage on multiple clients.},
  video={https://www.youtube.com/watch?v=tZ1gzBoCEAc},
 award={Best Paper},
 note={Best Paper Award}
}
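
The platform vs. roaming distinction the study compares is exposed directly in the standard WebAuthn API through the authenticatorAttachment hint. A minimal TypeScript registration sketch (illustrative only; the relying-party name and user fields are placeholders, and a real deployment obtains the challenge and user id from its server):

// WebAuthn registration: "platform" requests the device's built-in
// authenticator (e.g., biometrics), "cross-platform" a roaming one such
// as a USB/NFC security key.
async function register(attachment: AuthenticatorAttachment) {
  return navigator.credentials.create({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // from the server in practice
      rp: { name: "Example RP" },
      user: {
        id: crypto.getRandomValues(new Uint8Array(16)), // stable user handle in practice
        name: "user@example.org",
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: { authenticatorAttachment: attachment },
    },
  });
}

// register("platform") vs. register("cross-platform")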

[CHI '23] Memory Manipulations in Extended Reality

E. Bonnail, W. Tseng, M. Mcgill, E. Lecolinet, S. Huron, J. Gugenheimer

ABSTRACT - Human memory has notable limitations (e.g., forgetting) which have necessitated a variety of memory aids (e.g., calendars). As we grow closer to mass adoption of everyday Extended Reality (XR), which is frequently leveraging perceptual limitations (e.g., redirected walking), it becomes pertinent to consider how XR could leverage memory limitations (forgetting, distorting, persistence) to induce memory manipulations. As memories highly impact our self-perception, social interactions, and behaviors, there is a pressing need to understand XR Memory Manipulations (XRMMs). We ran three speculative design workshops (n=12), with XR and memory researchers creating 48 XRMM scenarios. Through thematic analysis, we define XRMMs, present a framework of their core components and reveal three classes (at encoding, pre-retrieval, at retrieval). Each class differs in terms of technology (AR/VR) and impact on memory (influencing quality of memories, inducing forgetting, distorting memories). We raise ethical concerns and discuss opportunities of perceptual and memory manipulations in XR.

In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
10.1145/3544548.3580988    Teaser Video   
@inproceedings{bonnail2023memory,
  title={Memory Manipulations in Extended Reality},
  author={Bonnail, Elise and Tseng, Wen-Jie and Mcgill, Mark and Lecolinet, Eric and Huron, Samuel and Gugenheimer, Jan},
  booktitle={Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems},
  pages={1--20},
  year={2023},
  month={apr},
  series={CHI '23},
  publisher={ACM},
  url={https://doi.org/10.1145/3544548.3580988},
  doi = {10.1145/3544548.3580988},
  teaservideo={https://www.youtube.com/watch?v=asSejQTZILI},
  abstract={Human memory has notable limitations (e.g., forgetting) which have necessitated a variety of memory aids (e.g., calendars). As we grow closer to mass adoption of everyday Extended Reality (XR), which is frequently leveraging perceptual limitations (e.g., redirected walking), it becomes pertinent to consider how XR could leverage memory limitations (forgetting, distorting, persistence) to induce memory manipulations. As memories highly impact our self-perception, social interactions, and behaviors, there is a pressing need to understand XR Memory Manipulations (XRMMs). We ran three speculative design workshops (n=12), with XR and memory researchers creating 48 XRMM scenarios. Through thematic analysis, we define XRMMs, present a framework of their core components and reveal three classes (at encoding, pre-retrieval, at retrieval). Each class differs in terms of technology (AR/VR) and impact on memory (influencing quality of memories, inducing forgetting, distorting memories). We raise ethical concerns and discuss opportunities of perceptual and memory manipulations in XR.}
}

[CHI '23 Demo] ThermalPen: Adding Thermal Haptic Feedback to 3D Sketching

P. Hoffmann, H. Elsayed, M. Mühlhäuser, R. Wehbe, M. Barrera Machuca

ABSTRACT - Sketching in virtual 3D environments has enabled new forms of artistic expression and a variety of novel design use-cases. However, the lack of haptic feedback proves to be one of the main challenges in this field. While prior work has investigated vibrotactile and force-feedback devices, this paper proposes the addition of thermal feedback. We present ThermalPen, a novel pen for 3D sketching that associates the texture and colour of strokes with different thermal properties. For example, a fire texture elicits an increase in temperature, while an ice texture causes a temperature drop in the pen. Our goal with ThermalPen is to enhance the 3D sketching experience and allow users to use this tool to increase their creativity while sketching. We plan on evaluating the influence of thermal feedback on the 3D sketching experience, with a focus on user creativity in the future.

In CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI '23 Extended Abstracts)
10.1145/3544549.3583901   
@inproceedings{hoffmann2023thermalpen,
  title={ThermalPen: Adding Thermal Haptic Feedback to 3D Sketching},
  author={Hoffmann, Philipp and Elsayed, Hesham and M\"{u}hlh\"{a}user, Max and Wehbe, Rina and Barrera Machuca, Mayra D},
  booktitle={CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI '23 Extended Abstracts)},
  year={2023},
  month={apr},
  series={CHI '23 Demo},
  publisher={ACM},
  url={https://doi.org/10.1145/3544549.3583901},
  doi = {10.1145/3544549.3583901},
  abstract={Sketching in virtual 3D environments has enabled new forms of artistic expression and a variety of novel design use-cases. However, the lack of haptic feedback proves to be one of the main challenges in this field. While prior work has investigated vibrotactile and force-feedback devices, this paper proposes the addition of thermal feedback. We present ThermalPen, a novel pen for 3D sketching that associates the texture and colour of strokes with different thermal properties. For example, a fire texture elicits an increase in temperature, while an ice texture causes a temperature drop in the pen. Our goal with ThermalPen is to enhance the 3D sketching experience and allow users to use this tool to increase their creativity while sketching. We plan on evaluating the influence of thermal feedback on the 3D sketching experience, with a focus on user creativity in the future.}
}

[CHI '23] TicTacToes: Assessing Toe Movements as an Input Modality

F. Müller, D. Schmitt, A. Matviienko, D. Schön, S. Günther, T. Kosch, M. Schmitz, M. Mühlhäuser

ABSTRACT - From carrying grocery bags to holding onto handles on the bus, there are a variety of situations where one or both hands are busy, hindering the vision of ubiquitous interaction with technology. Voice commands, as a popular hands-free alternative, struggle with ambient noise and privacy issues. As an alternative approach, research explored movements of various body parts (e.g., head, arms) as input modalities, with foot-based techniques proving particularly suitable for hands-free interaction. Whereas previous research only considered the movement of the foot as a whole, in this work, we argue that our toes offer further degrees of freedom that can be leveraged for interaction. To explore the viability of toe-based interaction, we contribute the results of a controlled experiment with 18 participants assessing the impact of five factors on the accuracy, efficiency and user experience of such interfaces. Based on the findings, we provide design recommendations for future toe-based interfaces.

In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
10.1145/3544548.3580954    Teaser Video    Full Video   
@inproceedings{mueller2023tictactoes,
  title={TicTacToes: Assessing Toe Movements as an Input Modality},
  author={M\"{u}ller, Florian and Schmitt, Daniel and Matviienko, Andrii and Sch\"{o}n, Dominik and G\"{u}nther, Sebastian and Kosch, Thomas and Schmitz, Martin and M\"{u}hlh\"{a}user, Max},
  booktitle={Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems},
  year={2023},
  month={apr},
  series={CHI '23},
  publisher={ACM},
  url={https://doi.org/10.1145/3544548.3580954},
  doi = {10.1145/3544548.3580954},
  teaservideo={https://www.youtube.com/watch?v=2cGBqSQq0LM},
  video={https://www.youtube.com/watch?v=kVVY6ZZ5aHY},
  abstract={From carrying grocery bags to holding onto handles on the bus, there are a variety of situations where one or both hands are busy, hindering the vision of ubiquitous interaction with technology. Voice commands, as a popular hands-free alternative, struggle with ambient noise and privacy issues. As an alternative approach, research explored movements of various body parts (e.g., head, arms) as input modalities, with foot-based techniques proving particularly suitable for hands-free interaction. Whereas previous research only considered the movement of the foot as a whole, in this work, we argue that our toes offer further degrees of freedom that can be leveraged for interaction. To explore the viability of toe-based interaction, we contribute the results of a controlled experiment with 18 participants assessing the impact of five factors on the accuracy, efficiency and user experience of such interfaces. Based on the findings, we provide design recommendations for future toe-based interfaces.}
}

[CHI '23] UndoPort: Exploring the Influence of Undo-Actions for Locomotion in Virtual Reality on the Efficiency, Spatial Understanding and User Experience

F. Müller, Arantxa, D. Schön, J. Rasch

ABSTRACT - When we get lost in Virtual Reality (VR) or want to return to a previous location, we use the same methods of locomotion for the way back as for the way forward. This is time-consuming and requires additional physical orientation changes, increasing the risk of getting tangled in the headsets' cables. In this paper, we propose the use of undo actions to revert locomotion steps in VR. We explore eight different variations of undo actions as extensions of point&teleport, based on the possibility to undo position and orientation changes together with two different visualizations of the undo step (discrete and continuous). We contribute the results of a controlled experiment with 24 participants investigating the efficiency and orientation of the undo techniques in a radial maze task. We found that the combination of position and orientation undo together with a discrete visualization resulted in the highest efficiency without increasing orientation errors.

In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
10.1145/3544548.3581557    Teaser Video    Full Video   
@inproceedings{mueller2023undoport,
  title={UndoPort: Exploring the Influence of Undo-Actions for Locomotion in Virtual Reality on the Efficiency, Spatial Understanding and User Experience},
  author={M\"{u}ller, Florian and Arantxa and Sch\"{o}n, Dominik and Rasch, Julian},
  booktitle={Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems},
  year={2023},
  month={apr},
  series={CHI '23},
  publisher={ACM},
  url={https://doi.org/10.1145/3544548.3581557},
  doi = {10.1145/3544548.3581557},
  teaservideo={https://www.youtube.com/watch?v=16ylRYe7WVk},
  video={https://www.youtube.com/watch?v=pcifaRvG0yA},
  abstract={When we get lost in Virtual Reality (VR) or want to return to a previous location, we use the same methods of locomotion for the way back as for the way forward. This is time-consuming and requires additional physical orientation changes, increasing the risk of getting tangled in the headsets' cables. In this paper, we propose the use of undo actions to revert locomotion steps in VR. We explore eight different variations of undo actions as extensions of point&teleport, based on the possibility to undo position and orientation changes together with two different visualizations of the undo step (discrete and continuous). We contribute the results of a controlled experiment with 24 participants investigating the efficiency and orientation of the undo techniques in a radial maze task. We found that the combination of position and orientation undo together with a discrete visualization resulted in the highest efficiency without increasing orientation errors.}
}
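
Mechanically, undo for point&teleport locomotion amounts to a history stack of pose snapshots: the paper's position + orientation undo restores both fields, while a position-only variant keeps the user's current heading. A minimal TypeScript sketch (ours, not the paper's implementation):

// Hypothetical pose history for point&teleport: each teleport pushes the
// pose it departed from, and undo pops back to it.
interface Pose {
  position: [number, number, number];
  yaw: number; // orientation reduced to heading, for brevity
}

class LocomotionHistory {
  private stack: Pose[] = [];

  teleport(current: Pose, target: Pose, apply: (p: Pose) => void): void {
    this.stack.push({ yaw: current.yaw, position: [...current.position] });
    apply(target);
  }

  // Discrete undo: jump straight back in one step. The position-only
  // variant restores `position` but keeps the user's current yaw.
  undo(currentYaw: number, apply: (p: Pose) => void, positionOnly = false): void {
    const prev = this.stack.pop();
    if (!prev) return;
    apply(positionOnly ? { ...prev, yaw: currentYaw } : prev);
  }
}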

[CHI '23] “Nah, It’s Just Annoying!” A Deep Dive into User Perceptions of Two-Factor Authentication

K. Marky, K. Ragozin, G. Chernyshov, A. Matviienko, M. Schmitz, M. Mühlhäuser, C. Eghtebas, K. Kunze

ABSTRACT - Two-factor authentication (2FA) is a recommended or imposed authentication mechanism for valuable online assets. However, 2FA mechanisms usually exhibit user experience issues that create user friction and even lead to poor acceptance, hampering the wider spread of 2FA. In this article, we investigate user perceptions of 2FA through in-depth interviews with 42 participants, revealing key requirements that are not well met today despite recently emerged 2FA solutions. First, we investigate past experiences with authentication mechanisms emphasizing problems and aspects that hamper good user experience. Second, we investigate the different authentication factors more closely. Our results reveal particularly interesting preferences regarding the authentication factor “ownership” in terms of properties, physical realizations, and interaction. These findings suggest a path toward 2FA mechanisms with considerably better user experience, promising to improve the acceptance and hence, the proliferation of 2FA for the benefit of security in the digital world.

In ACM Trans. Comput.-Hum. Interact.
10.1145/3503514   
@article{marky2022annoying,
  author = {Marky, Karola and Ragozin, Kirill and Chernyshov, George and Matviienko, Andrii and Schmitz, Martin and M\"{u}hlh\"{a}user, Max and Eghtebas, Chloe and Kunze, Kai},
title = {“Nah, It’s Just Annoying!” A Deep Dive into User Perceptions of Two-Factor Authentication},
year = {2022},
issue_date = {October 2022},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {29},
number = {5},
issn = {1073-0516},
url = {https://doi.org/10.1145/3503514},
doi = {10.1145/3503514},
abstract = {Two-factor authentication (2FA) is a recommended or imposed authentication mechanism for valuable online assets. However, 2FA mechanisms usually exhibit user experience issues that create user friction and even lead to poor acceptance, hampering the wider spread of 2FA. In this article, we investigate user perceptions of 2FA through in-depth interviews with 42 participants, revealing key requirements that are not well met today despite recently emerged 2FA solutions. First, we investigate past experiences with authentication mechanisms emphasizing problems and aspects that hamper good user experience. Second, we investigate the different authentication factors more closely. Our results reveal particularly interesting preferences regarding the authentication factor “ownership” in terms of properties, physical realizations, and interaction. These findings suggest a path toward 2FA mechanisms with considerably better user experience, promising to improve the acceptance and hence, the proliferation of 2FA for the benefit of security in the digital world.},
journal = {ACM Trans. Comput.-Hum. Interact.},
series={CHI '23},
month = {oct},
articleno = {43},
numpages = {32},
keywords = {usability, user experience, human factors, Two-factor authentication}
}

[CHI '23] What does it mean to cycle in Virtual Reality? Exploring Cycling Fidelity and Control of VR Bicycle Simulators

A. Matviienko, H. Hoxha, M. Mühlhäuser

ABSTRACT - Creating highly realistic Virtual Reality (VR) bicycle experiences can be time-consuming and expensive. Moreover, it is unclear what hardware parts are necessary to design a bicycle simulator and whether a bicycle is needed at all. In this paper, we investigated cycling fidelity and control of VR bicycle simulators. For this, we developed and evaluated three cycling simulators: (1) cycling without a bicycle (bikeless), (2) cycling on a fixed (stationary) and (3) moving bicycle (tandem) with four levels of control (no control, steering, pedaling, and steering + pedaling). To evaluate all combinations of fidelity and control, we conducted a controlled experiment (N = 24) in indoor and outdoor settings. We found that the bikeless setup provides the highest feeling of safety, while the tandem leads to the highest realism without increasing motion sickness. Moreover, we discovered that bicycles are not essential for cycling in VR.

In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
10.1145/3544548.3581050   
@inproceedings{matviienko2023vrcycling,
  title={What does it mean to cycle in Virtual Reality? Exploring Cycling Fidelity and Control of VR Bicycle Simulators},
  author={Matviienko, Andrii and Hoxha, Hajris and M\"{u}hlh\"{a}user, Max},
  booktitle={Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems},
  year={2023},
  month={apr},
  series={CHI '23},
  publisher={ACM},
  url={https://doi.org/10.1145/3544548.3581050},
  doi = {10.1145/3544548.3581050},
  abstract={Creating highly realistic Virtual Reality (VR) bicycle experiences can be time-consuming and expensive. Moreover, it is unclear what hardware parts are necessary to design a bicycle simulator and whether a bicycle is needed at all. In this paper, we investigated cycling fidelity and control of VR bicycle simulators. For this, we developed and evaluated three cycling simulators: (1) cycling without a bicycle (bikeless), (2) cycling on a fixed (stationary) and (3) moving bicycle (tandem) with four levels of control (no control, steering, pedaling, and steering + pedaling). To evaluate all combinations of fidelity and control, we conducted a controlled experiment (N = 24) in indoor and outdoor settings. We found that the bikeless setup provides the highest feeling of safety, while the tandem leads to the highest realism without increasing motion sickness. Moreover, we discovered that bicycles are not essential for cycling in VR.}
}

[CHI '23 EA] Exploring the Perception of Pain in Virtual Reality using Perceptual Manipulations

G. Clavelin, M. Bouhier, W. Tseng, J. Gugenheimer

ABSTRACT - Perceptual manipulations (PMs) in Virtual Reality (VR) can steer users’ actions (e.g., redirection techniques) and amplify haptic perceptions (e.g., weight). However, their ability to amplify or induce negative perceptions such as physical pain is not well understood. In this work, we explore if PMs can be leveraged to induce the perception of pain, without modifying the physical stimulus. We implemented a VR experience combined with a haptic prototype, simulating the dislocation of a finger. A user study (n=18) compared three conditions (visual-only, haptic-only and combined) on the perception of physical pain and physical discomfort. We observed that using PMs with a haptic device resulted in a significantly higher perception of physical discomfort and an increase in the perception of pain compared to the unmodified sensation (haptic-only). Finally, we discuss how perception of pain can be leveraged in future VR applications and reflect on ethical concerns.

In CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI '23 Extended Abstracts)
10.1145/3544549.3585674   
@inproceedings{clavelin2023painperception,
  title={Exploring the Perception of Pain in Virtual Reality using Perceptual Manipulations},
  author={Clavelin, Gaelle and Bouhier, Mickael and Tseng, Wen-Jie and Gugenheimer, Jan},
  booktitle={CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI '23 Extended Abstracts)},
  year={2023},
  month={apr},
  series={CHI '23 EA},
  publisher={ACM},
  url={https://doi.org/10.1145/3544549.3585674},
  doi = {10.1145/3544549.3585674},
  abstract={Perceptual manipulations (PMs) in Virtual Reality (VR) can steer users’ actions (e.g., redirection techniques) and amplify haptic perceptions (e.g., weight). However, their ability to amplify or induce negative perceptions such as physical pain is not well understood. In this work, we explore if PMs can be leveraged to induce the perception of pain, without modifying the physical stimulus. We implemented a VR experience combined with a haptic prototype, simulating the dislocation of a finger. A user study (n=18) compared three conditions (visual-only, haptic-only and combined) on the perception of physical pain and physical discomfort. We observed that using PMs with a haptic device resulted in a significantly higher perception of physical discomfort and an increase in the perception of pain compared to the unmodified sensation (haptic-only). Finally, we discuss how perception of pain can be leveraged in future VR applications and reflect on ethical concerns.}
}

[CHI '23 EA] Text Me if You Can: Investigating Text Input Methods for Cyclists

A. Matviienko, J. Durand-Pierre, J. Cvancar, M. Mühlhäuser

ABSTRACT - Cycling is emerging as a relevant alternative to cars. However, the more people commute by bicycle, the higher the number of cyclists who use their smartphones on the go and endanger road safety. To better understand input while cycling, in this paper, we present the design and evaluation of three text input methods for cyclists: (1) touch input using smartphones, (2) midair input using a Microsoft HoloLens 2, and (3) a set of ten physical buttons placed on both sides of the handlebar. We conducted a controlled indoor experiment (N = 12) on a bicycle simulator to evaluate these input methods. We found that text input via touch input was faster and less mentally demanding than input with midair gestures and physical buttons. However, the midair gestures were the least error-prone, and the physical buttons facilitated keeping both hands on the handlebars and were more intuitive and less distracting.

In CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI '23 Extended Abstracts)
10.1145/3544549.3585734   
@inproceedings{matviienko2023textme,
  title={Text Me if You Can: Investigating Text Input Methods for Cyclists},
  author={Matviienko, Andrii and Durand-Pierre, Jean-Baptiste and Cvancar, Jona and M\"{u}hlh\"{a}user, Max},
  booktitle={CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI '23 Extended Abstracts)},
  year={2023},
  month={apr},
  series={CHI '23 EA},
  publisher={ACM},
  url={https://doi.org/10.1145/3544549.3585734},
  doi = {10.1145/3544549.3585734},
  abstract={Cycling is emerging as a relevant alternative to cars. However, the more people commute by bicycle, the higher the number of cyclists who use their smartphones on the go and endanger road safety. To better understand input while cycling, in this paper, we present the design and evaluation of three text input methods for cyclists: (1) touch input using smartphones, (2) midair input using a Microsoft HoloLens 2, and (3) a set of ten physical buttons placed on both sides of the handlebar. We conducted a controlled indoor experiment (N = 12) on a bicycle simulator to evaluate these input methods. We found that text input via touch input was faster and less mentally demanding than input with midair gestures and physical buttons. However, the midair gestures were the least error-prone, and the physical buttons facilitated keeping both hands on the handlebars and were more intuitive and less distracting.}
}

Research at TK

a selection of work done by the Telecooperation Lab

[MHCI '22 Adjunct] Comparing VR Exploration Support for Ground-Based Rescue Robots

J. Von Willich, A. Matviienko, S. Günther, M. Mühlhäuser

ABSTRACT - Rescue robots have been extensively used in crisis situations for exploring dangerous areas. This exploration is usually facilitated via a remote operation by the rescue team. Although Virtual Reality (VR) was proposed to facilitate remote control due to its high level of immersion and situation awareness, we still lack intuitive and easy-to-use operation modes for search and rescue teams in VR environments. In this work, we propose four operation modes for ground-based rescue robots to support efficient search and rescue: (a) Handle Mode, (b) Lab Mode, (c) Remote Mode, and (d) UI Mode. We evaluated these operation modes in a controlled lab experiment (N = 8) in terms of robot collisions, number of rescued victims, and mental load. Our results indicate that control modes with robot automation (UI and Remote mode) outperform modes with full control given to participants. In particular, we discovered that UI and Remote Mode lead to the lowest number of collisions, driving time, visible victims remaining, rescued victims, and mental load.

In Adjunct Publication of the 24th International Conference on Human-Computer Interaction with Mobile Devices and Services
10.1145/3528575.3551440    PDF   
@inproceedings{willich2022comparing,
author = {Von Willich, Julius and Matviienko, Andrii and G\"{u}nther, Sebastian and M\"{u}hlh\"{a}user, Max},
title = {Comparing VR Exploration Support for Ground-Based Rescue Robots},
booktitle = {Adjunct Publication of the 24th International Conference on Human-Computer Interaction with Mobile Devices and Services},
series = {MHCI '22 Adjunct},
year = {2022},
isbn = {9781450393416},
location = {Vancouver, BC, Canada},
url = {https://doi.org/10.1145/3528575.3551440},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2022/willich2022comparing.pdf},
doi = {10.1145/3528575.3551440},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {rescue robots, virtual reality, operation concepts, interaction techniques},
abstract = {Rescue robots have been extensively used in crisis situations for exploring dangerous areas. This exploration is usually facilitated via a remote operation by the rescue team. Although Virtual Reality (VR) was proposed to facilitate remote control due to its high level of immersion and situation awareness, we still lack intuitive and easy-to-use operation modes for search and rescue teams in VR environments. In this work, we propose four operation modes for ground-based rescue robots to support efficient search and rescue: (a) Handle Mode, (b) Lab Mode, (c) Remote Mode, and (d) UI Mode. We evaluated these operation modes in a controlled lab experiment (N = 8) in terms of robot collisions, number of rescued victims, and mental load. Our results indicate that control modes with robot automation (UI and Remote mode) outperform modes with full control given to participants. In particular, we discovered that UI and Remote Mode lead to the lowest number of collisions, driving time, visible victims remaining, rescued victims, and mental load.}
}

[MHCI '22] NotiBike: Assessing Target Selection Techniques for Cyclist Notifications in Augmented Reality

T. Kosch, A. Matviienko, F. Müller, J. Bersch, C. Katins, D. Schön, M. Mühlhäuser

ABSTRACT - Cyclists' attention is often compromised when interacting with notifications in traffic, hence increasing the likelihood of road accidents. To address this issue, we evaluate three notification interaction modalities and investigate their impact on the interaction performance while cycling: gaze-based Dwell Time, Gestures, and Manual And Gaze Input Cascaded (MAGIC) Pointing. In a user study (N=18), participants confirmed notifications in Augmented Reality (AR) using the three interaction modalities in a simulated biking scenario. We assessed the efficiency regarding reaction times, error rates, and perceived task load. Our results show significantly faster response times for MAGIC Pointing compared to Dwell Time and Gestures, while Dwell Time led to a significantly lower error rate compared to Gestures. Participants favored the MAGIC Pointing approach, supporting cyclists in AR selection tasks. Our research sets the boundaries for more comfortable and easier interaction with notifications and discusses implications for target selections in AR while cycling.

In Proceedings of the ACM on Human-Computer Interaction, MobileHCI
10.1145/3546732    PDF    Full Video   
@article{Kosch2022notibike,
author = {Kosch, Thomas and Matviienko, Andrii and M\"{u}ller, Florian and Bersch, Jessica and Katins, Christopher and Sch\"{o}n, Dominik and M\"{u}hlh\"{a}user, Max},
title = {NotiBike: Assessing Target Selection Techniques for Cyclist Notifications in Augmented Reality},
year = {2022},
issue_date = {September 2022},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {6},
number = {MHCI},
url = {https://doi.org/10.1145/3546732},
doi = {10.1145/3546732},
abstract = {Cyclists' attention is often compromised when interacting with notifications in traffic, hence increasing the likelihood of road accidents. To address this issue, we evaluate three notification interaction modalities and investigate their impact on the interaction performance while cycling: gaze-based Dwell Time, Gestures, and Manual And Gaze Input Cascaded (MAGIC) Pointing. In a user study (N=18), participants confirmed notifications in Augmented Reality (AR) using the three interaction modalities in a simulated biking scenario. We assessed the efficiency regarding reaction times, error rates, and perceived task load. Our results show significantly faster response times for MAGIC Pointing compared to Dwell Time and Gestures, while Dwell Time led to a significantly lower error rate compared to Gestures. Participants favored the MAGIC Pointing approach, supporting cyclists in AR selection tasks. Our research sets the boundaries for more comfortable and easier interaction with notifications and discusses implications for target selections in AR while cycling.},
journal = {Proceedings of the ACM on Human-Computer Interaction, MobileHCI},
month = {sep},
articleno = {197},
numpages = {24},
keywords = {cycling, augmented reality, selection, notifications},
series = {MHCI '22},
 video = {https://www.youtube.com/watch?v=hTYBTULau7U},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2022/Kosch2022NotiBike.pdf}
}
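
The MAGIC Pointing condition above builds on Zhai et al.'s Manual And Gaze Input Cascaded pointing: gaze does the coarse positioning, and manual input does the fine positioning and confirmation. The Python sketch below illustrates that general technique only; the threshold value and all names are illustrative assumptions, not the study's implementation.

from dataclasses import dataclass
import math

@dataclass
class Cursor:
    x: float = 0.0
    y: float = 0.0

# Warp only on large gaze jumps so fixation jitter is ignored
# (the 120 px threshold is an assumed value, not from the paper).
WARP_THRESHOLD_PX = 120.0

def update_cursor(cursor: Cursor, gaze: tuple, manual: tuple) -> Cursor:
    """Coarse phase: warp the cursor to the gaze point on a large saccade.
    Fine phase: manual input (e.g., a button pad) refines the position."""
    gx, gy = gaze
    if math.hypot(gx - cursor.x, gy - cursor.y) > WARP_THRESHOLD_PX:
        cursor.x, cursor.y = gx, gy
    dx, dy = manual
    cursor.x += dx
    cursor.y += dy
    return cursor

def confirm_selection(cursor: Cursor, target: tuple, radius: float) -> bool:
    """A manual confirm action selects the notification under the cursor."""
    tx, ty = target
    return math.hypot(cursor.x - tx, cursor.y - ty) <= radius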

[MHCI '22] AR Sightseeing: Comparing Information Placements at Outdoor Historical Heritage Sites Using Augmented Reality

A. Matviienko, S. Günther, S. Ritzenhofen, M. Mühlhäuser

ABSTRACT - Augmented Reality (AR) has influenced the presentation of historical information to tourists and museum visitors by making the information more immersive and engaging. Since smartphones and AR glasses are the primary devices to present AR information to users, it is essential to understand how the information about a historical site can be presented effectively and what type of device is best suited for information placements. In this paper, we investigate the placement of two types of content, historical images and informational text, for smartphones and AR glasses in the context of outdoor historical sites. For this, we explore three types of placements: (1) on-body, (2) world, and (3) overlay. To evaluate all nine combinations of text and image placements for smartphone and AR glasses, we conducted a controlled experiment (N = 18) at outdoor historical landmarks. We discovered that on-body image and text placements were the most convenient compared to overlay and world for both devices. Furthermore, participants found themselves more successful in exploring historical sites using a smartphone than AR glasses. Although interaction with a smartphone was more convenient, participants found exploring AR content using AR glasses more fun.

In Proceedings of the ACM on Human-Computer Interaction, MobileHCI
10.1145/3546729    PDF   
@article{Matviienko2022arsightseeing,
author = {Matviienko, Andrii and G\"{u}nther, Sebastian and Ritzenhofen, Sebastian and M\"{u}hlh\"{a}user, Max},
title = {AR Sightseeing: Comparing Information Placements at Outdoor Historical Heritage Sites Using Augmented Reality},
year = {2022},
issue_date = {September 2022},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {6},
number = {MHCI},
url = {https://doi.org/10.1145/3546729},
doi = {10.1145/3546729},
abstract = {Augmented Reality (AR) has influenced the presentation of historical information to tourists and museum visitors by making the information more immersive and engaging. Since smartphones and AR glasses are the primary devices to present AR information to users, it is essential to understand how the information about a historical site can be presented effectively and what type of device is best suited for information placements. In this paper, we investigate the placement of two types of content, historical images and informational text, for smartphones and AR glasses in the context of outdoor historical sites. For this, we explore three types of placements: (1) on-body, (2) world, and (3) overlay. To evaluate all nine combinations of text and image placements for smartphone and AR glasses, we conducted a controlled experiment (N = 18) at outdoor historical landmarks. We discovered that on-body image and text placements were the most convenient compared to overlay and world for both devices. Furthermore, participants found themselves more successful in exploring historical sites using a smartphone than AR glasses. Although interaction with a smartphone was more convenient, participants found exploring AR content using AR glasses more fun.},
journal = {Proceedings of the ACM on Human-Computer Interaction, MobileHCI},
month = {sep},
articleno = {194},
numpages = {17},
keywords = {augmented reality, information placement, sightseeing, historical heritage},
series = {MHCI '22},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2022/Matviienko2022ARsightseeing.pdf}
}

[MHCI '22] "Baby, You Can Ride My Bike": Exploring Maneuver Indications of Self-Driving Bicycles Using a Tandem Simulator

A. Matviienko, D. Mehmedovic, F. Müller, M. Mühlhäuser

ABSTRACT - We envision a future where self-driving bicycles can take us to our destinations. This allows cyclists to use their time on the bike efficiently for work or relaxation without having to focus their attention on traffic. In the related field of self-driving cars, research has shown that communicating the planned route to passengers plays an important role in building trust in automation and situational awareness. For self-driving bicycles, this information transfer will be even more important, as riders will need to actively compensate for the movement of a self-driving bicycle to maintain balance. In this paper, we investigate maneuver indications for self-driving bicycles: (1) ambient light in a helmet, (2) head-up display indications, (3) speech feedback, (4) vibration on the handlebar, and (5) no assistance. To evaluate these indications, we conducted an outdoor experiment (N = 25) in a proposed tandem simulator consisting of a tandem bicycle with a steering and braking control on the back seat and a rider in full control of it. Our results indicate that riders respond faster to visual cues and focus comparably on the reading task while riding with and without maneuver indications. Additionally, we found that the tandem simulator is realistic, safe, and creates an awareness of a human cyclist controlling the tandem.

In Proceedings of the ACM on Human-Computer Interaction, MobileHCI
10.1145/3546723    PDF    Full Video   
@article{Matviienko2022ridemybike,
author = {Matviienko, Andrii and Mehmedovic, Damir and M\"{u}ller, Florian and M\"{u}hlh\"{a}user, Max},
title = {"Baby, You Can Ride My Bike": Exploring Maneuver Indications of Self-Driving Bicycles Using a Tandem Simulator},
year = {2022},
issue_date = {September 2022},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {6},
number = {MHCI},
url = {https://doi.org/10.1145/3546723},
doi = {10.1145/3546723},
abstract = {We envision a future where self-driving bicycles can take us to our destinations. This allows cyclists to use their time on the bike efficiently for work or relaxation without having to focus their attention on traffic. In the related field of self-driving cars, research has shown that communicating the planned route to passengers plays an important role in building trust in automation and situational awareness. For self-driving bicycles, this information transfer will be even more important, as riders will need to actively compensate for the movement of a self-driving bicycle to maintain balance. In this paper, we investigate maneuver indications for self-driving bicycles: (1) ambient light in a helmet, (2) head-up display indications, (3) speech feedback, (4) vibration on the handlebar, and (5) no assistance. To evaluate these indications, we conducted an outdoor experiment (N = 25) in a proposed tandem simulator consisting of a tandem bicycle with a steering and braking control on the back seat and a rider in full control of it. Our results indicate that riders respond faster to visual cues and focus comparably on the reading task while riding with and without maneuver indications. Additionally, we found that the tandem simulator is realistic, safe, and creates an awareness of a human cyclist controlling the tandem.},
journal = {Proceedings of the ACM on Human-Computer Interaction, MobileHCI},
month = {sep},
articleno = {188},
numpages = {21},
keywords = {maneuver indications, tandem, self-driving bicycles},
series = {MHCI '22},
 video = {https://www.youtube.com/watch?v=czOciHFRDk4},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2022/Matviienko2022Baby.pdf}
}

[CHI '22] Squeezy-Feely: Investigating Lateral Thumb-Index Pinching as an Input Modality

M. Schmitz, S. Günther, D. Schön, F. Müller

ABSTRACT - From zooming on smartphones and mid-air gestures to deformable user interfaces, thumb-index pinching grips are used in many interaction techniques. However, there is still a lack of systematic understanding of how the accuracy and efficiency of such grips are affected by various factors such as counterforce, grip span, and grip direction. Therefore, in this paper, we contribute an evaluation (N = 18) of thumb-index pinching performance in a visual targeting task using scales up to 75 items. As part of our findings, we conclude that the pinching interaction between the thumb and index finger is a promising modality also for one-dimensional input on higher scales. Furthermore, we discuss and outline implications for future user interfaces that benefit from pinching as an additional and complementary interaction modality.

In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22)
10.1145/3491102.3501981    PDF    Teaser Video   
@inproceedings{Schmitz2022squeezyfeely,
address = {New York, NY, USA},
author = {Schmitz, Martin and G\"{u}nther, Sebastian and Sch\"{o}n, Dominik and M\"{u}ller, Florian},
booktitle = {Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22)},
doi = {10.1145/3491102.3501981},
isbn = {978-1-4503-9157-3/22/04},
keywords = {Input, Pinching, Deformation, Mixed Reality, Thumb-to-finger, User Studies},
month = {apr},
publisher = {ACM},
title = {Squeezy-Feely: Investigating Lateral Thumb-Index Pinching as an Input Modality},
url = {https://doi.org/10.1145/3491102.3501981},
year = {2022},
abstract = {From zooming on smartphones and mid-air gestures to deformable user interfaces, thumb-index pinching grips are used in many interaction techniques. However, there is still a lack of systematic understanding of how the accuracy and efficiency of such grips are affected by various factors such as counterforce, grip span, and grip direction. Therefore, in this paper, we contribute an evaluation (N = 18) of thumb-index pinching performance in a visual targeting task using scales up to 75 items. As part of our findings, we conclude that the pinching interaction between the thumb and index finger is a promising modality also for one-dimensional input on higher scales. Furthermore, we discuss and outline implications for future user interfaces that benefit from pinching as an additional and complementary interaction modality.},
series = {CHI '22},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2022/schmitz2022squeezyfeely.pdf},
 teaservideo = {https://www.youtube.com/watch?v=DW23J3CalFw},
 award={Best Paper},
 note={Best Paper Award}
}

[CHI '22] SkyPort: Investigating 3D Teleportation Methods in Virtual Environments

A. Matviienko, F. Müller, M. Schmitz, M. Fendrich, M. Mühlhäuser

ABSTRACT - Teleportation has become the de facto standard of locomotion in Virtual Reality (VR) environments. However, teleportation with parabolic and linear target aiming methods is restricted to horizontal 2D planes and it is unknown how they transfer to the 3D space. In this paper, we propose six 3D teleportation methods in virtual environments based on the combination of two existing aiming methods (linear and parabolic) and three types of transitioning to a target (instant, interpolated and continuous). To investigate the performance of the proposed teleportation methods, we conducted a controlled lab experiment (N = 24) with a mid-air coin collection task to assess accuracy, efficiency and VR sickness. We discovered that the linear aiming method leads to faster and more accurate target selection. Moreover, a combination of linear aiming and instant transitioning leads to the highest efficiency and accuracy without increasing VR sickness.

In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22)
10.1145/3491102.3501983    PDF   
@inproceedings{Matviienko2022skyport,
author = {Matviienko, Andrii and Müller, Florian and Schmitz, Martin and Fendrich, Marco and Mühlhäuser, Max},
title = {SkyPort: Investigating 3D Teleportation Methods in Virtual Environments},
year = {2022},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3491102.3501983},
doi = {10.1145/3491102.3501983},
keywords = {virtual reality, teleportation, locomotion, virtual environments},
location = {New Orleans, LA, USA},
series = {CHI '22},
booktitle = {Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22)},
award={Honorable Mention},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2022/Matviienko2022skyport.pdf},
 abstract={Teleportation has become the de facto standard of locomotion in Virtual Reality (VR) environments. However, teleportation with parabolic and linear target aiming methods is restricted to horizontal 2D planes and it is unknown how they transfer to the 3D space. In this paper, we propose six 3D teleportation methods in virtual environments based on the combination of two existing aiming methods (linear and parabolic) and three types of transitioning to a target (instant, interpolated and continuous). To investigate the performance of the proposed teleportation methods, we conducted a controlled lab experiment (N = 24) with a mid-air coin collection task to assess accuracy, efficiency and VR sickness. We discovered that the linear aiming method leads to faster and more accurate target selection. Moreover, a combination of linear aiming and instant transitioning leads to the highest efficiency and accuracy without increasing VR sickness.}
}
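
For readers unfamiliar with the two aiming methods: a linear ray points straight out from the controller, while a parabolic arc follows projectile motion and is sampled until it reaches geometry. The Python sketch below is a rough illustration under assumed vector conventions, not the paper's code.

# Assumed y-up convention; vectors are (x, y, z) tuples.
GRAVITY = (0.0, -9.81, 0.0)

def linear_target(origin, direction, distance):
    """Linear aiming: a straight ray from the controller."""
    return tuple(o + d * distance for o, d in zip(origin, direction))

def parabolic_arc(origin, velocity, steps=32, dt=0.05):
    """Parabolic aiming: sample p(t) = origin + v*t + 0.5*g*t^2."""
    points = []
    for i in range(1, steps + 1):
        t = i * dt
        points.append(tuple(
            o + v * t + 0.5 * g * t * t
            for o, v, g in zip(origin, velocity, GRAVITY)))
    return points

# e.g. aim slightly upward and take the last sampled point as the target:
# arc = parabolic_arc((0, 1.5, 0), (0, 2.0, 5.0)); target = arc[-1]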


[CHI '22] Smooth as Steel Wool: Effects of Visual Stimuli on the Haptic Perception of Roughness in Virtual Reality

S. Günther, J. Rasch, D. Schön, F. Müller, M. Schmitz, J. Riemann, A. Matviienko, M. Mühlhäuser

ABSTRACT - Haptic Feedback is essential for lifelike Virtual Reality (VR) experiences. To provide a wide range of matching sensations of being touched or stroked, current approaches typically need large numbers of different physical textures. However, even advanced devices can only accommodate a limited number of textures to remain wearable. Therefore, a better understanding is necessary of how expectations elicited by different visualizations affect haptic perception, to achieve a balance between physical constraints and great variety of matching physical textures. In this work, we conducted an experiment (N=31) assessing how the perception of roughness is affected within VR. We designed a prototype for arm stroking and compared the effects of different visualizations on the perception of physical textures with distinct roughnesses. Additionally, we used the visualizations' real-world materials, no-haptics and vibrotactile feedback as baselines. As one result, we found that two levels of roughness can be sufficient to convey a realistic illusion.

In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22)
10.1145/3491102.3517454    PDF    Teaser Video    Full Video   
@inproceedings{Guenther2022smooth,
address = {New York, NY, USA},
author = {G\"{u}nther, Sebastian and Rasch, Julian and Sch\"{o}n, Dominik and M\"{u}ller, Florian and Schmitz, Martin and Riemann, Jan and Matviienko, Andrii and M\"{u}hlh\"{a}user, Max},
booktitle = {Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22)},
doi = {10.1145/3491102.3517454},
isbn = {978-1-4503-9157-3/22/04},
keywords = {haptic,smooth,stimuli,stroke,visual,visualizations},
month = {apr},
publisher = {ACM},
title = {Smooth as Steel Wool: Effects of Visual Stimuli on the Haptic Perception of Roughness in Virtual Reality},
url = {https://dl.acm.org/doi/10.1145/3491102.3517454},
year = {2022},
abstract = {Haptic Feedback is essential for lifelike Virtual Reality (VR) experiences. To provide a wide range of matching sensations of being touched or stroked, current approaches typically need large numbers of different physical textures. However, even advanced devices can only accommodate a limited number of textures to remain wearable. Therefore, a better understanding is necessary of how expectations elicited by different visualizations affect haptic perception, to achieve a balance between physical constraints and great variety of matching physical textures. In this work, we conducted an experiment (N=31) assessing how the perception of roughness is affected within VR. We designed a prototype for arm stroking and compared the effects of different visualizations on the perception of physical textures with distinct roughnesses. Additionally, we used the visualizations' real-world materials, no-haptics and vibrotactile feedback as baselines. As one result, we found that two levels of roughness can be sufficient to convey a realistic illusion.},
series = {CHI '22},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2022/Guenther2022smooth.pdf},
 video = {https://www.youtube.com/watch?v=9q6zZCJ9rLg},
 teaservideo = {https://www.youtube.com/watch?v=glEOP48qVCE}
}

[CHI '22] BikeAR: Understanding Cyclists' Crossing Decision-Making at Uncontrolled Intersections using Augmented Reality

A. Matviienko, F. Müller, D. Schön, P. Seesemann, S. Günther, M. Mühlhäuser

ABSTRACT - Cycling has become increasingly popular as a means of transportation. However, cyclists remain a highly vulnerable group of road users. According to accident reports, one of the most dangerous situations for cyclists are uncontrolled intersections, where cars approach from both directions. To address this issue and assist cyclists in crossing decision-making at uncontrolled intersections, we designed two visualizations that: (1) highlight occluded cars through an X-ray vision and (2) depict the remaining time the intersection is safe to cross via a Countdown. To investigate the efficiency of these visualizations, we proposed an Augmented Reality simulation as a novel evaluation method, in which the above visualizations are represented as AR, and conducted a controlled experiment with 24 participants indoors. We found that the X-ray ensures a fast selection of shorter gaps between cars, while the Countdown facilitates a feeling of safety and provides a better intersection overview.

In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22)
10.1145/3491102.3517560    PDF    Full Video   
@inproceedings{Matviienko2022bikear,
author = {Matviienko, Andrii and Müller, Florian and Schön, Dominik and Seesemann, Paul and Günther, Sebastian and Mühlhäuser, Max},
title = {BikeAR: Understanding Cyclists' Crossing Decision-Making at Uncontrolled Intersections using Augmented Reality},
year = {2022},
  booktitle = {Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22)},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3491102.3517560},
doi = {10.1145/3491102.3517560},
keywords = {augmented reality, cyclist safety, crossing decision-making},
location = {New Orleans, LA, USA},
series = {CHI '22},
abstract = {Cycling has become increasingly popular as a means of transportation. However, cyclists remain a highly vulnerable group of road users. According to accident reports, one of the most dangerous situations for cyclists are uncontrolled intersections, where cars approach from both directions. To address this issue and assist cyclists in crossing decision-making at uncontrolled intersections, we designed two visualizations that: (1) highlight occluded cars through an X-ray vision and (2) depict the remaining time the intersection is safe to cross via a Countdown. To investigate the efficiency of these visualizations, we proposed an Augmented Reality simulation as a novel evaluation method, in which the above visualizations are represented as AR, and conducted a controlled experiment with 24 participants indoors. We found that the X-ray ensures a fast selection of shorter gaps between cars, while the Countdown facilitates a feeling of safety and provides a better intersection overview.},
video={https://www.youtube.com/watch?v=YKsDlPmSd68},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2022/Matviienko2022bikear.pdf}
}

[CHI '22] Reducing Virtual Reality Sickness for Cyclists in VR Bicycle Simulators

A. Matviienko, F. Müller, M. Zickler, L. Gasche, J. Abels, T. Steinert, M. Mühlhäuser

ABSTRACT - Virtual Reality (VR) bicycle simulations aim to recreate the feeling of riding a bicycle and are commonly used in many application areas. However, current solutions still create mismatches between the visuals and physical movement, which causes VR sickness and diminishes the cycling experience. To reduce VR sickness in bicycle simulators, we conducted two controlled lab experiments addressing two main causes of VR sickness: (1) steering methods and (2) cycling trajectory. In the first experiment (N = 18) we compared handlebar, HMD, and upper-body steering methods. In the second experiment (N = 24) we explored three types of movement in VR (1D, 2D, and 3D trajectories) and three countermeasures (airflow, vibration, and dynamic Field-of-View) to reduce VR sickness. We found that handlebar steering leads to the lowest VR sickness without decreasing cycling performance, and that airflow appears to be the most promising method to reduce VR sickness for all three types of trajectories.

In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22)
10.1145/3491102.3501959    PDF   
@inproceedings{Matviienko2022reducingmotionsickness,
author = {Matviienko, Andrii and Müller, Florian and Zickler, Marcel and Gasche, Lisa and Abels, Julia and Steinert, Till and Mühlhäuser, Max},
title = {Reducing Virtual Reality Sickness for Cyclists in VR Bicycle Simulators},
year = {2022},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3491102.3501959},
doi = {10.1145/3491102.3501959},
keywords = {virtual reality, cycling, VR sickness, bicycle simulators},
location = {New Orleans, LA, USA},
series = {CHI '22},
booktitle = {Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22)},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2022/Matviienko2022reducingmotionsickness.pdf},
 abstract={Virtual Reality (VR) bicycle simulations aim to recreate the feeling of riding a bicycle and are commonly used in many application areas. However, current solutions still create mismatches between the visuals and physical movement, which causes VR sickness and diminishes the cycling experience. To reduce VR sickness in bicycle simulators, we conducted two controlled lab experiments addressing two main causes of VR sickness: (1) steering methods and (2) cycling trajectory. In the first experiment (N = 18) we compared handlebar, HMD, and upper-body steering methods. In the second experiment (N = 24) we explored three types of movement in VR (1D, 2D, and 3D trajectories) and three countermeasures (airflow, vibration, and dynamic Field-of-View) to reduce VR sickness. We found that handlebar steering leads to the lowest VR sickness without decreasing cycling performance, and that airflow appears to be the most promising method to reduce VR sickness for all three types of trajectories.}
}

[CHI EA '22] E-ScootAR: Exploring Unimodal Warnings for E-Scooter Riders in Augmented Reality

A. Matviienko, F. Müller, D. Schön, R. Fayard, S. Abaspur, Y. Li, M. Mühlhäuser

ABSTRACT - Micro-mobility is becoming a more popular means of transportation. However, this increased popularity brings its challenges. In particular, the accident rates for E-Scooter riders increase, which endangers the riders and other road users. In this paper, we explore the idea of augmenting E-Scooters with unimodal warnings to prevent collisions with other road users, which include Augmented Reality (AR) notifications, vibrotactile feedback on the handlebar, and auditory signals in the AR glasses. We conducted an outdoor experiment (N = 13) using an Augmented Reality simulation and compared these types of warnings in terms of reaction time, accident rate, and feeling of safety. Our results indicate that AR and auditory warnings lead to shorter reaction times, have a better perception, and create a better feeling of safety than vibrotactile warnings. Moreover, auditory signals have a higher acceptance by the riders compared to the other two types of warnings.

In CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI '22 Extended Abstracts)
10.1145/3491101.3519831    PDF   
@inproceedings{Matviienko2022escootar,
author = {Matviienko, Andrii and Müller, Florian and Schön, Dominik and Fayard, Régis and Abaspur, Salar and Li, Yi and Mühlhäuser, Max},
title = {E-ScootAR: Exploring Unimodal Warnings for E-Scooter Riders in Augmented Reality},
year = {2022},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3491101.3519831},
doi = {10.1145/3491101.3519831},
keywords = {E-Scooter, micro-mobility, traffic safety, augmented reality},
booktitle = {CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI '22 Extended Abstracts)},
location = {New Orleans, LA, USA},
series = {CHI EA '22},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2022/Matviienko2022escootar.pdf},
 abstract={Micro-mobility is becoming a more popular means of transportation. However, this increased popularity brings its challenges. In particular, the accident rates for E-Scooter riders increase, which endangers the riders and other road users. In this paper, we explore the idea of augmenting E-Scooters with unimodal warnings to prevent collisions with other road users, which include Augmented Reality (AR) notifications, vibrotactile feedback on the handlebar, and auditory signals in the AR glasses. We conducted an outdoor experiment (N = 13) using an Augmented Reality simulation and compared these types of warnings in terms of reaction time, accident rate, and feeling of safety. Our results indicate that AR and auditory warnings lead to shorter reaction times, have a better perception, and create a better feeling of safety than vibrotactile warnings. Moreover, auditory signals have a higher acceptance by the riders compared to the other two types of warnings.}
}

[DIS '21] CameraReady: Assessing the Influence of Display Types and Visualizations on Posture Guidance

H. Elsayed, P. Hoffmann, S. Günther, M. Schmitz, M. Weigel, M. Mühlhäuser, F. Müller

ABSTRACT - Computer-supported posture guidance is used in sports, dance training, expression of art with movements, and learning gestures for interaction. At present, the influence of display types and visualizations has not been investigated in the literature. These factors are important as they directly impact perception and cognitive load, and hence influence the performance of participants. In this paper, we conducted a controlled experiment with 20 participants to compare the use of five display types with different screen sizes: smartphones, tablets, desktop monitors, TVs, and large displays. On each device, we compared three common visualizations for posture guidance: skeletons, silhouettes, and 3D body models. To conduct our assessment, we developed a mobile and cross-platform system that only requires a single camera. Our results show that compared to a smartphone display, larger displays show a lower error. Regarding the choice of visualization, participants rated 3D body models as significantly more usable in comparison to a skeleton visualization.

In Designing Interactive Systems Conference 2021
10.1145/3461778.3462026    PDF   
@inproceedings{Elsayed2021cameraready,
abstract = {Computer-supported posture guidance is used in sports, dance training, expression of art with movements, and learning gestures for interaction. At present, the influence of display types and visualizations has not been investigated in the literature. These factors are important as they directly impact perception and cognitive load, and hence influence the performance of participants. In this paper, we conducted a controlled experiment with 20 participants to compare the use of five display types with different screen sizes: smartphones, tablets, desktop monitors, TVs, and large displays. On each device, we compared three common visualizations for posture guidance: skeletons, silhouettes, and 3D body models. To conduct our assessment, we developed a mobile and cross-platform system that only requires a single camera. Our results show that compared to a smartphone display, larger displays show a lower error. Regarding the choice of visualization, participants rated 3D body models as significantly more usable in comparison to a skeleton visualization.},
address = {New York, NY, USA},
author = {Elsayed, Hesham and Hoffmann, Philipp and G\"{u}nther, Sebastian and Schmitz, Martin and Weigel, Martin and M\"{u}hlh\"{a}user, Max and M\"{u}ller, Florian},
booktitle = {Designing Interactive Systems Conference 2021},
doi = {10.1145/3461778.3462026},
isbn = {9781450384766},
month = {jun},
pages = {1046--1055},
publisher = {ACM},
title = {CameraReady: Assessing the Influence of Display Types and Visualizations on Posture Guidance},
url = {https://dl.acm.org/doi/10.1145/3461778.3462026},
year = {2021},
series = {DIS '21},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2021/elsayed2021cameraready.pdf}
}
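
As a concrete illustration of how a single-camera system can score posture: one plausible metric compares joint angles between the tracked user pose and the target pose. The Python sketch below assumes 2D keypoints and shows that generic approach; it is not the metric reported in the paper.

import math

def joint_angle(a, b, c):
    """Angle (degrees) at keypoint b formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    cos = ((v1[0] * v2[0] + v1[1] * v2[1]) /
           (math.hypot(*v1) * math.hypot(*v2)))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def posture_error(user, target, joints):
    """Mean absolute joint-angle difference; `user` and `target` map
    keypoint names to (x, y), `joints` lists (a, b, c) name triples."""
    diffs = [abs(joint_angle(user[a], user[b], user[c]) -
                 joint_angle(target[a], target[b], target[c]))
             for a, b, c in joints]
    return sum(diffs) / len(diffs)

# e.g. score only the right elbow: joints = [("shoulder", "elbow", "wrist")]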

[EICS '21] ActuBoard: An Open Rapid Prototyping Platform to Integrate Hardware Actuators in Remote Applications

S. Günther, F. Müller, F. Hübner, M. Mühlhäuser, A. Matviienko

ABSTRACT - Prototyping is an essential step in developing tangible experiences and novel devices, ranging from haptic feedback to wearables. However, prototyping of actuated devices nowadays often requires repetitive and time-consuming steps, such as wiring, soldering, and programming basic communication, before HCI researchers and designers can focus on their primary interest: designing interaction. In this paper, we present ActuBoard, a prototyping platform to support 1) quick assembly, 2) less preparation work, and 3) the inclusion of non-tech-savvy users. With ActuBoard, users are not required to create complex circuitry, write a single line of firmware, or implement communication protocols. Acknowledging existing systems, our platform combines the flexibility of low-level microcontrollers with the ease of use of abstracted tinker platforms to control actuators from separate applications. As a further contribution, we highlight the technical specifications and publish the ActuBoard platform as open source.

In Companion of the 2021 ACM SIGCHI Symposium on Engineering Interactive Computing Systems
10.1145/3459926.3464757    PDF   
@inproceedings{Guenther2021actuboard,
author = {G\"{u}nther, Sebastian and M\"{u}ller, Florian and H\"{u}bner, Felix and M\"{u}hlh\"{a}user, Max and Matviienko, Andrii},
title = {ActuBoard: An Open Rapid Prototyping Platform to Integrate Hardware Actuators in Remote Applications},
year = {2021},
isbn = {9781450384490},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3459926.3464757},
doi = {10.1145/3459926.3464757},
abstract = {Prototyping is an essential step in developing tangible experiences and novel devices, ranging from haptic feedback to wearables. However, prototyping of actuated devices nowadays often requires repetitive and time-consuming steps, such as wiring, soldering, and programming basic communication, before HCI researchers and designers can focus on their primary interest: designing interaction. In this paper, we present ActuBoard, a prototyping platform to support 1) quick assembly, 2) less preparation work, and 3) the inclusion of non-tech-savvy users. With ActuBoard, users are not required to create complex circuitry, write a single line of firmware, or implement communication protocols. Acknowledging existing systems, our platform combines the flexibility of low-level microcontrollers with the ease of use of abstracted tinker platforms to control actuators from separate applications. As a further contribution, we highlight the technical specifications and publish the ActuBoard platform as open source.},
booktitle = {Companion of the 2021 ACM SIGCHI Symposium on Engineering Interactive Computing Systems},
pages = {70–76},
numpages = {7},
keywords = {hardware, tinkering, actuators, haptics, rapid prototyping, open source, virtual reality},
location = {Virtual Event, Netherlands},
series = {EICS '21},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2021/Guenther2021actuboard.pdf}
}
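
Since the abstract describes controlling actuators from separate applications but does not show the interface, the following Python sketch is purely hypothetical: the transport, message format, function name, and port are invented for illustration and may differ from ActuBoard's actual open-source API.

import json
import socket

def set_actuator(host, channel, intensity, duration_ms, port=9000):
    """Send a single actuator command as one JSON line over TCP
    (hypothetical protocol, not ActuBoard's documented interface)."""
    message = json.dumps({
        "channel": channel,        # which actuator on the board
        "intensity": intensity,    # 0.0 .. 1.0
        "duration_ms": duration_ms,
    }).encode()
    with socket.create_connection((host, port), timeout=1.0) as conn:
        conn.sendall(message + b"\n")

# e.g. pulse vibration motor 3 at 80% for 200 ms:
# set_actuator("actuboard.local", channel=3, intensity=0.8, duration_ms=200)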

[CHI '21] Itsy-Bits: Fabrication and Recognition of 3D-Printed Tangibles with Small Footprints on Capacitive Touchscreens

M. Schmitz, F. Müller, M. Mühlhäuser, J. Riemann, H. Le

ABSTRACT - Tangibles on capacitive touchscreens are a promising approach to overcome the limited expressiveness of touch input. While research has suggested many approaches to detect tangibles, the corresponding tangibles are either costly or have a considerable minimal size. This makes them bulky and unattractive for many applications. At the same time, they obscure valuable display space for interaction. To address these shortcomings, we contribute Itsy-Bits: a fabrication pipeline for 3D printing and recognition of tangibles on capacitive touchscreens with a footprint as small as a fingertip. Each Itsy-Bit consists of an enclosing 3D object and a unique conductive 2D shape on its bottom. Using only raw data of commodity capacitive touchscreens, Itsy-Bits reliably identifies and locates a variety of shapes in different sizes and estimates their orientation. Through example applications and a technical evaluation, we demonstrate the feasibility and applicability of Itsy-Bits for tangibles with small footprints.

In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems
10.1145/3411764.3445502    PDF    Teaser Video   
@inproceedings{schmitz2021itsybits,
  title = {Itsy-Bits: Fabrication and Recognition of 3D-Printed Tangibles with Small Footprints on Capacitive Touchscreens},
  booktitle = {Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems},
  author = {Schmitz, Martin and M{\"u}ller, Florian and M{\"u}hlh{\"a}user, Max and Riemann, Jan and Le, Huy Viet},
  year = {2021},
  publisher = {ACM},
  address = {New York, NY, USA},
  doi = {10.1145/3411764.3445502},
  abstract = {Tangibles on capacitive touchscreens are a promising approach to overcome the limited expressiveness of touch input. While research has suggested many approaches to detect tangibles, the corresponding tangibles are either costly or have a considerable minimal size. This makes them bulky and unattractive for many applications. At the same time, they obscure valuable display space for interaction. To address these shortcomings, we contribute Itsy-Bits: a fabrication pipeline for 3D printing and recognition of tangibles on capacitive touchscreens with a footprint as small as a fingertip. Each Itsy-Bit consists of an enclosing 3D object and a unique conductive 2D shape on its bottom. Using only raw data of commodity capacitive touchscreens, Itsy-Bits reliably identifies and locates a variety of shapes in different sizes and estimates their orientation. Through example applications and a technical evaluation, we demonstrate the feasibility and applicability of Itsy-Bits for tangibles with small footprints.},
  isbn = {978-1-4503-8096-6},
  series = {CHI '21},
  teaservideo = {https://www.youtube.com/watch?v=55vHxnOKl6k},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2021/schmitz2021itsybits.pdf},
 award={Honorable Mention}
}
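
To make the recognition idea concrete: conceptually, the pipeline thresholds the raw capacitive image, treats the activated cells as the tangible's conductive 2D footprint, and matches it against known shapes. The toy Python sketch below uses assumed thresholds and deliberately crude features; the published recognizer is more sophisticated and also estimates orientation.

def footprint(raw, threshold=30):
    """Cells of the raw capacitance grid above the touch threshold
    (the threshold is an assumed calibration value)."""
    return {(r, c) for r, row in enumerate(raw)
            for c, v in enumerate(row) if v > threshold}

def features(cells):
    """Simple shape features: area, bounding-box fill ratio, aspect."""
    rows = [r for r, _ in cells]
    cols = [c for _, c in cells]
    height = max(rows) - min(rows) + 1
    width = max(cols) - min(cols) + 1
    return (len(cells), len(cells) / (height * width), width / height)

def classify(cells, templates):
    """Nearest template (name -> feature tuple) in feature space."""
    f = features(cells)
    return min(templates, key=lambda name: sum(
        (x - y) ** 2 for x, y in zip(f, templates[name])))

# e.g. templates = {"triangle": (14, 0.5, 1.0), "bar": (10, 0.9, 3.3)}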

[CHI '21] Oh, Snap! A Fabrication Pipeline to Magnetically Connect Conventional and 3D-Printed Electronics

M. Schmitz, J. Riemann, F. Müller, S. Kreis, M. Mühlhäuser

ABSTRACT - 3D printing has revolutionized rapid prototyping by speeding up the creation of custom-shaped objects. With the rise of multi-material 3D printers, these custom-shaped objects can now be made interactive in a single pass through passive conductive structures. However, connecting conventional electronics to these conductive structures often still requires time-consuming manual assembly involving many wires, soldering or gluing. To alleviate these shortcomings, we propose Oh, Snap!: a fabrication pipeline and interfacing concept to magnetically connect a 3D-printed object equipped with passive sensing structures to conventional sensing electronics. To this end, Oh, Snap! utilizes ferromagnetic and conductive 3D-printed structures, printable in a single pass on standard printers. We further present a proof-of-concept capacitive sensing board that enables easy and robust magnetic assembly to quickly create interactive 3D-printed objects. We evaluate Oh, Snap! by assessing the robustness and quality of the connection and demonstrate its broad applicability by a series of example applications.

In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems
10.1145/3411764.3445641    PDF    Teaser Video   
@inproceedings{schmitz2021ohsnap,
  title = {Oh, Snap! A Fabrication Pipeline to Magnetically Connect Conventional and 3D-Printed Electronics},
  booktitle = {Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems},
  author = {Schmitz, Martin and Riemann, Jan and M{\"u}ller, Florian and Kreis, Steffen and M{\"u}hlh{\"a}user, Max},
  year = {2021},
  publisher = {ACM},
  address = {New York, NY, USA},
  doi = {10.1145/3411764.3445641},
  abstract = {3D printing has revolutionized rapid prototyping by speeding up the creation of custom-shaped objects. With the rise of multi-material 3D printers, these custom-shaped objects can now be made interactive in a single pass through passive conductive structures. However, connecting conventional electronics to these conductive structures often still requires time-consuming manual assembly involving many wires, soldering or gluing. To alleviate these shortcomings, we propose Oh, Snap!: a fabrication pipeline and interfacing concept to magnetically connect a 3D-printed object equipped with passive sensing structures to conventional sensing electronics. To this end, Oh, Snap! utilizes ferromagnetic and conductive 3D-printed structures, printable in a single pass on standard printers. We further present a proof-of-concept capacitive sensing board that enables easy and robust magnetic assembly to quickly create interactive 3D-printed objects. We evaluate Oh, Snap! by assessing the robustness and quality of the connection and demonstrate its broad applicability by a series of example applications.},
  isbn = {978-1-4503-8096-6},
  series = {CHI '21},
teaservideo = {https://www.youtube.com/watch?v=ado4a_chzqo},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2021/schmitz2021ohsnap.pdf},
 award={Best Paper}
}

[CHI '21] Let's Frets! Assisting Guitar Students During Practice via Capacitive Sensing

K. Marky, A. Weiß, A. Matviienko, F. Brandherm, S. Wolf, M. Schmitz, F. Krell, F. Müller, M. Mühlhäuser, T. Kosch

ABSTRACT - Learning a musical instrument requires regular exercise. However, students are often on their own during their practice sessions due to the limited time with their teachers, which increases the likelihood of mislearning playing techniques. To address this issue, we present Let's Frets - a modular guitar learning system that provides visual indicators and capturing of finger positions on a 3D-printed capacitive guitar fretboard. We based the design of Let's Frets on requirements collected through in-depth interviews with professional guitarists and teachers. In a user study (N=24), we evaluated the feedback modules of Let's Frets against fretboard charts. Our results show that visual indicators require the least time to realize new finger positions while a combination of visual indicators and position capturing yielded the highest playing accuracy. We conclude how Let's Frets enables independent practice sessions that can be translated to other musical instruments.

In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems
10.1145/3411764.3445595    PDF    Teaser Video   
@inproceedings{Marky2021letsfrets,
abstract = {Learning a musical instrument requires regular exercise. However, students are often on their own during their practice sessions due to the limited time with their teachers, which increases the likelihood of mislearning playing techniques. To address this issue, we present Let's Frets - a modular guitar learning system that provides visual indicators and capturing of finger positions on a 3D-printed capacitive guitar fretboard. We based the design of Let's Frets on requirements collected through in-depth interviews with professional guitarists and teachers. In a user study (N=24), we evaluated the feedback modules of Let's Frets against fretboard charts. Our results show that visual indicators require the least time to realize new finger positions while a combination of visual indicators and position capturing yielded the highest playing accuracy. We conclude how Let's Frets enables independent practice sessions that can be translated to other musical instruments.},
address = {New York, NY, USA},
author = {Marky, Karola and Wei{\ss}, Andreas and Matviienko, Andrii and Brandherm, Florian and Wolf, Sebastian and Schmitz, Martin and Krell, Florian and M{\"{u}}ller, Florian and M{\"{u}}hlh{\"{a}}user, Max and Kosch, Thomas},
booktitle = {Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems},
doi = {10.1145/3411764.3445595},
isbn = {9781450380966},
keywords = {capacitive sensing,musical instruments,support setup},
month = {may},
pages = {1--12},
publisher = {ACM},
series = {CHI '21},
title = {Let's Frets! Assisting Guitar Students During Practice via Capacitive Sensing},
url = {https://doi.org/10.1145/3411764.3445595},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2021/marky2021lets.pdf},
 teaservideo = {https://www.youtube.com/watch?v=vFx8c5aF6vA},
year = {2021}
}
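
A minimal sketch of the sensing side described above: each (string, fret) position maps to a capacitive pad, pads above a touch threshold yield the current fingering, and the fingering is checked against the target chord. All values and names in this Python sketch are illustrative assumptions, not the Let's Frets implementation.

TOUCH_THRESHOLD = 40  # raw counts; assumed calibration value

def pressed_positions(readings):
    """readings: dict mapping (string, fret) -> raw capacitance value."""
    return {pos for pos, value in readings.items()
            if value > TOUCH_THRESHOLD}

def check_chord(readings, target):
    """Compare the sensed fingering with a target chord (set of positions)."""
    pressed = pressed_positions(readings)
    return {"correct": pressed & target,
            "missing": target - pressed,
            "extra": pressed - target}

# e.g. an (incomplete) hypothetical grip misses one position:
# check_chord({(2, 1): 55, (4, 2): 60}, {(2, 1), (3, 2), (4, 2)})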


[CHI EA '21] VRtangibles: Assisting Children in Creating Virtual Scenes using Tangible Objects and Touch Input

A. Matviienko, M. Langer, F. Müller, M. Schmitz, M. Mühlhäuser

ABSTRACT - Children are increasingly exposed to virtual reality (VR) technology as end-users. However, they miss an opportunity to become active creators due to the barrier of insufficient technical background. Creating scenes in VR requires considerable programming knowledge and excludes non-tech-savvy users, e.g., school children. In this paper, we showcase a system called VRtangibles, which combines tangible objects and touch input to create virtual scenes without programming. With VRtangibles, we aim to engage children in the active creation of virtual scenes via playful hands-on activities. From the lab study with six school children, we discovered that the majority of children were successful in creating virtual scenes using VRtangibles and found it engaging and fun to use.

In CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI '21 Extended Abstracts)
10.1145/3411763.3451671    PDF   
@inproceedings{Matviienko2021vrtangibles,
author = {Matviienko, Andrii and Langer, Marcel and Müller, Florian and Schmitz, Martin and Mühlhäuser, Max},
title = {VRtangibles: Assisting Children in Creating Virtual Scenes using Tangible Objects and Touch Input},
year = {2021},
isbn = {978-1-4503-8095-9/21/05},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3411763.3451671},
doi = {10.1145/3411763.3451671},
booktitle = {CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI '21 Extended Abstracts)},
pages = {1–7},
numpages = {7},
keywords = {virtual reality, tangibles, touch input, children, education},
location = {Yokohama, Japan},
series = {CHI EA '21},
abstract={Children are increasingly exposed to virtual reality (VR) technology as end-users. However, they miss an opportunity to become active creators due to the barrier of insufficient technical background. Creating scenes in VR requires considerable programming knowledge and excludes non-tech-savvy users, e.g., school children. In this paper, we showcase a system called VRtangibles, which combines tangible objects and touch input to create virtual scenes without programming. With VRtangibles, we aim to engage children in the active creation of virtual scenes via playful hands-on activities. From the lab study with six school children, we discovered that the majority of children were successful in creating virtual scenes using VRtangibles and found it engaging and fun to use.},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2021/matviienko2021vrtangibles.pdf}
}

[CHI EA '21] Quantified Cycling Safety: Towards a Mobile Sensing Platform to Understand Perceived Safety of Cyclists

A. Matviienko, F. Heller, B. Pfleging

ABSTRACT - Today’s level of cyclists’ road safety is primarily estimated using accident reports and self-reported measures. However, the former is focused on post-accident situations and the latter relies on subjective input. In our work, we aim to extend the landscape of cyclists’ safety assessment methods via a two-dimensional taxonomy, which covers data source (internal/external) and type of measurement (objective/subjective). Based on this taxonomy, we classify existing methods and present a mobile sensing concept for quantified cycling safety that fills the identified methodological gap by collecting data about body movements and physiological data. Finally, we outline a list of use cases and future research directions within the scope of the proposed taxonomy and sensing concept.

In CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI '21 Extended Abstracts)
10.1145/3411763.3451678    PDF   
@inproceedings{Matviienko2021quantisafety,
author = {Matviienko, Andrii and Heller, Florian and Pfleging, Bastian},
title = {Quantified Cycling Safety: Towards a Mobile Sensing Platform to Understand Perceived Safety of Cyclists},
year = {2021},
isbn = {978-1-4503-8095-9},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3411763.3451678},
doi = {10.1145/3411763.3451678},
booktitle = {CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI '21 Extended Abstracts)},
pages = {1--6},
numpages = {6},
keywords = {Cyclist safety taxonomy, on-body sensing, head movements, perceived road safety},
location = {Yokohama, Japan},
series = {CHI EA '21},
abstract={Today’s level of cyclists’ road safety is primarily estimated using accident reports and self-reported measures. However, the former is focused on post-accident situations and the latter relies on subjective input. In our work, we aim to extend the landscape of cyclists’ safety assessment methods via a two-dimensional taxonomy, which covers data source (internal/external) and type of measurement (objective/subjective). Based on this taxonomy, we classify existing methods and present a mobile sensing concept for quantified cycling safety that fills the identified methodological gap by collecting data about body movements and physiological data. Finally, we outline a list of use cases and future research directions within the scope of the proposed taxonomy and sensing concept.},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2021/matviienko2021quantified.pdf}
}
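
To make the two-dimensional taxonomy concrete, here is one plausible placement of the methods named in the abstract, written as a small Python mapping. The cell assignments are our reading of the abstract, not a table reproduced from the paper.

# One plausible reading of the paper's two axes: data source
# (internal = from the cyclist, external = from the environment)
# and type of measurement (objective/subjective).
taxonomy = {
    "accident reports":       ("external", "objective"),
    "self-reported measures": ("internal", "subjective"),
    # The proposed mobile sensing platform (body movements, physiological
    # data) targets the otherwise under-served internal/objective cell.
    "mobile sensing concept": ("internal", "objective"),
}
print(taxonomy["mobile sensing concept"])  # ('internal', 'objective')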
  



[IMWUT '20] VibroMap: Understanding the Spacing of Vibrotactile Actuators across the Body

H. Elsayed, M. Weigel, F. Müller, M. Schmitz, K. Marky, S. Günther, J. Riemann, M. Mühlhäuser

ABSTRACT - In spite of the great potential of on-body vibrotactile displays for a variety of applications, research lacks an understanding of the spacing between vibrotactile actuators. Through two experiments, we systematically investigate vibrotactile perception on the wrist, forearm, upper arm, back, torso, thigh, and leg, each in transverse and longitudinal body orientation. In the first experiment, we address the maximum distance between vibration motors that still preserves the ability to generate phantom sensations. In the second experiment, we investigate the perceptual accuracy of localizing vibrations in order to establish the minimum distance between vibration motors. Based on the results, we derive VibroMap, a spatial map of the functional range of inter-motor distances across the body. VibroMap supports hardware and interaction designers with design guidelines for constructing body-worn vibrotactile displays.

In Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.
10.1145/3432189    PDF   
@article{elsayed2020vibromap,
author = {Elsayed, Hesham and Weigel, Martin and M\"{u}ller, Florian and Schmitz, Martin and Marky, Karola and G\"{u}nther, Sebastian and Riemann, Jan and M\"{u}hlh\"{a}user, Max},
title = {VibroMap: Understanding the Spacing of Vibrotactile Actuators across the Body},
year = {2020},
issue_date = {December 2020},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {4},
number = {4},
url = {https://doi.org/10.1145/3432189},
doi = {10.1145/3432189},
abstract = {In spite of the great potential of on-body vibrotactile displays for a variety of applications, research lacks an understanding of the spacing between vibrotactile actuators. Through two experiments, we systematically investigate vibrotactile perception on the wrist, forearm, upper arm, back, torso, thigh, and leg, each in transverse and longitudinal body orientation. In the first experiment, we address the maximum distance between vibration motors that still preserves the ability to generate phantom sensations. In the second experiment, we investigate the perceptual accuracy of localizing vibrations in order to establish the minimum distance between vibration motors. Based on the results, we derive VibroMap, a spatial map of the functional range of inter-motor distances across the body. VibroMap supports hardware and interaction designers with design guidelines for constructing body-worn vibrotactile displays.},
journal = {Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.},
month = dec,
articleno = {125},
numpages = {16},
keywords = {vibrotactile interfaces, wearable computing, actuator spacing, phantom sensation, haptic output, ERM vibration motors, design implications},
series = {IMWUT '20},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2021/elsayed2020vibromap.pdf}
}
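
The phantom sensations mentioned in the abstract are typically rendered by amplitude panning between two neighbouring motors. The following minimal Python sketch uses the energy-based panning model known from the tactile rendering literature (e.g., Israr and Poupyrev's Tactile Brush); it is an illustration under that assumption, not the paper's own rendering code.

import math

def phantom_amplitudes(p, a_v=1.0):
    # Drive two neighbouring motors so that a single "phantom" vibration is
    # perceived at relative position p (0.0 = first motor, 1.0 = second).
    # Energy-based panning keeps a1**2 + a2**2 == a_v**2 constant.
    if not 0.0 <= p <= 1.0:
        raise ValueError("phantom position must lie between the two motors")
    return math.sqrt(1.0 - p) * a_v, math.sqrt(p) * a_v

# A phantom halfway between the motors drives both at ~71% intensity.
print(phantom_amplitudes(0.5))

VibroMap's inter-motor distances then bound where such a phantom target can plausibly be placed on the body.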


[VRST '20] VRSketchPen: Unconstrained Haptic Assistance for Sketching in Virtual 3D Environments

H. Elsayed, M. Barrera Machuca, C. Schaarschmidt, K. Marky, F. Müller, J. Riemann, A. Matviienko, M. Schmitz, M. Weigel, M. Mühlhäuser

ABSTRACT - Accurate sketching in virtual 3D environments is challenging due to aspects like limited depth perception or the absence of physical support. To address this issue, we propose VRSketchPen – a pen that uses two haptic modalities to support virtual sketching without constraining user actions: (1) pneumatic force feedback to simulate the contact pressure of the pen against virtual surfaces and (2) vibrotactile feedback to mimic textures while moving the pen over virtual surfaces. To evaluate VRSketchPen, we conducted a lab experiment with 20 participants to compare (1) pneumatic, (2) vibrotactile and (3) a combination of both with (4) snapping and no assistance for flat and curved surfaces in a 3D virtual environment. Our findings show that pneumatic and vibrotactile feedback, as well as their combination, significantly improve 2D shape accuracy and lead to diminished depth errors for flat and curved surfaces. Qualitative results indicate that users find the addition of unconstraining haptic feedback to significantly improve convenience, confidence and user experience.

In 26th ACM Symposium on Virtual Reality Software and Technology
10.1145/3385956.3418953    PDF   
@inproceedings{elsayed2020vrsketchpen,
author = {Elsayed, Hesham and Barrera Machuca, Mayra Donaji and Schaarschmidt, Christian and Marky, Karola and M\"{u}ller, Florian and Riemann, Jan and Matviienko, Andrii and Schmitz, Martin and Weigel, Martin and M\"{u}hlh\"{a}user, Max},
title = {VRSketchPen: Unconstrained Haptic Assistance for Sketching in Virtual 3D Environments},
year = {2020},
isbn = {9781450376198},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3385956.3418953},
doi = {10.1145/3385956.3418953},
abstract = {Accurate sketching in virtual 3D environments is challenging due to aspects like limited depth perception or the absence of physical support. To address this issue, we propose VRSketchPen – a pen that uses two haptic modalities to support virtual sketching without constraining user actions: (1) pneumatic force feedback to simulate the contact pressure of the pen against virtual surfaces and (2) vibrotactile feedback to mimic textures while moving the pen over virtual surfaces. To evaluate VRSketchPen, we conducted a lab experiment with 20 participants to compare (1) pneumatic, (2) vibrotactile and (3) a combination of both with (4) snapping and no assistance for flat and curved surfaces in a 3D virtual environment. Our findings show that pneumatic and vibrotactile feedback, as well as their combination, significantly improve 2D shape accuracy and lead to diminished depth errors for flat and curved surfaces. Qualitative results indicate that users find the addition of unconstraining haptic feedback to significantly improve convenience, confidence and user experience.},
booktitle = {26th ACM Symposium on Virtual Reality Software and Technology},
articleno = {3},
numpages = {11},
keywords = {3D User Interfaces, Pneumatic Actuation, Vibrotactile Actuation, Haptics, Sketching, Virtual Reality},
location = {Virtual Event, Canada},
series = {VRST '20},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2021/elsayed2020vrsketchpen.pdf}
}



[PerDis '20] Reminding Child Cyclists about Safety Gestures

A. Matviienko, S. Ananthanarayan, R. Kappes, W. Heuten, S. Boll

ABSTRACT - Cycling safety gestures, such as hand signals and shoulder checks, are an essential part of safe manoeuvring on the road. Child cyclists, in particular, might have difficulties performing safety gestures on the road or even forget about them, given the lack of cycling experience, road distractions and differences in motor and perceptual-motor abilities compared with adults. To support them, we designed two methods to remind children about safety gestures while cycling. The first method employs an icon-based reminder in heads-up display (HUD) glasses and the second combines vibration on the handlebar and ambient light in the helmet. We investigated the performance of both methods in a controlled test-track experiment with 18 children using a mid-size tricycle, augmented with a set of sensors to recognize children's behavior in real time. We found that both systems are successful in reminding children about safety gestures and have their unique advantages and disadvantages.

In Proceedings of the 9th ACM International Symposium on Pervasive Displays
10.1145/3393712.3394120    PDF    Full Video   
@inproceedings{matviienko2020remindingcyclists,
author = {Matviienko, Andrii and Ananthanarayan, Swamy and Kappes, Raphael and Heuten, Wilko and Boll, Susanne},
title = {Reminding Child Cyclists about Safety Gestures},
year = {2020},
isbn = {9781450379861},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3393712.3394120},
doi = {10.1145/3393712.3394120},
booktitle = {Proceedings of the 9th ACM International Symposium on Pervasive Displays},
pages = {1--7},
numpages = {7},
keywords = {HUD glasses, safety gestures, child cyclists, cycling safety},
location = {Manchester, United Kingdom},
series = {PerDis '20},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2020/matviienko2020remindingcyclists.pdf},
 abstract={Cycling safety gestures, such as hand signals and shoulder checks, are an essential part of safe manoeuvring on the road. Child cyclists, in particular, might have difficulties performing safety gestures on the road or even forget about them, given the lack of cycling experience, road distractions and differences in motor and perceptual-motor abilities compared with adults. To support them, we designed two methods to remind children about safety gestures while cycling. The first method employs an icon-based reminder in heads-up display (HUD) glasses and the second combines vibration on the handlebar and ambient light in the helmet. We investigated the performance of both methods in a controlled test-track experiment with 18 children using a mid-size tricycle, augmented with a set of sensors to recognize children's behavior in real time. We found that both systems are successful in reminding children about safety gestures and have their unique advantages and disadvantages.},
 video = {https://www.youtube.com/watch?v=cSKD-MoZ-54},
}
  


[CHI '20] 3D-Auth: Two-Factor Authentication with Personalized 3D-Printed Items

K. Marky, M. Schmitz, V. Zimmermann, M. Herbers, K. Kunze, M. Mühlhäuser

ABSTRACT - Two-factor authentication is a widely recommended security mechanism and already offered for different services. However, known methods and physical realizations exhibit considerable usability and customization issues. In this paper, we propose 3D-Auth, a new concept of two-factor authentication. 3D-Auth is based on customizable 3D-printed items that combine two authentication factors in one object. The object bottom contains a uniform grid of conductive dots that are connected to a unique embedded structure inside the item. Based on the interaction with the item, different dots turn into touch-points and form an authentication pattern. This pattern can be recognized by a capacitive touchscreen. Based on an expert design study, we present an interaction space with six categories of possible authentication interactions. In a user study, we demonstrate the feasibility of 3D-Auth items and show that the items are easy to use and the interactions are easy to remember.

In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
10.1145/3313831.3376189    PDF    Teaser Video   
@inproceedings{marky20203dauth,
author = {Marky, Karola and Schmitz, Martin and Zimmermann, Verena and Herbers, Martin and Kunze, Kai and M{\"u}hlh{\"a}user, Max},
title = {3D-Auth: Two-Factor Authentication with Personalized 3D-Printed Items},
booktitle = {Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems},
series = {CHI '20},
year = {2020},
isbn = {978-1-4503-6708-0},
location = {Honolulu, HI, USA},
url = {http://dx.doi.org/10.1145/3313831.3376189},
teaservideo = {https://youtu.be/_dHihnJTRek},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2020/marky2020auth3d.pdf},
doi = {10.1145/3313831.3376189},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {Two-Factor Authentication, 3D Printing, Capacitive Sensing},
abstract = {Two-factor authentication is a widely recommended security mechanism and already offered for different services. However, known methods and physical realizations exhibit considerable usability and customization issues. In this paper, we propose 3D-Auth, a new concept of two-factor authentication. 3D-Auth is based on customizable 3D-printed items that combine two authentication factors in one object. The object bottom contains a uniform grid of conductive dots that are connected to a unique embedded structure inside the item. Based on the interaction with the item, different dots turn into touch-points and form an authentication pattern. This pattern can be recognized by a capacitive touchscreen. Based on an expert design study, we present an interaction space with six categories of possible authentication interactions. In a user study, we demonstrate the feasibility of 3D-Auth items and show that the items are easy to use and the interactions are easy to remember.}
}
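
As an illustration of how a touchscreen-side verifier for such touch-point patterns could work, here is a hypothetical Python matcher based on pairwise distances, which makes the check invariant to where and at which angle the item is placed. The function names and the 2 mm tolerance are assumptions, not the paper's recognizer.

import math
from itertools import combinations

def distance_signature(points):
    # Sorted pairwise distances are invariant to translation and rotation
    # of the item on the touchscreen.
    return sorted(math.dist(p, q) for p, q in combinations(points, 2))

def pattern_matches(detected, enrolled, tol_mm=2.0):
    # Accept only if both patterns expose the same number of touch-points
    # and all pairwise distances agree within the sensing tolerance.
    if len(detected) != len(enrolled):
        return False
    return all(abs(d - e) <= tol_mm
               for d, e in zip(distance_signature(detected),
                               distance_signature(enrolled)))

# An enrolled L-shaped pattern still matches after a 5 mm shift.
enrolled = [(0, 0), (10, 0), (10, 10)]
detected = [(5, 5), (15, 5), (15, 15)]
print(pattern_matches(detected, enrolled))  # True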

[CHI '20] Podoportation: Foot-Based Locomotion in Virtual Reality

J. von Willich, M. Schmitz, F. Müller, D. Schmitt, M. Mühlhäuser

ABSTRACT - Virtual Reality (VR) allows for infinitely large environments. However, the physical traversable space is always limited by real-world boundaries. This discrepancy between physical and virtual dimensions renders traditional locomotion methods used in the real world unfeasible. To alleviate these limitations, research proposed various artificial locomotion concepts such as teleportation, treadmills, and redirected walking. However, these concepts occupy the user's hands, require complex hardware or large physical spaces. In this paper, we contribute nine VR locomotion concepts for foot-based and hands-free locomotion, relying on the 3D position of the user's feet and the pressure applied to the sole as input modalities. We evaluate our concepts and compare them to the state-of-the-art point & teleport technique in a controlled experiment with 20 participants. The results confirm the viability of our approaches for hands-free and engaging locomotion. Further, based on the findings, we contribute a wireless hardware prototype implementation.

In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
10.1145/3313831.3376626    PDF    Teaser Video   
@inproceedings{willich2020podoportation,
author = {von Willich, Julius and Schmitz, Martin  and M{\"u}ller, Florian and Schmitt, Daniel and M{\"u}hlh{\"a}user, Max},
title = {Podoportation: Foot-Based Locomotion in Virtual Reality},
booktitle = {Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems},
series = {CHI '20},
year = {2020},
isbn = {978-1-4503-6708-0},
location = {Honolulu, HI, USA},
url = {http://dx.doi.org/10.1145/3313831.3376626},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2020/willich2020podoportation.pdf},
teaservideo = {https://www.youtube.com/watch?v=HGP5MN_e-k0},
doi = {10.1145/3313831.3376626},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {Virtual Reality, Locomotion, Foot-based input},
abstract = {Virtual Reality (VR) allows for infinitely large environments. However, the physical traversable space is always limited by real-world boundaries. This discrepancy between physical and virtual dimensions renders traditional locomotion methods used in the real world unfeasible. To alleviate these limitations, research proposed various artificial locomotion concepts such as teleportation, treadmills, and redirected walking. However, these concepts occupy the user's hands, require complex hardware or large physical spaces. In this paper, we contribute nine VR locomotion concepts for foot-based and hands-free locomotion, relying on the 3D position of the user's feet and the pressure applied to the sole as input modalities. We evaluate our concepts and compare them to the state-of-the-art point \& teleport technique in a controlled experiment with 20 participants. The results confirm the viability of our approaches for hands-free and engaging locomotion. Further, based on the findings, we contribute a wireless hardware prototype implementation.}
}

[CHI '20] Improving the Usability and UX of the Swiss Internet Voting Interface

K. Marky, V. Zimmermann, M. Funk, J. Daubert, K. Bleck, M. Mühlhäuser

ABSTRACT - Up to 20% of residential votes and up to 70% of absentee votes in Switzerland are cast online. The Swiss scheme aims to provide individual verifiability by different verification codes. The voters have to carry out verification on their own, making the usability and UX of the interface of great importance. To improve the usability, we first performed an evaluation with 12 human-computer interaction experts to uncover usability weaknesses of the Swiss Internet voting interface. Based on the experts' findings, related work, and an exploratory user study with 36 participants, we propose a redesign that we evaluated in a user study with 49 participants. Our study confirmed that the redesign indeed improves the detection of incorrect votes by 33% and increases the trust and understanding of the voters. Our studies furthermore contribute important recommendations for designing verifiable e-voting systems in general.

In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
10.1145/3313831.3376769    PDF   
@inproceedings{marky2020swissvoting,
author = {Marky, Karola and Zimmermann, Verena and Funk, Markus and Daubert, J{\"o}rg and Bleck, Kira and M{\"u}hlh{\"a}user, Max},
title = {Improving the Usability and UX of the Swiss Internet Voting Interface},
booktitle = {Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems},
series = {CHI '20},
year = {2020},
isbn = {978-1-4503-6708-0},
location = {Honolulu, HI, USA},
url = {http://dx.doi.org/10.1145/3313831.3376769},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2020/marky2020swissvoting.pdf},
doi = {10.1145/3313831.3376769},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {E-Voting, Individual Verifiability, Usability Evaluation},
abstract = {Up to 20% of residential votes and up to 70% of absentee votes in Switzerland are cast online. The Swiss scheme aims to provide individual verifiability by different verification codes. The voters have to carry out verification on their own, making the usability and UX of the interface of great importance. To improve the usability, we first performed an evaluation with 12 human-computer interaction experts to uncover usability weaknesses of the Swiss Internet voting interface. Based on the experts' findings, related work, and an exploratory user study with 36 participants, we propose a redesign that we evaluated in a user study with 49 participants. Our study confirmed that the redesign indeed improves the detection of incorrect votes by 33% and increases the trust and understanding of the voters. Our studies furthermore contribute important recommendations for designing verifiable e-voting systems in general.}
}


[CHI '20] Therminator: Understanding the Interdependency of Visual and On-Body Thermal Feedback in Virtual Reality

S. Günther, F. Müller, D. Schön, O. Elmoghazy, M. Schmitz, M. Mühlhäuser

ABSTRACT - Recent advances have made Virtual Reality (VR) more realistic than ever before. This improved realism is attributed to today's ability to increasingly appeal to human sensations, such as visual, auditory or tactile. While research also examines temperature sensation as an important aspect, the interdependency of visual and thermal perception in VR is still underexplored. In this paper, we propose Therminator, a thermal display concept that provides warm and cold on-body feedback in VR through heat conduction of flowing liquids with different temperatures. Further, we systematically evaluate the interdependency of different visual and thermal stimuli on the temperature perception of arm and abdomen with 25 participants. As part of the results, we found varying temperature perception depending on the stimuli, as well as increasing involvement of users during conditions with matching stimuli.

In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
10.1145/3313831.3376195    PDF    Teaser Video    Full Video   
@inproceedings{guenther2020therminator,
 author = {G{\"u}nther, Sebastian and M{\"u}ller, Florian and Sch{\"o}n, Dominik and Elmoghazy, Omar and Schmitz, Martin and M{\"u}hlh{\"a}user, Max},
 title = {Therminator: Understanding the Interdependency of Visual and On-Body Thermal Feedback in Virtual Reality},
 booktitle = {Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems},
 series = {CHI '20},
 year = {2020},
 isbn = {978-1-4503-6708-0},
 location = {Honolulu, HI, USA},
 url = {http://dx.doi.org/10.1145/3313831.3376195},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2020/guenther2020therminator.pdf},
 video = {https://www.youtube.com/watch?v=q5lkmqAua78},
 teaservideo = {https://youtu.be/w9FnG1eoWD8},
 doi = {10.1145/3313831.3376195},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {Haptics, Temperature, Thermal Feedback, Virtual Reality},
 abstract = {Recent advances have made Virtual Reality (VR) more realistic than ever before. This improved realism is attributed to today's ability to increasingly appeal to human sensations, such as visual, auditory or tactile. While research also examines temperature sensation as an important aspect, the interdependency of visual and thermal perception in VR is still underexplored. In this paper, we propose Therminator, a thermal display concept that provides warm and cold on-body feedback in VR through heat conduction of flowing liquids with different temperatures. Further, we systematically evaluate the interdependency of different visual and thermal stimuli on the temperature perception of arm and abdomen with 25 participants. As part of the results, we found varying temperature perception depending on the stimuli, as well as increasing involvement of users during conditions with matching stimuli.}
}

[CHI '20] Walk The Line: Leveraging Lateral Shifts of the Walking Path as an Input Modality for Head-Mounted Displays

F. Müller, M. Schmitz, D. Schmitt, S. Günther, M. Funk, M. Mühlhäuser

ABSTRACT - Recent technological advances have made head-mounted displays (HMDs) smaller and untethered, fostering the vision of ubiquitous interaction in a digitally augmented physical world. Consequently, a major part of the interaction with such devices will happen on the go, calling for interaction techniques that allow users to interact while walking. In this paper, we explore lateral shifts of the walking path as a hands-free input modality. The available input options are visualized as lanes on the ground parallel to the user's walking path. Users can select options by shifting the walking path sideways to the respective lane. We contribute the results of a controlled experiment with 18 participants, confirming the viability of our approach for fast, accurate, and joyful interactions. Further, based on the findings of the controlled experiment, we present three example applications.

In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
10.1145/3313831.3376852    PDF    Teaser Video    Full Video   
@inproceedings{mueller2020walktheline,
 author = {M{\"u}ller, Florian and Schmitz, Martin and Schmitt, Daniel and G{\"u}nther, Sebastian and Funk, Markus and M{\"u}hlh{\"a}user, Max},
 title = {Walk The Line: Leveraging Lateral Shifts of the Walking Path as an Input Modality for Head-Mounted Displays},
 booktitle = {Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems},
 series = {CHI '20},
 year = {2020},
 isbn = {978-1-4503-6708-0},
 location = {Honolulu, HI, USA},
 url = {http://dx.doi.org/10.1145/3313831.3376852},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2020/mueller2020walktheline.pdf},
 video = {https://youtu.be/ylAlzFqWx7g},
 teaservideo = {https://youtu.be/6-XrF6J9cTc},
 doi = {10.1145/3313831.3376852},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {Augmented Reality, Head-Mounted Display, Input, Walking},
 abstract = {Recent technological advances have made head-mounted displays (HMDs) smaller and untethered, fostering the vision of ubiquitous interaction in a digitally augmented physical world. Consequently, a major part of the interaction with such devices will happen on the go, calling for interaction techniques that allow users to interact while walking. In this paper, we explore lateral shifts of the walking path as a hands-free input modality. The available input options are visualized as lanes on the ground parallel to the user's walking path. Users can select options by shifting the walking path sideways to the respective lane. We contribute the results of a controlled experiment with 18 participants, confirming the viability of our approach for fast, accurate, and joyful interactions. Further, based on the findings of the controlled experiment, we present three example applications.}
}
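
The core mapping of the technique can be sketched in a few lines of Python. The 0.6 m lane width and the three-lane layout below are illustrative assumptions, not values from the paper.

def select_lane(lateral_offset_m, lane_width_m=0.6, n_lanes=3):
    # Quantize the signed lateral shift from the original walking path
    # (+ = right) into one of n_lanes options visualized on the ground,
    # with the middle lane as the neutral starting lane.
    centre = n_lanes // 2
    idx = centre + round(lateral_offset_m / lane_width_m)
    return max(0, min(n_lanes - 1, idx))

# Shifting 0.7 m to the right selects the right-hand lane.
print(select_lane(0.7))  # 2

A real implementation would additionally debounce the selection, for example by requiring a short dwell time in the target lane.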



[CHI EA '20] PneumoVolley: Pressure-based Haptic Feedback on the Head through Pneumatic Actuation

S. Günther, D. Schön, F. Müller, M. Mühlhäuser, M. Schmitz

ABSTRACT - Haptic Feedback brings immersion and presence in Virtual Reality (VR) to the next level. While research proposes the usage of various tactile sensations, such as vibration or ultrasound approaches, the potential applicability of pressure feedback on the head is still underexplored. In this paper, we contribute concepts and design considerations for pressure-based feedback on the head through pneumatic actuation. As a proof-of-concept implementing our pressure-based haptics, we further present PneumoVolley: a VR experience similar to the classic Volleyball game but played with the head. In an exploratory user study with 9 participants, we evaluated our concepts and identified a significantly increased involvement compared to a no-haptics baseline along with high realism and enjoyment ratings using pressure-based feedback on the head in VR.

In Proceedings of the 2020 CHI Conference Extended Abstracts on Human Factors in Computing Systems
10.1145/3334480.3382916    PDF    Teaser Video    Full Video   
@inproceedings{guenther2020pneumovolley,
 author = {G{\"u}nther, Sebastian and Sch{\"o}n, Dominik and M{\"u}ller, Florian and M{\"u}hlh{\"a}user, Max and Schmitz, Martin},
 title = {PneumoVolley: Pressure-based Haptic Feedback on the Head through Pneumatic Actuation},
 booktitle = {Proceedings of the 2020 CHI Conference Extended Abstracts on Human Factors in Computing Systems},
 series = {CHI EA '20},
 year = {2020},
 isbn = {978-1-4503-6708-0},
 location = {Honolulu, HI, USA},
 url = {http://dx.doi.org/10.1145/3334480.3382916},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2020/guenther2020pneumovolley.pdf},
 video = {https://www.youtube.com/watch?v=ZKnV8HrUx9M},
 teaservideo = {https://www.youtube.com/watch?v=-SlrCqF-5m4},
 doi = {10.1145/3334480.3382916},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {Haptics, Pressure, Volleyball, Virtual Reality, Blobbyvolley},
 abstract = {Haptic Feedback brings immersion and presence in Virtual Reality (VR) to the next level. While research proposes the usage of various tactile sensations, such as vibration or ultrasound approaches, the potential applicability of pressure feedback on the head is still underexplored. In this paper, we contribute concepts and design considerations for pressure-based feedback on the head through pneumatic actuation. As a proof-of-concept implementing our pressure-based haptics, we further present PneumoVolley: a VR experience similar to the classic Volleyball game but played with the head. In an exploratory user study with 9 participants, we evaluated our concepts and identified a significantly increased involvement compared to a no-haptics baseline along with high realism and enjoyment ratings using pressure-based feedback on the head in VR.}
}



[DIS '19] You Invaded my Tracking Space! Using Augmented Virtuality for Spotting Passersby in Room-Scale Virtual Reality

J. von Willich, M. Funk, F. Müller, K. Marky, J. Riemann, M. Mühlhäuser

ABSTRACT - With the proliferation of room-scale Virtual Reality (VR), more and more users install a VR system in their homes. When users are in VR, they are usually completely immersed in their application. However, sometimes passersby invade these tracking spaces and walk up to users that are currently immersed in VR to try and interact with them. As this either scares the user in VR or breaks the user's immersion, research has yet to find a way to seamlessly represent physical passersby in virtual worlds. In this paper, we propose and evaluate three different ways to represent physical passersby in a Virtual Environment using Augmented Virtuality. The representations encompass showing a 3D-Scan, showing an Avatar, and showing a 2D-Image of the passerby. Our results show that while a 2D-Image and an Avatar are the fastest representations to spot passersby, the Avatar and the 3D-Scan representations were the most accurate.

In Proceedings of the ACM Conference on Designing Interactive Systems, DIS '19
10.1145/3322276.3322334    Full Video   
@inproceedings{willich2019tracking,
title = {You Invaded my Tracking Space! Using Augmented Virtuality for Spotting Passersby in Room-Scale Virtual Reality},
author = {von Willich, Julius and Funk, Markus and M{\"u}ller, Florian and Marky, Karola and Riemann, Jan and M{\"u}hlh{\"a}user, Max},
doi = {10.1145/3322276.3322334},
booktitle = {Proceedings of the ACM Conference on Designing Interactive Systems, DIS '19},
keywords = {Virtual Reality; Augmented Reality; Passersby Visualization},
year = {2019},
series = {DIS '19},
video = {https://www.youtube.com/watch?v=SGOFeRX0tmk},
abstract = {With the proliferation of room-scale Virtual Reality (VR), more and more users install a VR system in their homes. When users are in VR, they are usually completely immersed in their application. However, sometimes passersby invade these tracking spaces and walk up to users that are currently immersed in VR to try and interact with them. As this either scares the user in VR or breaks the user's immersion, research has yet to find a way to seamlessly represent physical passersby in virtual worlds. In this paper, we propose and evaluate three different ways to represent physical passersby in a Virtual Environment using Augmented Virtuality. The representations encompass showing a 3D-Scan, showing an Avatar, and showing a 2D-Image of the passerby. Our results show that while a 2D-Image and an Avatar are the fastest representations to spot passersby, the Avatar and the 3D-Scan representations were the most accurate.}
}


[DIS '19] PneumAct: Pneumatic Kinesthetic Actuation of Body Joints in Virtual Reality Environments

S. Günther, M. Makhija, F. Müller, D. Schön, M. Mühlhäuser, M. Funk

ABSTRACT - Virtual Reality Environments (VRE) create an immersive user experience through visual, aural, and haptic sensations. However, the latter is often limited to vibrotactile sensations that are not able to actively provide kinesthetic motion actuation. Further, such sensations do not cover natural representations of physical forces, for example, when lifting a weight. We present PneumAct, a jacket to enable pneumatically actuated kinesthetic movements of arm joints in VRE. It integrates two types of actuators inflated through compressed air: a Contraction Actuator and an Extension Actuator. We evaluate our PneumAct jacket through two user studies with a total of 32 participants: First, we perform a technical evaluation measuring the contraction and extension angles of different inflation patterns and inflation durations. Second, we evaluate PneumAct in three VRE scenarios, comparing our system to traditional controller-based vibrotactile feedback and to a baseline without haptic feedback.

In Proceedings of the ACM Conference on Designing Interactive Systems, DIS '19
10.1145/3322276.3322302    Teaser Video   
@inproceedings{guenther2019pneumact,
title = {PneumAct: Pneumatic Kinesthetic Actuation of Body Joints in Virtual Reality Environments},
author = {G{\"u}nther, Sebastian and Makhija, Mohit and M{\"u}ller, Florian and Sch{\"o}n, Dominik and M{\"u}hlh{\"a}user, Max and Funk, Markus},
doi = {10.1145/3322276.3322302},
booktitle = {Proceedings of the ACM Conference on Designing Interactive Systems, DIS '19},
keywords = {Compressed Air,Force Feedback,Kinesthetic,Pneumatic,haptics,virtual Reality},
year = {2019},
series = {DIS '19},
teaservideo = {https://youtu.be/4lRWxzs4Rgs},
abstract={Virtual Reality Environments (VRE) create an immersive user experience through visual, aural, and haptic sensations. However, the latter is often limited to vibrotactile sensations that are not able to actively provide kinesthetic motion actuation. Further, such sensations do not cover natural representations of physical forces, for example, when lifting a weight. We present PneumAct, a jacket to enable pneumatically actuated kinesthetic movements of arm joints in VRE. It integrates two types of actuators inflated through compressed air: a Contraction Actuator and an Extension Actuator. We evaluate our PneumAct jacket through two user studies with a total of 32 participants: First, we perform a technical evaluation measuring the contraction and extension angles of different inflation patterns and inflation durations. Second, we evaluate PneumAct in three VRE scenarios, comparing our system to traditional controller-based vibrotactile feedback and to a baseline without haptic feedback.}
}


[CHI EA '19] Slappyfications: Towards Ubiquitous Physical and Embodied Notifications

S. Günther, F. Müller, M. Funk, M. Mühlhäuser

ABSTRACT - Although there are emerging trends of notifying persons through ubiquitous technologies, such as ambient light, vibrotactile, or auditory cues, none of these technologies is truly ubiquitous, and all have proven to be easily missed or ignored. In this work, we propose Slappyfications, a novel way of sending unmissable embodied and ubiquitous notifications over a distance. Our proof-of-concept prototype enables the users to send three types of Slappyfications: poke, slap, and the STEAM-HAMMER. Through a Wizard-of-Oz study, we show the applicability of our system in real-world scenarios. The results reveal a promising trend, as none of the participants missed a single Slappyfication.

In Proceedings of the 2019 CHI Conference Extended Abstracts on Human Factors in Computing Systems
10.1145/3290607.3311780    PDF    Full Video   
@inproceedings{guenther2019slappyfications,
title={Slappyfications: Towards Ubiquitous Physical and Embodied Notifications},
author={G{\"u}nther, Sebastian and M{\"u}ller, Florian and Funk, Markus and M{\"u}hlh{\"a}user, Max},
booktitle = {Proceedings of the 2019 CHI Conference Extended Abstracts on Human Factors in Computing Systems},
series = {CHI EA '19},
doi={10.1145/3290607.3311780},
year={2019},
video = {https://www.youtube.com/watch?v=qDmrSgyV20s},
abstract={Although there are emerging trends of notifying persons through ubiquitous technologies, such as ambient light, vibrotactile, or auditory cues, none of these technologies is truly ubiquitous, and all have proven to be easily missed or ignored. In this work, we propose Slappyfications, a novel way of sending unmissable embodied and ubiquitous notifications over a distance. Our proof-of-concept prototype enables the users to send three types of Slappyfications: poke, slap, and the STEAM-HAMMER. Through a Wizard-of-Oz study, we show the applicability of our system in real-world scenarios. The results reveal a promising trend, as none of the participants missed a single Slappyfication.},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2019/guenther2019slappyfications.pdf}
}

[CHI '19] Mind the Tap: Assessing Foot-Taps for Interacting with Head-Mounted Displays

F. Müller, J. McManus, S. Günther, M. Schmitz, M. Mühlhäuser, M. Funk

ABSTRACT - From voice commands and air taps to touch gestures on frames: Various techniques for interacting with head-mounted displays (HMDs) have been proposed. While these techniques have both benefits and drawbacks dependent on the current situation of the user, research on interacting with HMDs has not concluded yet. In this paper, we add to the body of research on interacting with HMDs by exploring foot-tapping as an input modality. Through two controlled experiments with a total of 36 participants, we first explore direct interaction with interfaces that are displayed on the floor and require the user to look down to interact. Secondly, we investigate indirect interaction with interfaces that, although operated by the user's feet, are always visible as they are floating in front of the user. Based on the results of the two experiments, we provide design recommendations for direct and indirect foot-based user interfaces.

In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
10.1145/3290605.3300707    PDF    Teaser Video    Full Video   
@inproceedings{mueller2019mind,
title={Mind the Tap: Assessing Foot-Taps for Interacting with Head-Mounted Displays},
author={M{\"u}ller, Florian and McManus, Joshua and G{\"u}nther, Sebastian and Schmitz, Martin and M{\"u}hlh{\"a}user, Max and Funk, Markus},
booktitle={Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
doi={10.1145/3290605.3300707},
year={2019},
series = {CHI '19},
teaservideo={https://www.youtube.com/watch?v=RhabMsP0X14},
video={https://www.youtube.com/watch?v=D5hTVIEb7iA},
abstract={From voice commands and air taps to touch gestures on frames: Various techniques for interacting with head-mounted displays (HMDs) have been proposed. While these techniques have both benefits and drawbacks dependent on the current situation of the user, research on interacting with HMDs has not concluded yet. In this paper, we add to the body of research on interacting with HMDs by exploring foot-tapping as an input modality. Through two controlled experiments with a total of 36 participants, we first explore direct interaction with interfaces that are displayed on the floor and require the user to look down to interact. Secondly, we investigate indirect interaction with interfaces that, although operated by the user's feet, are always visible as they are floating in front of the user. Based on the results of the two experiments, we provide design recommendations for direct and indirect foot-based user interfaces.},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2019/mueller2019mindthetap.pdf},
 award={Honorable Mention}
}

[CHI '19] Assessing the Accuracy of Point & Teleport Locomotion with Orientation Indication for Virtual Reality using Curved Trajectories

M. Funk, F. Müller, M. Fendrich, M. Shene, M. Kolvenbach, N. Dobbertin, S. Günther, M. Mühlhäuser

ABSTRACT - Room-scale Virtual Reality (VR) systems have arrived in users’ homes where tracked environments are set up in limited physical spaces. As most Virtual Environments (VEs) are larger than the tracked physical space, locomotion techniques are used to navigate in VEs. Currently, in recent VR games, point & teleport is the most popular locomotion technique. However, it only allows users to select the position of the teleportation and not the orientation that the user is facing after the teleport. This results in users having to manually correct their orientation after teleporting and possibly getting entangled by the cable of the headset. In this paper, we introduce and evaluate three different point & teleport techniques that enable users to specify the target orientation while teleporting. The results show that, although the three teleportation techniques with orientation indication increase the average teleportation time, they lead to a decreased need for correcting the orientation after teleportation.

In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
10.1145/3290605.3300377    PDF    Teaser Video    Full Video   
@inproceedings{funk2019assessing,
title={Assessing the Accuracy of Point \& Teleport Locomotion with Orientation Indication for Virtual Reality using Curved Trajectories},
author={Funk, Markus and M{\"u}ller, Florian and Fendrich, Marco and Shene, Megan and Kolvenbach, Moritz and Dobbertin, Niclas and G{\"u}nther, Sebastian and M{\"u}hlh{\"a}user, Max},
booktitle={Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
doi={10.1145/3290605.3300377},
year={2019},
series = {CHI '19},
teaservideo={https://www.youtube.com/watch?v=klu82WxeBlA},
video={https://www.youtube.com/watch?v=uXctClcQu_g},
abstract={Room-scale Virtual Reality (VR) systems have arrived in users’ homes where tracked environments are set up in limited physical spaces. As most Virtual Environments (VEs) are larger than the tracked physical space, locomotion techniques are used to navigate in VEs. Currently, in recent VR games, point \& teleport is the most popular locomotion technique. However, it only allows users to select the position of the teleportation and not the orientation that the user is facing after the teleport. This results in users having to manually correct their orientation after teleporting and possibly getting entangled by the cable of the headset. In this paper, we introduce and evaluate three different point \& teleport techniques that enable users to specify the target orientation while teleporting. The results show that, although the three teleportation techniques with orientation indication increase the average teleportation time, they lead to a decreased need for correcting the orientation after teleportation.},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2019/funk2019assessing.pdf}
}
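
Curved teleport trajectories of this kind are commonly implemented as a ballistic arc cast from the controller. The Python sketch below integrates such an arc and returns the teleport target where it meets a flat floor at y = 0; the launch speed, step size and flat floor are assumptions for illustration, not the paper's parameters.

import numpy as np

def teleport_target(origin, aim_dir, speed=8.0, g=9.81, dt=0.02, max_t=3.0):
    # Integrate a ballistic arc from the controller pose and return the
    # first point where it crosses the floor plane y = 0 (None otherwise).
    pos = np.asarray(origin, dtype=float)
    vel = speed * np.asarray(aim_dir, dtype=float) / np.linalg.norm(aim_dir)
    gravity = np.array([0.0, -g, 0.0])
    t = 0.0
    while t < max_t:
        nxt = pos + vel * dt
        if pos[1] > 0.0 >= nxt[1]:
            s = pos[1] / (pos[1] - nxt[1])  # interpolate the floor crossing
            return pos + s * (nxt - pos)
        pos, vel, t = nxt, vel + gravity * dt, t + dt
    return None

# Aiming slightly upwards from a controller held at 1.2 m height.
print(teleport_target([0.0, 1.2, 0.0], [0.0, 0.3, 1.0]))

The orientation indication evaluated in the paper would then be applied at the computed target, e.g. taken from a thumbstick direction or the controller's roll.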


[CHI '19] ./trilaterate: A Fabrication Pipeline to Design and 3D Print Hover-, Touch-, and Force-Sensitive Objects

M. Schmitz, M. Stitz, F. Müller, M. Funk, M. Mühlhäuser

ABSTRACT - Hover, touch, and force are promising input modalities that get increasingly integrated into screens and everyday objects. However, these interactions are often limited to flat surfaces and the integration of suitable sensors is time-consuming and costly. To alleviate these limitations, we contribute Trilaterate: a fabrication pipeline to 3D print custom objects that detect the 3D position of a finger hovering, touching, or forcing them by combining multiple capacitance measurements via capacitive trilateration. Trilaterate places and routes actively-shielded sensors inside the object and operates on consumer-level 3D printers. We present technical evaluations and example applications that validate and demonstrate the wide applicability of Trilaterate.

In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
10.1145/3290605.3300684    PDF    Teaser Video   
@inproceedings{schmitz2019trilaterate,
title={./trilaterate: A Fabrication Pipeline to Design and 3D Print Hover-, Touch-, and Force-Sensitive Objects},
author={Schmitz, Martin and Stitz, Martin and M{\"u}ller, Florian and Funk, Markus and M{\"u}hlh{\"a}user, Max},
booktitle={Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
series = {CHI '19},
doi={10.1145/3290605.3300684},
year={2019},
teaservideo={https://www.youtube.com/watch?v=QJNmH_IvarY},
abstract={Hover, touch, and force are promising input modalities that get increasingly integrated into screens and everyday objects. However, these interactions are often limited to flat surfaces and the integration of suitable sensors is time-consuming and costly. 
To alleviate these limitations, we contribute Trilaterate: A fabrication pipeline to 3D print custom objects that detect the 3D position of a finger hovering, touching, or forcing them by combining multiple capacitance measurements via capacitive trilateration. Trilaterate places and routes actively-shielded sensors inside the object and operates on consumer-level 3D printers. We present technical evaluations and example applications that validate and demonstrate the wide applicability of Trilaterate.},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2019/schmitz2019trilaterate.pdf}
}
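
Once each capacitance measurement has been calibrated into a distance estimate, the trilateration step itself reduces to a small least-squares problem. A minimal NumPy sketch follows; the capacitance-to-distance calibration is assumed to have happened already, and is the harder part in practice.

import numpy as np

def trilaterate(anchors, distances):
    # Linearize |x - a_i|^2 = d_i^2 against the last anchor and solve the
    # resulting linear system for the 3D finger position x.
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (anchors[:-1] - anchors[-1])
    b = (d[-1] ** 2 - d[:-1] ** 2
         + np.sum(anchors[:-1] ** 2, axis=1) - np.sum(anchors[-1] ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Four sensor pads at known positions, distances taken to a point at (1, 1, 1).
pads = [(0, 0, 0), (3, 0, 0), (0, 3, 0), (0, 0, 3)]
dists = [np.linalg.norm(np.subtract(p, (1, 1, 1))) for p in pads]
print(trilaterate(pads, dists))  # ~[1. 1. 1.]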

[CHI EA '19] LookUnlock: Using Spatial-Targets for User-Authentication on HMDs

M. Funk, K. Marky, I. Mizutani, M. Kritzler, S. Mayer, F. Michahelles

ABSTRACT - With head-mounted displays (HMDs), users can access and interact with a broad range of applications and data. Although some of this information is privacy-sensitive or even confidential, no intuitive, unobtrusive and secure authentication technique is available yet for HMDs. We present LookUnlock, an authentication technique for HMDs that uses passwords that are composed of spatial and virtual targets. Through a proof-of-concept implementation and security evaluation, we demonstrate that this technique can be efficiently used by people and is resistant to shoulder-surfing attacks.

In Proceedings of the 2019 CHI Conference Extended Abstracts on Human Factors in Computing Systems
10.1145/3290607.3312959    PDF    Teaser Video   
@inproceedings{funk2019lookunlock,
title={LookUnlock: Using Spatial-Targets for User-Authentication on HMDs},
author={Funk, Markus and Marky, Karola and Mizutani, Iori and Kritzler, Mareike and Mayer, Simon and Michahelles, Florian},
booktitle = {Proceedings of the 2019 CHI Conference Extended Abstracts on Human Factors in Computing Systems},
series = {CHI EA '19},
doi={10.1145/3290607.3312959},
year={2019},
teaservideo={https://www.youtube.com/watch?v=NA0EMlK0zrI},
abstract={With head-mounted displays (HMDs), users can access and interact with a broad range of applications and data. Although some of this information is privacy-sensitive or even confidential, no intuitive, unobtrusive and secure authentication technique is available yet for HMDs. We present LookUnlock, an authentication technique for HMDs that uses passwords that are composed of spatial and virtual targets. Through a proof-of-concept implementation and security evaluation, we demonstrate that this technique can be efficiently used by people and is resistant to shoulder-surfing attacks.},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2019/funk2019lookunlock.pdf}
}
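
A minimal sketch of how verifying such a spatial password could look. The dwell-based selection and the 1-second threshold are assumptions for illustration; the abstract does not specify the selection mechanism.

def entered_password(gaze_hits, password, dwell_s=1.0):
    # gaze_hits: chronological (target_id, dwell_seconds) pairs reported by
    # the HMD's gaze or head tracker; a target counts as selected once the
    # user dwells on it for at least dwell_s seconds.
    selected = [target for target, dwell in gaze_hits if dwell >= dwell_s]
    return selected == list(password)

# A password composed of two physical and one virtual target.
hits = [("door", 1.2), ("plant", 0.4), ("virtual_cube", 1.5), ("window", 1.1)]
print(entered_password(hits, ["door", "virtual_cube", "window"]))  # True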

[CHI EA '19] Usability of Code Voting Modalities

K. Marky, M. Schmitz, F. Lange, M. Mühlhäuser

ABSTRACT - Internet voting has promising benefits, such as cost reduction, but it also introduces drawbacks: the computer that is used for voting learns the voter's choice. Code voting aims to protect the voter's choice by the introduction of voting codes that are listed on paper. To cast a vote, the voters need to provide the voting code belonging to their choice. The additional step influences the usability. We investigate three modalities for entering voting codes: manual, QR-codes and tangibles. The results show that QR-codes offer the best usability while tangibles are perceived as the most novel and fun.

In CHI Conference on Human Factors in Computing Systems Late Breaking Work
10.1145/3290607.3312971    Teaser Video   
@inproceedings{marky2019usability,
title = {Usability of Code Voting Modalities},
publisher = {ACM},
year = {2019},
author = {Marky, Karola and Schmitz, Martin and Lange, Felix and M{\"u}hlh{\"a}user, Max},
booktitle = {CHI Conference on Human Factors in Computing Systems Late Breaking Work},
series = {CHI EA '19},
keywords = {E-Voting; Code Voting; Tangibles; Usability Evaluation},
abstract = {Internet voting has promising benefits, such as cost reduction, but it also introduces drawbacks: the computer that is used for voting learns the voter's choice. Code voting aims to protect the voter's choice by the introduction of voting codes that are listed on paper. To cast a vote, the voters need to provide the voting code belonging to their choice. The additional step influences the usability. We investigate three modalities for entering voting codes: manual, QR-codes and tangibles. The results show that QR-codes offer the best usability while tangibles are perceived as the most novel and fun.},
url = {http://tubiblio.ulb.tu-darmstadt.de/111897/},
doi = {10.1145/3290607.3312971},
teaservideo = {https://www.youtube.com/watch?v=tykP_IrVOIk},
}


[CHI EA '19] VRChairRacer: Using an Office Chair Backrest as a Locomotion Technique for VR Racing Games

J. von Willich, D. Schön, S. Günther, F. Müller, M. Mühlhäuser, M. Funk

ABSTRACT - Locomotion in Virtual Reality (VR) is an important topic as there is a mismatch between the size of a Virtual Environment and the physically available tracking space. Although many locomotion techniques have been proposed, research on VR locomotion has not concluded yet. In this demonstration, we contribute to the area of VR locomotion by introducing VRChairRacer. VRChairRacer introduces a novel mapping of the velocity of a racing cart onto the backrest of an office chair. Further, it maps a user’s rotation onto the steering of a virtual racing cart. VRChairRacer demonstrates this locomotion technique to the community through an immersive multiplayer racing demo.

In Proceedings of the 2019 CHI Conference Extended Abstracts on Human Factors in Computing Systems
10.1145/3290607.3313254    PDF    Teaser Video    Full Video   
@inproceedings{willich2019vrchairracer,
title={VRChairRacer: Using an Office Chair Backrest as a Locomotion Technique for VR Racing Games},
author={von Willich, Julius and Sch{\"o}n, Dominik and G{\"u}nther, Sebastian and M{\"u}ller, Florian and M{\"u}hlh{\"a}user, Max and Funk, Markus},
booktitle = {Proceedings of the 2019 CHI Conference Extended Abstracts on Human Factors in Computing Systems},
series = {CHI EA '19},
doi={10.1145/3290607.3313254},
year={2019},
teaservideo={https://www.youtube.com/watch?v=8ukVghWoTlE},
video={https://www.youtube.com/watch?v=v906aGntoKY},
abstract={Locomotion in Virtual Reality (VR) is an important topic as there is a mismatch between the size of a Virtual Environment and the physically available tracking space. Although many locomotion techniques have been proposed, research on VR locomotion has not concluded yet. In this demonstration, we contribute to the area of VR locomotion by introducing VRChairRacer. VRChairRacer introduces a novel mapping of the velocity of a racing cart onto the backrest of an office chair. Further, it maps a user’s rotation onto the steering of a virtual racing cart. VRChairRacer demonstrates this locomotion technique to the community through an immersive multiplayer racing demo.},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2019/willich2019vrchairracer.pdf}
}
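
The described mapping can be sketched in a few lines of Python; the maximum lean angle and top speed below are invented constants for illustration, not values from the demo.

def cart_controls(lean_deg, chair_yaw_deg, max_lean=25.0, max_speed=20.0):
    # Backrest lean controls the cart's velocity, the chair's rotation its
    # steering; both are clamped to their valid ranges.
    throttle = min(1.0, max(0.0, lean_deg / max_lean))
    steering = min(1.0, max(-1.0, chair_yaw_deg / 90.0))
    return throttle * max_speed, steering

# Leaning halfway back while turning 45 degrees to the right.
print(cart_controls(12.5, 45.0))  # (10.0, 0.5)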

[PETRA '19] APS: A 3D Human Body Posture Set as a Baseline for Posture Guidance

H. Elsayed, M. Weigel, J. von Willich, M. Funk, M. Mühlhäuser

ABSTRACT - Human body postures are an important input modality for motion guidance and other application domains in HCI, e.g. games, character animations, and interaction with public displays. However, for training and guidance of body postures, prior research had to define its own whole-body gesture sets. Hence, the interaction designs and evaluation results are difficult to compare, due to the lack of a standardized posture set. In this work, we contribute APS (APS Posture Set), a novel posture set including 40 body postures. It is based on prior research, sports, and body language. For each identified posture, we collected 3D posture data using a Microsoft Kinect. We make the skeleton data, 3D mesh objects and SMPL data available for future research. Taken together, APS can be used to facilitate the design of interfaces that use body gestures and as a reference set for future user studies and system evaluations.

In Proceedings of the 12th PErvasive Technologies Related to Assistive Environments Conference
10.1145/3316782.3324012   
@inproceedings{elsayed2019aps,
title = {APS: A 3D Human Body Posture Set as a Baseline for Posture Guidance},
author = {Elsayed, Hesham and Weigel, Martin and von Willich, Julius and Funk, Markus and M{\"u}hlh{\"a}user, Max},
doi = {10.1145/3316782.3324012},
booktitle = {Proceedings of the 12th PErvasive Technologies Related to Assistive Environments Conference},
year = {2019},
series = {PETRA '19},
acmid = {3324012},
publisher = {ACM},
address = {New York, NY, USA},
abstract = {Human body postures are an important input modality for motion guidance and other application domains in HCI, e.g. games, character animations, and interaction with public displays. However, for training and guidance of body postures, prior research had to define its own whole-body gesture sets. Hence, the interaction designs and evaluation results are difficult to compare, due to the lack of a standardized posture set. In this work, we contribute APS (APS Posture Set), a novel posture set including 40 body postures. It is based on prior research, sports, and body language. For each identified posture, we collected 3D posture data using a Microsoft Kinect. We make the skeleton data, 3D mesh objects and SMPL data available for future research. Taken together, APS can be used to facilitate the design of interfaces that use body gestures and as a reference set for future user studies and system evaluations.}
}

[PETRA '18] TactileGlove: Assistive Spatial Guidance in 3D Space Through Vibrotactile Navigation

S. Günther, F. Müller, M. Funk, J. Kirchner, N. Dezfuli, M. Mühlhäuser

ABSTRACT - With the recent advance in computing technology, more and more environments are becoming interactive. For interacting with these environments, traditionally 2D input and output elements are being used. However, recently interaction spaces also expanded to 3D space, which enabled new possibilities but also led to challenges in assisting users with interacting in such a 3D space. Usually, this challenge of communicating 3D positions is solved visually. This paper explores a different approach: spatial guidance through vibrotactile instructions. Therefore, we introduce TactileGlove, a smart glove equipped with vibrotactile actuators for providing spatial guidance in 3D space. We contribute a user study with 15 participants to explore how a different number of actuators and metaphors affect the user performance. As a result, we found that using a Pull metaphor for vibrotactile navigation instructions is preferred by our participants. Further, we found that using a higher number of actuators reduces the target acquisition time compared to using a lower number.

In Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference
10.1145/3197768.3197785    PDF   
@inproceedings{guenther2018tactileglove,
 author = {G\"{u}nther, Sebastian and M\"{u}ller, Florian and Funk, Markus and Kirchner, Jan and Dezfuli, Niloofar and M\"{u}hlh\"{a}user, Max},
 title = {TactileGlove: Assistive Spatial Guidance in 3D Space Through Vibrotactile Navigation},
 booktitle = {Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference},
 series = {PETRA '18},
 year = {2018},
 isbn = {978-1-4503-6390-7},
 location = {Corfu, Greece},
 pages = {273--280},
 numpages = {8},
 url = {http://doi.acm.org/10.1145/3197768.3197785},
 doi = {10.1145/3197768.3197785},
 acmid = {3197785},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {3D-Space, Assistive Technology, Haptics, Navigation, Pull Push Metaphors, Spatial Guidance, Vibrotactile},
 abstract={With recent advances in computing technology, more and more environments are becoming interactive. Traditionally, 2D input and output elements are used for interacting with these environments. Recently, however, interaction spaces have expanded to 3D, which enables new possibilities but also creates challenges in assisting users with interaction in 3D space. Usually, this challenge of communicating 3D positions is solved visually. This paper explores a different approach: spatial guidance through vibrotactile instructions. We introduce TactileGlove, a smart glove equipped with vibrotactile actuators for providing spatial guidance in 3D space. We contribute a user study with 15 participants exploring how the number of actuators and the choice of metaphor affect user performance. We found that participants preferred a Pull metaphor for vibrotactile navigation instructions, and that a higher number of actuators reduces target acquisition time compared to a lower number.},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2018/gunther2018tactileglove.pdf}
}
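
As a rough illustration of the Pull metaphor that participants preferred, the Python sketch below selects the actuator best aligned with the hand-to-target direction and scales intensity with the remaining distance. The six-actuator layout, the dot-product selection rule, and the intensity mapping are illustrative assumptions, not the paper's implementation.

# Minimal sketch of a "Pull" vibrotactile cue, assuming six actuators
# placed around the hand; layout and mappings are illustrative assumptions.
import numpy as np

# Unit vectors from the hand center toward each actuator (assumed layout).
ACTUATOR_DIRS = {
    "up": np.array([0.0, 1.0, 0.0]),    "down": np.array([0.0, -1.0, 0.0]),
    "left": np.array([-1.0, 0.0, 0.0]), "right": np.array([1.0, 0.0, 0.0]),
    "front": np.array([0.0, 0.0, 1.0]), "back": np.array([0.0, 0.0, -1.0]),
}

def pull_cue(hand_pos, target_pos):
    """Return (actuator, intensity): the actuator best aligned with the
    hand-to-target direction; intensity grows with remaining distance."""
    direction = np.asarray(target_pos, float) - np.asarray(hand_pos, float)
    distance = np.linalg.norm(direction)
    if distance < 1e-6:
        return None, 0.0              # target reached, stop vibrating
    direction /= distance
    best = max(ACTUATOR_DIRS, key=lambda a: ACTUATOR_DIRS[a] @ direction)
    return best, min(1.0, distance)   # assumed: intensity saturates at 1 m

Under the same assumptions, a Push metaphor would activate the opposite actuator (the minimum instead of the maximum dot product), nudging the hand away from the vibration rather than pulling it toward the target.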


[CHI EA '18] CheckMate: Exploring a Tangible Augmented Reality Interface for Remote Interaction

S. Günther, F. Müller, M. Schmitz, J. Riemann, N. Dezfuli, M. Funk, D. Schön, M. Mühlhäuser

ABSTRACT - The digitalized world comes with increasing Internet capabilities, making it easier than ever to connect people over a distance. Video conferencing and similar online applications bring people together virtually when they cannot physically spend as much time with each other as they would like. However, such remote experiences tend to lose the feeling of their traditional counterparts: people lack direct visual presence, and no haptic feedback is available. In this paper, we tackle this problem by introducing our system CheckMate. We combine Augmented Reality with capacitive 3D-printed objects that can be sensed on an interactive surface to enable remote interaction while providing the same tangible experience as in co-located scenarios. As a proof of concept, we implemented a sample application based on the traditional chess game.

In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems
10.1145/3170427.3188647    PDF    Teaser Video   
@inproceedings{guenther2018checkmate,
 author = {G\"{u}nther, Sebastian and M\"{u}ller, Florian and Schmitz, Martin and Riemann, Jan and Dezfuli, Niloofar and Funk, Markus and Sch\"{o}n, Dominik and M\"{u}hlh\"{a}user, Max},
 title = {CheckMate: Exploring a Tangible Augmented Reality Interface for Remote Interaction},
 booktitle = {Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems},
 series = {CHI EA '18},
 year = {2018},
 isbn = {978-1-4503-5621-3},
 location = {Montreal QC, Canada},
 pages = {LBW570:1--LBW570:6},
 articleno = {LBW570},
 numpages = {6},
 url = {http://doi.acm.org/10.1145/3170427.3188647},
 doi = {10.1145/3170427.3188647},
 acmid = {3188647},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {3d fabrication, augmented reality, chess, mixed reality, remote collaboration, tabletops, tangibles},
 teaservideo={https://www.youtube.com/watch?v=Geyr95Nl8mc},
 abstract={The digitalized world comes with increasing Internet capabilities, making it easier than ever to connect people over a distance. Video conferencing and similar online applications bring people together virtually when they cannot physically spend as much time with each other as they would like. However, such remote experiences tend to lose the feeling of their traditional counterparts: people lack direct visual presence, and no haptic feedback is available. In this paper, we tackle this problem by introducing our system CheckMate. We combine Augmented Reality with capacitive 3D-printed objects that can be sensed on an interactive surface to enable remote interaction while providing the same tangible experience as in co-located scenarios. As a proof of concept, we implemented a sample application based on the traditional chess game.},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2019/guenther2018checkmate.pdf}
}
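
One common way to sense such capacitive 3D-printed tangibles on a touchscreen is to give each piece a distinctive footprint of contact points and match the pattern by its pairwise distances. The Python sketch below shows this idea; the three-point footprints and distance signatures are invented for illustration, and the paper's actual recognition scheme may differ.

# Illustrative tangible recognition by contact-point footprint: each printed
# piece yields three capacitive touch points whose sorted pairwise distances
# form a signature. The signatures below are invented for illustration.
from itertools import combinations

PIECE_SIGNATURES = {
    "pawn":   [20.0, 20.0, 28.3],   # distances in mm (hypothetical)
    "knight": [15.0, 25.0, 31.6],
}

def identify_piece(points, tol=1.5):
    """Match exactly three touch points against the known signatures."""
    if len(points) != 3:
        return None
    dists = sorted(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                   for (ax, ay), (bx, by) in combinations(points, 2))
    for name, sig in PIECE_SIGNATURES.items():
        if all(abs(d - s) <= tol for d, s in zip(dists, sig)):
            return name
    return None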

[CHI EA '18] Personalized User-Carried Single Button Interfaces As Shortcuts for Interacting with Smart Devices

F. Müller, M. Schmitz, M. Funk, S. Günther, N. Dezfuli, M. Mühlhäuser

ABSTRACT - We are experiencing a trend of integrating computing functionality into more and more common and popular devices. While these so-called smart devices offer many possibilities for automating and personalizing everyday routines, interacting with them and customizing them requires either programming effort or a smartphone app to control the devices. In this work, we propose and classify Personalized User-Carried Single Button Interfaces (PUCSBIs) as shortcuts for interacting with smart devices. We implement a proof of concept of such an interface for a coffee machine. Through an in-the-wild deployment of the coffee machine over approximately three months, we report initial experiences from 40 participants using PUCSBIs to interact with smart devices.

In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems
10.1145/3170427.3188661    PDF    Teaser Video   
@inproceedings{mueller2018pucsbi,
 author = {M\"{u}ller, Florian and Schmitz, Martin and Funk, Markus and G\"{u}nther, Sebastian and Dezfuli, Niloofar and M\"{u}hlh\"{a}user, Max},
 title = {Personalized User-Carried Single Button Interfaces As Shortcuts for Interacting with Smart Devices},
 booktitle = {Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems},
 series = {CHI EA '18},
 year = {2018},
 isbn = {978-1-4503-5621-3},
 location = {Montreal QC, Canada},
 pages = {LBW602:1--LBW602:6},
 articleno = {LBW602},
 numpages = {6},
 url = {http://doi.acm.org/10.1145/3170427.3188661},
 doi = {10.1145/3170427.3188661},
 acmid = {3188661},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {human factors, interaction, smart devices},
 teaservideo={https://www.youtube.com/watch?v=Z5wicorfmxU},
 abstract={We are experiencing a trend of integrating computing functionality into more and more common and popular devices. While these so-called smart devices offer many possibilities for automating and personalizing everyday routines, interacting with them and customizing them requires either programming effort or a smartphone app to control the devices. In this work, we propose and classify Personalized User-Carried Single Button Interfaces (PUCSBIs) as shortcuts for interacting with smart devices. We implement a proof of concept of such an interface for a coffee machine. Through an in-the-wild deployment of the coffee machine over approximately three months, we report initial experiences from 40 participants using PUCSBIs to interact with smart devices.},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2018/mueller_pucsbi.pdf}
}
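
Conceptually, a PUCSBI binds one physical button to one personalized action on a smart device. The Python sketch below illustrates this with a hypothetical MQTT-connected coffee machine using the paho-mqtt library; the topic, broker host, and preference format are assumptions, not details from the paper.

# Hypothetical PUCSBI backend: one button press triggers the user's
# pre-configured coffee order via MQTT (paho-mqtt). Topic, broker host, and
# preference format are assumptions, not details from the paper.
import json
import paho.mqtt.publish as publish

USER_PREFERENCES = {
    "alice": {"drink": "espresso", "strength": "strong"},
    "bob":   {"drink": "latte", "strength": "mild"},
}

def on_button_press(user_id):
    """Map a single button press to its owner's personalized action."""
    prefs = USER_PREFERENCES.get(user_id)
    if prefs is not None:
        publish.single("home/coffeemachine/brew",
                       payload=json.dumps(prefs),
                       hostname="broker.local")  # assumed broker address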

 [CHI '18] Off-Line Sensing: Memorizing Interactions in Passive 3D-Printed Objects

M. Schmitz, M. Herbers, N. Dezfuli, S. Günther, M. Mühlhäuser

ABSTRACT - Embedding sensors into objects allows them to recognize various interactions. However, sensing usually requires active electronics that are often costly, need time to be assembled, and constantly draw power. Thus, we propose off-line sensing: passive 3D-printed sensors that detect one-time interactions, such as accelerating or flipping, but require neither active electronics nor power at the time of the interaction. They memorize a pre-defined interaction via an embedded structure filled with a conductive medium (e.g., a liquid). Whether a sensor was exposed to the interaction can be read out via a capacitive touchscreen. Sensors are printed in a single pass on a consumer-level 3D printer. Through a series of experiments, we show the feasibility of off-line sensing.

In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
10.1145/3173574.3173756    PDF    Teaser Video   
@inproceedings{schmitz2018offline,
 author = {Schmitz, Martin and Herbers, Martin and Dezfuli, Niloofar and G\"{u}nther, Sebastian and M\"{u}hlh\"{a}user, Max},
 title = {Off-Line Sensing: Memorizing Interactions in Passive 3D-Printed Objects},
 booktitle = {Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems},
 series = {CHI '18},
 year = {2018},
 isbn = {978-1-4503-5620-6},
 location = {Montreal QC, Canada},
 pages = {182:1--182:8},
 articleno = {182},
 numpages = {8},
 url = {http://doi.acm.org/10.1145/3173574.3173756},
 doi = {10.1145/3173574.3173756},
 acmid = {3173756},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {3d printing, capacitive sensing, digital fabrication, input, mechanism, metamaterial, sensors},
 teaservideo={https://www.youtube.com/watch?v=19dDaeBEnPM},
 abstract={Embedding sensors into objects allows them to recognize various interactions. However, sensing usually requires active electronics that are often costly, need time to be assembled, and constantly draw power. Thus, we propose off-line sensing: passive 3D-printed sensors that detect one-time interactions, such as accelerating or flipping, but require neither active electronics nor power at the time of the interaction. They memorize a pre-defined interaction via an embedded structure filled with a conductive medium (e.g., a liquid). Whether a sensor was exposed to the interaction can be read out via a capacitive touchscreen. Sensors are printed in a single pass on a consumer-level 3D printer. Through a series of experiments, we show the feasibility of off-line sensing.},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2018/schmitz2018offline.pdf},
 award={Best Paper}
}
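
The read-out side of such a passive sensor can be quite simple: once the conductive medium has shifted, the object exposes a different pattern of capacitive contact points, which the touchscreen registers as additional touches. The Python sketch below illustrates one hypothetical encoding; the actual pad layouts and read-out procedure are described in the paper.

# Hypothetical read-out: a pristine sensor presents two contact pads to the
# touchscreen; after the one-time interaction, the displaced conductive
# medium bridges an additional pad, producing an extra touch point. The pad
# counts are an invented encoding for illustration only.

def was_interacted(touch_points, baseline_pads=2):
    """Return True if the sensor exposes more pads than in its pristine state."""
    return len(touch_points) > baseline_pads

# Usage: was_interacted([(12.0, 40.2), (30.1, 40.0), (21.0, 55.7)]) -> True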

[IMWUT '18] FlowPut: Environment-Aware Interactivity for Tangible 3D Objects

J. Riemann, M. Schmitz, A. Hendrich, M. Mühlhäuser

ABSTRACT - Tangible interaction has been shown to be beneficial in a wide variety of scenarios, since it provides more direct manipulation and haptic feedback. Further, inherently three-dimensional information is represented more naturally by a 3D object than by a flat picture on a screen. Yet, today's tangibles often have pre-defined form factors and limited input and output facilities. To overcome this issue, the combination of projection and depth cameras is used as a fast and flexible way of non-intrusively adding input and output to tangibles. However, tangibles are often quite small, and hence the space for output and interaction on their surface is limited. Therefore, we propose FlowPut: an environment-aware framework that utilizes the space available on and around a tangible object for projected visual output. By means of an optimization-based layout approach, FlowPut considers the environment of the objects to avoid interference between the projection and real-world objects. Moreover, we contribute occlusion-resilient object recognition and tracking for tangible objects based on their 3D models, and point-cloud-based multi-touch detection that also allows sensing touches on the sides of a tangible. FlowPut is validated through a series of technical experiments, a user study, and two example applications.

In Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT)
10.1145/3191763    PDF   
@article{riemann2018flowput,
 author = {Riemann, Jan and Schmitz, Martin and Hendrich, Alexander and M\"{u}hlh\"{a}user, Max},
 title = {FlowPut: Environment-Aware Interactivity for Tangible 3D Objects},
 journal = {Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies},
 issue_date = {March 2018},
 series = {IMWUT '18},
 volume = {2},
 number = {1},
 month = mar,
 year = {2018},
 issn = {2474-9567},
 pages = {31:1--31:23},
 articleno = {31},
 numpages = {23},
 url = {http://doi.acm.org/10.1145/3191763},
 doi = {10.1145/3191763},
 acmid = {3191763},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {Displays, layout, object tracking, optimization, projection, touch},
 abstract={Tangible interaction has been shown to be beneficial in a wide variety of scenarios, since it provides more direct manipulation and haptic feedback. Further, inherently three-dimensional information is represented more naturally by a 3D object than by a flat picture on a screen. Yet, today's tangibles often have pre-defined form factors and limited input and output facilities. To overcome this issue, the combination of projection and depth cameras is used as a fast and flexible way of non-intrusively adding input and output to tangibles. However, tangibles are often quite small, and hence the space for output and interaction on their surface is limited. Therefore, we propose FlowPut: an environment-aware framework that utilizes the space available on and around a tangible object for projected visual output. By means of an optimization-based layout approach, FlowPut considers the environment of the objects to avoid interference between the projection and real-world objects. Moreover, we contribute occlusion-resilient object recognition and tracking for tangible objects based on their 3D models, and point-cloud-based multi-touch detection that also allows sensing touches on the sides of a tangible. FlowPut is validated through a series of technical experiments, a user study, and two example applications.},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2018/riemann2018flowput.pdf}
} 
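
To give a flavor of an optimization-based layout approach, the Python sketch below scores candidate projection regions around a tangible by how unoccluded they are and how close they stay to the object, then picks the best one. The grid representation, weighting, and brute-force search are illustrative assumptions, not FlowPut's actual objective function or solver.

# Illustrative brute-force placement in the spirit of an optimization-based
# layout: score candidate projection regions by free (unoccluded) area and
# proximity to the tangible. Grid, weights, and search are assumptions.
import numpy as np

def place_output(free_mask, obj_xy, size=(30, 40)):
    """free_mask: 2D bool array (True = projectable surface).
    obj_xy: (x, y) cell of the tangible.
    Returns the top-left (x, y) of the best size-(h, w) region."""
    h, w = size
    best, best_score = (0, 0), -np.inf
    for y in range(free_mask.shape[0] - h + 1):
        for x in range(free_mask.shape[1] - w + 1):
            free_ratio = free_mask[y:y + h, x:x + w].mean()  # avoid occlusion
            dist = np.hypot(x + w / 2 - obj_xy[0], y + h / 2 - obj_xy[1])
            score = free_ratio - 0.01 * dist   # stay close to the object
            if score > best_score:
                best, best_score = (x, y), score
    return best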

You can find more of our research on our institute's website.

HCI group at Telecooperation Lab
