Our Research

a selection of work done by #teamdarmstadt

[MHCI '22] NotiBike: Assessing Target Selection Techniques for Cyclist Notifications in Augmented Reality

T. Kosch, A. Matviienko, F. Müller, J. Bersch, C. Katins, D. Schön, M. Mühlhäuser

ABSTRACT - Cyclists' attention is often compromised when interacting with notifications in traffic, hence increasing the likelihood of road accidents. To address this issue, we evaluate three notification interaction modalities and investigate their impact on the interaction performance while cycling: gaze-based Dwell Time, Gestures, and Manual And Gaze Input Cascaded (MAGIC) Pointing. In a user study (N=18), participants confirmed notifications in Augmented Reality (AR) using the three interaction modalities in a simulated biking scenario. We assessed the efficiency regarding reaction times, error rates, and perceived task load. Our results show significantly faster response times for MAGIC Pointing compared to Dwell Time and Gestures, while Dwell Time led to a significantly lower error rate compared to Gestures. Participants favored the MAGIC Pointing approach, supporting cyclists in AR selection tasks. Our research sets the boundaries for more comfortable and easier interaction with notifications and discusses implications for target selections in AR while cycling.

In Proceedings of the ACM on Human-Computer Interaction, MobileHCI
10.1145/3546732    PDF    Full Video   
@article{Kosch2022notibike,
author = {Kosch, Thomas and Matviienko, Andrii and M\"{u}ller, Florian and Bersch, Jessica and Katins, Christopher and Sch\"{o}n, Dominik and M\"{u}hlh\"{a}user, Max},
title = {NotiBike: Assessing Target Selection Techniques for Cyclist Notifications in Augmented Reality},
year = {2022},
issue_date = {September 2022},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {6},
number = {MHCI},
url = {https://doi.org/10.1145/3546732},
doi = {10.1145/3546732},
abstract = {Cyclists' attention is often compromised when interacting with notifications in traffic, hence increasing the likelihood of road accidents. To address this issue, we evaluate three notification interaction modalities and investigate their impact on the interaction performance while cycling: gaze-based Dwell Time, Gestures, and Manual And Gaze Input Cascaded (MAGIC) Pointing. In a user study (N=18), participants confirmed notifications in Augmented Reality (AR) using the three interaction modalities in a simulated biking scenario. We assessed the efficiency regarding reaction times, error rates, and perceived task load. Our results show significantly faster response times for MAGIC Pointing compared to Dwell Time and Gestures, while Dwell Time led to a significantly lower error rate compared to Gestures. Participants favored the MAGIC Pointing approach, supporting cyclists in AR selection tasks. Our research sets the boundaries for more comfortable and easier interaction with notifications and discusses implications for target selections in AR while cycling.},
journal = {Proceedings of the ACM on Human-Computer Interaction, MobileHCI},
month = {sep},
articleno = {197},
numpages = {24},
keywords = {cycling, augmented reality, selection, notifications},
series = {MHCI '22},
video = {https://www.youtube.com/watch?v=hTYBTULau7U},
file = {https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2022/Kosch2022NotiBike.pdf}
}

[MHCI '22] AR Sightseeing: Comparing Information Placements at Outdoor Historical Heritage Sites Using Augmented Reality

A. Matviienko, S. Günther, S. Ritzenhofen, M. Mühlhäuser

ABSTRACT - Augmented Reality (AR) has influenced the presentation of historical information to tourists and museum visitors by making the information more immersive and engaging. Since smartphones and AR glasses are the primary devices to present AR information to users, it is essential to understand how the information about a historical site can be presented effectively and what type of device is best suited for information placements. In this paper, we investigate the placement of two types of content, historical images and informational text, for smartphones and AR glasses in the context of outdoor historical sites. For this, we explore three types of placements: (1) on-body, (2) world, and (3) overlay. To evaluate all nine combinations of text and image placements for smartphone and AR glasses, we conducted a controlled experiment (N = 18) at outdoor historical landmarks. We discovered that on-body image and text placements were the most convenient compared to overlay and world for both devices. Furthermore, participants found themselves more successful in exploring historical sites using a smartphone than AR glasses. Although interaction with a smartphone was more convenient, participants found exploring AR content using AR glasses more fun.

In Proceedings of the ACM on Human-Computer Interaction, MobileHCI
10.1145/3546729    PDF   
@article{Matviienko2022arsightseeing,
author = {Matviienko, Andrii and G\"{u}nther, Sebastian and Ritzenhofen, Sebastian and M\"{u}hlh\"{a}user, Max},
title = {AR Sightseeing: Comparing Information Placements at Outdoor Historical Heritage Sites Using Augmented Reality},
year = {2022},
issue_date = {September 2022},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {6},
number = {MHCI},
url = {https://doi.org/10.1145/3546729},
doi = {10.1145/3546729},
abstract = {Augmented Reality (AR) has influenced the presentation of historical information to tourists and museum visitors by making the information more immersive and engaging. Since smartphones and AR glasses are the primary devices to present AR information to users, it is essential to understand how the information about a historical site can be presented effectively and what type of device is best suited for information placements. In this paper, we investigate the placement of two types of content, historical images and informational text, for smartphones and AR glasses in the context of outdoor historical sites. For this, we explore three types of placements: (1) on-body, (2) world, and (3) overlay. To evaluate all nine combinations of text and image placements for smartphone and AR glasses, we conducted a controlled experiment (N = 18) at outdoor historical landmarks. We discovered that on-body image and text placements were the most convenient compared to overlay and world for both devices. Furthermore, participants found themselves more successful in exploring historical sites using a smartphone than AR glasses. Although interaction with a smartphone was more convenient, participants found exploring AR content using AR glasses more fun.},
journal = {Proceedings of the ACM on Human-Computer Interaction, MobileHCI},
month = {sep},
articleno = {194},
numpages = {17},
keywords = {augmented reality, information placement, sightseeing, historical heritage},
series = {MHCI '22},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2022/Matviienko2022ARsightseeing.pdf}
}

[MHCI '22] "Baby, You Can Ride My Bike": Exploring Maneuver Indications of Self-Driving Bicycles Using a Tandem Simulator

A. Matviienko, D. Mehmedovic, F. Müller, M. Mühlhäuser

ABSTRACT - We envision a future where self-driving bicycles can take us to our destinations. This allows cyclists to use their time on the bike efficiently for work or relaxation without having to focus their attention on traffic. In the related field of self-driving cars, research has shown that communicating the planned route to passengers plays an important role in building trust in automation and situational awareness. For self-driving bicycles, this information transfer will be even more important, as riders will need to actively compensate for the movement of a self-driving bicycle to maintain balance. In this paper, we investigate maneuver indications for self-driving bicycles: (1) ambient light in a helmet, (2) head-up display indications, (3) speech feedback, (4) vibration on the handlebar, and (5) no assistance. To evaluate these indications, we conducted an outdoor experiment (N = 25) in a proposed tandem simulator consisting of a tandem bicycle with a steering and braking control on the back seat and a rider in full control of it. Our results indicate that riders respond faster to visual cues and focus comparably on the reading task while riding with and without maneuver indications. Additionally, we found that the tandem simulator is realistic, safe, and creates an awareness of a human cyclist controlling the tandem.

In Proceedings of the ACM on Human-Computer Interaction, MobileHCI
10.1145/3546723    PDF    Full Video   
@article{Matviienko2022ridemybike,
author = {Matviienko, Andrii and Mehmedovic, Damir and M\"{u}ller, Florian and M\"{u}hlh\"{a}user, Max},
title = {``Baby, You Can Ride My Bike'': Exploring Maneuver Indications of Self-Driving Bicycles Using a Tandem Simulator},
year = {2022},
issue_date = {September 2022},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {6},
number = {MHCI},
url = {https://doi.org/10.1145/3546723},
doi = {10.1145/3546723},
abstract = {We envision a future where self-driving bicycles can take us to our destinations. This allows cyclists to use their time on the bike efficiently for work or relaxation without having to focus their attention on traffic. In the related field of self-driving cars, research has shown that communicating the planned route to passengers plays an important role in building trust in automation and situational awareness. For self-driving bicycles, this information transfer will be even more important, as riders will need to actively compensate for the movement of a self-driving bicycle to maintain balance. In this paper, we investigate maneuver indications for self-driving bicycles: (1) ambient light in a helmet, (2) head-up display indications, (3) speech feedback, (4) vibration on the handlebar, and (5) no assistance. To evaluate these indications, we conducted an outdoor experiment (N = 25) in a proposed tandem simulator consisting of a tandem bicycle with a steering and braking control on the back seat and a rider in full control of it. Our results indicate that riders respond faster to visual cues and focus comparably on the reading task while riding with and without maneuver indications. Additionally, we found that the tandem simulator is realistic, safe, and creates an awareness of a human cyclist controlling the tandem.},
journal = {Proceedings of the ACM on Human-Computer Interaction, MobileHCI},
month = {sep},
articleno = {188},
numpages = {21},
keywords = {maneuver indications, tandem, self-driving bicycles},
series = {MHCI '22},
video = {https://www.youtube.com/watch?v=czOciHFRDk4},
file = {https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2022/Matviienko2022Baby.pdf}
}


[CHI '22] Squeezy-Feely: Investigating Lateral Thumb-Index Pinching as an Input Modality

M. Schmitz, S. Günther, D. Schön, F. Müller

ABSTRACT - From zooming on smartphones and mid-air gestures to deformable user interfaces, thumb-index pinching grips are used in many interaction techniques. However, there is still a lack of systematic understanding of how the accuracy and efficiency of such grips are affected by various factors such as counterforce, grip span, and grip direction. Therefore, in this paper, we contribute an evaluation (N = 18) of thumb-index pinching performance in a visual targeting task using scales up to 75 items. As part of our findings, we conclude that the pinching interaction between the thumb and index finger is a promising modality also for one-dimensional input on higher scales. Furthermore, we discuss and outline implications for future user interfaces that benefit from pinching as an additional and complementary interaction modality.

In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22)
10.1145/3491102.3501981    PDF    Teaser Video   
@inproceedings{Schmitz2022squeezyfeely,
address = {New York, NY, USA},
author = {Schmitz, Martin and G\"{u}nther, Sebastian and Sch\"{o}n, Dominik and M\"{u}ller, Florian},
booktitle = {Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22)},
doi = {10.1145/3491102.3501981},
isbn = {978-1-4503-9157-3/22/04},
keywords = {Input, Pinching, Deformation, Mixed Reality, Thumb-to-finger, User Studies},
month = {apr},
publisher = {ACM},
title = {Squeezy-Feely: Investigating Lateral Thumb-Index Pinching as an Input Modality},
url = {https://doi.org/10.1145/3491102.3501981},
year = {2022},
abstract = {From zooming on smartphones and mid-air gestures to deformable user interfaces, thumb-index pinching grips are used in many interaction techniques. However, there is still a lack of systematic understanding of how the accuracy and efficiency of such grips are affected by various factors such as counterforce, grip span, and grip direction. Therefore, in this paper, we contribute an evaluation (N = 18) of thumb-index pinching performance in a visual targeting task using scales up to 75 items. As part of our findings, we conclude that the pinching interaction between the thumb and index finger is a promising modality also for one-dimensional input on higher scales. Furthermore, we discuss and outline implications for future user interfaces that benefit from pinching as an additional and complementary interaction modality.},
series = {CHI '22},
file = {https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2022/schmitz2022squeezyfeely.pdf},
teaservideo = {https://www.youtube.com/watch?v=DW23J3CalFw},
award = {Best Paper},
note = {Best Paper Award}
}

[CHI '22] SkyPort: Investigating 3D Teleportation Methods in Virtual Environments

A. Matviienko, F. Müller, M. Schmitz, M. Fendrich, M. Mühlhäuser

ABSTRACT - Teleportation has become the de facto standard of locomotion in Virtual Reality (VR) environments. However, teleportation with parabolic and linear target aiming methods is restricted to horizontal 2D planes and it is unknown how they transfer to the 3D space. In this paper, we propose six 3D teleportation methods in virtual environments based on the combination of two existing aiming methods (linear and parabolic) and three types of transitioning to a target (instant, interpolated and continuous). To investigate the performance of the proposed teleportation methods, we conducted a controlled lab experiment (N = 24) with a mid-air coin collection task to assess accuracy, efficiency and VR sickness. We discovered that the linear aiming method leads to faster and more accurate target selection. Moreover, a combination of linear aiming and instant transitioning leads to the highest efficiency and accuracy without increasing VR sickness.

In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22)
10.1145/3491102.3501983    PDF   
@inproceedings{Matviienko2022skyport,
author = {Matviienko, Andrii and M\"{u}ller, Florian and Schmitz, Martin and Fendrich, Marco and M\"{u}hlh\"{a}user, Max},
title = {SkyPort: Investigating 3D Teleportation Methods in Virtual Environments},
year = {2022},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3491102.3501983},
doi = {10.1145/3491102.3501983},
keywords = {virtual reality, teleportation, locomotion, virtual environments},
location = {New Orleans, LA, USA},
series = {CHI '22},
booktitle = {Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22)},
award={Honorable Mention},
file = {https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2022/Matviienko2022skyport.pdf},
abstract = {Teleportation has become the de facto standard of locomotion in Virtual Reality (VR) environments. However, teleportation with parabolic and linear target aiming methods is restricted to horizontal 2D planes and it is unknown how they transfer to the 3D space. In this paper, we propose six 3D teleportation methods in virtual environments based on the combination of two existing aiming methods (linear and parabolic) and three types of transitioning to a target (instant, interpolated and continuous). To investigate the performance of the proposed teleportation methods, we conducted a controlled lab experiment (N = 24) with a mid-air coin collection task to assess accuracy, efficiency and VR sickness. We discovered that the linear aiming method leads to faster and more accurate target selection. Moreover, a combination of linear aiming and instant transitioning leads to the highest efficiency and accuracy without increasing VR sickness.}
}


[CHI '22] Smooth as Steel Wool: Effects of Visual Stimuli on the Haptic Perception of Roughness in Virtual Reality

S. Günther, J. Rasch, D. Schön, F. Müller, M. Schmitz, J. Riemann, A. Matviienko, M. Mühlhäuser

ABSTRACT - Haptic Feedback is essential for lifelike Virtual Reality (VR) experiences. To provide a wide range of matching sensations of being touched or stroked, current approaches typically need large numbers of different physical textures. However, even advanced devices can only accommodate a limited number of textures to remain wearable. Therefore, a better understanding is necessary of how expectations elicited by different visualizations affect haptic perception, to achieve a balance between physical constraints and great variety of matching physical textures. In this work, we conducted an experiment (N=31) assessing how the perception of roughness is affected within VR. We designed a prototype for arm stroking and compared the effects of different visualizations on the perception of physical textures with distinct roughnesses. Additionally, we used the visualizations' real-world materials, no-haptics and vibrotactile feedback as baselines. As one result, we found that two levels of roughness can be sufficient to convey a realistic illusion.

In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22)
10.1145/3491102.3517454    PDF    Teaser Video    Full Video   
@inproceedings{Guenther2022smooth,
address = {New York, NY, USA},
author = {G\"{u}nther, Sebastian and Rasch, Julian and Sch\"{o}n, Dominik and M\"{u}ller, Florian and Schmitz, Martin and Riemann, Jan and Matviienko, Andrii and M\"{u}hlh\"{a}user, Max},
booktitle = {Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22)},
doi = {10.1145/3491102.3517454},
isbn = {978-1-4503-9157-3/22/04},
keywords = {haptic,smooth,stimuli,stroke,visual,visualizations},
month = {apr},
publisher = {ACM},
title = {Smooth as Steel Wool: Effects of Visual Stimuli on the Haptic Perception of Roughness in Virtual Reality},
url = {https://doi.org/10.1145/3491102.3517454},
year = {2022},
abstract = {Haptic Feedback is essential for lifelike Virtual Reality (VR) experiences. To provide a wide range of matching sensations of being touched or stroked, current approaches typically need large numbers of different physical textures. However, even advanced devices can only accommodate a limited number of textures to remain wearable. Therefore, a better understanding is necessary of how expectations elicited by different visualizations affect haptic perception, to achieve a balance between physical constraints and great variety of matching physical textures. In this work, we conducted an experiment (N=31) assessing how the perception of roughness is affected within VR. We designed a prototype for arm stroking and compared the effects of different visualizations on the perception of physical textures with distinct roughnesses. Additionally, we used the visualizations' real-world materials, no-haptics and vibrotactile feedback as baselines. As one result, we found that two levels of roughness can be sufficient to convey a realistic illusion.},
series = {CHI '22},
file = {https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2022/Guenther2022smooth.pdf},
video = {https://www.youtube.com/watch?v=9q6zZCJ9rLg},
teaservideo = {https://www.youtube.com/watch?v=glEOP48qVCE}
}

[CHI '22] BikeAR: Understanding Cyclists' Crossing Decision-Making at Uncontrolled Intersections using Augmented Reality

A. Matviienko, F. Müller, D. Schön, P. Seesemann, S. Günther, M. Mühlhäuser

ABSTRACT - Cycling has become increasingly popular as a means of transportation. However, cyclists remain a highly vulnerable group of road users. According to accident reports, one of the most dangerous situations for cyclists is uncontrolled intersections, where cars approach from both directions. To address this issue and assist cyclists in crossing decision-making at uncontrolled intersections, we designed two visualizations that: (1) highlight occluded cars through an X-ray vision and (2) depict the remaining time the intersection is safe to cross via a Countdown. To investigate the efficiency of these visualizations, we proposed an Augmented Reality simulation as a novel evaluation method, in which the above visualizations are represented as AR, and conducted a controlled experiment with 24 participants indoors. We found that the X-ray ensures a fast selection of shorter gaps between cars, while the Countdown facilitates a feeling of safety and provides a better intersection overview.

In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22)
10.1145/3491102.3517560    PDF    Full Video   
@inproceedings{Matviienko2022bikear,
author = {Matviienko, Andrii and M\"{u}ller, Florian and Sch\"{o}n, Dominik and Seesemann, Paul and G\"{u}nther, Sebastian and M\"{u}hlh\"{a}user, Max},
title = {BikeAR: Understanding Cyclists' Crossing Decision-Making at Uncontrolled Intersections using Augmented Reality},
year = {2022},
booktitle = {Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22)},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3491102.3517560},
doi = {10.1145/3491102.3517560},
keywords = {augmented reality, cyclist safety, crossing decision-making},
location = {New Orleans, LA, USA},
series = {CHI '22},
abstract = {Cycling has become increasingly popular as a means of transportation. However, cyclists remain a highly vulnerable group of road users. According to accident reports, one of the most dangerous situations for cyclists is uncontrolled intersections, where cars approach from both directions. To address this issue and assist cyclists in crossing decision-making at uncontrolled intersections, we designed two visualizations that: (1) highlight occluded cars through an X-ray vision and (2) depict the remaining time the intersection is safe to cross via a Countdown. To investigate the efficiency of these visualizations, we proposed an Augmented Reality simulation as a novel evaluation method, in which the above visualizations are represented as AR, and conducted a controlled experiment with 24 participants indoors. We found that the X-ray ensures a fast selection of shorter gaps between cars, while the Countdown facilitates a feeling of safety and provides a better intersection overview.},
video = {https://www.youtube.com/watch?v=YKsDlPmSd68},
file = {https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2022/Matviienko2022bikear.pdf}
}

[CHI '22] Reducing Virtual Reality Sickness for Cyclists in VR Bicycle Simulators

A. Matviienko, F. Müller, M. Zickler, L. Gasche, J. Abels, T. Steinert, M. Mühlhäuser

ABSTRACT - Virtual Reality (VR) bicycle simulations aim to recreate the feeling of riding a bicycle and are commonly used in many application areas. However, current solutions still create mismatches between the visuals and physical movement, which causes VR sickness and diminishes the cycling experience. To reduce VR sickness in bicycle simulators, we conducted two controlled lab experiments addressing two main causes of VR sickness: (1) steering methods and (2) cycling trajectory. In the first experiment (N = 18) we compared handlebar, HMD, and upper-body steering methods. In the second experiment (N = 24) we explored three types of movement in VR (1D, 2D, and 3D trajectories) and three countermeasures (airflow, vibration, and dynamic Field-of-View) to reduce VR sickness. We found that handlebar steering leads to the lowest VR sickness without decreasing cycling performance, and airflow appears to be the most promising method to reduce VR sickness for all three types of trajectories.

In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22)
10.1145/3491102.3501959    PDF   
@inproceedings{Matviienko2022reducingmotionsickness,
author = {Matviienko, Andrii and M\"{u}ller, Florian and Zickler, Marcel and Gasche, Lisa and Abels, Julia and Steinert, Till and M\"{u}hlh\"{a}user, Max},
title = {Reducing Virtual Reality Sickness for Cyclists in VR Bicycle Simulators},
year = {2022},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3491102.3501959},
doi = {10.1145/3491102.3501959},
keywords = {virtual reality, cycling, VR sickness, bicycle simulators},
location = {New Orleans, LA, USA},
series = {CHI '22},
booktitle = {Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22)},
file = {https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2022/Matviienko2022reducingmotionsickness.pdf},
abstract = {Virtual Reality (VR) bicycle simulations aim to recreate the feeling of riding a bicycle and are commonly used in many application areas. However, current solutions still create mismatches between the visuals and physical movement, which causes VR sickness and diminishes the cycling experience. To reduce VR sickness in bicycle simulators, we conducted two controlled lab experiments addressing two main causes of VR sickness: (1) steering methods and (2) cycling trajectory. In the first experiment (N = 18) we compared handlebar, HMD, and upper-body steering methods. In the second experiment (N = 24) we explored three types of movement in VR (1D, 2D, and 3D trajectories) and three countermeasures (airflow, vibration, and dynamic Field-of-View) to reduce VR sickness. We found that handlebar steering leads to the lowest VR sickness without decreasing cycling performance, and airflow appears to be the most promising method to reduce VR sickness for all three types of trajectories.}
}

[CHI EA '22] E-ScootAR: Exploring Unimodal Warnings for E-Scooter Riders in Augmented Reality

A. Matviienko, F. Müller, D. Schön, R. Fayard, S. Abaspur, Y. Li, M. Mühlhäuser

ABSTRACT - Micro-mobility is becoming a more popular means of transportation. However, this increased popularity brings its challenges. In particular, the accident rates for E-Scooter riders increase, which endangers the riders and other road users. In this paper, we explore the idea of augmenting E-Scooters with unimodal warnings to prevent collisions with other road users, which include Augmented Reality (AR) notifications, vibrotactile feedback on the handlebar, and auditory signals in the AR glasses. We conducted an outdoor experiment (N = 13) using an Augmented Reality simulation and compared these types of warnings in terms of reaction time, accident rate, and feeling of safety. Our results indicate that AR and auditory warnings lead to shorter reaction times, have a better perception, and create a better feeling of safety than vibrotactile warnings. Moreover, auditory signals have a higher acceptance by the riders compared to the other two types of warnings.

In CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI '22 Extended Abstracts)
10.1145/3491101.3519831    PDF   
@inproceedings{Matviienko2022escootar,
author = {Matviienko, Andrii and M\"{u}ller, Florian and Sch\"{o}n, Dominik and Fayard, R\'{e}gis and Abaspur, Salar and Li, Yi and M\"{u}hlh\"{a}user, Max},
title = {E-ScootAR: Exploring Unimodal Warnings for E-Scooter Riders in Augmented Reality},
year = {2022},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3491101.3519831},
doi = {10.1145/3491101.3519831},
keywords = {E-Scooter, micro-mobility, traffic safety, augmented reality},
booktitle = {CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI '22 Extended Abstracts)},
location = {New Orleans, LA, USA},
series = {CHI EA '22},
file = {https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2022/Matviienko2022escootar.pdf},
abstract = {Micro-mobility is becoming a more popular means of transportation. However, this increased popularity brings its challenges. In particular, the accident rates for E-Scooter riders increase, which endangers the riders and other road users. In this paper, we explore the idea of augmenting E-Scooters with unimodal warnings to prevent collisions with other road users, which include Augmented Reality (AR) notifications, vibrotactile feedback on the handlebar, and auditory signals in the AR glasses. We conducted an outdoor experiment (N = 13) using an Augmented Reality simulation and compared these types of warnings in terms of reaction time, accident rate, and feeling of safety. Our results indicate that AR and auditory warnings lead to shorter reaction times, have a better perception, and create a better feeling of safety than vibrotactile warnings. Moreover, auditory signals have a higher acceptance by the riders compared to the other two types of warnings.}
}

[DIS '21] CameraReady: Assessing the Influence of Display Types and Visualizations on Posture Guidance

H. Elsayed, P. Hoffmann, S. Günther, M. Schmitz, M. Weigel, M. Mühlhäuser, F. Müller

ABSTRACT - Computer-supported posture guidance is used in sports, dance training, expression of art with movements, and learning gestures for interaction. At present, the influence of display types and visualizations has not been investigated in the literature. These factors are important as they directly impact perception and cognitive load, and hence influence the performance of participants. In this paper, we conducted a controlled experiment with 20 participants to compare the use of five display types with different screen sizes: smartphones, tablets, desktop monitors, TVs, and large displays. On each device, we compared three common visualizations for posture guidance: skeletons, silhouettes, and 3D body models. To conduct our assessment, we developed a mobile and cross-platform system that only requires a single camera. Our results show that compared to a smartphone display, larger displays show a lower error. Regarding the choice of visualization, participants rated 3D body models as significantly more usable in comparison to a skeleton visualization.

In Designing Interactive Systems Conference 2021
10.1145/3461778.3462026    PDF   
@inproceedings{Elsayed2021cameraready,
abstract = {Computer-supported posture guidance is used in sports, dance training, expression of art with movements, and learning gestures for interaction. At present, the influence of display types and visualizations has not been investigated in the literature. These factors are important as they directly impact perception and cognitive load, and hence influence the performance of participants. In this paper, we conducted a controlled experiment with 20 participants to compare the use of five display types with different screen sizes: smartphones, tablets, desktop monitors, TVs, and large displays. On each device, we compared three common visualizations for posture guidance: skeletons, silhouettes, and 3D body models. To conduct our assessment, we developed a mobile and cross-platform system that only requires a single camera. Our results show that compared to a smartphone display, larger displays show a lower error. Regarding the choice of visualization, participants rated 3D body models as significantly more usable in comparison to a skeleton visualization.},
address = {New York, NY, USA},
author = {Elsayed, Hesham and Hoffmann, Philipp and G\"{u}nther, Sebastian and Schmitz, Martin and Weigel, Martin and M\"{u}hlh\"{a}user, Max and M\"{u}ller, Florian},
booktitle = {Designing Interactive Systems Conference 2021},
doi = {10.1145/3461778.3462026},
isbn = {9781450384766},
month = {jun},
pages = {1046--1055},
publisher = {ACM},
title = {CameraReady: Assessing the Influence of Display Types and Visualizations on Posture Guidance},
url = {https://dl.acm.org/doi/10.1145/3461778.3462026},
year = {2021},
series = {DIS '21},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2021/elsayed2021cameraready.pdf}
}

[EICS '21] ActuBoard: An Open Rapid Prototyping Platform to Integrate Hardware Actuators in Remote Applications

S. Günther, F. Müller, F. Hübner, M. Mühlhäuser, A. Matviienko

ABSTRACT - Prototyping is an essential step in developing tangible experiences and novel devices, ranging from haptic feedback to wearables. However, prototyping of actuated devices nowadays often requires repetitive and time-consuming steps, such as wiring, soldering, and programming basic communication, before HCI researchers and designers can focus on their primary interest: designing interaction. In this paper, we present ActuBoard, a prototyping platform to support 1) quick assembly, 2) less preparation work, and 3) the inclusion of non-tech-savvy users. With ActuBoard, users are not required to create complex circuitry, write a single line of firmware, or implement communication protocols. Acknowledging existing systems, our platform combines the flexibility of low-level microcontrollers and the ease of use of abstracted tinker platforms to control actuators from separate applications. As a further contribution, we provide the technical specifications and have published the ActuBoard platform as open source.

In Companion of the 2021 ACM SIGCHI Symposium on Engineering Interactive Computing Systems
10.1145/3459926.3464757    PDF   
@inproceedings{Guenther2021actuboard,
author = {G\"{u}nther, Sebastian and M\"{u}ller, Florian and H\"{u}bner, Felix and M\"{u}hlh\"{a}user, Max and Matviienko, Andrii},
title = {ActuBoard: An Open Rapid Prototyping Platform to Integrate Hardware Actuators in Remote Applications},
year = {2021},
isbn = {9781450384490},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3459926.3464757},
doi = {10.1145/3459926.3464757},
abstract = { Prototyping is an essential step in developing tangible experiences and novel devices, ranging from haptic feedback to wearables. However, prototyping of actuated devices nowadays often requires repetitive and time-consuming steps, such as wiring, soldering, and programming basic communication, before HCI researchers and designers can focus on their primary interest: designing interaction. In this paper, we present ActuBoard, a prototyping platform to support 1) quick assembly, 2) less preparation work, and 3) the inclusion of non-tech-savvy users. With ActuBoard, users are not required to create complex circuitry, write a single line of firmware, or implementing communication protocols. Acknowledging existing systems, our platform combines the flexibility of low-level microcontrollers and ease-of-use of abstracted tinker platforms to control actuators from separate applications. As further contribution, we highlight the technical specifications and published the ActuBoard platform as Open Source.},
booktitle = {Companion of the 2021 ACM SIGCHI Symposium on Engineering Interactive Computing Systems},
pages = {70--76},
numpages = {7},
keywords = {hardware, tinkering, actuators, haptics, rapid prototyping, open source, virtual reality},
location = {Virtual Event, Netherlands},
series = {EICS '21},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2021/Guenther2021actuboard.pdf}
}

[CHI '21] Itsy-Bits: Fabrication and Recognition of 3D-Printed Tangibles with Small Footprints on Capacitive Touchscreens

M. Schmitz, F. Müller, M. Mühlhäuser, J. Riemann, H. Le

ABSTRACT - Tangibles on capacitive touchscreens are a promising approach to overcome the limited expressiveness of touch input. While research has suggested many approaches to detect tangibles, the corresponding tangibles are either costly or have a considerable minimal size. This makes them bulky and unattractive for many applications. At the same time, they obscure valuable display space for interaction. To address these shortcomings, we contribute Itsy-Bits: a fabrication pipeline for 3D printing and recognition of tangibles on capacitive touchscreens with a footprint as small as a fingertip. Each Itsy-Bit consists of an enclosing 3D object and a unique conductive 2D shape on its bottom. Using only raw data of commodity capacitive touchscreens, Itsy-Bits reliably identifies and locates a variety of shapes in different sizes and estimates their orientation. Through example applications and a technical evaluation, we demonstrate the feasibility and applicability of Itsy-Bits for tangibles with small footprints.

In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems
10.1145/3411764.3445502    PDF    Teaser Video   
@inproceedings{schmitz2021itsybits,
  title = {Itsy-Bits: Fabrication and Recognition of 3D-Printed Tangibles with Small Footprints on Capacitive Touchscreens},
  booktitle = {Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems},
  author = {Schmitz, Martin and M{\"u}ller, Florian and M{\"u}hlh{\"a}user, Max and Riemann, Jan and Le, Huy Viet},
  year = {2021},
  publisher = {ACM},
  address = {New York, NY, USA},
  doi = {10.1145/3411764.3445502},
  abstract = {Tangibles on capacitive touchscreens are a promising approach to overcome the limited expressiveness of touch input. While research has suggested many approaches to detect tangibles, the corresponding tangibles are either costly or have a considerable minimal size. This makes them bulky and unattractive for many applications. At the same time, they obscure valuable display space for interaction. To address these shortcomings, we contribute Itsy-Bits: a fabrication pipeline for 3D printing and recognition of tangibles on capacitive touchscreens with a footprint as small as a fingertip. Each Itsy-Bit consists of an enclosing 3D object and a unique conductive 2D shape on its bottom. Using only raw data of commodity capacitive touchscreens, Itsy-Bits reliably identifies and locates a variety of shapes in different sizes and estimates their orientation. Through example applications and a technical evaluation, we demonstrate the feasibility and applicability of Itsy-Bits for tangibles with small footprints.},
  isbn = {978-1-4503-8096-6},
  series = {CHI '21},
  teaservideo = {https://www.youtube.com/watch?v=55vHxnOKl6k},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2021/schmitz2021itsybits.pdf},
 award={Honorable Mention}
}

[CHI '21] Oh, Snap! A Fabrication Pipeline to Magnetically Connect Conventional and 3D-Printed Electronics

M. Schmitz, J. Riemann, F. Müller, S. Kreis, M. Mühlhäuser

ABSTRACT - 3D printing has revolutionized rapid prototyping by speeding up the creation of custom-shaped objects. With the rise of multi-material 3D printers, these custom-shaped objects can now be made interactive in a single pass through passive conductive structures. However, connecting conventional electronics to these conductive structures often still requires time-consuming manual assembly involving many wires, soldering or gluing. To alleviate these shortcomings, we propose Oh, Snap!: a fabrication pipeline and interfacing concept to magnetically connect a 3D-printed object equipped with passive sensing structures to conventional sensing electronics. To this end, Oh, Snap! utilizes ferromagnetic and conductive 3D-printed structures, printable in a single pass on standard printers. We further present a proof-of-concept capacitive sensing board that enables easy and robust magnetic assembly to quickly create interactive 3D-printed objects. We evaluate Oh, Snap! by assessing the robustness and quality of the connection and demonstrate its broad applicability by a series of example applications.

In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems
10.1145/3411764.3445641    PDF    Teaser Video   
@inproceedings{schmitz2021ohsnap,
  title = {Oh, Snap! A Fabrication Pipeline to Magnetically Connect Conventional and 3D-Printed Electronics},
  booktitle = {Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems},
  author = {Schmitz, Martin and Riemann, Jan and M{\"u}ller, Florian and Kreis, Steffen and M{\"u}hlh{\"a}user, Max},
  year = {2021},
  publisher = {ACM},
  address = {New York, NY, USA},
  doi = {10.1145/3411764.3445641},
  abstract = {3D printing has revolutionized rapid prototyping by speeding up the creation of custom-shaped objects. With the rise of multi-material 3D printers, these custom-shaped objects can now be made interactive in a single pass through passive conductive structures. However, connecting conventional electronics to these conductive structures often still requires time-consuming manual assembly involving many wires, soldering or gluing. To alleviate these shortcomings, we propose Oh, Snap!: a fabrication pipeline and interfacing concept to magnetically connect a 3D-printed object equipped with passive sensing structures to conventional sensing electronics. To this end, Oh, Snap! utilizes ferromagnetic and conductive 3D-printed structures, printable in a single pass on standard printers. We further present a proof-of-concept capacitive sensing board that enables easy and robust magnetic assembly to quickly create interactive 3D-printed objects. We evaluate Oh, Snap! by assessing the robustness and quality of the connection and demonstrate its broad applicability by a series of example applications.},
  isbn = {978-1-4503-8096-6},
  series = {CHI '21},
teaservideo = {https://www.youtube.com/watch?v=ado4a_chzqo},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2021/schmitz2021ohsnap.pdf},
 award={Best Paper}
}

[CHI '21] Let's Frets! Assisting Guitar Students During Practice via Capacitive Sensing

K. Marky, A. Weiß, A. Matviienko, F. Brandherm, S. Wolf, M. Schmitz, F. Krell, F. Müller, M. Mühlhäuser, T. Kosch

ABSTRACT - Learning a musical instrument requires regular exercise. However, students are often on their own during their practice sessions due to the limited time with their teachers, which increases the likelihood of mislearning playing techniques. To address this issue, we present Let's Frets - a modular guitar learning system that provides visual indicators and capturing of finger positions on a 3D-printed capacitive guitar fretboard. We based the design of Let's Frets on requirements collected through in-depth interviews with professional guitarists and teachers. In a user study (N=24), we evaluated the feedback modules of Let's Frets against fretboard charts. Our results show that visual indicators require the least time to realize new finger positions while a combination of visual indicators and position capturing yielded the highest playing accuracy. We conclude how Let's Frets enables independent practice sessions that can be translated to other musical instruments.

In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems
10.1145/3411764.3445595    PDF    Teaser Video   
@inproceedings{Marky2021letsfrets,
abstract = {Learning a musical instrument requires regular exercise. However, students are often on their own during their practice sessions due to the limited time with their teachers, which increases the likelihood of mislearning playing techniques. To address this issue, we present Let's Frets - a modular guitar learning system that provides visual indicators and capturing of finger positions on a 3D-printed capacitive guitar fretboard. We based the design of Let's Frets on requirements collected through in-depth interviews with professional guitarists and teachers. In a user study (N=24), we evaluated the feedback modules of Let's Frets against fretboard charts. Our results show that visual indicators require the least time to realize new finger positions while a combination of visual indicators and position capturing yielded the highest playing accuracy. We conclude how Let's Frets enables independent practice sessions that can be translated to other musical instruments.},
address = {New York, NY, USA},
author = {Marky, Karola and Wei{\ss}, Andreas and Matviienko, Andrii and Brandherm, Florian and Wolf, Sebastian and Schmitz, Martin and Krell, Florian and M{\"{u}}ller, Florian and M{\"{u}}hlh{\"{a}}user, Max and Kosch, Thomas},
booktitle = {Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems},
doi = {10.1145/3411764.3445595},
isbn = {9781450380966},
keywords = {capacitive sensing,musical instruments,support setup},
month = {may},
pages = {1--12},
publisher = {ACM},
series = {CHI '21},
title = {Let's Frets! Assisting Guitar Students During Practice via Capacitive Sensing},
url = {https://doi.org/10.1145/3411764.3445595},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2021/marky2021lets.pdf},
 teaservideo = {https://www.youtube.com/watch?v=vFx8c5aF6vA},
year = {2021}
}


[CHI EA ’21] VRtangibles: Assisting Children in Creating Virtual Scenes using Tangible Objects and Touch Input

A. Matviienko, M. Langer, F. Müller, M. Schmitz, M. Mühlhäuser

ABSTRACT - Children are increasingly exposed to virtual reality (VR) technology as end-users. However, they miss an opportunity to become active creators due to the barrier of insufficient technical background. Creating scenes in VR requires considerable programming knowledge and excludes non-tech-savvy users, e.g., school children. In this paper, we showcase a system called VRtangibles, which combines tangible objects and touch input to create virtual scenes without programming. With VRtangibles, we aim to engage children in the active creation of virtual scenes via playful hands-on activities. From the lab study with six school children, we discovered that the majority of children were successful in creating virtual scenes using VRtangibles and found it engaging and fun to use.

In CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI '21 Extended Abstracts)
10.1145/3411763.3451671    PDF   
@inproceedings{Matviienko2021vrtangibles,
author = {Matviienko, Andrii and Langer, Marcel and M\"{u}ller, Florian and Schmitz, Martin and M\"{u}hlh\"{a}user, Max},
title = {VRtangibles: Assisting Children in Creating Virtual Scenes using Tangible Objects and Touch Input},
year = {2021},
isbn = {978-1-4503-8095-9/21/05},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3411763.3451671},
doi = {10.1145/3411763.3451671},
booktitle = {CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI '21 Extended Abstracts)},
pages = {1--7},
numpages = {7},
keywords = {virtual reality, tangibles, touch input, children, education},
location = {Yokohama, Japan},
series = {CHI EA '21},
abstract={Children are increasingly exposed to virtual reality (VR) technology as end-users. However, they miss an opportunity to become active creators due to the barrier of insufficient technical background. Creating scenes in VR requires considerable programming knowledge and excludes non-tech-savvy users, e.g., school children. In this paper, we showcase a system called VRtangibles, which combines tangible objects and touch input to create virtual scenes without programming. With VRtangibles, we aim to engage children in the active creation of virtual scenes via playful hands-on activities. From the lab study with six school children, we discovered that the majority of children were successful in creating virtual scenes using VRtangibles and found it engaging and fun to use.},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2021/matviienko2021vrtangibles.pdf}
}
  

[CHI EA ’21] Quantified Cycling Safety: Towards a Mobile Sensing Platform to Understand Perceived Safety of Cyclists

A. Matviienko, F. Heller, B. Pfleging

ABSTRACT - Today’s level of cyclists’ road safety is primarily estimated using accident reports and self-reported measures. However, the former is focused on post-accident situations and the latter relies on subjective input. In our work, we aim to extend the landscape of cyclists’ safety assessment methods via a two-dimensional taxonomy, which covers data source (internal/external) and type of measurement (objective/subjective). Based on this taxonomy, we classify existing methods and present a mobile sensing concept for quantified cycling safety that fills the identified methodological gap by collecting data about body movements and physiological data. Finally, we outline a list of use cases and future research directions within the scope of the proposed taxonomy and sensing concept.

In CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI '21 Extended Abstracts)
10.1145/3411763.3451678    PDF   
@inproceedings{Matviienko2021quantisafety,
author = {Matviienko, Andrii and Heller, Florian and Pfleging, Bastian},
title = {Quantified Cycling Safety: Towards a Mobile Sensing Platform to Understand Perceived Safety of Cyclists},
year = {2021},
isbn = {978-1-4503-8095-9/21/05},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3411763.3451678},
doi = {10.1145/3411763.3451678},
booktitle = {CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI '21 Extended Abstracts)},
pages = {1--6},
numpages = {6},
keywords = {Cyclist safety taxonomy, on-body sensing, head movements, perceived road safety},
location = {Yokohama, Japan},
series = {CHI EA '21},
abstract={Today’s level of cyclists’ road safety is primarily estimated using accident reports and self-reported measures. However, the former is focused on post-accident situations and the latter relies on subjective input. In our work, we aim to extend the landscape of cyclists’ safety assessment methods via a two-dimensional taxonomy, which covers data source (internal/external) and type of measurement (objective/subjective). Based on this taxonomy, we classify existing methods and present a mobile sensing concept for quantified cycling safety that fills the identified methodological gap by collecting data about body movements and physiological data. Finally, we outline a list of use cases and future research directions within the scope of the proposed taxonomy and sensing concept.},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2021/matviienko2021quantified.pdf}
}
  



[IMWUT '20] VibroMap: Understanding the Spacing of Vibrotactile Actuators across the Body

H. Elsayed, M. Weigel, F. Müller, M. Schmitz, K. Marky, S. Günther, J. Riemann, M. Mühlhäuser

ABSTRACT - In spite of the great potential of on-body vibrotactile displays for a variety of applications, research lacks an understanding of the spacing between vibrotactile actuators. Through two experiments, we systematically investigate vibrotactile perception on the wrist, forearm, upper arm, back, torso, thigh, and leg, each in transverse and longitudinal body orientation. In the first experiment, we address the maximum distance between vibration motors that still preserves the ability to generate phantom sensations. In the second experiment, we investigate the perceptual accuracy of localizing vibrations in order to establish the minimum distance between vibration motors. Based on the results, we derive VibroMap, a spatial map of the functional range of inter-motor distances across the body. VibroMap supports hardware and interaction designers with design guidelines for constructing body-worn vibrotactile displays.

In Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.
10.1145/3432189    PDF   
@article{elsayed2020vibromap,
author = {Elsayed, Hesham and Weigel, Martin and M\"{u}ller, Florian and Schmitz, Martin and Marky, Karola and G\"{u}nther, Sebastian and Riemann, Jan and M\"{u}hlh\"{a}user, Max},
title = {VibroMap: Understanding the Spacing of Vibrotactile Actuators across the Body},
year = {2020},
issue_date = {December 2020},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {4},
number = {4},
url = {https://doi.org/10.1145/3432189},
doi = {10.1145/3432189},
abstract = {In spite of the great potential of on-body vibrotactile displays for a variety of applications, research lacks an understanding of the spacing between vibrotactile actuators. Through two experiments, we systematically investigate vibrotactile perception on the wrist, forearm, upper arm, back, torso, thigh, and leg, each in transverse and longitudinal body orientation. In the first experiment, we address the maximum distance between vibration motors that still preserves the ability to generate phantom sensations. In the second experiment, we investigate the perceptual accuracy of localizing vibrations in order to establish the minimum distance between vibration motors. Based on the results, we derive VibroMap, a spatial map of the functional range of inter-motor distances across the body. VibroMap supports hardware and interaction designers with design guidelines for constructing body-worn vibrotactile displays.},
journal = {Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.},
month = dec,
articleno = {125},
numpages = {16},
keywords = {vibrotactile interfaces, wearable computing, actuator spacing, phantom sensation, haptic output, ERM vibration motors, design implications},
series = {IMWUT '20},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2021/elsayed2020vibromap.pdf}
}


[VRST '20] VRSketchPen: Unconstrained Haptic Assistance for Sketching in Virtual 3D Environments

H. Elsayed, M. Barrera Machuca, C. Schaarschmidt, K. Marky, F. Müller, J. Riemann, A. Matviienko, M. Schmitz, M. Weigel, M. Mühlhäuser

ABSTRACT - Accurate sketching in virtual 3D environments is challenging due to aspects like limited depth perception or the absence of physical support. To address this issue, we propose VRSketchPen – a pen that uses two haptic modalities to support virtual sketching without constraining user actions: (1) pneumatic force feedback to simulate the contact pressure of the pen against virtual surfaces and (2) vibrotactile feedback to mimic textures while moving the pen over virtual surfaces. To evaluate VRSketchPen, we conducted a lab experiment with 20 participants to compare (1) pneumatic, (2) vibrotactile and (3) a combination of both with (4) snapping and no assistance for flat and curved surfaces in a 3D virtual environment. Our findings show that usage of pneumatic, vibrotactile and their combination significantly improves 2D shape accuracy and leads to diminished depth errors for flat and curved surfaces. Qualitative results indicate that users find the addition of unconstraining haptic feedback to significantly improve convenience, confidence and user experience.

In 26th ACM Symposium on Virtual Reality Software and Technology
10.1145/3385956.3418953    PDF   
@inproceedings{elsayed2020vrsketchpen,
author = {Elsayed, Hesham and Barrera Machuca, Mayra Donaji and Schaarschmidt, Christian and Marky, Karola and M\"{u}ller, Florian and Riemann, Jan and Matviienko, Andrii and Schmitz, Martin and Weigel, Martin and M\"{u}hlh\"{a}user, Max},
title = {VRSketchPen: Unconstrained Haptic Assistance for Sketching in Virtual 3D Environments},
year = {2020},
isbn = {9781450376198},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3385956.3418953},
doi = {10.1145/3385956.3418953},
abstract = { Accurate sketching in virtual 3D environments is challenging due to aspects like limited depth perception or the absence of physical support. To address this issue, we propose VRSketchPen – a pen that uses two haptic modalities to support virtual sketching without constraining user actions: (1) pneumatic force feedback to simulate the contact pressure of the pen against virtual surfaces and (2) vibrotactile feedback to mimic textures while moving the pen over virtual surfaces. To evaluate VRSketchPen, we conducted a lab experiment with 20 participants to compare (1) pneumatic, (2) vibrotactile and (3) a combination of both with (4) snapping and no assistance for flat and curved surfaces in a 3D virtual environment. Our findings show that usage of pneumatic, vibrotactile and their combination significantly improves 2D shape accuracy and leads to diminished depth errors for flat and curved surfaces. Qualitative results indicate that users find the addition of unconstraining haptic feedback to significantly improve convenience, confidence and user experience.},
booktitle = {26th ACM Symposium on Virtual Reality Software and Technology},
articleno = {3},
numpages = {11},
keywords = {3D User Interfaces, Pneumatic Actuation, Vibrotactile Actuation, Haptics, Sketching, Virtual Reality},
location = {Virtual Event, Canada},
series = {VRST '20},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2021/elsayed2020vrsketchpen.pdf}
}



[PerDis ’20] Reminding Child Cyclists about Safety Gestures

A. Matviienko, S. Ananthanarayan, R. Kappes, W. Heuten, S. Boll

ABSTRACT - Cycling safety gestures, such as hand signals and shoulder checks, are an essential part of safe manoeuvring on the road. Child cyclists, in particular, might have difficulties performing safety gestures on the road or even forget about them, given the lack of cycling experience, road distractions and differences in motor and perceptual-motor abilities compared with adults. To support them, we designed two methods to remind them about safety gestures while cycling. The first method employs an icon-based reminder in heads-up display (HUD) glasses and the second combines vibration on the handlebar and ambient light in the helmet. We investigated the performance of both methods in a controlled test-track experiment with 18 children using a mid-size tricycle, augmented with a set of sensors to recognize children's behavior in real time. We found that both systems are successful in reminding children about safety gestures and have their unique advantages and disadvantages.

In Proceedings of the 9TH ACM International Symposium on Pervasive Displays
10.1145/3393712.3394120    PDF    Full Video   
@inproceedings{matviienko2020remindingcyclists,
author = {Matviienko, Andrii and Ananthanarayan, Swamy and Kappes, Raphael and Heuten, Wilko and Boll, Susanne},
title = {Reminding Child Cyclists about Safety Gestures},
year = {2020},
isbn = {9781450379861},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3393712.3394120},
doi = {10.1145/3393712.3394120},
booktitle = {Proceedings of the 9TH ACM International Symposium on Pervasive Displays},
pages = {1--7},
numpages = {7},
keywords = {HUD glasses, safety gestures, child cyclists, cycling safety},
location = {Manchester, United Kingdom},
series = {PerDis '20},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2020/matviienko2020remindingcyclists.pdf},
 abstract={Cycling safety gestures, such as hand signals and shoulder checks, are an essential part of safe manoeuvring on the road. Child cyclists, in particular, might have difficulties performing safety gestures on the road or even forget about them, given the lack of cycling experience, road distractions and differences in motor and perceptual-motor abilities compared with adults. To support them, we designed two methods to remind about safety gestures while cycling. The first method employs an icon-based reminder in heads-up display (HUD) glasses and the second combines vibration on the handlebar and ambient light in the helmet. We investigated the performance of both methods in a controlled test-track experiment with 18 children using a mid-size tricycle, augmented with a set of sensors to recognize children's behavior in real time. We found that both systems are successful in reminding children about safety gestures and have their unique advantages and disadvantages.},
 video = {https://www.youtube.com/watch?v=cSKD-MoZ-54},
}
  


[CHI '20] 3D-Auth: Two-Factor Authentication with Personalized 3D-Printed Items

K. Marky, M. Schmitz, V. Zimmermann, M. Herbers, K. Kunze, M. Mühlhäuser

ABSTRACT - Two-factor authentication is a widely recommended security mechanism and already offered for different services. However, known methods and physical realizations exhibit considerable usability and customization issues. In this paper, we propose 3D-Auth, a new concept of two-factor authentication. 3D-Auth is based on customizable 3D-printed items that combine two authentication factors in one object. The object bottom contains a uniform grid of conductive dots that are connected to a unique embedded structure inside the item. Based on the interaction with the item, different dots turn into touch-points and form an authentication pattern. This pattern can be recognized by a capacitive touchscreen. Based on an expert design study, we present an interaction space with six categories of possible authentication interactions. In a user study, we demonstrate the feasibility of 3D-Auth items and show that the items are easy to use and the interactions are easy to remember.

In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
10.1145/3313831.3376189    PDF    Teaser Video   
@inproceedings{marky20203dauth,
author = {Marky, Karola and Schmitz, Martin and Zimmermann, Verena and Herbers, Martin and Kunze, Kai and M{\"u}hlh{\"a}user, Max},
title = {3D-Auth: Two-Factor Authentication with Personalized 3D-Printed Items},
booktitle = {Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems},
series = {CHI '20},
year = {2020},
isbn = {978-1-4503-6708-0},
location = {Honolulu, HI, USA},
url = {http://dx.doi.org/10.1145/3313831.3376189},
teaservideo = {https://youtu.be/_dHihnJTRek},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2020/marky2020auth3d.pdf},
doi = {10.1145/3313831.3376189},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {Two-Factor Authentication, 3D Printing, Capacitive Sensing},
abstract = {Two-factor authentication is a widely recommended security mechanism and already offered for different services. However, known methods and physical realizations exhibit considerable usability and customization issues. In this paper, we propose 3D-Auth, a new concept of two-factor authentication. 3D-Auth is based on customizable 3D-printed items that combine two authentication factors in one object. The object bottom contains a uniform grid of conductive dots that are connected to a unique embedded structure inside the item. Based on the interaction with the item, different dots turn into touch-points and form an authentication pattern. This pattern can be recognized by a capacitive touchscreen. Based on an expert design study, we present an interaction space with six categories of possible authentication interactions. In a user study, we demonstrate the feasibility of 3D-Auth items and show that the items are easy to use and the interactions are easy to remember.}
}

[CHI '20] Podoportation: Foot-Based Locomotion in Virtual Reality

J. von Willich, M. Schmitz, F. Müller, D. Schmitt, M. Mühlhäuser

ABSTRACT - Virtual Reality (VR) allows for infinitely large environments. However, the physical traversable space is always limited by real-world boundaries. This discrepancy between physical and virtual dimensions renders traditional locomotion methods used in the real world unfeasible. To alleviate these limitations, research proposed various artificial locomotion concepts such as teleportation, treadmills, and redirected walking. However, these concepts occupy the user's hands, require complex hardware or large physical spaces. In this paper, we contribute nine VR locomotion concepts for foot-based and hands-free locomotion, relying on the 3D position of the user's feet and the pressure applied to the sole as input modalities. We evaluate our concepts and compare them to the state-of-the-art point & teleport technique in a controlled experiment with 20 participants. The results confirm the viability of our approaches for hands-free and engaging locomotion. Further, based on the findings, we contribute a wireless hardware prototype implementation.

In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
10.1145/3313831.3376626    PDF    Teaser Video   
@inproceedings{willich2020podoportation,
author = {von Willich, Julius and Schmitz, Martin and M{\"u}ller, Florian and Schmitt, Daniel and M{\"u}hlh{\"a}user, Max},
title = {Podoportation: Foot-Based Locomotion in Virtual Reality},
booktitle = {Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems},
series = {CHI '20},
year = {2020},
isbn = {978-1-4503-6708-0},
location = {Honolulu, HI, USA},
url = {http://dx.doi.org/10.1145/3313831.3376626},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2020/willich2020podoportation.pdf},
teaservideo = {https://www.youtube.com/watch?v=HGP5MN_e-k0},
doi = {10.1145/3313831.3376626},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {Virtual Reality, Locomotion, Foot-based input},
abstract = {Virtual Reality (VR) allows for infinitely large environments. However, the physical traversable space is always limited by real-world boundaries. This discrepancy between physical and virtual dimensions renders traditional locomotion methods used in the real world unfeasible. To alleviate these limitations, research proposed various artificial locomotion concepts such as teleportation, treadmills, and redirected walking. However, these concepts occupy the user's hands, require complex hardware or large physical spaces. In this paper, we contribute nine VR locomotion concepts for foot-based and hands-free locomotion, relying on the 3D position of the user's feet and the pressure applied to the sole as input modalities. We evaluate our concepts and compare them to the state-of-the-art point & teleport technique in a controlled experiment with 20 participants. The results confirm the viability of our approaches for hands-free and engaging locomotion. Further, based on the findings, we contribute a wireless hardware prototype implementation.}
}

[CHI '20] Improving the Usability and UX of the Swiss Internet Voting Interface

K. Marky, V. Zimmermann, M. Funk, J. Daubert, K. Bleck, M. Mühlhäuser

ABSTRACT - Up to 20% of residential votes and up to 70% of absentee votes in Switzerland are cast online. The Swiss scheme aims to provide individual verifiability by different verification codes. The voters have to carry out verification on their own, making the usability and UX of the interface of great importance. To improve the usability, we first performed an evaluation with 12 human-computer interaction experts to uncover usability weaknesses of the Swiss Internet voting interface. Based on the experts' findings, related work, and an exploratory user study with 36 participants, we propose a redesign that we evaluated in a user study with 49 participants. Our study confirmed that the redesign indeed improves the detection of incorrect votes by 33% and increases the trust and understanding of the voters. Our studies furthermore contribute important recommendations for designing verifiable e-voting systems in general.

In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
10.1145/3313831.3376769    PDF   
@inproceedings{marky2020swissvoting,
author = {Marky, Karola and Zimmermann, Verena and Funk, Markus and Daubert, J{\"o}rg and Bleck, Kira and M{\"u}hlh{\"a}user, Max},
title = {Improving the Usability and UX of the Swiss Internet Voting Interface},
booktitle = {Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems},
series = {CHI '20},
year = {2020},
isbn = {978-1-4503-6708-0},
location = {Honolulu, HI, USA},
url = {http://dx.doi.org/10.1145/3313831.3376769},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2020/marky2020swissvoting.pdf},
doi = {10.1145/3313831.3376769},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {E-Voting, Individual Verifiability, Usability Evaluation},
abstract = {Up to 20% of residential votes and up to 70% of absentee votes in Switzerland are cast online. The Swiss scheme aims to provide individual verifiability by different verification codes. The voters have to carry out verification on their own, making the usability and UX of the interface of great importance. To improve the usability, we first performed an evaluation with 12 human-computer interaction experts to uncover usability weaknesses of the Swiss Internet voting interface. Based on the experts' findings, related work, and an exploratory user study with 36 participants, we propose a redesign that we evaluated in a user study with 49 participants. Our study confirmed that the redesign indeed improves the detection of incorrect votes by 33% and increases the trust and understanding of the voters. Our studies furthermore contribute important recommendations for designing verifiable e-voting systems in general.}
}


[CHI '20] Therminator: Understanding the Interdependency of Visual and On-Body Thermal Feedback in Virtual Reality

S. Günther, F. Müller, D. Schön, O. Elmoghazy, M. Schmitz, M. Mühlhäuser

ABSTRACT - Recent advances have made Virtual Reality (VR) more realistic than ever before. This improved realism is attributed to today's ability to increasingly appeal to human sensations, such as visual, auditory or tactile. While research also examines temperature sensation as an important aspect, the interdependency of visual and thermal perception in VR is still underexplored. In this paper, we propose Therminator, a thermal display concept that provides warm and cold on-body feedback in VR through heat conduction of flowing liquids with different temperatures. Further, we systematically evaluate the interdependency of different visual and thermal stimuli on the temperature perception of arm and abdomen with 25 participants. As part of the results, we found varying temperature perception depending on the stimuli, as well as increasing involvement of users during conditions with matching stimuli.

In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
10.1145/3313831.3376195    PDF    Teaser Video    Full Video   
@inproceedings{guenther2020therminator,
 author = {G{\"u}nther, Sebastian and M{\"u}ller, Florian and Sch{\"o}n, Dominik and Elmoghazy, Omar and Schmitz, Martin and M{\"u}hlh{\"a}user, Max},
 title = {Therminator: Understanding the Interdependency of Visual and On-Body Thermal Feedback in Virtual Reality},
 booktitle = {Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems},
 series = {CHI '20},
 year = {2020},
 isbn = {978-1-4503-6708-0},
 location = {Honolulu, HI, USA},
 url = {http://dx.doi.org/10.1145/3313831.3376195},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2020/guenther2020therminator.pdf},
 video = {https://www.youtube.com/watch?v=q5lkmqAua78},
 teaservideo = {https://youtu.be/w9FnG1eoWD8},
 doi = {10.1145/3313831.3376195},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {Haptics, Temperature, Thermal Feedback, Virtual Reality},
 abstract = {Recent advances have made Virtual Reality (VR) more realistic than ever before. This improved realism is attributed to today's ability to increasingly appeal to human sensations, such as visual, auditory or tactile. While research also examines temperature sensation as an important aspect, the interdependency of visual and thermal perception in VR is still underexplored. In this paper, we propose Therminator, a thermal display concept that provides warm and cold on-body feedback in VR through heat conduction of flowing liquids with different temperatures. Further, we systematically evaluate the interdependency of different visual and thermal stimuli on the temperature perception of arm and abdomen with 25 participants. As part of the results, we found varying temperature perception depending on the stimuli, as well as increasing involvement of users during conditions with matching stimuli.}
}

[CHI '20] Walk The Line: Leveraging Lateral Shifts of the Walking Path as an Input Modality for Head-Mounted Displays

F. Müller, M. Schmitz, D. Schmitt, S. Günther, M. Funk, M. Mühlhäuser

ABSTRACT - Recent technological advances have made head-mounted displays (HMDs) smaller and untethered, fostering the vision of ubiquitous interaction in a digitally augmented physical world. Consequently, a major part of the interaction with such devices will happen on the go, calling for interaction techniques that allow users to interact while walking. In this paper, we explore lateral shifts of the walking path as a hands-free input modality. The available input options are visualized as lanes on the ground parallel to the user's walking path. Users can select options by shifting the walking path sideways to the respective lane. We contribute the results of a controlled experiment with 18 participants, confirming the viability of our approach for fast, accurate, and joyful interactions. Further, based on the findings of the controlled experiment, we present three example applications.

In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
10.1145/3313831.3376852    PDF    Teaser Video    Full Video   
@inproceedings{mueller2020walktheline,
 author = {M{\"u}ller, Florian and Schmitz, Martin and Schmitt, Daniel and G{\"u}nther, Sebastian and Funk, Markus and M{\"u}hlh{\"a}user, Max},
 title = {Walk The Line: Leveraging Lateral Shifts of the Walking Path as an Input Modality for Head-Mounted Displays},
 booktitle = {Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems},
 series = {CHI '20},
 year = {2020},
 isbn = {978-1-4503-6708-0},
 location = {Honolulu, HI, USA},
 url = {http://dx.doi.org/10.1145/3313831.3376852},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2020/mueller2020walktheline.pdf},
 video = {https://youtu.be/ylAlzFqWx7g},
 teaservideo = {https://youtu.be/6-XrF6J9cTc},
 doi = {10.1145/3313831.3376852},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {Augmented Reality, Head-Mounted Display, Input, Walking},
 abstract = {Recent technological advances have made head-mounted displays (HMDs) smaller and untethered, fostering the vision of ubiquitous interaction in a digitally augmented physical world. Consequently, a major part of the interaction with such devices will happen on the go, calling for interaction techniques that allow users to interact while walking. In this paper, we explore lateral shifts of the walking path as a hands-free input modality. The available input options are visualized as lanes on the ground parallel to the user's walking path. Users can select options by shifting the walking path sideways to the respective lane. We contribute the results of a controlled experiment with 18 participants, confirming the viability of our approach for fast, accurate, and joyful interactions. Further, based on the findings of the controlled experiment, we present three example applications.}
}



[CHI EA '20] PneumoVolley: Pressure-based Haptic Feedback on the Head through Pneumatic Actuation

S. Günther, D. Schön, F. Müller, M. Mühlhäuser, M. Schmitz

ABSTRACT - Haptic Feedback brings immersion and presence in Virtual Reality (VR) to the next level. While research proposes the usage of various tactile sensations, such as vibration or ultrasound approaches, the potential applicability of pressure feedback on the head is still underexplored. In this paper, we contribute concepts and design considerations for pressure-based feedback on the head through pneumatic actuation. As a proof-of-concept implementing our pressure-based haptics, we further present PneumoVolley: a VR experience similar to the classic Volleyball game but played with the head. In an exploratory user study with 9 participants, we evaluated our concepts and identified a significantly increased involvement compared to a no-haptics baseline along with high realism and enjoyment ratings using pressure-based feedback on the head in VR.

In Proceedings of the 2020 CHI Conference Extended Abstracts on Human Factors in Computing Systems
10.1145/3334480.3382916    PDF    Teaser Video    Full Video   
@inproceedings{guenther2020pneumovolley,
 author = {G{\"u}nther, Sebastian and Sch{\"o}n, Dominik and M{\"u}ller, Florian and M{\"u}hlh{\"a}user, Max and Schmitz, Martin},
 title = {PneumoVolley: Pressure-based Haptic Feedback on the Head through Pneumatic Actuation},
 booktitle = {Proceedings of the 2020 CHI Conference Extended Abstracts on Human Factors in Computing Systems},
 series = {CHI EA '20},
 year = {2020},
 isbn = {978-1-4503-6708-0},
 location = {Honolulu, HI, USA},
 url = {http://dx.doi.org/10.1145/3334480.3382916},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2020/guenther2020pneumovolley.pdf},
 video = {https://www.youtube.com/watch?v=ZKnV8HrUx9M},
 teaservideo = {https://www.youtube.com/watch?v=-SlrCqF-5m4},
 doi = {10.1145/3334480.3382916},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {Haptics, Pressure, Volleyball, Virtual Reality, Blobbyvolley},
 abstract = {Haptic Feedback brings immersion and presence in Virtual Reality (VR) to the next level. While research proposes the usage of various tactile sensations, such as vibration or ultrasound approaches, the potential applicability of pressure feedback on the head is still underexplored. In this paper, we contribute concepts and design considerations for pressure-based feedback on the head through pneumatic actuation. As a proof-of-concept implementing our pressure-based haptics, we further present PneumoVolley: a VR experience similar to the classic Volleyball game but played with the head. In an exploratory user study with 9 participants, we evaluated our concepts and identified a significantly increased involvement compared to a no-haptics baseline along with high realism and enjoyment ratings using pressure-based feedback on the head in VR.}
}



[DIS '19] You Invaded my Tracking Space! Using Augmented Virtuality for Spotting Passersby in Room-Scale Virtual Reality

J. von Willich, M. Funk, F. Müller, K. Marky, J. Riemann, M. Mühlhäuser

ABSTRACT - With the proliferation of room-scale Virtual Reality (VR), more and more users install a VR system in their homes. When users are in VR, they are usually completely immersed in their application. However, sometimes passersby invade these tracking spaces and walk up to users that are currently immersed in VR to try and interact with them. As this either scares the user in VR or breaks the user's immersion, research has yet to find a way to seamlessly represent physical passersby in virtual worlds. In this paper, we propose and evaluate three different ways to represent physical passersby in a Virtual Environment using Augmented Virtuality. The representations encompass showing a 3D-Scan, showing an Avatar, and showing a 2D-Image of the passerby. Our results show that while a 2D-Image and an Avatar are the fastest representations to spot passersby, the Avatar and the 3D-Scan representations were the most accurate.

In Proceedings of the ACM Conference on Designing Interactive Systems, DIS '19
10.1145/3322276.3322334    Full Video   
@inproceedings{willich2019tracking,
title = {You Invaded my Tracking Space! Using Augmented Virtuality for Spotting Passersby in Room-Scale Virtual Reality},
author = {von Willich, Julius and Funk, Markus and M{\"u}ller, Florian and Marky, Karola and Riemann, Jan and M{\"u}hlh{\"a}user, Max},
doi = {10.1145/3322276.3322334},
booktitle = {Proceedings of the ACM Conference on Designing Interactive Systems, DIS '19},
keywords = {Virtual Reality; Augmented Reality; Passersby Visualization},
year = {2019},
series = {DIS '19},
video = {https://www.youtube.com/watch?v=SGOFeRX0tmk},
abstract = {With the proliferation of room-scale Virtual Reality (VR), more and more users install a VR system in their homes. When users are in VR, they are usually completely immersed in their application. However, sometimes passersby invade these tracking spaces and walk up to users that are currently immersed in VR to try and interact with them. As this either scares the user in VR or breaks the user's immersion, research has yet to find a way to seamlessly represent physical passersby in virtual worlds. In this paper, we propose and evaluate three different ways to represent physical passersby in a Virtual Environment using Augmented Virtuality. The representations encompass showing a 3D-Scan, showing an Avatar, and showing a 2D-Image of the passerby. Our results show that while a 2D-Image and an Avatar are the fastest representations to spot passersby, the Avatar and the 3D-Scan representations were the most accurate.}
}


[DIS '19] PneumAct: Pneumatic Kinesthetic Actuation of Body Joints in Virtual Reality Environments

S. Günther, M. Makhija, F. Müller, D. Schön, M. Mühlhäuser, M. Funk

ABSTRACT - Virtual Reality Environments (VRE) create an immersive user experience through visual, aural, and haptic sensations. However, the latter is often limited to vibrotactile sensations that are not able to actively provide kinesthetic motion actuation. Further, such sensations do not cover natural representations of physical forces, for example, when lifting a weight. We present PneumAct, a jacket to enable pneumatically actuated kinesthetic movements of arm joints in VRE. It integrates two types of actuators inflated through compressed air: a Contraction Actuator and an Extension Actuator. We evaluate our PneumAct jacket through two user studies with a total of 32 participants: First, we perform a technical evaluation measuring the contraction and extension angles of different inflation patterns and inflation durations. Second, we evaluate PneumAct in three VRE scenarios comparing our system to traditional controller-based vibrotactile and a baseline without haptic feedback.

In Proceedings of the ACM Conference on Designing Interactive Systems, DIS '19
10.1145/3322276.3322302    Teaser Video   
@inproceedings{guenther2019pneumact,
title = {PneumAct: Pneumatic Kinesthetic Actuation of Body Joints in Virtual Reality Environments},
author = {G{\"u}nther, Sebastian and Makhija, Mohit and M{\"u}ller, Florian and Sch{\"o}n, Dominik and M{\"u}hlh{\"a}user, Max and Funk, Markus},
doi = {10.1145/3322276.3322302},
booktitle = {Proceedings of the ACM Conference on Designing Interactive Systems, DIS '19},
keywords = {Compressed Air,Force Feedback,Kinesthetic,Pneumatic,haptics,virtual Reality},
year = {2019},
series = {DIS '19},
teaservideo = {https://youtu.be/4lRWxzs4Rgs},
abstract={Virtual Reality Environments (VRE) create an immersive user experience through visual, aural, and haptic sensations. However, the latter is often limited to vibrotactile sensations that are not able to actively provide kinesthetic motion actuation. Further, such sensations do not cover natural representations of physical forces, for example, when lifting a weight. We present PneumAct, a jacket to enable pneumatically actuated kinesthetic movements of arm joints in VRE. It integrates two types of actuators inflated through compressed air: a Contraction Actuator and an Extension Actuator. We evaluate our PneumAct jacket through two user studies with a total of 32 participants: First, we perform a technical evaluation measuring the contraction and extension angles of different inflation patterns and inflation durations. Second, we evaluate PneumAct in three VRE scenarios comparing our system to traditional controller-based vibrotactile and a baseline without haptic feedback.}
}


[CHI EA '19] Slappyfications: Towards Ubiquitous Physical and Embodied Notifications

S. Günther, F. Müller, M. Funk, M. Mühlhäuser

ABSTRACT - With emerging trends of notifying persons through ubiquitous technologies, such as ambient light, vibrotactile, or auditory cues, none of these technologies are truly ubiquitous and have proven to be easily missed or ignored. In this work, we propose Slappyfications, a novel way of sending unmissable embodied and ubiquitous notifications over a distance. Our proof-of-concept prototype enables the users to send three types of Slappyfications: poke, slap, and the STEAM-HAMMER. Through a Wizard-of-Oz study, we show the applicability of our system in real-world scenarios. The results reveal a promising trend, as none of the participants missed a single Slappyfication.

In Proceedings of the 2019 CHI Conference Extended Abstracts on Human Factors in Computing Systems
10.1145/3290607.3311780    PDF    Full Video   
@inproceedings{guenther2019slappyfications,
title={Slappyfications: Towards Ubiquitous Physical and Embodied Notifications},
author={G{\"u}nther, Sebastian and M{\"u}ller, Florian and Funk, Markus and M{\"u}hlh{\"a}user, Max},
booktitle = {Proceedings of the 2019 CHI Conference Extended Abstracts on Human Factors in Computing Systems},
series = {CHI EA '19},
doi={10.1145/3290607.3311780},
year={2019},
video = {https://www.youtube.com/watch?v=qDmrSgyV20s},
abstract={With emerging trends of notifying persons through ubiquitous technologies, such as ambient light, vibrotactile, or auditory cues, none of these technologies are truly ubiquitous and have proven to be easily missed or ignored. In this work, we propose Slappyfications, a novel way of sending unmissable embodied and ubiquitous notifications over a distance. Our proof-of-concept prototype enables the users to send three types of Slappyfications: poke, slap, and the STEAM-HAMMER. Through a Wizard-of-Oz study, we show the applicability of our system in real-world scenarios. The results reveal a promising trend, as none of the participants missed a single Slappyfication.},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2019/guenther2019slappyfications.pdf}
}

[CHI '19] Mind the Tap: Assessing Foot-Taps for Interacting with Head-Mounted Displays

F. Müller, J. McManus, S. Günther, M. Schmitz, M. Mühlhäuser, M. Funk

ABSTRACT - From voice commands and air taps to touch gestures on frames: Various techniques for interacting with head-mounted displays (HMDs) have been proposed. While these techniques have both benefits and drawbacks dependent on the current situation of the user, research on interacting with HMDs has not concluded yet. In this paper, we add to the body of research on interacting with HMDs by exploring foot-tapping as an input modality. Through two controlled experiments with a total of 36 participants, we first explore direct interaction with interfaces that are displayed on the floor and require the user to look down to interact. Secondly, we investigate indirect interaction with interfaces that, although operated by the user's feet, are always visible as they are floating in front of the user. Based on the results of the two experiments, we provide design recommendations for direct and indirect foot-based user interfaces.

In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
10.1145/3290605.3300707    PDF    Teaser Video    Full Video   
@inproceedings{mueller2019mind,
title={Mind the Tap: Assessing Foot-Taps for Interacting with Head-Mounted Displays},
author={M{\"u}ller, Florian and McManus, Joshua and G{\"u}nther, Sebastian and Schmitz, Martin and M{\"u}hlh{\"a}user, Max and Funk, Markus},
booktitle={Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
doi={10.1145/3290605.3300707},
year={2019},
series = {CHI '19},
teaservideo={https://www.youtube.com/watch?v=RhabMsP0X14},
video={https://www.youtube.com/watch?v=D5hTVIEb7iA},
abstract={From voice commands and air taps to touch gestures on frames: Various techniques for interacting with head-mounted displays (HMDs) have been proposed. While these techniques have both benefits and drawbacks dependent on the current situation of the user, research on interacting with HMDs has not concluded yet. In this paper, we add to the body of research on interacting with HMDs by exploring foot-tapping as an input modality. Through two controlled experiments with a total of 36 participants, we first explore direct interaction with interfaces that are displayed on the floor and require the user to look down to interact. Secondly, we investigate indirect interaction with interfaces that, although operated by the user's feet, are always visible as they are floating in front of the user. Based on the results of the two experiments, we provide design recommendations for direct and indirect foot-based user interfaces.},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2019/mueller2019mindthetap.pdf},
 award={Honorable Mention}
}

[CHI '19] Assessing the Accuracy of Point & Teleport Locomotion with Orientation Indication for Virtual Reality using Curved Trajectories

M. Funk, F. Müller, M. Fendrich, M. Shene, M. Kolvenbach, N. Dobbertin, S. Günther, M. Mühlhäuser

ABSTRACT - Room-scale Virtual Reality (VR) systems have arrived in users' homes where tracked environments are set up in limited physical spaces. As most Virtual Environments (VEs) are larger than the tracked physical space, locomotion techniques are used to navigate in VEs. Currently, in recent VR games, point & teleport is the most popular locomotion technique. However, it only allows users to select the position of the teleportation and not the orientation that the user is facing after the teleport. This results in users having to manually correct their orientation after teleporting and possibly getting entangled by the cable of the headset. In this paper, we introduce and evaluate three different point & teleport techniques that enable users to specify the target orientation while teleporting. The results show that, although the three teleportation techniques with orientation indication increase the average teleportation time, they lead to a decreased need for correcting the orientation after teleportation.

In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
10.1145/3290605.3300377    PDF    Teaser Video    Full Video   
@inproceedings{funk2019assessing,
title={Assessing the Accuracy of Point \& Teleport Locomotion with Orientation Indication for Virtual Reality using Curved Trajectories},
author={Funk, Markus and M{\"u}ller, Florian and Fendrich, Marco and Shene, Megan and Kolvenbach, Moritz and Dobbertin, Niclas and G{\"u}nther, Sebastian and M{\"u}hlh{\"a}user, Max},
booktitle={Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
doi={10.1145/3290605.3300377},
year={2019},
series = {CHI '19},
teaservideo={https://www.youtube.com/watch?v=klu82WxeBlA},
video={https://www.youtube.com/watch?v=uXctClcQu_g},
abstract={Room-scale Virtual Reality (VR) systems have arrived in users' homes where tracked environments are set up in limited physical spaces. As most Virtual Environments (VEs) are larger than the tracked physical space, locomotion techniques are used to navigate in VEs. Currently, in recent VR games, point & teleport is the most popular locomotion technique. However, it only allows users to select the position of the teleportation and not the orientation that the user is facing after the teleport. This results in users having to manually correct their orientation after teleporting and possibly getting entangled by the cable of the headset. In this paper, we introduce and evaluate three different point & teleport techniques that enable users to specify the target orientation while teleporting. The results show that, although the three teleportation techniques with orientation indication increase the average teleportation time, they lead to a decreased need for correcting the orientation after teleportation.},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2019/funk2019assessing.pdf}
}
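The curved teleport pointer described in the abstract above is commonly modeled as a ballistic arc cast from the controller, with the teleport target placed where the arc meets the floor. As an illustrative sketch only (the parameters and sampling scheme are assumptions, not the paper's implementation):

```python
# Hypothetical sketch of a curved teleport trajectory: sample a parabolic
# arc from the controller and return the point where it crosses the floor.
# `speed`, `gravity`, and `step` are illustrative defaults, not values
# taken from the paper.

def teleport_target(origin, direction, speed=8.0, gravity=9.81, step=0.01):
    """Cast an arc from `origin` along normalized `direction` (x, y, z);
    gravity pulls it down along y until it hits the floor plane y = 0."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    t = 0.0
    while True:
        t += step
        x = ox + dx * speed * t
        y = oy + dy * speed * t - 0.5 * gravity * t * t
        z = oz + dz * speed * t
        if y <= 0.0:
            return (x, 0.0, z)  # landing point on the floor
```

An orientation-indicating variant, as studied in the paper, would additionally let the user specify a facing direction at this landing point before confirming the teleport.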


[CHI '19] ./trilaterate: A Fabrication Pipeline to Design and 3D Print Hover-, Touch-, and Force-Sensitive Objects

M. Schmitz, M. Stitz, F. Müller, M. Funk, M. Mühlhäuser
In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
10.1145/3290605.3300684    PDF    Teaser Video   
@inproceedings{schmitz2019trilaterate,
title={./trilaterate: A Fabrication Pipeline to Design and 3D Print Hover-, Touch-, and Force-Sensitive Objects},
author={Schmitz, Martin and Stitz, Martin and M{\"u}ller, Florian and Funk, Markus and M{\"u}hlh{\"a}user, Max},
booktitle={Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
series = {CHI '19},
doi={10.1145/3290605.3300684},
year={2019},
teaservideo={https://www.youtube.com/watch?v=QJNmH_IvarY},
abstract={Hover, touch, and force are promising input modalities that get increasingly integrated into screens and everyday objects. However, these interactions are often limited to flat surfaces and the integration of suitable sensors is time-consuming and costly. 
To alleviate these limitations, we contribute Trilaterate: A fabrication pipeline to 3D print custom objects that detect the 3D position of a finger hovering, touching, or forcing them by combining multiple capacitance measurements via capacitive trilateration. Trilaterate places and routes actively-shielded sensors inside the object and operates on consumer-level 3D printers. We present technical evaluations and example applications that validate and demonstrate the wide applicability of Trilaterate.},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2019/schmitz2019trilaterate.pdf}
}
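The entry above locates a finger by combining multiple capacitance measurements via capacitive trilateration. Geometrically, trilateration recovers a 3D point from its distances to known sensor positions; a minimal numerical sketch (not the paper's fabrication pipeline, which additionally maps capacitance readings to distance estimates) looks like this:

```python
import numpy as np

def trilaterate(anchors, distances):
    """Estimate a 3D position from >= 4 anchor points and measured distances.

    Subtracting the first sphere equation |x - a_0|^2 = d_0^2 from the
    others linearizes the system:
        2 (a_i - a_0) . x = (|a_i|^2 - |a_0|^2) - (d_i^2 - d_0^2),
    which is solved in the least-squares sense.
    """
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2)
         - (d[1:] ** 2 - d[0] ** 2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With noisy distance estimates, as one would expect from capacitance readings, the least-squares formulation degrades gracefully rather than failing outright.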

[CHI EA '19] LookUnlock: Using Spatial-Targets for User-Authentication on HMDs

M. Funk, K. Marky, I. Mizutani, M. Kritzler, S. Mayer, F. Michahelles

ABSTRACT - With head-mounted displays (HMDs), users can access and interact with a broad range of applications and data. Although some of this information is privacy-sensitive or even confidential, no intuitive, unobtrusive and secure authentication technique is available yet for HMDs. We present LookUnlock, an authentication technique for HMDs that uses passwords that are composed of spatial and virtual targets. Through a proof-of-concept implementation and security evaluation, we demonstrate that this technique can be efficiently used by people and is resistant to shoulder-surfing attacks.

In Proceedings of the 2019 CHI Conference Extended Abstracts on Human Factors in Computing Systems
10.1145/3290607.3312959    PDF    Teaser Video   
@inproceedings{funk2019lookunlock,
title={LookUnlock: Using Spatial-Targets for User-Authentication on HMDs},
author={Funk, Markus and Marky, Karola and Mizutani, Iori and Kritzler, Mareike and Mayer, Simon and Michahelles, Florian},
booktitle = {Proceedings of the 2019 CHI Conference Extended Abstracts on Human Factors in Computing Systems},
series = {CHI EA '19},
doi={10.1145/3290607.3312959},
year={2019},
teaservideo={https://www.youtube.com/watch?v=NA0EMlK0zrI},
abstract={With head-mounted displays (HMDs), users can access and interact with a broad range of applications and data. Although some of this information is privacy-sensitive or even confidential, no intuitive, unobtrusive and secure authentication technique is available yet for HMDs. We present LookUnlock, an authentication technique for HMDs that uses passwords that are composed of spatial and virtual targets. Through a proof-of-concept implementation and security evaluation, we demonstrate that this technique can be efficiently used by people and is resistant to shoulder-surfing attacks.},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2019/funk2019lookunlock.pdf}
}

[CHI EA '19] Usability of Code Voting Modalities

K. Marky, M. Schmitz, F. Lange, M. Mühlhäuser

ABSTRACT - Internet voting has promising benefits, such as cost reduction, but it also introduces drawbacks: the computer that is used for voting learns the voter's choice. Code voting aims to protect the voter's choice by the introduction of voting codes that are listed on paper. To cast a vote, the voters need to provide the voting code belonging to their choice. The additional step influences the usability. We investigate three modalities for entering voting codes: manual, QR-codes and tangibles. The results show that QR-codes offer the best usability while tangibles are perceived as the most novel and fun.

In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems
10.1145/3290607.3312971    Teaser Video   
@inproceedings{marky2019usability,
title = {Usability of Code Voting Modalities},
publisher = {ACM},
year = {2019},
author = {Marky, Karola and Schmitz, Martin and Lange, Felix and M{\"u}hlh{\"a}user, Max},
booktitle = {Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems},
series = {CHI EA '19},
keywords = {E-Voting; Code Voting; Tangibles; Usability Evaluation},
abstract = {Internet voting has promising benefits, such as cost reduction, but it also introduces drawbacks: the computer that is used for voting learns the voter's choice. Code voting aims to protect the voter's choice through the introduction of voting codes that are listed on paper. To cast a vote, voters need to provide the voting code belonging to their choice. This additional step influences the usability. We investigate three modalities for entering voting codes: manual entry, QR-codes, and tangibles. The results show that QR-codes offer the best usability, while tangibles are perceived as the most novel and fun.},
url = {http://tubiblio.ulb.tu-darmstadt.de/111897/},
doi = {10.1145/3290607.3312971},
teaservideo = {https://www.youtube.com/watch?v=tykP_IrVOIk},
}


[CHI EA '19] VRChairRacer: Using an Office Chair Backrest as a Locomotion Technique for VR Racing Games

J. von Willich, D. Schön, S. Günther, F. Müller, M. Mühlhäuser, M. Funk

ABSTRACT - Locomotion in Virtual Reality (VR) is an important topic, as there is a mismatch between the size of a Virtual Environment and the physically available tracking space. Although many locomotion techniques have been proposed, research on VR locomotion has not concluded yet. In this demonstration, we contribute to the area of VR locomotion by introducing VRChairRacer. VRChairRacer introduces a novel mapping of the velocity of a racing cart onto the backrest of an office chair. Further, it maps the user's rotation onto the steering of a virtual racing cart. VRChairRacer demonstrates this locomotion technique to the community through an immersive multiplayer racing demo.

In Proceedings of the 2019 CHI Conference Extended Abstracts on Human Factors in Computing Systems
10.1145/3290607.3313254    PDF    Teaser Video    Full Video   
@inproceedings{willich2019vrchairracer,
title={VRChairRacer: Using an Office Chair Backrest as a Locomotion Technique for VR Racing Games},
author={von Willich, Julius and Sch{\"o}n, Dominik and G{\"u}nther, Sebastian and M{\"u}ller, Florian and M{\"u}hlh{\"a}user, Max and Funk, Markus},
booktitle = {Proceedings of the 2019 CHI Conference Extended Abstracts on Human Factors in Computing Systems},
series = {CHI EA '19},
doi={10.1145/3290607.3313254},
year={2019},
teaservideo={https://www.youtube.com/watch?v=8ukVghWoTlE},
video={https://www.youtube.com/watch?v=v906aGntoKY},
abstract={Locomotion in Virtual Reality (VR) is an important topic, as there is a mismatch between the size of a Virtual Environment and the physically available tracking space. Although many locomotion techniques have been proposed, research on VR locomotion has not concluded yet. In this demonstration, we contribute to the area of VR locomotion by introducing VRChairRacer. VRChairRacer introduces a novel mapping of the velocity of a racing cart onto the backrest of an office chair. Further, it maps the user's rotation onto the steering of a virtual racing cart. VRChairRacer demonstrates this locomotion technique to the community through an immersive multiplayer racing demo.},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2019/willich2019vrchairracer.pdf}
}

[PETRA '19] APS: A 3D Human Body Posture Set as a Baseline for Posture Guidance

H. Elsayed, M. Weigel, J. von Willich, M. Funk, M. Mühlhäuser

ABSTRACT - Human body postures are an important input modality for motion guidance and other application domains in HCI, e.g., games, character animations, and interaction with public displays. However, for training and guidance of body postures, prior works had to define their own whole-body gesture sets. Hence, the interaction designs and evaluation results are difficult to compare due to the lack of a standardized posture set. In this work, we contribute APS (APS Posture Set), a novel posture set including 40 body postures. It is based on prior research, sports, and body language. For each identified posture, we collected 3D posture data using a Microsoft Kinect. We make the skeleton data, 3D mesh objects, and SMPL data available for future research. Taken together, APS can be used to facilitate the design of interfaces that use body gestures and as a reference set for future user studies and system evaluations.

In Proceedings of the 12th PErvasive Technologies Related to Assistive Environments Conference
10.1145/3316782.3324012   
@inproceedings{elsayed2019aps,
title = {APS: A 3D Human Body Posture Set as a Baseline for Posture Guidance},
author = {Elsayed, Hesham and Weigel, Martin and von Willich, Julius and Funk, Markus and M{\"u}hlh{\"a}user, Max},
doi = {10.1145/3316782.3324012},
booktitle = {Proceedings of the 12th PErvasive Technologies Related to Assistive Environments Conference},
year = {2019},
series = {PETRA '19},
acmid = {3324012},
publisher = {ACM},
address = {New York, NY, USA},
abstract = {Human body postures are an important input modality for motion guidance and other application domains in HCI, e.g., games, character animations, and interaction with public displays. However, for training and guidance of body postures, prior works had to define their own whole-body gesture sets. Hence, the interaction designs and evaluation results are difficult to compare due to the lack of a standardized posture set. In this work, we contribute APS (APS Posture Set), a novel posture set including 40 body postures. It is based on prior research, sports, and body language. For each identified posture, we collected 3D posture data using a Microsoft Kinect. We make the skeleton data, 3D mesh objects, and SMPL data available for future research. Taken together, APS can be used to facilitate the design of interfaces that use body gestures and as a reference set for future user studies and system evaluations.}
}

[PETRA '18] TactileGlove: Assistive Spatial Guidance in 3D Space Through Vibrotactile Navigation

S. Günther, F. Müller, M. Funk, J. Kirchner, N. Dezfuli, M. Mühlhäuser

ABSTRACT - With recent advances in computing technology, more and more environments are becoming interactive. For interacting with these environments, traditionally 2D input and output elements have been used. Recently, however, interaction spaces have expanded to 3D space, which enables new possibilities but also creates challenges in assisting users with interacting in such a 3D space. Usually, this challenge of communicating 3D positions is solved visually. This paper explores a different approach: spatial guidance through vibrotactile instructions. To this end, we introduce TactileGlove, a smart glove equipped with vibrotactile actuators for providing spatial guidance in 3D space. We contribute a user study with 15 participants to explore how different numbers of actuators and metaphors affect user performance. We found that our participants preferred a Pull metaphor for vibrotactile navigation instructions. Further, we found that using a higher number of actuators reduces the target acquisition time compared to using a low number.

In Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference
10.1145/3197768.3197785    PDF   
@inproceedings{guenther2018tactileglove,
 author = {G\"{u}nther, Sebastian and M\"{u}ller, Florian and Funk, Markus and Kirchner, Jan and Dezfuli, Niloofar and M\"{u}hlh\"{a}user, Max},
 title = {TactileGlove: Assistive Spatial Guidance in 3D Space Through Vibrotactile Navigation},
 booktitle = {Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference},
 series = {PETRA '18},
 year = {2018},
 isbn = {978-1-4503-6390-7},
 location = {Corfu, Greece},
 pages = {273--280},
 numpages = {8},
 url = {http://doi.acm.org/10.1145/3197768.3197785},
 doi = {10.1145/3197768.3197785},
 acmid = {3197785},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {3D-Space, Assistive Technology, Haptics, Navigation, Pull Push Metaphors, Spatial Guidance, Vibrotactile},
 abstract={With recent advances in computing technology, more and more environments are becoming interactive. For interacting with these environments, traditionally 2D input and output elements have been used. Recently, however, interaction spaces have expanded to 3D space, which enables new possibilities but also creates challenges in assisting users with interacting in such a 3D space. Usually, this challenge of communicating 3D positions is solved visually. This paper explores a different approach: spatial guidance through vibrotactile instructions. To this end, we introduce TactileGlove, a smart glove equipped with vibrotactile actuators for providing spatial guidance in 3D space. We contribute a user study with 15 participants to explore how different numbers of actuators and metaphors affect user performance. We found that our participants preferred a Pull metaphor for vibrotactile navigation instructions. Further, we found that using a higher number of actuators reduces the target acquisition time compared to using a low number.},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2018/gunther2018tactileglove.pdf}
}


[CHI EA '18] CheckMate: Exploring a Tangible Augmented Reality Interface for Remote Interaction

S. Günther, F. Müller, M. Schmitz, J. Riemann, N. Dezfuli, M. Funk, D. Schön, M. Mühlhäuser

ABSTRACT - The digitalized world comes with increasing Internet capabilities, making it easier than ever to connect people over distance. Video conferencing and similar online applications create great benefits by virtually bringing together people who physically cannot spend as much time together as they want. However, such remote experiences tend to lose the feel of traditional co-located ones: people lack direct visual presence, and no haptic feedback is available. In this paper, we tackle this problem by introducing our system called CheckMate. We combine Augmented Reality and capacitive 3D-printed objects that can be sensed on an interactive surface to enable remote interaction while providing the same tangible experience as in co-located scenarios. As a proof-of-concept, we implemented a sample application based on the traditional chess game.

In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems
10.1145/3170427.3188647    PDF    Teaser Video   
@inproceedings{guenther2018checkmate,
 author = {G\"{u}nther, Sebastian and M\"{u}ller, Florian and Schmitz, Martin and Riemann, Jan and Dezfuli, Niloofar and Funk, Markus and Sch\"{o}n, Dominik and M\"{u}hlh\"{a}user, Max},
 title = {CheckMate: Exploring a Tangible Augmented Reality Interface for Remote Interaction},
 booktitle = {Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems},
 series = {CHI EA '18},
 year = {2018},
 isbn = {978-1-4503-5621-3},
 location = {Montreal QC, Canada},
 pages = {LBW570:1--LBW570:6},
 articleno = {LBW570},
 numpages = {6},
 url = {http://doi.acm.org/10.1145/3170427.3188647},
 doi = {10.1145/3170427.3188647},
 acmid = {3188647},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {3d fabrication, augmented reality, chess, mixed reality, remote collaboration, tabletops, tangibles},
 teaservideo={https://www.youtube.com/watch?v=Geyr95Nl8mc},
 abstract={The digitalized world comes with increasing Internet capabilities, making it easier than ever to connect people over distance. Video conferencing and similar online applications create great benefits by virtually bringing together people who physically cannot spend as much time together as they want. However, such remote experiences tend to lose the feel of traditional co-located ones: people lack direct visual presence, and no haptic feedback is available. In this paper, we tackle this problem by introducing our system called CheckMate. We combine Augmented Reality and capacitive 3D-printed objects that can be sensed on an interactive surface to enable remote interaction while providing the same tangible experience as in co-located scenarios. As a proof-of-concept, we implemented a sample application based on the traditional chess game.},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2019/guenther2018checkmate.pdf}
}

[CHI EA '18] Personalized User-Carried Single Button Interfaces As Shortcuts for Interacting with Smart Devices

F. Müller, M. Schmitz, M. Funk, S. Günther, N. Dezfuli, M. Mühlhäuser

ABSTRACT - We are experiencing a trend of integrating computing functionality into more and more common and popular devices. While these so-called smart devices offer many possibilities for automation and personalization of everyday routines, interacting with them and customizing them requires either programming effort or a smartphone app to control the devices. In this work, we propose and classify Personalized User-Carried Single Button Interfaces (PUCSBIs) as shortcuts for interacting with smart devices. We implement a proof-of-concept of such an interface for a coffee machine. Through an in-the-wild deployment of the coffee machine for approximately three months, we report initial experiences from 40 participants of using PUCSBIs for interacting with smart devices.

In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems
10.1145/3170427.3188661    PDF    Teaser Video   
@inproceedings{mueller2018pucsbi,
 author = {M\"{u}ller, Florian and Schmitz, Martin and Funk, Markus and G\"{u}nther, Sebastian and Dezfuli, Niloofar and M\"{u}hlh\"{a}user, Max},
 title = {Personalized User-Carried Single Button Interfaces As Shortcuts for Interacting with Smart Devices},
 booktitle = {Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems},
 series = {CHI EA '18},
 year = {2018},
 isbn = {978-1-4503-5621-3},
 location = {Montreal QC, Canada},
 pages = {LBW602:1--LBW602:6},
 articleno = {LBW602},
 numpages = {6},
 url = {http://doi.acm.org/10.1145/3170427.3188661},
 doi = {10.1145/3170427.3188661},
 acmid = {3188661},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {human factors, interaction, smart devices},
 teaservideo={https://www.youtube.com/watch?v=Z5wicorfmxU},
 abstract={We are experiencing a trend of integrating computing functionality into more and more common and popular devices. While these so-called smart devices offer many possibilities for automation and personalization of everyday routines, interacting with them and customizing them requires either programming effort or a smartphone app to control the devices. In this work, we propose and classify Personalized User-Carried Single Button Interfaces (PUCSBIs) as shortcuts for interacting with smart devices. We implement a proof-of-concept of such an interface for a coffee machine. Through an in-the-wild deployment of the coffee machine for approximately three months, we report initial experiences from 40 participants of using PUCSBIs for interacting with smart devices.},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2018/mueller_pucsbi.pdf}
}

[CHI '18] Off-Line Sensing: Memorizing Interactions in Passive 3D-Printed Objects

M. Schmitz, M. Herbers, N. Dezfuli, S. Günther, M. Mühlhäuser

ABSTRACT - Embedding sensors into objects allows them to recognize various interactions. However, sensing usually requires active electronics that are often costly, need time to be assembled, and constantly draw power. Thus, we propose off-line sensing: passive 3D-printed sensors that detect one-time interactions, such as accelerating or flipping, but require neither active electronics nor power at the time of the interaction. They memorize a pre-defined interaction via an embedded structure filled with a conductive medium (e.g., a liquid). Whether a sensor was exposed to the interaction can be read out via a capacitive touchscreen. Sensors are printed in a single pass on a consumer-level 3D printer. Through a series of experiments, we show the feasibility of off-line sensing.

In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
10.1145/3173574.3173756    PDF    Teaser Video   
@inproceedings{schmitz2018offline,
 author = {Schmitz, Martin and Herbers, Martin and Dezfuli, Niloofar and G\"{u}nther, Sebastian and M\"{u}hlh\"{a}user, Max},
 title = {Off-Line Sensing: Memorizing Interactions in Passive 3D-Printed Objects},
 booktitle = {Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems},
 series = {CHI '18},
 year = {2018},
 isbn = {978-1-4503-5620-6},
 location = {Montreal QC, Canada},
 pages = {182:1--182:8},
 articleno = {182},
 numpages = {8},
 url = {http://doi.acm.org/10.1145/3173574.3173756},
 doi = {10.1145/3173574.3173756},
 acmid = {3173756},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {3d printing, capacitive sensing, digital fabrication, input, mechanism, metamaterial, sensors},
 teaservideo={https://www.youtube.com/watch?v=19dDaeBEnPM},
 abstract={Embedding sensors into objects allows them to recognize various interactions. However, sensing usually requires active electronics that are often costly, need time to be assembled, and constantly draw power. Thus, we propose off-line sensing: passive 3D-printed sensors that detect one-time interactions, such as accelerating or flipping, but require neither active electronics nor power at the time of the interaction. They memorize a pre-defined interaction via an embedded structure filled with a conductive medium (e.g., a liquid). Whether a sensor was exposed to the interaction can be read out via a capacitive touchscreen. Sensors are printed in a single pass on a consumer-level 3D printer. Through a series of experiments, we show the feasibility of off-line sensing.},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2018/schmitz2018offline.pdf},
 award={Best Paper}
}

[IMWUT '18] FlowPut: Environment-Aware Interactivity for Tangible 3D Objects

J. Riemann, M. Schmitz, A. Hendrich, M. Mühlhäuser

ABSTRACT - Tangible interaction has been shown to be beneficial in a wide variety of scenarios, since it provides more direct manipulation and haptic feedback. Further, inherently three-dimensional information is represented more naturally by a 3D object than by a flat picture on a screen. Yet, today's tangibles often have pre-defined form factors and limited input and output facilities. To overcome this issue, the combination of projection and depth cameras is used as a fast and flexible way of non-intrusively adding input and output to tangibles. However, tangibles are often quite small, and hence the space for output and interaction on their surface is limited. Therefore, we propose FlowPut: an environment-aware framework that utilizes the space available on and around a tangible object for projected visual output. By means of an optimization-based layout approach, FlowPut considers the environment of the objects to avoid interference between projection and real-world objects. Moreover, we contribute occlusion-resilient object recognition and tracking for tangible objects based on their 3D model, and a point-cloud-based multi-touch detection that also allows sensing touches on the side of a tangible. FlowPut is validated through a series of technical experiments, a user study, and two example applications.

In Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
10.1145/3191763    PDF   
@article{riemann2018flowput,
 author = {Riemann, Jan and Schmitz, Martin and Hendrich, Alexander and M\"{u}hlh\"{a}user, Max},
 title = {FlowPut: Environment-Aware Interactivity for Tangible 3D Objects},
 journal = {Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies},
 issue_date = {March 2018},
 series = {IMWUT '18},
 volume = {2},
 number = {1},
 month = mar,
 year = {2018},
 issn = {2474-9567},
 pages = {31:1--31:23},
 articleno = {31},
 numpages = {23},
 url = {http://doi.acm.org/10.1145/3191763},
 doi = {10.1145/3191763},
 acmid = {3191763},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {Displays, layout, object tracking, optimization, projection, touch},
 abstract={Tangible interaction has been shown to be beneficial in a wide variety of scenarios, since it provides more direct manipulation and haptic feedback. Further, inherently three-dimensional information is represented more naturally by a 3D object than by a flat picture on a screen. Yet, today's tangibles often have pre-defined form factors and limited input and output facilities. To overcome this issue, the combination of projection and depth cameras is used as a fast and flexible way of non-intrusively adding input and output to tangibles. However, tangibles are often quite small, and hence the space for output and interaction on their surface is limited. Therefore, we propose FlowPut: an environment-aware framework that utilizes the space available on and around a tangible object for projected visual output. By means of an optimization-based layout approach, FlowPut considers the environment of the objects to avoid interference between projection and real-world objects. Moreover, we contribute occlusion-resilient object recognition and tracking for tangible objects based on their 3D model, and a point-cloud-based multi-touch detection that also allows sensing touches on the side of a tangible. FlowPut is validated through a series of technical experiments, a user study, and two example applications.},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2018/riemann2018flowput.pdf}
} 

You can find more of our research on our institute's website.


Telecooperation Lab TU Darmstadt