Our Research

A selection of work done by #teamdarmstadt

[PerDis ’20] Reminding Child Cyclists about Safety Gestures

A. Matviienko, S. Ananthanarayan, R. Kappes, W. Heuten, S. Boll

ABSTRACT - Cycling safety gestures, such as hand signals and shoulder checks, are an essential part of safe manoeuvring on the road. Child cyclists, in particular, might have difficulties performing safety gestures on the road or even forget about them, given the lack of cycling experience, road distractions and differences in motor and perceptual-motor abilities compared with adults. To support them, we designed two methods to remind children about safety gestures while cycling. The first method employs an icon-based reminder in heads-up display (HUD) glasses and the second combines vibration on the handlebar and ambient light in the helmet. We investigated the performance of both methods in a controlled test-track experiment with 18 children using a mid-size tricycle, augmented with a set of sensors to recognize children's behavior in real time. We found that both systems are successful in reminding children about safety gestures and have their unique advantages and disadvantages.

In Proceedings of the 9th ACM International Symposium on Pervasive Displays
10.1145/3393712.3394120    PDF    Full Video   
@inproceedings{matviienko2020remindingcyclists,
author = {Matviienko, Andrii and Ananthanarayan, Swamy and Kappes, Raphael and Heuten, Wilko and Boll, Susanne},
title = {Reminding Child Cyclists about Safety Gestures},
year = {2020},
isbn = {9781450379861},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3393712.3394120},
doi = {10.1145/3393712.3394120},
booktitle = {Proceedings of the 9th ACM International Symposium on Pervasive Displays},
pages = {1–7},
numpages = {7},
keywords = {HUD glasses, safety gestures, child cyclists, cycling safety},
location = {Manchester, United Kingdom},
series = {PerDis ’20},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2020/matviienko2020remindingcyclists.pdf},
 abstract={Cycling safety gestures, such as hand signals and shoulder checks, are an essential part of safe manoeuvring on the road. Child cyclists, in particular, might have difficulties performing safety gestures on the road or even forget about them, given the lack of cycling experience, road distractions and differences in motor and perceptual-motor abilities compared with adults. To support them, we designed two methods to remind children about safety gestures while cycling. The first method employs an icon-based reminder in heads-up display (HUD) glasses and the second combines vibration on the handlebar and ambient light in the helmet. We investigated the performance of both methods in a controlled test-track experiment with 18 children using a mid-size tricycle, augmented with a set of sensors to recognize children's behavior in real time. We found that both systems are successful in reminding children about safety gestures and have their unique advantages and disadvantages.},
 video = {https://www.youtube.com/watch?v=cSKD-MoZ-54},
}
  


[CHI '20] 3D-Auth: Two-Factor Authentication with Personalized 3D-Printed Items

K. Marky, M. Schmitz, V. Zimmermann, M. Herbers, K. Kunze, M. Mühlhäuser

ABSTRACT - Two-factor authentication is a widely recommended security mechanism and already offered for different services. However, known methods and physical realizations exhibit considerable usability and customization issues. In this paper, we propose 3D-Auth, a new concept of two-factor authentication. 3D-Auth is based on customizable 3D-printed items that combine two authentication factors in one object. The object bottom contains a uniform grid of conductive dots that are connected to a unique embedded structure inside the item. Based on the interaction with the item, different dots turn into touch-points and form an authentication pattern. This pattern can be recognized by a capacitive touchscreen. Based on an expert design study, we present an interaction space with six categories of possible authentication interactions. In a user study, we demonstrate the feasibility of 3D-Auth items and show that the items are easy to use and the interactions are easy to remember.

In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
10.1145/3313831.3376189    PDF    Teaser Video   
@inproceedings{marky20203dauth,
author = {Marky, Karola and Schmitz, Martin and Zimmermann, Verena and Herbers, Martin and Kunze, Kai and M{\"u}hlh{\"a}user, Max},
title = {3D-Auth: Two-Factor Authentication with Personalized 3D-Printed Items},
booktitle = {Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems},
series = {CHI '20},
year = {2020},
isbn = {978-1-4503-6708-0},
location = {Honolulu, HI, USA},
url = {http://dx.doi.org/10.1145/3313831.3376189},
teaservideo = {https://youtu.be/_dHihnJTRek},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2020/marky2020auth3d.pdf},
doi = {10.1145/3313831.3376189},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {Two-Factor Authentication, 3D Printing, Capacitive Sensing},
abstract = {Two-factor authentication is a widely recommended security mechanism and already offered for different services. However, known methods and physical realizations exhibit considerable usability and customization issues. In this paper, we propose 3D-Auth, a new concept of two-factor authentication. 3D-Auth is based on customizable 3D-printed items that combine two authentication factors in one object. The object bottom contains a uniform grid of conductive dots that are connected to a unique embedded structure inside the item. Based on the interaction with the item, different dots turn into touch-points and form an authentication pattern. This pattern can be recognized by a capacitive touchscreen. Based on an expert design study, we present an interaction space with six categories of possible authentication interactions. In a user study, we demonstrate the feasibility of 3D-Auth items and show that the items are easy to use and the interactions are easy to remember.}
}
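
To make the sensing step concrete, the sketch below shows one way such a pattern check could work: touch points reported by the screen are snapped to the dot grid and compared with the enrolled pattern. This is a minimal Python illustration; the grid pitch, the exact-match policy, and all names are our assumptions, not the authors' implementation.

def to_grid(touches, pitch_mm=7.0):
    """Snap (x, y) touch coordinates in mm to integer dot-grid indices."""
    return {(round(x / pitch_mm), round(y / pitch_mm)) for x, y in touches}

def authenticate(touches, enrolled, pitch_mm=7.0):
    """The second factor passes only if the pattern matches the enrolled set."""
    return to_grid(touches, pitch_mm) == enrolled

# Example: three noisy touch positions (in mm) read from the touchscreen
enrolled = {(0, 0), (1, 2), (3, 1)}
reading = [(0.5, -0.3), (7.2, 13.8), (21.1, 6.9)]
print(authenticate(reading, enrolled))  # True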

[CHI '20] Podoportation: Foot-Based Locomotion in Virtual Reality

J. von Willich, M. Schmitz, F. Müller, D. Schmitt, M. Mühlhäuser

ABSTRACT - Virtual Reality (VR) allows for infinitely large environments. However, the physical traversable space is always limited by real-world boundaries. This discrepancy between physical and virtual dimensions renders traditional locomotion methods used in the real world unfeasible. To alleviate these limitations, research proposed various artificial locomotion concepts such as teleportation, treadmills, and redirected walking. However, these concepts occupy the user's hands, require complex hardware or large physical spaces. In this paper, we contribute nine VR locomotion concepts for foot-based and hands-free locomotion, relying on the 3D position of the user's feet and the pressure applied to the sole as input modalities. We evaluate our concepts and compare them to the state-of-the-art point & teleport technique in a controlled experiment with 20 participants. The results confirm the viability of our approaches for hands-free and engaging locomotion. Further, based on the findings, we contribute a wireless hardware prototype implementation.

In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
10.1145/3313831.3376626    PDF    Teaser Video   
@inproceedings{willich2020podoportation,
author = {von Willich, Julius and Schmitz, Martin  and M{\"u}ller, Florian and Schmitt, Daniel and M{\"u}hlh{\"a}user, Max},
title = {Podoportation: Foot-Based Locomotion in Virtual Reality},
booktitle = {Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems},
series = {CHI '20},
year = {2020},
isbn = {978-1-4503-6708-0},
location = {Honolulu, HI, USA},
url = {http://dx.doi.org/10.1145/3313831.3376626},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2020/willich2020podoportation.pdf},
teaservideo = {https://www.youtube.com/watch?v=HGP5MN_e-k0},
doi = {10.1145/3313831.3376626},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {Virtual Reality, Locomotion, Foot-based input},
abstract = {Virtual Reality (VR) allows for infinitely large environments. However, the physical traversable space is always limited by real-world boundaries. This discrepancy between physical and virtual dimensions renders traditional locomotion methods used in the real world unfeasible. To alleviate these limitations, research proposed various artificial locomotion concepts such as teleportation, treadmills, and redirected walking. However, these concepts occupy the user's hands, require complex hardware or large physical spaces. In this paper, we contribute nine VR locomotion concepts for foot-based and hands-free locomotion, relying on the 3D position of the user's feet and the pressure applied to the sole as input modalities. We evaluate our concepts and compare them to the state-of-the-art point & teleport technique in a controlled experiment with 20 participants. The results confirm the viability of our approaches for hands-free and engaging locomotion. Further, based on the findings, we contribute a wireless hardware prototype implementation.}
}
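
As a rough illustration of the two input modalities named in the abstract, the following Python sketch confirms a teleport only when the sole is pressed and derives the destination from the tracked foot position; the thresholds, coordinate frame, and clamping radius are invented for illustration, not taken from the paper.

def foot_teleport(foot_pos, sole_pressure, press_threshold=0.8, max_reach_m=6.0):
    """Return a floor target (x, z), or None if the sole press is too light.

    foot_pos: (x, y, z) of the tracked foot relative to the user;
    sole_pressure: normalized pressure reading in 0..1.
    """
    if sole_pressure < press_threshold:
        return None  # no confirmation press on the sole
    x, _, z = foot_pos
    dist = (x * x + z * z) ** 0.5
    if dist > max_reach_m:  # clamp to a comfortable teleport radius
        x, z = x * max_reach_m / dist, z * max_reach_m / dist
    return (x, z)

print(foot_teleport((0.4, 0.1, 0.9), sole_pressure=0.9))  # (0.4, 0.9)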

[CHI '20] Improving the Usability and UX of the Swiss Internet Voting Interface

K. Marky, V. Zimmermann, M. Funk, J. Daubert, K. Bleck, M. Mühlhäuser

ABSTRACT - Up to 20% of residential votes and up to 70% of absentee votes in Switzerland are cast online. The Swiss scheme aims to provide individual verifiability by different verification codes. The voters have to carry out verification on their own, making the usability and UX of the interface of great importance. To improve the usability, we first performed an evaluation with 12 human-computer interaction experts to uncover usability weaknesses of the Swiss Internet voting interface. Based on the experts' findings, related work, and an exploratory user study with 36 participants, we propose a redesign that we evaluated in a user study with 49 participants. Our study confirmed that the redesign indeed improves the detection of incorrect votes by 33% and increases the trust and understanding of the voters. Our studies furthermore contribute important recommendations for designing verifiable e-voting systems in general.

In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
10.1145/3313831.3376769    PDF   
@inproceedings{marky2020swissvoting,
author = {Marky, Karola and Zimmermann, Verena and Funk, Markus and Daubert, J{\"o}rg and Bleck, Kira and M{\"u}hlh{\"a}user, Max},
title = {Improving the Usability and UX of the Swiss Internet Voting Interface},
booktitle = {Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems},
series = {CHI '20},
year = {2020},
isbn = {978-1-4503-6708-0},
location = {Honolulu, HI, USA},
url = {http://dx.doi.org/10.1145/3313831.3376769},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2020/marky2020swissvoting.pdf},
doi = {10.1145/3313831.3376769},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {E-Voting, Individual Verifiability, Usability Evaluation},
abstract = {Up to 20% of residential votes and up to 70% of absentee votes in Switzerland are cast online. The Swiss scheme aims to provide individual verifiability by different verification codes. The voters have to carry out verification on their own, making the usability and UX of the interface of great importance. To improve the usability, we first performed an evaluation with 12 human-computer interaction experts to uncover usability weaknesses of the Swiss Internet voting interface. Based on the experts' findings, related work, and an exploratory user study with 36 participants, we propose a redesign that we evaluated in a user study with 49 participants. Our study confirmed that the redesign indeed improves the detection of incorrect votes by 33% and increases the trust and understanding of the voters. Our studies furthermore contribute important recommendations for designing verifiable e-voting systems in general.}
}


[CHI '20] Therminator: Understanding the Interdependency of Visual and On-Body Thermal Feedback in Virtual Reality

S. Günther, F. Müller, D. Schön, O. Elmoghazy, M. Schmitz, M. Mühlhäuser

ABSTRACT - Recent advances have made Virtual Reality (VR) more realistic than ever before. This improved realism is attributed to today's ability to increasingly appeal to human sensations, such as visual, auditory or tactile. While research also examines temperature sensation as an important aspect, the interdependency of visual and thermal perception in VR is still underexplored. In this paper, we propose Therminator, a thermal display concept that provides warm and cold on-body feedback in VR through heat conduction of flowing liquids with different temperatures. Further, we systematically evaluate the interdependency of different visual and thermal stimuli on the temperature perception of arm and abdomen with 25 participants. As part of the results, we found varying temperature perception depending on the stimuli, as well as increasing involvement of users during conditions with matching stimuli.

In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
10.1145/3313831.3376195    PDF    Teaser Video    Full Video   
@inproceedings{guenther2020therminator,
 author = {G{\"u}nther, Sebastian and M{\"u}ller, Florian and Sch{\"o}n, Dominik and Elmoghazy, Omar and Schmitz, Martin and M{\"u}hlh{\"a}user, Max},
 title = {Therminator: Understanding the Interdependency of Visual and On-Body Thermal Feedback in Virtual Reality},
 booktitle = {Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems},
 series = {CHI '20},
 year = {2020},
 isbn = {978-1-4503-6708-0},
 location = {Honolulu, HI, USA},
 url = {http://dx.doi.org/10.1145/3313831.3376195},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2020/guenther2020therminator.pdf},
 video = {https://www.youtube.com/watch?v=q5lkmqAua78},
 teaservideo = {https://youtu.be/w9FnG1eoWD8},
 doi = {10.1145/3313831.3376195},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {Haptics, Temperature, Thermal Feedback, Virtual Reality},
 abstract = {Recent advances have made Virtual Reality (VR) more realistic than ever before. This improved realism is attributed to today's ability to increasingly appeal to human sensations, such as visual, auditory or tactile. While research also examines temperature sensation as an important aspect, the interdependency of visual and thermal perception in VR is still underexplored. In this paper, we propose Therminator, a thermal display concept that provides warm and cold on-body feedback in VR through heat conduction of flowing liquids with different temperatures. Further, we systematically evaluate the interdependency of different visual and thermal stimuli on the temperature perception of arm and abdomen with 25 participants. As part of the results, we found varying temperature perception depending on the stimuli, as well as increasing involvement of users during conditions with matching stimuli.}
}

[CHI '20] Walk The Line: Leveraging Lateral Shifts of the Walking Path as an Input Modality for Head-Mounted Displays

F. Müller, M. Schmitz, D. Schmitt, S. Günther, M. Funk, M. Mühlhäuser

ABSTRACT - Recent technological advances have made head-mounted displays (HMDs) smaller and untethered, fostering the vision of ubiquitous interaction in a digitally augmented physical world. Consequently, a major part of the interaction with such devices will happen on the go, calling for interaction techniques that allow users to interact while walking. In this paper, we explore lateral shifts of the walking path as a hands-free input modality. The available input options are visualized as lanes on the ground parallel to the user's walking path. Users can select options by shifting the walking path sideways to the respective lane. We contribute the results of a controlled experiment with 18 participants, confirming the viability of our approach for fast, accurate, and joyful interactions. Further, based on the findings of the controlled experiment, we present three example applications.

In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
10.1145/3313831.3376852    PDF    Teaser Video    Full Video   
@inproceedings{mueller2020walktheline,
 author = {M{\"u}ller, Florian and Schmitz, Martin and Schmitt, Daniel and G{\"u}nther, Sebastian and Funk, Markus and M{\"u}hlh{\"a}user, Max},
 title = {Walk The Line: Leveraging Lateral Shifts of the Walking Path as an Input Modality for Head-Mounted Displays},
 booktitle = {Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems},
 series = {CHI '20},
 year = {2020},
 isbn = {978-1-4503-6708-0},
 location = {Honolulu, HI, USA},
 url = {http://dx.doi.org/10.1145/3313831.3376852},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2020/mueller2020walktheline.pdf},
 video = {https://youtu.be/ylAlzFqWx7g},
 teaservideo = {https://youtu.be/6-XrF6J9cTc},
 doi = {10.1145/3313831.3376852},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {Augmented Reality, Head-Mounted Display, Input, Walking},
 abstract = {Recent technological advances have made head-mounted displays (HMDs) smaller and untethered, fostering the vision of ubiquitous interaction in a digitally augmented physical world. Consequently, a major part of the interaction with such devices will happen on the go, calling for interaction techniques that allow users to interact while walking. In this paper, we explore lateral shifts of the walking path as a hands-free input modality. The available input options are visualized as lanes on the ground parallel to the user's walking path. Users can select options by shifting the walking path sideways to the respective lane. We contribute the results of a controlled experiment with 18 participants, confirming the viability of our approach for fast, accurate, and joyful interactions. Further, based on the findings of the controlled experiment, we present three example applications.}
}
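
The core computation is small enough to sketch: the user's lateral offset from the walking path, and its quantization into lanes. Below is a minimal Python version assuming a 2D ground plane and an invented lane width; the paper's actual parameters and sign conventions may differ.

import numpy as np

def lateral_offset(pos, path_origin, path_dir):
    """Signed distance (m) of the user from the walking path in the ground
    plane; the sign convention (positive = right of the path) is assumed."""
    d = np.asarray(path_dir, float)
    d = d / np.linalg.norm(d)
    rel = np.asarray(pos, float) - np.asarray(path_origin, float)
    return float(rel[0] * d[1] - rel[1] * d[0])  # 2D cross product

def nearest_lane(offset_m, lane_width_m=0.5):
    """Quantize a sideways shift into a lane index: 0 keeps the original
    path, positive indices select lanes to the right, negative to the left."""
    return round(offset_m / lane_width_m)

# Walking along +y, currently 0.6 m to the right of the path
print(nearest_lane(lateral_offset((0.6, 3.0), (0, 0), (0, 1))))  # 1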



[CHI EA '20] PneumoVolley: Pressure-based Haptic Feedback on the Head through Pneumatic Actuation

S. Günther, D. Schön, F. Müller, M. Mühlhäuser, M. Schmitz

ABSTRACT - Haptic Feedback brings immersion and presence in Virtual Reality (VR) to the next level. While research proposes the usage of various tactile sensations, such as vibration or ultrasound approaches, the potential applicability of pressure feedback on the head is still underexplored. In this paper, we contribute concepts and design considerations for pressure-based feedback on the head through pneumatic actuation. As a proof-of-concept implementing our pressure-based haptics, we further present PneumoVolley: a VR experience similar to the classic Volleyball game but played with the head. In an exploratory user study with 9 participants, we evaluated our concepts and identified a significantly increased involvement compared to a no-haptics baseline along with high realism and enjoyment ratings using pressure-based feedback on the head in VR.

In Proceedings of the 2020 CHI Conference Extended Abstracts on Human Factors in Computing Systems
10.1145/3334480.3382916    PDF    Teaser Video    Full Video   
@inproceedings{guenther2020pneumovolley,
 author = {G{\"u}nther, Sebastian and Sch{\"o}n, Dominik and M{\"u}ller, Florian and M{\"u}hlh{\"a}user, Max and Schmitz, Martin},
 title = {PneumoVolley: Pressure-based Haptic Feedback on the Head through Pneumatic Actuation},
 booktitle = {Proceedings of the 2020 CHI Conference Extended Abstracts on Human Factors in Computing Systems},
 series = {CHI EA '20},
 year = {2020},
 isbn = {978-1-4503-6708-0},
 location = {Honolulu, HI, USA},
 url = {http://dx.doi.org/10.1145/3334480.3382916},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2020/guenther2020pneumovolley.pdf},
 video = {https://www.youtube.com/watch?v=ZKnV8HrUx9M},
 teaservideo = {https://www.youtube.com/watch?v=-SlrCqF-5m4},
 doi = {10.1145/3334480.3382916},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {Haptics, Pressure, Volleyball, Virtual Reality, Blobbyvolley},
 abstract = {Haptic Feedback brings immersion and presence in Virtual Reality (VR) to the next level. While research proposes the usage of various tactile sensations, such as vibration or ultrasound approaches, the potential applicability of pressure feedback on the head is still underexplored. In this paper, we contribute concepts and design considerations for pressure-based feedback on the head through pneumatic actuation. As a proof-of-concept implementing our pressure-based haptics, we further present PneumoVolley: a VR experience similar to the classic Volleyball game but played with the head. In an exploratory user study with 9 participants, we evaluated our concepts and identified a significantly increased involvement compared to a no-haptics baseline along with high realism and enjoyment ratings using pressure-based feedback on the head in VR.}
}



[DIS '19] You Invaded my Tracking Space! Using Augmented Virtuality for Spotting Passersby in Room-Scale Virtual Reality

J. von Willich, M. Funk, F. Müller, K. Marky, J. Riemann, M. Mühlhäuser

ABSTRACT - With the proliferation of room-scale Virtual Reality (VR), more and more users install a VR system in their homes. When users are in VR, they are usually completely immersed in their application. However, sometimes passersby invade these tracking spaces and walk up to users that are currently immersed in VR to try and interact with them. As this either scares the user in VR or breaks the user's immersion, research has yet to find a way to seamlessly represent physical passersby in virtual worlds. In this paper, we propose and evaluate three different ways to represent physical passersby in a Virtual Environment using Augmented Virtuality. The representations encompass showing a 3D-Scan, showing an Avatar, and showing a 2D-Image of the passerby. Our results show that while a 2D-Image and an Avatar are the fastest representations to spot passersby, the Avatar and the 3D-Scan representations were the most accurate.

In Proceedings of the ACM Conference on Designing Interactive Systems, DIS '19
10.1145/3322276.3322334    Full Video   
@inproceedings{willich2019tracking,
title = {You Invaded my Tracking Space! Using Augmented Virtuality for Spotting Passersby in Room-Scale Virtual Reality},
author = {von Willich, Julius and Funk, Markus and M{\"u}ller, Florian and Marky, Karola and Riemann, Jan and  M{\"u}hlh{\"a}user, Max},
doi = {10.1145/3322276.3322334},
booktitle = {Proceedings of the ACM Conference on Designing Interactive Systems, DIS '19},
keywords = {Virtual Reality; Augmented Reality; Passersby Visualization},
year = {2019},
series = {DIS '19},
video = {https://www.youtube.com/watch?v=SGOFeRX0tmk},
abstract = {With the proliferation of room-scale Virtual Reality (VR), more and more users install a VR  system in their homes. When users are in VR, they are usually completely immersed in their application. However, sometimes passersby invade these tracking spaces and walk up to users that are currently immersed in VR to try and interact with them. As this either scares the user in VR or breaks the user's immersion, research has yet to find a way to seamlessly represent physical passersby in virtual worlds. In this paper, we propose and evaluate three different ways to represent physical passersby in a Virtual Environment using Augmented Virtuality. The representations encompass showing a 3D-Scan, showing an Avatar, and showing a 2D-Image of the passerby. Our results show that while a 2D-Image and an Avatar are the fastest representations to spot passersby, the Avatar  and the 3D-Scan representations were the most accurate.}
}


[DIS '19] PneumAct: Pneumatic Kinesthetic Actuation of Body Joints in Virtual Reality Environments

S. Günther, M. Makhija, F. Müller, D. Schön, M. Mühlhäuser, M. Funk

ABSTRACT - Virtual Reality Environments (VRE) create an immersive user experience through visual, aural, and haptic sensations. However, the latter is often limited to vibrotactile sensations that are not able to actively provide kinesthetic motion actuation. Further, such sensations do not cover natural representations of physical forces, for example, when lifting a weight. We present PneumAct, a jacket to enable pneumatically actuated kinesthetic movements of arm joints in VRE. It integrates two types of actuators inflated through compressed air: a Contraction Actuator and an Extension Actuator. We evaluate our PneumAct jacket through two user studies with a total of 32 participants: First, we perform a technical evaluation measuring the contraction and extension angles of different inflation patterns and inflation durations. Second, we evaluate PneumAct in three VRE scenarios, comparing our system to traditional controller-based vibrotactile feedback and a baseline without haptic feedback.

In Proceedings of the ACM Conference on Designing Interactive Systems, DIS '19
10.1145/3322276.3322302    Teaser Video   
@inproceedings{guenther2019pneumact,
title = {PneumAct: Pneumatic Kinesthetic Actuation of Body Joints in Virtual Reality Environments},
author = {G{\"u}nther, Sebastian and Makhija, Mohit and M{\"u}ller, Florian and Sch{\"o}n, Dominik and M{\"u}hlh{\"a}user, Max and Funk, Markus},
doi = {10.1145/3322276.3322302},
booktitle = {Proceedings of the ACM Conference on Designing Interactive Systems, DIS '19},
keywords = {Compressed Air,Force Feedback,Kinesthetic,Pneumatic,haptics,virtual Reality},
year = {2019},
series = {DIS '19},
teaservideo = {https://youtu.be/4lRWxzs4Rgs},
abstract={Virtual Reality Environments (VRE) create an immersive user experience through visual, aural, and haptic sensations. However, the latter is often limited to vibrotactile sensations that are not able to actively provide kinesthetic motion actuation. Further, such sensations do not cover natural representations of physical forces, for example, when lifting a weight. We present PneumAct, a jacket to enable pneumatically actuated kinesthetic movements of arm joints in VRE. It integrates two types of actuators inflated through compressed air: a Contraction Actuator and an Extension Actuator. We evaluate our PneumAct jacket through two user studies with a total of 32 participants: First, we perform a technical evaluation measuring the contraction and extension angles of different inflation patterns and inflation durations. Second, we evaluate PneumAct in three VRE scenarios, comparing our system to traditional controller-based vibrotactile feedback and a baseline without haptic feedback.}
}


[CHI EA '19] Slappyfications: Towards Ubiquitous Physical and Embodied Notifications

S. Günther, F. Müller, M. Funk, M. Mühlhäuser

ABSTRACT - Despite emerging trends of notifying persons through ubiquitous technologies, such as ambient light, vibrotactile, or auditory cues, none of these technologies is truly ubiquitous, and all have proven to be easily missed or ignored. In this work, we propose Slappyfications, a novel way of sending unmissable embodied and ubiquitous notifications over a distance. Our proof-of-concept prototype enables the users to send three types of Slappyfications: poke, slap, and the STEAM-HAMMER. Through a Wizard-of-Oz study, we show the applicability of our system in real-world scenarios. The results reveal a promising trend, as none of the participants missed a single Slappyfication.

In Proceedings of the 2019 CHI Conference Extended Abstracts on Human Factors in Computing Systems
10.1145/3290607.3311780    PDF    Full Video   
@inproceedings{guenther2019slappyfications,
title={Slappyfications: Towards Ubiquitous Physical and Embodied Notifications},
author={G{\"u}nther, Sebastian and M{\"u}ller, Florian and Funk, Markus and M{\"u}hlh{\"a}user, Max},
booktitle = {Proceedings of the 2019 CHI Conference Extended Abstracts on Human Factors in Computing Systems},
series = {CHI EA '19},
doi={10.1145/3290607.3311780},
year={2019},
video = {https://www.youtube.com/watch?v=qDmrSgyV20s},
abstract={Despite emerging trends of notifying persons through ubiquitous technologies, such as ambient light, vibrotactile, or auditory cues, none of these technologies is truly ubiquitous, and all have proven to be easily missed or ignored. In this work, we propose Slappyfications, a novel way of sending unmissable embodied and ubiquitous notifications over a distance. Our proof-of-concept prototype enables the users to send three types of Slappyfications: poke, slap, and the STEAM-HAMMER. Through a Wizard-of-Oz study, we show the applicability of our system in real-world scenarios. The results reveal a promising trend, as none of the participants missed a single Slappyfication.},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2019/guenther2019slappyfications.pdf}
}

[CHI '19] Mind the Tap: Assessing Foot-Taps for Interacting with Head-Mounted Displays

F. Müller, J. McManus, S. Günther, M. Schmitz, M. Mühlhäuser, M. Funk

ABSTRACT - From voice commands and air taps to touch gestures on frames: Various techniques for interacting with head-mounted displays (HMDs) have been proposed. While these techniques have both benefits and drawbacks dependent on the current situation of the user, research on interacting with HMDs has not concluded yet. In this paper, we add to the body of research on interacting with HMDs by exploring foot-tapping as an input modality. Through two controlled experiments with a total of 36 participants, we first explore direct interaction with interfaces that are displayed on the floor and require the user to look down to interact. Secondly, we investigate indirect interaction with interfaces that, although operated by the user's feet, are always visible as they are floating in front of the user. Based on the results of the two experiments, we provide design recommendations for direct and indirect foot-based user interfaces.

In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
10.1145/3290605.3300707    PDF    Teaser Video    Full Video   
@inproceedings{mueller2019mind,
title={Mind the Tap: Assessing Foot-Taps for Interacting with Head-Mounted Displays},
author={M{\"u}ller, Florian and McManus, Joshua and G{\"u}nther, Sebastian and Schmitz, Martin and M{\"u}hlh{\"a}user, Max and Funk, Markus},
booktitle={Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
doi={10.1145/3290605.3300707},
year={2019},
series = {CHI '19},
teaservideo={https://www.youtube.com/watch?v=RhabMsP0X14},
video={https://www.youtube.com/watch?v=D5hTVIEb7iA},
abstract={From voice commands and air taps to touch gestures on frames: Various techniques for interacting with head-mounted displays (HMDs) have been proposed. While these techniques have both benefits and drawbacks dependent on the current situation of the user, research on interacting with HMDs has not concluded yet. In this paper, we add to the body of research on interacting with HMDs by exploring foot-tapping as an input modality. Through two controlled experiments with a total of 36 participants, we first explore direct interaction with interfaces that are displayed on the floor and require the user to look down to interact. Secondly, we investigate indirect interaction with interfaces that, although operated by the user's feet, are always visible as they are floating in front of the user. Based on the results of the two experiments, we provide design recommendations for direct and indirect foot-based user interfaces.},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2019/mueller2019mindthetap.pdf}
}
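
A toy tap detector over a stream of tracked foot heights makes the input modality concrete; the lift and contact thresholds below are invented, and a real system would also debounce and likely fuse pressure or acceleration data.

def detect_taps(heights_m, lift_m=0.05, floor_m=0.01):
    """Yield one index per tap: whenever the foot returns to the floor
    after having been lifted above the lift threshold."""
    lifted = False
    for i, h in enumerate(heights_m):
        if not lifted and h > lift_m:
            lifted = True           # foot left the floor
        elif lifted and h <= floor_m:
            lifted = False          # foot back down: count a tap
            yield i

# Two taps in a synthetic foot-height trace (metres, one sample per frame)
trace = [0.0, 0.06, 0.09, 0.0, 0.0, 0.07, 0.0]
print(list(detect_taps(trace)))  # [3, 6]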

[CHI '19] Assessing the Accuracy of Point & Teleport Locomotion with Orientation Indication for Virtual Reality using Curved Trajectories

M. Funk, F. Müller, M. Fendrich, M. Shene, M. Kolvenbach, N. Dobbertin, S. Günther, M. Mühlhäuser

ABSTRACT - Room-scale Virtual Reality (VR) systems have arrived in users’ homes where tracked environments are set up in limited physical spaces. As most Virtual Environments (VEs) are larger than the tracked physical space, locomotion techniques are used to navigate in VEs. Currently, in recent VR games, point & teleport is the most popular locomotion technique. However, it only allows users to select the position of the teleportation and not the orientation that the user is facing after the teleport. This results in users having to manually correct their orientation after teleporting and possibly getting entangled by the cable of the headset. In this paper, we introduce and evaluate three different point & teleport techniques that enable users to specify the target orientation while teleporting. The results show that, although the three teleportation techniques with orientation indication increase the average teleportation time, they lead to a decreased need for correcting the orientation after teleportation.

In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
10.1145/3290605.3300377    PDF    Teaser Video    Full Video   
@inproceedings{funk2019assessing,
title={Assessing the Accuracy of Point \& Teleport Locomotion with Orientation Indication for Virtual Reality using Curved Trajectories},
author={Funk, Markus and M{\"u}ller, Florian and Fendrich, Marco and Shene, Megan and Kolvenbach, Moritz and Dobbertin, Niclas and G{\"u}nther, Sebastian and M{\"u}hlh{\"a}user, Max},
booktitle={Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
doi={10.1145/3290605.3300377},
year={2019},
series = {CHI '19},
teaservideo={https://www.youtube.com/watch?v=klu82WxeBlA},
video={https://www.youtube.com/watch?v=uXctClcQu_g},
abstract={Room-scale Virtual Reality (VR) systems have arrived in users’ homes where tracked environments are set up in limited physical spaces. As most Virtual Environments (VEs) are larger than the tracked physical space, locomotion techniques are used to navigate in VEs. Currently, in recent VR games, point & teleport is the most popular locomotion technique. However, it only allows users to select the position of the teleportation and not the orientation that the user is facing after the teleport. This results in users having to manually correct their orientation after teleporting and possibly getting entangled by the cable of the headset. In this paper, we introduce and evaluate three different point & teleport techniques that enable users to specify the target orientation while teleporting. The results show that, although the three teleportation techniques with orientation indication increase the average teleportation time, they lead to a decreased need for correcting the orientation after teleportation.},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2019/funk2019assessing.pdf}
}
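
Curved pointing rays of this kind are commonly implemented as a ballistic arc sampled until it intersects the floor. The Python sketch below follows that common pattern under assumed parameters (flat floor at y = 0, fixed launch speed); it is not the authors' exact implementation and omits the orientation indication.

import numpy as np

def teleport_arc(origin, direction, speed=8.0, g=9.81, dt=0.02, max_t=3.0):
    """Sample a parabolic arc from the controller pose; return the sampled
    points and the approximate floor hit (first point with y <= 0)."""
    v = np.asarray(direction, float)
    v = v / np.linalg.norm(v) * speed
    p = np.asarray(origin, float).copy()
    points = [p.copy()]
    for _ in range(int(max_t / dt)):
        v = v + np.array([0.0, -g, 0.0]) * dt  # gravity bends the ray down
        p = p + v * dt
        points.append(p.copy())
        if p[1] <= 0.0:                        # crossed the floor plane
            break
    return np.array(points), p

# Pointing slightly upward from a controller held at 1.2 m height
arc, hit = teleport_arc([0.0, 1.2, 0.0], [0.0, 0.3, -1.0])
print(hit)  # landing position of the teleport arc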


[CHI '19] ./trilaterate: A Fabrication Pipeline to Design and 3D Print Hover-, Touch-, and Force-Sensitive Objects

M. Schmitz, M. Stitz, F. Müller, M. Funk, M. Mühlhäuser

ABSTRACT - Hover, touch, and force are promising input modalities that get increasingly integrated into screens and everyday objects. However, these interactions are often limited to flat surfaces and the integration of suitable sensors is time-consuming and costly. To alleviate these limitations, we contribute Trilaterate: A fabrication pipeline to 3D print custom objects that detect the 3D position of a finger hovering, touching, or forcing them by combining multiple capacitance measurements via capacitive trilateration. Trilaterate places and routes actively-shielded sensors inside the object and operates on consumer-level 3D printers. We present technical evaluations and example applications that validate and demonstrate the wide applicability of Trilaterate.

In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
10.1145/3290605.3300684    PDF    Teaser Video   
@inproceedings{schmitz2019trilaterate,
title={./trilaterate: A Fabrication Pipeline to Design and 3D Print Hover-, Touch-, and Force-Sensitive Objects},
author={Schmitz, Martin and Stitz, Martin and M{\"u}ller, Florian and Funk, Markus and M{\"u}hlh{\"a}user, Max},
booktitle={Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
series = {CHI '19},
doi={10.1145/3290605.3300684},
year={2019},
teaservideo={https://www.youtube.com/watch?v=QJNmH_IvarY},
abstract={Hover, touch, and force are promising input modalities that get increasingly integrated into screens and everyday objects. However, these interactions are often limited to flat surfaces and the integration of suitable sensors is time-consuming and costly. 
To alleviate these limitations, we contribute Trilaterate: A fabrication pipeline to 3D print custom objects that detect the 3D position of a finger hovering, touching, or forcing them by combining multiple capacitance measurements via capacitive trilateration. Trilaterate places and routes actively-shielded sensors inside the object and operates on consumer-level 3D printers. We present technical evaluations and example applications that validate and demonstrate the wide applicability of Trilaterate.},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2019/schmitz2019trilaterate.pdf}
}
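
The capacitive trilateration at the core of the pipeline can be illustrated with textbook trilateration: given several sensor positions and distance estimates derived from capacitance, subtracting one sphere equation from the others yields a linear system. A self-contained Python sketch with an invented sensor layout and noise-free distances:

import numpy as np

def trilaterate(sensors, dists):
    """Least-squares position from n >= 4 non-coplanar sensors.

    Subtracting the first sphere equation |x - p_0|^2 = d_0^2 from the
    others linearizes |x - p_i|^2 = d_i^2 into A x = b.
    """
    sensors = np.asarray(sensors, float)
    dists = np.asarray(dists, float)
    p0, d0 = sensors[0], dists[0]
    A = 2.0 * (sensors[1:] - p0)
    b = (np.sum(sensors[1:] ** 2, axis=1) - dists[1:] ** 2) \
        - (np.sum(p0 ** 2) - d0 ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

sensors = [[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]]
finger = np.array([3.0, 4.0, 5.0])
dists = np.linalg.norm(np.asarray(sensors, float) - finger, axis=1)
print(trilaterate(sensors, dists))  # ~[3. 4. 5.]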

[CHI EA '19] LookUnlock: Using Spatial-Targets for User-Authentication on HMDs

M. Funk, K. Marky, I. Mizutani, M. Kritzler, S. Mayer, F. Michahelles

ABSTRACT - With head-mounted displays (HMDs), users can access and interact with a broad range of applications and data. Although some of this information is privacy-sensitive or even confidential, no intuitive, unobtrusive and secure authentication technique is available yet for HMDs. We present LookUnlock, an authentication technique for HMDs that uses passwords that are composed of spatial and virtual targets. Through a proof-of-concept implementation and security evaluation, we demonstrate that this technique can be efficiently used by people and is resistant to shoulder-surfing attacks.

In Proceedings of the 2019 CHI Conference Extended Abstracts on Human Factors in Computing Systems
10.1145/3290607.3312959    PDF    Teaser Video   
@inproceedings{funk2019lookunlock,
title={LookUnlock: Using Spatial-Targets for User-Authentication on HMDs},
author={Funk, Markus and Marky, Karola and Mizutani, Iori and Kritzler, Mareike and Mayer, Simon and Michahelles, Florian},
booktitle = {Proceedings of the 2019 CHI Conference Extended Abstracts on Human Factors in Computing Systems},
series = {CHI EA '19},
doi={10.1145/3290607.3312959},
year={2019},
teaservideo={https://www.youtube.com/watch?v=NA0EMlK0zrI},
abstract={With head-mounted displays (HMDs), users can access and interact with a broad range of applications and data. Although some of this information is privacy-sensitive or even confidential, no intuitive, unobtrusive and secure authentication technique is available yet for HMDs. We present LookUnlock, an authentication technique for HMDs that uses passwords that are composed of spatial and virtual targets. Through a proof-of-concept implementation and security evaluation, we demonstrate that this technique can be efficiently used by people and is resistant to shoulder-surfing attacks.},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2019/funk2019lookunlock.pdf}
}
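
For illustration, a dwell-based entry loop for a spatial-target password might look as follows; the target identifiers, dwell time, and exact-sequence check are assumptions for this sketch rather than details from the paper.

class GazePassword:
    """Dwell-based entry of a password composed of spatial targets."""

    def __init__(self, secret, dwell_s=1.0):
        self.secret, self.dwell_s = list(secret), dwell_s
        self.entered = []
        self._current, self._since = None, 0.0

    def on_gaze(self, target_id, now_s):
        """Feed the currently gazed target once per frame; returns True
        once the entered sequence equals the secret."""
        if target_id != self._current:
            self._current, self._since = target_id, now_s
        elif target_id is not None and now_s - self._since >= self.dwell_s:
            self.entered.append(target_id)
            self._since = now_s  # re-arm so one dwell selects only once
        return self.entered == self.secret

pw = GazePassword(["plant", "window", "door"])
unlocked = False
for i, target in enumerate(["plant"] * 12 + ["window"] * 12 + ["door"] * 12):
    unlocked = pw.on_gaze(target, now_s=i * 0.1) or unlocked
print(unlocked)  # True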

[CHI EA '19] Usability of Code Voting Modalities

K. Marky, M. Schmitz, F. Lange, M. Mühlhäuser

ABSTRACT - Internet voting has promising benefits, such as cost reduction, but it also introduces drawbacks: the computer that is used for voting learns the voter's choice. Code voting aims to protect the voter's choice by the introduction of voting codes that are listed on paper. To cast a vote, the voters need to provide the voting code belonging to their choice. The additional step influences the usability. We investigate three modalities for entering voting codes: manual, QR-codes and tangibles. The results show that QR-codes offer the best usability while tangibles are perceived as the most novel and fun.

In CHI Conference on Human Factors in Computing Systems Late Breaking Work
10.1145/3290607.3312971    Teaser Video   
@inproceedings{marky2019usability,
title = {Usability of Code Voting Modalities},
publisher = {ACM},
year = {2019},
author = {Marky, Karola and Schmitz, Martin and Lange, Felix and M{\"u}hlh{\"a}user, Max},
booktitle = {CHI Conference on Human Factors in Computing Systems Late Breaking Work},
series = {CHI EA '19},
keywords = {E-Voting; Code Voting; Tangibles; Usability Evaluation},
abstract = {Internet voting has promising benefits, such as cost reduction, but it also introduces drawbacks: the computer that is used for voting learns the voter's choice. Code voting aims to protect the voter's choice by the introduction of voting codes that are listed on paper. To cast a vote, the voters need to provide the voting code belonging to their choice. The additional step influences the usability. We investigate three modalities for entering voting codes: manual, QR-codes and tangibles. The results show that QR-codes offer the best usability while tangibles are perceived as the most novel and fun.},
url = {http://tubiblio.ulb.tu-darmstadt.de/111897/},
doi = {10.1145/3290607.3312971},
teaservideo = {https://www.youtube.com/watch?v=tykP_IrVOIk},
}


[CHI EA '19] VRChairRacer: Using an Office Chair Backrest as a Locomotion Technique for VR Racing Games

J. von Willich, D. Schön, S. Günther, F. Müller, M. Mühlhäuser, M. Funk

ABSTRACT - Locomotion in Virtual Reality (VR) is an important topic, as there is a mismatch between the size of a Virtual Environment and the physically available tracking space. Although many locomotion techniques have been proposed, research on VR locomotion has not concluded yet. In this demonstration, we contribute to the area of VR locomotion by introducing VRChairRacer. VRChairRacer introduces a novel mapping of the velocity of a racing cart onto the backrest of an office chair. Further, it maps the user's rotation onto the steering of a virtual racing cart. VRChairRacer demonstrates this locomotion technique to the community through an immersive multiplayer racing demo.

In Proceedings of the 2019 CHI Conference Extended Abstracts on Human Factors in Computing Systems
10.1145/3290607.3313254    PDF    Teaser Video    Full Video   
@inproceedings{willich2019vrchairracer,
title={VRChairRacer: Using an Office Chair Backrest as a Locomotion Technique for VR Racing Games},
author={von Willich, Julius and Sch{\"o}n, Dominik and G{\"u}nther, Sebastian and M{\"u}ller, Florian and M{\"u}hlh{\"a}user, Max and Funk, Markus},
booktitle = {Proceedings of the 2019 CHI Conference Extended Abstracts on Human Factors in Computing Systems},
series = {CHI EA '19},
doi={10.1145/3290607.3313254},
year={2019},
teaservideo={https://www.youtube.com/watch?v=8ukVghWoTlE},
video={https://www.youtube.com/watch?v=v906aGntoKY},
abstract={Locomotion in Virtual Reality (VR) is an important topic, as there is a mismatch between the size of a Virtual Environment and the physically available tracking space. Although many locomotion techniques have been proposed, research on VR locomotion has not concluded yet. In this demonstration, we contribute to the area of VR locomotion by introducing VRChairRacer. VRChairRacer introduces a novel mapping of the velocity of a racing cart onto the backrest of an office chair. Further, it maps the user's rotation onto the steering of a virtual racing cart. VRChairRacer demonstrates this locomotion technique to the community through an immersive multiplayer racing demo.},
file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2019/willich2019vrchairracer.pdf}
}
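
The mapping itself is compact enough to sketch: backrest lean becomes throttle and chair rotation becomes steering. The ranges below are invented; the demo's actual calibration may differ.

def chair_to_controls(backrest_lean_deg, chair_yaw_deg,
                      max_lean_deg=30.0, max_steer_deg=90.0):
    """Map the chair pose to (throttle in 0..1, steering in -1..1)."""
    throttle = min(max(backrest_lean_deg / max_lean_deg, 0.0), 1.0)
    steering = min(max(chair_yaw_deg / max_steer_deg, -1.0), 1.0)
    return throttle, steering

print(chair_to_controls(15.0, -45.0))  # (0.5, -0.5): half throttle, steer left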

[PETRA '19] APS: A 3D Human Body Posture Set as a Baseline for Posture Guidance

H. Elsayed, M. Weigel, J. von Willich, M. Funk, M. Mühlhäuser

ABSTRACT - Human body postures are an important input modality for motion guidance and other application domains in HCI, e.g. games, character animations, and interaction with public displays. However, for training and guidance of body postures, prior research had to define its own whole-body gesture sets. Hence, the interaction designs and evaluation results are difficult to compare, due to the lack of a standardized posture set. In this work, we contribute APS (APS Posture Set), a novel posture set including 40 body postures. It is based on prior research, sports, and body language. For each identified posture, we collected 3D posture data using a Microsoft Kinect. We make the skeleton data, 3D mesh objects and SMPL data available for future research. Taken together, APS can be used to facilitate the design of interfaces that use body gestures and as a reference set for future user studies and system evaluations.

In Proceedings of the 12th PErvasive Technologies Related to Assistive Environments Conference
10.1145/3316782.3324012   
@inproceedings{elsayed2019aps,
title = {APS: A 3D Human Body Posture Set as a Baseline for Posture Guidance},
author = {Elsayed, Hesham and Weigel, Martin and von Willich, Julius and Funk, Markus and M{\"u}hlh{\"a}user, Max},
doi = {10.1145/3316782.3324012},
booktitle = {Proceedings of the 12th PErvasive Technologies Related to Assistive Environments Conference},
year = {2019},
series = {PETRA '19},
acmid = {3324012},
publisher = {ACM},
address = {New York, NY, USA},
abstract = {Human body postures are an important input modality for motion guidance and other application domains in HCI, e.g. games, character animations, and interaction with public displays. However, for training and guidance of body postures, prior research had to define its own whole-body gesture sets. Hence, the interaction designs and evaluation results are difficult to compare, due to the lack of a standardized posture set. In this work, we contribute APS (APS Posture Set), a novel posture set including 40 body postures. It is based on prior research, sports, and body language. For each identified posture, we collected 3D posture data using a Microsoft Kinect. We make the skeleton data, 3D mesh objects and SMPL data available for future research. Taken together, APS can be used to facilitate the design of interfaces that use body gestures and as a reference set for future user studies and system evaluations.}
}

[PETRA '18] TactileGlove: Assistive Spatial Guidance in 3D Space Through Vibrotactile Navigation

S. Günther, F. Müller, M. Funk, J. Kirchner, N. Dezfuli, M. Mühlhäuser

ABSTRACT - With the recent advance in computing technology, more and more environments are becoming interactive. For interacting with these environments, traditionally 2D input and output elements are being used. However, recently interaction spaces also expanded to 3D space, which enabled new possibilities but also led to challenges in assisting users with interacting in such a 3D space. Usually, this challenge of communicating 3D positions is solved visually. This paper explores a different approach: spatial guidance through vibrotactile instructions. Therefore, we introduce TactileGlove, a smart glove equipped with vibrotactile actuators for providing spatial guidance in 3D space. We contribute a user study with 15 participants to explore how a different number of actuators and metaphors affect the user performance. As a result, we found that using a Pull metaphor for vibrotactile navigation instructions is preferred by our participants. Further, we found that using a higher number of actuators reduces the target acquisition time compared to using a low number.

In Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference
10.1145/3197768.3197785    PDF   
@inproceedings{guenther2018tactileglove,
 author = {G\"{u}nther, Sebastian and M\"{u}ller, Florian and Funk, Markus and Kirchner, Jan and Dezfuli, Niloofar and M\"{u}hlh\"{a}user, Max},
 title = {TactileGlove: Assistive Spatial Guidance in 3D Space Through Vibrotactile Navigation},
 booktitle = {Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference},
 series = {PETRA '18},
 year = {2018},
 isbn = {978-1-4503-6390-7},
 location = {Corfu, Greece},
 pages = {273--280},
 numpages = {8},
 url = {http://doi.acm.org/10.1145/3197768.3197785},
 doi = {10.1145/3197768.3197785},
 acmid = {3197785},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {3D-Space, Assistive Technology, Haptics, Navigation, Pull Push Metaphors, Spatial Guidance, Vibrotactile},
 abstract={With the recent advance in computing technology, more and more environments are becoming interactive. For interacting with these environments, traditionally 2D input and output elements are being used. However, recently interaction spaces also expanded to 3D space, which enabled new possibilities but also led to challenges in assisting users with interacting in such a 3D space. Usually, this challenge of communicating 3D positions is solved visually. This paper explores a different approach: spatial guidance through vibrotactile instructions. Therefore, we introduce TactileGlove, a smart glove equipped with vibrotactile actuators for providing spatial guidance in 3D space. We contribute a user study with 15 participants to explore how a different number of actuators and metaphors affect the user performance. As a result, we found that using a Pull metaphor for vibrotactile navigation instructions is preferred by our participants. Further, we found that using a higher number of actuators reduces the target acquisition time compared to using a low number.},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2018/gunther2018tactileglove.pdf}
}
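
The preferred Pull metaphor can be sketched as a simple cueing policy: vibrate the actuator whose direction best matches the direction toward the target, scaling intensity with the remaining distance. The actuator layout, hand frame, and scaling below are assumptions for illustration.

import numpy as np

ACTUATORS = {  # unit directions in the hand frame (layout assumed)
    "up": (0, 1, 0), "down": (0, -1, 0), "left": (-1, 0, 0),
    "right": (1, 0, 0), "forward": (0, 0, -1), "back": (0, 0, 1),
}

def pull_cue(hand_pos, target_pos, full_scale_m=0.5):
    """Return (actuator name, intensity in 0..1) that 'pulls' the hand
    toward the target; stronger vibration when the target is farther."""
    to_target = np.asarray(target_pos, float) - np.asarray(hand_pos, float)
    dist = float(np.linalg.norm(to_target))
    if dist < 1e-6:
        return None, 0.0  # target reached, stop vibrating
    direction = to_target / dist
    name = max(ACTUATORS, key=lambda k: np.dot(ACTUATORS[k], direction))
    return name, min(dist / full_scale_m, 1.0)

print(pull_cue((0, 0, 0), (0.1, 0.4, 0.0)))  # ('up', ~0.82)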


[CHI EA '18] CheckMate: Exploring a Tangible Augmented Reality Interface for Remote Interaction

S. Günther, F. Müller, M. Schmitz, J. Riemann, N. Dezfuli, M. Funk, D. Schön, M. Mühlhäuser

ABSTRACT - The digitalized world comes with increasing Internet capabilities, making it easier than ever to connect people over a distance. Video conferencing and similar online applications bring people together virtually when they cannot physically spend as much time with each other as they would like. However, such remote experiences tend to lose the feeling of traditional, co-located ones: people lack direct visual presence, and no haptic feedback is available. In this paper, we tackle this problem by introducing our system called CheckMate. We combine Augmented Reality and capacitive 3D-printed objects that can be sensed on an interactive surface to enable remote interaction while providing the same tangible experience as in co-located scenarios. As a proof-of-concept, we implemented a sample application based on the traditional chess game.

In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems
10.1145/3170427.3188647    PDF    Teaser Video   
@inproceedings{guenther2018checkmate,
 author = {G\"{u}nther, Sebastian and M\"{u}ller, Florian and Schmitz, Martin and Riemann, Jan and Dezfuli, Niloofar and Funk, Markus and Sch\"{o}n, Dominik and M\"{u}hlh\"{a}user, Max},
 title = {CheckMate: Exploring a Tangible Augmented Reality Interface for Remote Interaction},
 booktitle = {Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems},
 series = {CHI EA '18},
 year = {2018},
 isbn = {978-1-4503-5621-3},
 location = {Montreal QC, Canada},
 pages = {LBW570:1--LBW570:6},
 articleno = {LBW570},
 numpages = {6},
 url = {http://doi.acm.org/10.1145/3170427.3188647},
 doi = {10.1145/3170427.3188647},
 acmid = {3188647},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {3d fabrication, augmented reality, chess, mixed reality, remote collaboration, tabletops, tangibles},
 teaservideo={https://www.youtube.com/watch?v=Geyr95Nl8mc},
 abstract={The digitalized world comes with increasing Internet capabilities, making it easier than ever to connect people over a distance. Video conferencing and similar online applications bring people together virtually when they cannot physically spend as much time with each other as they would like. However, such remote experiences tend to lose the feeling of traditional, co-located ones: people lack direct visual presence, and no haptic feedback is available. In this paper, we tackle this problem by introducing our system called CheckMate. We combine Augmented Reality and capacitive 3D-printed objects that can be sensed on an interactive surface to enable remote interaction while providing the same tangible experience as in co-located scenarios. As a proof-of-concept, we implemented a sample application based on the traditional chess game.},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2019/guenther2018checkmate.pdf}
}

[CHI EA '18] Personalized User-Carried Single Button Interfaces As Shortcuts for Interacting with Smart Devices

F. Müller, M. Schmitz, M. Funk, S. Günther, N. Dezfuli, M. Mühlhäuser

ABSTRACT - We are experiencing a trend of integrating computing functionality into more and more common and popular devices. While these so-called smart devices offer many possibilities for automation and personalization of everyday routines, interacting with them and customizing them requires either programming efforts or a smartphone app to control the devices. In this work, we propose and classify Personalized User-Carried Single Button Interfaces (PUCSBIs) as shortcuts for interacting with smart devices. We implement a proof-of-concept of such an interface for a coffee machine. Through an in-the-wild deployment of the coffee machine for approximately three months, we report initial experiences from 40 participants using PUCSBIs for interacting with smart devices.

In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems
10.1145/3170427.3188661    PDF    Teaser Video   
@inproceedings{mueller2018pucsbi,
 author = {M\"{u}ller, Florian and Schmitz, Martin and Funk, Markus and G\"{u}nther, Sebastian and Dezfuli, Niloofar and M\"{u}hlh\"{a}user, Max},
 title = {Personalized User-Carried Single Button Interfaces As Shortcuts for Interacting with Smart Devices},
 booktitle = {Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems},
 series = {CHI EA '18},
 year = {2018},
 isbn = {978-1-4503-5621-3},
 location = {Montreal QC, Canada},
 pages = {LBW602:1--LBW602:6},
 articleno = {LBW602},
 numpages = {6},
 url = {http://doi.acm.org/10.1145/3170427.3188661},
 doi = {10.1145/3170427.3188661},
 acmid = {3188661},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {human factors, interaction, smart devices},
 teaservideo={https://www.youtube.com/watch?v=Z5wicorfmxU},
 abstract={We are experiencing a trend of integrating computing functionality into more and more common and popular devices. While these so-called smart devices offer many possibilities for automation and personalization of everyday routines, interacting with them and customizing them requires either programming efforts or a smartphone app to control the devices. In this work, we propose and classify Personalized User-Carried Single Button Interfaces (PUCSBIs) as shortcuts for interacting with smart devices. We implement a proof-of-concept of such an interface for a coffee machine. Through an in-the-wild deployment of the coffee machine for approximately three months, we report initial experiences from 40 participants using PUCSBIs for interacting with smart devices.},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2018/mueller_pucsbi.pdf}
}

[CHI '18] Off-Line Sensing: Memorizing Interactions in Passive 3D-Printed Objects

M. Schmitz, M. Herbers, N. Dezfuli, S. Günther, M. Mühlhäuser

ABSTRACT - Embedding sensors into objects allows them to recognize various interactions. However, sensing usually requires active electronics that are often costly, need time to be assembled, and constantly draw power. Thus, we propose off-line sensing: passive 3D-printed sensors that detect one-time interactions, such as accelerating or flipping, but neither require active electronics nor power at the time of the interaction. They memorize a pre-defined interaction via an embedded structure filled with a conductive medium (e.g., a liquid). Whether a sensor was exposed to the interaction can be read-out via a capacitive touchscreen. Sensors are printed in a single pass on a consumer-level 3D printer. Through a series of experiments, we show the feasibility of off-line sensing.

In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
10.1145/3173574.3173756    PDF    Teaser Video   
@inproceedings{schmitz2018offline,
 author = {Schmitz, Martin and Herbers, Martin and Dezfuli, Niloofar and G\"{u}nther, Sebastian and M\"{u}hlh\"{a}user, Max},
 title = {Off-Line Sensing: Memorizing Interactions in Passive 3D-Printed Objects},
 booktitle = {Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems},
 series = {CHI '18},
 year = {2018},
 isbn = {978-1-4503-5620-6},
 location = {Montreal QC, Canada},
 pages = {182:1--182:8},
 articleno = {182},
 numpages = {8},
 url = {http://doi.acm.org/10.1145/3173574.3173756},
 doi = {10.1145/3173574.3173756},
 acmid = {3173756},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {3d printing, capacitive sensing, digital fabrication, input, mechanism, metamaterial, sensors},
 teaservideo={https://www.youtube.com/watch?v=19dDaeBEnPM},
 abstract={Embedding sensors into objects allows them to recognize various interactions. However, sensing usually requires active electronics that are often costly, need time to be assembled, and constantly draw power. Thus, we propose off-line sensing: passive 3D-printed sensors that detect one-time interactions, such as accelerating or flipping, but neither require active electronics nor power at the time of the interaction. They memorize a pre-defined interaction via an embedded structure filled with a conductive medium (e.g., a liquid). Whether a sensor was exposed to the interaction can be read-out via a capacitive touchscreen. Sensors are printed in a single pass on a consumer-level 3D printer. Through a series of experiments, we show the feasibility of off-line sensing.},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2018/schmitz2018offline.pdf}
}

[IMWUT '18] FlowPut: Environment-Aware Interactivity for Tangible 3D Objects

J. Riemann, M. Schmitz, A. Hendrich, M. Mühlhäuser

ABSTRACT - Tangible interaction has shown to be beneficial in a wide variety of scenarios, since it provides more direct manipulation and haptic feedback. Further, inherently three-dimensional information is represented more naturally by a 3D object than by a flat picture on a screen. Yet, today's tangibles often have pre-defined form factors and limited input and output facilities. To overcome this issue, the combination of projection and depth cameras is used as a fast and flexible way of non-intrusively adding input and output to tangibles. However, tangibles are often quite small, and hence the space for output and interaction on their surface is limited. Therefore, we propose FlowPut: an environment-aware framework that utilizes the space available on and around a tangible object for projected visual output. By means of an optimization-based layout approach, FlowPut considers the environment of the objects to avoid interference between projection and real-world objects. Moreover, we contribute occlusion-resilient object recognition and tracking for tangible objects based on their 3D model, and a point-cloud-based multi-touch detection that allows sensing touches also on the side of a tangible. FlowPut is validated through a series of technical experiments, a user study, and two example applications.

In Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
10.1145/3191763    PDF   
@article{riemann2018flowput,
 author = {Riemann, Jan and Schmitz, Martin and Hendrich, Alexander and M\"{u}hlh\"{a}user, Max},
 title = {FlowPut: Environment-Aware Interactivity for Tangible 3D Objects},
 journal = {Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies},
 issue_date = {March 2018},
 series = {IMWUT '18},
 volume = {2},
 number = {1},
 month = mar,
 year = {2018},
 issn = {2474-9567},
 pages = {31:1--31:23},
 articleno = {31},
 numpages = {23},
 url = {http://doi.acm.org/10.1145/3191763},
 doi = {10.1145/3191763},
 acmid = {3191763},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {Displays, layout, object tracking, optimization, projection, touch},
 abstract={Tangible interaction has shown to be beneficial in a wide variety of scenarios, since it provides more direct manipulation and haptic feedback. Further, inherently three-dimensional information is represented more naturally by a 3D object than by a flat picture on a screen. Yet, today's tangibles often have pre-defined form factors and limited input and output facilities. To overcome this issue, the combination of projection and depth cameras is used as a fast and flexible way of non-intrusively adding input and output to tangibles. However, tangibles are often quite small, and hence the space for output and interaction on their surface is limited. Therefore, we propose FlowPut: an environment-aware framework that utilizes the space available on and around a tangible object for projected visual output. By means of an optimization-based layout approach, FlowPut considers the environment of the objects to avoid interference between projection and real-world objects. Moreover, we contribute occlusion-resilient object recognition and tracking for tangible objects based on their 3D model, and a point-cloud-based multi-touch detection that allows sensing touches also on the side of a tangible. FlowPut is validated through a series of technical experiments, a user study, and two example applications.},
 file={https://fileserver.tk.informatik.tu-darmstadt.de/Publications/2018/riemann2018flowput.pdf}
} 
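
The environment-aware placement step can be illustrated with a toy stand-in for the paper's optimization-based layout: exhaustively score collision-free windows in an occupancy grid of the surface and keep the one closest to the tangible. The grid, window size, and cost function below are assumptions, not the paper's formulation.

import numpy as np

def place_output(occupied, anchor, size=(3, 5)):
    """Return the top-left cell of the best free window for projected
    output: collision-free and closest to the tangible at `anchor`.

    occupied: 2D bool grid of the surface; anchor: (row, col) of the object.
    """
    h, w = size
    best, best_cost = None, np.inf
    for r in range(occupied.shape[0] - h + 1):
        for c in range(occupied.shape[1] - w + 1):
            if occupied[r:r + h, c:c + w].any():
                continue  # window would overlap a real-world object
            cost = (r + h / 2 - anchor[0]) ** 2 + (c + w / 2 - anchor[1]) ** 2
            if cost < best_cost:
                best, best_cost = (r, c), cost
    return best

grid = np.zeros((10, 12), dtype=bool)
grid[2:6, 4:9] = True  # a physical object occupying part of the table
print(place_output(grid, anchor=(4, 6)))  # nearest free 3x5 window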

You can find more of our research on our institute's website.


GET IN TOUCH

Telecooperation Lab TU Darmstadt