Selected research projects

ChromaGlasses

ChromaGlasses overview: (Left Top) Standard Ishihara test marker as seen through non-active glasses, with cropped region. People with red-green colour vision deficiency tend to see "21" instead of the correct "74". (Left Bottom) The same test marker seen through active ChromaGlasses. A pixel-precise overlay causes a colour shift that reveals the correct "74"; depending on the severity of the deficiency, a less drastic shift may be sufficient. (Middle and Right) ChromaGlasses prototypes for creating a precise correction overlay, built on current optical see-through head-mounted displays extended with custom cameras, demonstrating possible miniaturization.

Prescription glasses are used by many people as a simple, and even fashionable, way to correct refractive problems of the eye. However, there are other visual impairments that cannot be treated with an optical lens in conventional glasses. In this work we present ChromaGlasses, computational glasses that use optical see-through head-mounted displays to compensate for colour vision deficiency. Unlike prior work that required users to look at a screen in their visual periphery rather than at the environment directly, ChromaGlasses let users see the environment directly through a novel head-mounted display design that analyzes the environment in real time and changes its appearance with pixel precision to compensate for the user's impairment. We present the first prototypes of ChromaGlasses and report on the results of several studies showing that ChromaGlasses are an effective method for managing colour blindness.
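
To illustrate the idea (a minimal daltonization-style sketch, not the actual ChromaGlasses algorithm), the Python fragment below computes a per-pixel correction overlay from a camera image: it simulates what a deuteranope would perceive, treats the lost information as an error image, and redistributes that error into channels the user can still discriminate. The matrices are illustrative values, not those used in the paper.

import numpy as np

# Simplified linear simulation of deuteranopia in RGB space (illustrative values only).
SIM_DEUTERANOPIA = np.array([[0.625, 0.375, 0.0],
                             [0.700, 0.300, 0.0],
                             [0.000, 0.300, 0.7]])

# Error redistribution: push the lost red-green contrast into the remaining channels.
REDISTRIBUTE = np.array([[0.0, 0.0, 0.0],
                         [0.7, 1.0, 0.0],
                         [0.7, 0.0, 1.0]])

def correction_overlay(rgb):
    """rgb: float image in [0, 1] from the scene camera; returns the additive overlay."""
    simulated = rgb @ SIM_DEUTERANOPIA.T   # what a deuteranope would perceive
    error = rgb - simulated                # information lost to the deficiency
    shifted = np.clip(rgb + error @ REDISTRIBUTE.T, 0.0, 1.0)
    return shifted - rgb                   # per-pixel overlay shown on the display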

Related publications:

Tobias Langlotz, Jonathan Sutton, Stefanie Zollmann, Yuta Itoh, and Holger Regenbrecht (2018) ChromaGlasses: Computational Glasses for Compensating Colour Blindness. Conditionally accepted for the ACM CHI Conference on Human Factors in Computing Systems (ACM CHI 2018), Montreal, 2018. Honourable Mention Award
Supplemental Material
Video
BibTex

MREP

Left: User in the mixed voxel reality seeing himself in a virtual mirror with a mix of real and virtual objects and another person (inset: real-world view as captured by a webcam); Right: User with two chairs (one real, one virtual)

Mixed Reality aims at combining virtual reality with the user's surrounding real environment such that they form one coherent reality. Coherent visual quality is of utmost importance, expressed in measures such as resolution, framerate, and latency for both the real and the virtual domains. For years, researchers have focused on maximizing the quality of the virtual visualization, mimicking the real world to get closer to visual coherence. This, however, makes Mixed Reality systems overly complex and requires high computational power. In this project, we propose a different approach: decreasing the realism of one or both visual realms, real and virtual, to achieve visual coherence. Our system coarsely voxelizes the real and virtual environments, objects, and people to provide a believable, coherent mixed voxel reality. It serves as a platform for low-cost presence research, for studies on human perception and cognition, for a host of diagnostic and therapeutic applications, and for a variety of Mixed Reality applications where users' embodiment is important. Our findings challenge some commonplace assumptions behind "more is better" approaches in mixed reality research and practice; sometimes less can be more.
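
As a rough illustration of the voxelization step (a CPU sketch only, assuming a captured, coloured point cloud and a fixed grid size, whereas the published system works on live RGB-D and virtual geometry), the fragment below bins points into a coarse grid and averages the colour per voxel, so real and virtual content end up with the same low-fidelity appearance.

import numpy as np

def voxelize(points, colors, voxel_size=0.05):
    """points: (N, 3) positions in metres; colors: (N, 3) in [0, 1].
    Returns a dict mapping integer voxel indices to the mean colour of that voxel."""
    indices = np.floor(points / voxel_size).astype(np.int32)
    sums, counts = {}, {}
    for idx, col in zip(map(tuple, indices), colors):
        sums[idx] = sums.get(idx, np.zeros(3)) + col
        counts[idx] = counts.get(idx, 0) + 1
    return {idx: sums[idx] / counts[idx] for idx in sums}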

Related publications:

Holger Regenbrecht, Arne Reepen, Katrin Meng, Stephan Beck, and Tobias Langlotz (2017) Mixed Voxel Reality: Presence and Embodiment in Low Fidelity, Visually Coherent, Mixed Reality Environments. Accepted for the IEEE International Symposium on Mixed and Augmented Reality (IEEE ISMAR) 2017.
BibTex

Download Unity Source Code of MREP-Virtual Reality Photo Booth Application

QuickReview

Traditional app review system and QuickReview. a) and b) App review interface as used in the Google Play Store. c) Proposed QuickReview interface extending the traditional interface by presenting problematic features (e.g. GPS and Time) extracted by mining the existing text reviews. d) Selecting a feature (here GPS) displays the corresponding issues for that feature, such as "lost" and "stops", also extracted using data mining. Users can confirm one or multiple issues for the selected feature without typing lengthy reviews. e) Users can still provide additional information via text comments.

User reviews of mobile applications provide information that benefits other users and developers. Even though reviews contain feedback about an app's performance and problematic features, users and app developers need to spend considerable effort reading and analyzing the feedback provided. In this work, we introduce and evaluate QuickReview, an intelligent user interface for reporting problematic app features. Preliminary user evaluations show that QuickReview lets users add reviews quickly and easily, and also helps developers interpret submitted reviews at a glance by presenting a ranked list of commonly reported features.
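
The kind of mining QuickReview builds on can be sketched as follows (a hypothetical simplification: the feature and issue vocabularies below are made up for illustration, while the actual system ranks features mined from existing reviews): count which issue words co-occur with which feature words across submitted reviews.

from collections import Counter
import re

FEATURES = {"gps", "battery", "login", "notification", "time"}   # assumed vocabulary
ISSUES = {"crash", "lost", "stops", "drain", "slow", "freeze"}    # assumed vocabulary

def mine_reviews(reviews):
    """reviews: list of review strings; returns {feature: Counter of co-occurring issues}."""
    report = {feature: Counter() for feature in FEATURES}
    for text in reviews:
        tokens = set(re.findall(r"[a-z]+", text.lower()))
        for feature in FEATURES & tokens:
            report[feature].update(ISSUES & tokens)
    return report

# e.g. mine_reviews(["GPS lost my position and then the app stops"])["gps"]
# -> Counter({'lost': 1, 'stops': 1})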

Related publications:

Tavita Su’a, Sherlock A. Licorish, Bastin Tony Roy Savarimuthu, Tobias Langlotz (2017) QuickReview: A Data-Driven Mobile Interface for Providing App Reviews. Accepted for ACM Intelligent User Interfaces 2017 (ACM IUI 2017).
BibTex

PWC

(Left) A virtual avatar as shown in a head-mounted display (HTC Vive), used by physiotherapists to assess driving performance. (Middle) Users operating a desktop version of our power wheelchair simulator. (Right) The real power wheelchair used in our studies to define the driving behaviour and driving tasks for our simulator.

Virtual Reality-based driving simulators are increasingly used to train and assess users' abilities to operate vehicles in a controlled and safe way. For the development of such simulators it is important to identify and evaluate design factors affecting perception, behaviour, and driving performance. We developed a PC-based power wheelchair simulator and empirically tested it in three studies.

This work is done in collaboration with RATA South Dunedin/NZ and McGill University Montreal/Canada.

Related publications:

Abdul Alshaer, Holger Regenbrecht, David O'Hare (2017) Immersion factors affecting perception and behaviour in a virtual reality power wheelchair simulator. Applied Ergonomics 58 (January 2017), 1-12.
BibTex

Abdul Alshaer, David O'Hare, Simon Hoermann, Holger Regenbrecht (2016) The impact of the visual representation of the input device on driving performance in a power wheelchair simulator. Accepted for publication in the proceedings of the 11th International Conference on Disability, Virtual Reality & Associated Technologies, Los Angeles, California, USA, 2016.
BibTex

Abdul Alshaer, Holger Regenbrecht, David O'Hare (2015) Investigating Visual Dominance with a Virtual Driving Task. Proceedings of IEEE Virtual Reality, Arles, 145-146.
BibTex

Abdul Alshaer, Simon Hoermann, Holger Regenbrecht (2013) Influence of peripheral and stereoscopic vision on driving performance in a power wheelchair simulator system. Proceedings of the International Conference on Virtual Rehabilitation (ICVR), August 26-29, 2013, Philadelphia, USA.
BibTex

RadiometricCompensation

Overview of the results of our system for real-time radiometric compensation in optical see-through head-mounted displays. (Left) Head-mounted display prototype utilizing a beam splitter to capture the image as seen by the user, allowing the computation of a pixel-precise compensation image. (Middle) Naive overlay as in standard head-mounted displays, showing color artifacts caused by color blending between the background (here a color chart) and the displayed image. (Right) Our solution mitigates the effect of color blending by applying pixel-precise radiometric compensation. The small inset image shows the desired image.

Optical see-through head-mounted displays are currently transitioning out of research labs towards the consumer-oriented market. However, whilst availability has improved and prices have decreased, the technology has not matured much. Most commercially available optical see-through head-mounted displays follow a similar principle and use an optical combiner blending the physical environment with digital information. This approach yields problems, as the colors of the overlaid digital information cannot be reproduced correctly: the perceived pixel color is always a result of the displayed pixel color and the color of the current physical environment seen through the head-mounted display. In this paper we present an initial approach for mitigating the effect of color blending in optical see-through head-mounted displays by introducing real-time radiometric compensation. Our approach is based on a novel prototype for an optical see-through head-mounted display that allows capturing the current environment as seen by the user's eye. We present three different algorithms using this prototype to compensate for color blending in real time and with pixel accuracy. We demonstrate the benefits and performance of our approach as well as the results of a user study. We see applications in all common Augmented Reality scenarios, but also in other areas such as Diminished Reality or support for color-blind people.
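
The underlying blending model can be sketched as follows (a minimal illustration assuming a purely additive combiner with constant transmissivity; the paper's three algorithms are more involved): the display must emit the desired colour minus the attenuated background that the user already sees through the combiner.

import numpy as np

def compensation_image(desired, background, transmissivity=0.7):
    """desired, background: float RGB images in [0, 1], aligned to display pixels.
    Returns the image to emit so that emitted + transmissivity * background ~ desired."""
    emitted = desired - transmissivity * background
    # Clamping exposes the physical limit: dark targets on bright backgrounds
    # cannot be fully compensated, because the display can only add light.
    return np.clip(emitted, 0.0, 1.0)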

Related publications:

Tobias Langlotz, Matthew Cook, Holger Regenbrecht (2016) Real-Time Radiometric Compensation for Optical See-Through Head-Mounted Displays. IEEE Transactions on Visualization and Computer Graphics (special issue capturing best papers of the International Symposium on Mixed and Augmented Reality/ISMAR), 2016.
BibTex

PanoVC

Conceptual illustration of our implemented PanoVC prototype. (Left) A user of our PanoVC system (the local user) shares the environment by capturing it with a mobile phone. (Middle) A distant user (the remote user) receives the camera stream and builds and updates a panoramic representation of the distant environment. Using orientation tracking, the phone becomes a window into the distant environment, as both users can independently control their current view. (Right) By providing a window into the distant environment, users of PanoVC experience the feeling of presence as they are virtually "being there together".

We present PanoVC, a mobile telepresence system based on continuously updated panoramic images. We show that the experience of telepresence, i.e. the sense of "being there together" at a distant location, can be achieved with standard state-of-the-art mobile phones. Because mobile phones are always at hand, users can share their environments with others in a pervasive way. Our approach opens up a pathway for applications in a variety of domains, such as the exploration of remote environments or novel forms of videoconferencing. We present implementation details, technical evaluation results, and the findings of a user study of an indoor-outdoor environment sharing task as proof of concept.
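
Conceptually, the remote "window" can be sketched like this (an illustrative simplification assuming a cylindrical panorama covering 360° horizontally and 90° vertically; the actual PanoVC pipeline also handles panorama building and streaming): the remote phone's orientation selects which part of the continuously updated panorama is shown.

import numpy as np

def view_from_panorama(panorama, yaw_deg, pitch_deg, fov_deg=60.0):
    """panorama: (H, W, 3) image covering 360 x 90 degrees; returns the current view crop."""
    h, w, _ = panorama.shape
    win_w = int(w * fov_deg / 360.0)
    win_h = int(h * fov_deg / 90.0)
    cx = int((yaw_deg % 360.0) / 360.0 * w)
    cy = int(np.clip((45.0 - pitch_deg) / 90.0, 0.0, 1.0) * (h - 1))
    cols = np.arange(cx - win_w // 2, cx + win_w // 2) % w            # wrap horizontally
    rows = np.clip(np.arange(cy - win_h // 2, cy + win_h // 2), 0, h - 1)
    return panorama[np.ix_(rows, cols)]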

Related publications:

Jörg Müller, Tobias Langlotz, Holger Regenbrecht (2016) PanoVC: Pervasive Telepresence using Mobile Phones. In Proceedings of IEEE Pervasive Computing and Communications (IEEE PerCom), 2016.
BibTex
YouTube

VirtualStress

(Left) Instructor's view of the environment. (Right) User's view of the environment with a head-mounted display or monitor.

The experience of Virtual Reality (VR) can lead to wanted or unwanted psychological stress reactions. Highly immersive VR games, for instance, utilise extreme, life-threatening, or dangerous situations to elicit such responses from their players. There is also sufficient evidence that, in clinical settings and specific situations such as fear of heights or post-traumatic stress, virtual stimuli can lead to perceived stress for clients. However, there is a gap in research targeting everyday, mild emotional stimuli, which are neither extreme nor specific and which are not presented in an immersive system. To what extent can common stimuli in a non-immersive virtual environment elicit actual stress reactions in its users? We developed a desktop VR system and evaluated it in empirical studies. We showed that virtual stimuli in a common, domestic family environment led to a significant increase in perceived stress. We also developed new communication mechanisms for such an environment that allow an instructor to interact with a client in a non-obtrusive way. In addition, domain experts rated the feasibility of the system, and we transferred our technology to a practitioner's office. Our system and findings have implications for the design and implementation of immersive and non-immersive VR systems intended either to elicit or to avoid psychological stress reactions.

This work is in collaboration with Well South Dunedin/NZ.

Related publications:

Mohammed Alghamdi, Holger Regenbrecht, Simon Hoermann, and Nicola Swain (2017) Mild Stress Stimuli built into a Non-Immersive Virtual Environment can elicit actual Stress Responses. Behaviour & Information Technology. London/UK: Taylor & Francis.
BibTex

Mohammed Alghamdi, Holger Regenbrecht, Simon Hoermann, Tobias Langlotz, Colin Aldridge (2016) Social Presence and Mode of Videocommunication in a Collaborative Virtual Environment. Proceedings of the 20th Pacific Asia Conference on Information Systems (PACIS 2016), 2016.
BibTex

PervasiveAR

(Left) Contrasting aspects of conventional and pervasive AR. (Right) Identified context targets and sources relevant for AR systems. The numbers in the circles indicate the number of papers in the associated category; papers can be present in multiple categories. Interactive versions of these graphics are available at http://bit.ly/towardspar

Augmented Reality is a technique that enables users to interact with their physical environment through the overlay of digital information. While it has been researched for decades, Augmented Reality has more recently moved out of the research labs and into the field. Most applications are still used sporadically and for one particular task only, whereas current and future scenarios will provide a continuous and multi-purpose user experience. In this paper, we therefore present the concept of Pervasive Augmented Reality, which aims to provide such an experience by sensing the user's current context and adapting the AR system to changing requirements and constraints. We present a taxonomy for Pervasive Augmented Reality and context-aware Augmented Reality, which classifies context sources and context targets relevant for implementing such a context-aware, continuous Augmented Reality experience. We further summarize existing approaches that contribute towards Pervasive Augmented Reality. Based on our taxonomy and survey, we identify challenges for future research in Pervasive Augmented Reality.

Related publications:

Jens Grubert, Tobias Langlotz, Stefanie Zollmann, Holger Regenbrecht (2016) Towards Pervasive Augmented Reality: Context-Awareness in Augmented Reality. IEEE Transactions on Visualization and Computer Graphics (IEEE TVCG).
BibTex

MutualGaze

(Left) Eye-to-eye contact: Separation of camera and screen affecting mutual gaze. (Right) Factors impacted by mutual gaze

Videoconferencing allows geographically dispersed parties to communicate via simultaneous audio and video transmission. It is used in a variety of application scenarios with a wide range of coordination needs and efforts, such as private chat, discussion meetings, and negotiation tasks. In particular, in scenarios requiring certain levels of trust and judgement, non-verbal communication cues are highly important for effective communication. Mutual gaze support plays a central role in those high-coordination-need scenarios but generally lacks adequate technical support in videoconferencing systems. In this paper, we review technical concepts and implementations for mutual gaze support in videoconferencing, classify them, evaluate them according to a defined set of criteria, and give recommendations for future developments. Our review gives decision makers, researchers, and developers a tool to systematically apply and further develop videoconferencing systems in "serious" settings requiring mutual gaze. This should lead to well-informed decisions regarding the use and development of this technology and to a more widespread exploitation of the benefits of videoconferencing in general. For example, if videoconferencing systems supported high-quality mutual gaze in an easy-to-set-up and easy-to-use way, we could hold more effective and efficient recruitment interviews, court hearings, or contract negotiations.

Related publications:

Holger Regenbrecht, Tobias Langlotz (2015) Mutual Gaze Support in Videoconferencing Reviewed. Accepted for Communications of the Association for Information Systems.
BibTex

FlyAR

Augmented Reality supported flight management for aerial reconstruction. (Left) Aerial reconstruction of a building. (Middle) Depth estimation for a hovering MAV in the distance is difficult due to missing depth cues. (Right) Augmented Reality provides additional graphical cues for understanding the position of the vehicle.

Micro aerial vehicles (MAVs) equipped with high-resolution cameras can be used to create aerial reconstructions of an area of interest. In that context, automatic flight path planning and autonomous flying are often applied, but so far they cannot fully replace the human in the loop, who supervises the flight on-site to ensure that there are no collisions with obstacles. Unfortunately, this workflow yields several issues, such as the need to mentally map the aerial vehicle's position between 2D map positions and the physical environment, and the difficult depth perception of vehicles flying in the distance. Augmented Reality can address these issues by bringing the flight planning process on-site and visualizing the spatial relationship between the planned or current positions of the vehicle and the physical environment. In this paper, we present Augmented Reality supported navigation and flight planning of micro aerial vehicles by augmenting the user's view with relevant information for flight planning and live feedback for flight supervision. Furthermore, we introduce additional depth hints that support the user in understanding the spatial relationship of virtual waypoints in the physical world, and we investigate the effect of these visualization techniques on spatial understanding.
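
One of the simplest depth hints, a vertical "drop line" from a waypoint to the ground, can be sketched as follows (hypothetical helpers, assuming a 3x4 pinhole projection matrix P and a ground plane at z = 0 in world coordinates; the paper evaluates several such cues):

import numpy as np

def project(P, point_world):
    """P: 3x4 camera projection matrix; returns the pixel position (u, v)."""
    p = P @ np.append(point_world, 1.0)
    return p[:2] / p[2]

def drop_line_segment(P, waypoint_world):
    """2D endpoints of a line from the waypoint straight down to the ground plane,
    anchoring the floating waypoint in depth for the viewer."""
    foot = np.array([waypoint_world[0], waypoint_world[1], 0.0])
    return project(P, waypoint_world), project(P, foot)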

Related publications:

Stefanie Zollmann, Christof Hoppe, Tobias Langlotz, Gerhard Reitmayr (2014) FlyAR: Augmented Reality Supported Micro Aerial Vehicle Navigation. IEEE Transactions on Visualization and Computer Graphics (TVCG), March 2014.
BibTex
YouTube

xRay

AR views using different approaches for extracting depth cues from a video image. (Left) Random occlusion cues randomly preserve image information but cannot convey the depth order. (Middle) Only edges are preserved to provide depth cues. (Right) Using important image regions, determined by a saliency computation, as depth cues creates the impression of subsurface objects.

This paper evaluates different state-of-the-art approaches for implementing an X-ray view in Augmented Reality (AR). Our focus is on approaches that support a better scene understanding, in particular a better sense of depth order between physical and digital objects. One of the main goals of this work is to provide effective X-ray visualization techniques that work in unprepared outdoor environments. To achieve this goal, we focus on methods that automatically extract depth cues from video images. The extracted depth cues are combined into ghosting maps that assign each video image pixel a transparency value to control the overlay in the AR view. In our study, we analyze three different types of ghosting maps: 1) alpha blending, which uses a uniform alpha value within the ghosting map; 2) edge-based ghosting, which is based on edge extraction; and 3) image-based ghosting, which incorporates perceptual grouping, saliency information, edges, and texture details. Our study results demonstrate that the latter technique helps the user understand the subsurface location of virtual objects better than alpha blending or edge-based ghosting.
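
A much-simplified version of an image-based ghosting map might look like this (a sketch only: the published technique additionally uses perceptual grouping and texture details, and the saliency input here is assumed to be precomputed): edge strength and saliency are combined into a per-pixel alpha that decides how much of the video pixel is preserved over the virtual content.

import numpy as np

def ghosting_map(gray, saliency, base_alpha=0.3, w_edges=0.5, w_saliency=0.5):
    """gray, saliency: (H, W) float images in [0, 1]; returns per-pixel alpha in [0, 1],
    where 1 keeps the video pixel (occluder) and 0 shows the virtual object."""
    gy, gx = np.gradient(gray)
    edges = np.hypot(gx, gy)
    edges /= edges.max() + 1e-6
    return np.clip(base_alpha + w_edges * edges + w_saliency * saliency, 0.0, 1.0)

# AR composite per pixel: out = alpha * video + (1 - alpha) * rendered_virtual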

Related publications:

Stefanie Zollmann, Raphael Grasset, Gerhard Reitmayr, and Tobias Langlotz (2014) The effect of image-based X-Ray visualisation techniques on spatial order understanding in Augmented Reality. Proceedings of ACM OzCHI, Sydney, Australia.
BibTex

RadianceTransfer

Examples of applying our adaptive radiance transfer computations for probeless light estimation in Augmented Reality. (Left) Augmented Reality-based interior shopping application. (Middle, Right) Physical gaming using depth cameras.

Photorealistic Augmented Reality (AR) requires knowledge of the scene geometry and environment lighting to compute the photometric registration. Recent work has introduced probeless photometric registration, where environment lighting is estimated directly from observations of reflections in the scene rather than through an invasive probe such as a reflective ball. However, computing the dense radiance transfer of a dynamically changing scene is computationally challenging. In this work, we present an improved radiance transfer sampling approach, which combines adaptive sampling in image and visibility space with robust caching of radiance transfer to yield real-time framerates for photorealistic AR scenes with dynamically changing scene geometry and environment lighting.

Related publications:

Lukas Gruber, Tobias Langlotz, Pradeep Sen, Tobias Hollerer, and Dieter Schmalstieg (2014) Efficient and Robust Radiance Transfer for Probeless Photorealistic Augmented Reality. Proceedings of IEEE Virtual Reality 2014, Minnesota, MN, USA, March 2014.
BibTex

ARRecordReplay

Overview of the presented approach for creating and overlaying video augmentations on mobile phones. (Left) A skateboarder is recorded with a mobile phone using a standard video application while performing his actions. (Middle) A frame of the recorded video sequence, which is later processed on the phone and analysed for foreground-background information and image features. (Right) The same action overlaid within our skateboard tutoring application, captured from an iPhone 4. The video information is integrated into the current camera view by matching image features between the camera feed and the video.

In this paper we present a novel approach to record and replay video content composited in situ with a live view of the real environment. Our real-time technique works on mobile phones and uses a panorama-based tracker to create a visually seamless and spatially registered overlay of video content. We apply a temporal foreground-background segmentation to the video footage and show how the segmented information can be precisely registered in real time in the camera view of a mobile phone. We describe the user interface and the video post-effects implemented in our prototype and demonstrate our approach with a skateboard training application. Our technique can also be used with online video material and supports the creation of augmented situated documentaries.
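
The registration idea can be sketched with standard OpenCV building blocks (an offline simplification: the phone implementation uses a panorama-based tracker rather than per-frame feature matching, and the function below is illustrative): match features between a recorded frame and the live camera frame, estimate a homography, and composite only the segmented foreground into the live view.

import cv2
import numpy as np

def overlay_recorded_frame(live_bgr, recorded_bgr, foreground_mask):
    """foreground_mask: uint8 mask of the segmented actor in the recorded frame."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(cv2.cvtColor(recorded_bgr, cv2.COLOR_BGR2GRAY), None)
    k2, d2 = orb.detectAndCompute(cv2.cvtColor(live_bgr, cv2.COLOR_BGR2GRAY), None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = live_bgr.shape[:2]
    warped = cv2.warpPerspective(recorded_bgr, H, (w, h))
    mask = cv2.warpPerspective(foreground_mask, H, (w, h)) > 0
    out = live_bgr.copy()
    out[mask] = warped[mask]     # composite the segmented foreground only
    return out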

Related publications:

Tobias Langlotz, Mathäus Zingerle, Raphael Grasset, Hannes Kaufmann, Gerhard Reitmayr (2012) AR Record&Replay: Situated Compositing of Video Content in Mobile Augmented Reality. Proceedings of the 24th Australian Computer-Human Interaction Conference (OzCHI), pages 318-326, 2012.
BibTex
YouTube

TapeDrawings

(Left) Tape drawings applied to a clay model. (Middle) Augmented tape drawings: tape drawings and additional accentuations, like blue windows, are made in Photoshop and simultaneously projected onto the clay model. (Right) Creating digital tape drawings with a tracked IR-LED pen on a clay model.

Tape drawings are an important part of the form-finding process in the automotive industry and thus of creating the final design and shape of cars during the product development process. Up to now, this step has been done on whiteboards in 2D and on clay models. In this poster we present a system that supports designers during the tape drawing process by transferring drawings created in 2D onto the clay model using projector-based spatial Augmented Reality. Furthermore, we show an optional 3D input method for creating tape drawings directly on the clay model that additionally allows transferring the information into a 2D representation using a registered projector-camera system. This system guarantees the consistency of information across different media and dimensions during the design process.
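
The 2D-to-projector transfer can be approximated as below (an illustrative sketch assuming the projected region of the clay model is treated as locally planar and that four point correspondences have been picked by hand; the real setup uses a fully registered projector-camera system):

import cv2
import numpy as np

# Assumed correspondences: drawing-canvas corners and where they should land in projector pixels.
canvas_pts = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])
projector_pts = np.float32([[210, 160], [1650, 140], [1700, 940], [180, 960]])

H = cv2.getPerspectiveTransform(canvas_pts, projector_pts)

def to_projector(drawing, projector_size=(1920, 1080)):
    """Warp the 2D tape drawing so it lands on the intended spot when projected."""
    return cv2.warpPerspective(drawing, H, projector_size)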

Related publications:

Stefanie Zollmann, Tobias Langlotz (2009) Spatially Augmented Tape Drawing. Proceedings of the IEEE Symposium on 3D User Interfaces (3DUI), 2009.
BibTex
YouTube