Current Projects and Funding

Real-time Three-Dimensional Diminished Reality (3DDR) develops a novel software framework for real-time diminished reality (DR), a variant of mixed reality (MR) that alters a user's real-time perception so that unwanted objects disappear. Our new approach works in full 3D and in real time, so that users can move arbitrarily and see through or behind objects. Previous techniques have relied on the assumption that the image region to be diminished (replaced) is flat, which is often not the case in real-world applications. Our method restores a three-dimensional, non-flat background and can also fill in the background where no previous observation exists. This type of filling -- a three-dimensional variant of so-called "inpainting" -- must be temporally coherent, so that perspective changes resulting from a moving user do not destroy the illusion of DR. Our method lends itself to visual inspection tasks, aesthetic improvements of mixed reality scenes, and telepresence applications with high visual quality.
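
To make the temporal-coherence requirement concrete, the following minimal Python/numpy sketch shows one building block such a DR system needs: reprojecting background pixels observed in an earlier keyframe into the current view, using per-pixel depth and the relative camera pose. All names and inputs here are illustrative assumptions, not the project's actual pipeline; pixels that were never observed remain holes and are exactly what the 3D inpainting must synthesize.

```python
import numpy as np

def reproject_background(key_rgb, key_depth, K, T_key_to_cur, mask, out_shape):
    """Warp previously observed background pixels into the current view.

    Illustrative sketch only: backproject masked keyframe pixels using their
    depth, rigidly transform them into the current camera frame, and splat
    their colors. Holes remain wherever the background was never observed.
    """
    vs, us = np.nonzero(mask)                     # keyframe pixels known to show background
    z = key_depth[vs, us]
    # Backproject to 3D points in the keyframe camera frame (pinhole model).
    x = (us - K[0, 2]) * z / K[0, 0]
    y = (vs - K[1, 2]) * z / K[1, 1]
    pts = np.stack([x, y, z, np.ones_like(z)])    # 4 x N homogeneous points
    pc = T_key_to_cur @ pts                       # into the current camera frame
    u2 = np.round(K[0, 0] * pc[0] / pc[2] + K[0, 2]).astype(int)
    v2 = np.round(K[1, 1] * pc[1] / pc[2] + K[1, 2]).astype(int)
    out = np.zeros((*out_shape, 3), dtype=key_rgb.dtype)
    ok = (pc[2] > 0) & (u2 >= 0) & (u2 < out_shape[1]) & (v2 >= 0) & (v2 < out_shape[0])
    out[v2[ok], u2[ok]] = key_rgb[vs[ok], us[ok]]
    return out
```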
Funding sources
  • Österreichischer Wissenschaftsfonds (FWF)
Start: 31.12.2020

Instant Avatar - Instant Photorealistic Generation of Human Avatars with Mobile Devices

Capturing humans in every detail and displaying them in a photo-realistic way is one of the hardest challenges of computer graphics and computer vision. The human body shows a huge variety in shape and appearance, and capturing every aspect of it is necessary to create a believable virtual representation of someone. Many technologies struggle with the so-called uncanny valley effect: this term describes the fact that "humanoid objects which appear almost, but not fully, like real human beings elicit revulsion in observers". State-of-the-art approaches in this area can generate digital human avatars, but do not reach photo-realism and fully believable results. Furthermore, current solutions require complex setups and powerful hardware to create a digital representation of a person. This limits the widespread use of the technology and therefore also impedes commercial success. Overcoming these problems would open up a large range of use cases: applications in the fields of health care, education, e-commerce and entertainment, to name only a few, would benefit from such a technology, and the activities of companies in this field emphasize the relevance of the topic. Almost every aspect of our digital communication will benefit from realistic avatars.

To achieve this, the technical solution must be mobile and easy to use without the need for special hardware. Thus, the project partners will combine 4D avatar capturing and image-based rendering in a system incorporating principles of photogrammetry and surface light fields to capture fine details and view-dependent effects which are true to the original images of the subject. Facial expressions and body motion will be captured using novel tracking and classification algorithms and can then be applied to avatars to create a realistic and believably human impression. The Institute of Computer Graphics and Vision, led by Prof. Dieter Schmalstieg, has decades of experience in 3D reconstruction, deep learning, augmented and virtual reality, visualisation and other topics relevant to this project. The industry partner contributes a team of experts focused on the research and development of mobile AR solutions and has released commercial products in this field. Together with the funding of this project, all prerequisites for both academic and commercial success are in place.
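
Surface light fields, mentioned above, store radiance at each surface point as a function of viewing direction. One common compact encoding is a low-order spherical-harmonics expansion; the minimal Python sketch below evaluates such a representation and is purely illustrative (the project's actual representation is not specified here).

```python
import numpy as np

def sh_basis(d):
    """First four real spherical-harmonics basis functions for a unit direction d."""
    x, y, z = d
    return np.array([0.2820948,          # l=0
                     0.4886025 * y,      # l=1, m=-1
                     0.4886025 * z,      # l=1, m= 0
                     0.4886025 * x])     # l=1, m=+1

def view_dependent_color(coeffs_rgb, view_dir):
    """Evaluate one surface light field sample.

    coeffs_rgb: (4, 3) spherical-harmonics coefficients fitted per surface point
                to the captured images (a hypothetical fitting step, not shown).
    view_dir:   unit vector from the surface point toward the camera.
    Returns an RGB color that changes with the viewpoint, e.g. for specularities.
    """
    return sh_basis(view_dir) @ coeffs_rgb
```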
Funding sources
  • Österreichische Forschungsförderungsgesellschaft mbH (FFG)
  • Reactive Reality GmbH
Start: 30.11.2019

NNVRIK - Neural Network based Inverse Kinematics for Human Upper Body in Virtual Reality

Current consumer-grade virtual reality devices provide positional tracking for the head and hands of the user. Using this information, virtual hands can be displayed to the user and used for interaction in the virtual environment. However, other body parts are usually not visible to the user, since no tracking information is available for them. Furthermore, tracking information might be noisy or temporarily unavailable. In this project, we aim to close the gap between such incomplete tracking data and full body tracking, which might only become available in future consumer-grade virtual reality. Our aim is a combination of data-driven (neural-network-based) and inverse kinematics solvers for estimating joint data for the (upper body) pose of the user. Our previous work shows that, even if the pose is not correct, arms animated by inverse kinematics are preferred over having hands only, and that they can also be used for interaction. Using neural networks and sufficient motion capture data, we expect to increase the accuracy even further and thus also increase the feeling of embodiment.
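
To give a flavor of the inverse kinematics half of such a hybrid solver, the Python sketch below places an elbow analytically for a two-bone arm chain given tracked shoulder and hand positions; the swivel `hint` is where a learned (neural-network) prediction could bias the solution. A minimal sketch under these assumptions, not the project's actual solver.

```python
import numpy as np

def two_bone_ik(shoulder, hand, upper_len, fore_len, hint):
    """Analytic two-bone arm IK: place the elbow so the chain reaches the hand.

    `hint` biases the elbow swivel direction; in a hybrid solver this is where
    a data-driven (neural-network) prediction would enter. Illustrative only.
    """
    d = hand - shoulder
    norm = max(np.linalg.norm(d), 1e-6)
    dist = np.clip(norm, 1e-6, upper_len + fore_len - 1e-6)  # clamp to reachable range
    d_hat = d / norm
    # Law of cosines: elbow offset along and away from the shoulder-hand axis.
    a = (upper_len**2 - fore_len**2 + dist**2) / (2.0 * dist)
    r = np.sqrt(max(upper_len**2 - a**2, 0.0))
    # Orthogonalize the swivel hint against the axis.
    side = hint - np.dot(hint, d_hat) * d_hat
    side = side / max(np.linalg.norm(side), 1e-6)
    return shoulder + a * d_hat + r * side                   # elbow position
```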
Funding sources
  • Facebook Technologies, LLC
Start: 30.04.2019

CAMed - Clinical Additive Manufacturing for Medical Applications

Additive manufacturing (AM) has developed rapidly into a technology enabling the creation of products with virtually limitless design geometries from an increasing number of materials spanning polymers, ceramics and metals. The innovative potential of AM is mirrored, in particular, by its impact on the medical manufacturing industry, which is supplying clinics with a steadily growing number of devices created by AM. The truly revolutionary and as yet completely unrealized potential of AM for medical applications lies, however, in the printing of personalized implants directly in the clinic.

Current treatment of fractures and/or lesions caused by trauma or tumour infiltration is often based on improvised usage of sub-optimal implants. Complex pelvic fractures are, for example, fixed by intraoperative bending of standard metal plates to fit the fracture. The downsides of this are lengthy surgery and the creation of predetermined breaking points in the metal, leading to later implant failure (and, thus, more surgery and attendant potential morbidity). Ribs removed in the course of tumour resection are similarly replaced by relatively thin metal bands requiring intraoperative adaptation to the patient-specific curvature of the thorax. These implants are, moreover, rigid in character and consequently eventually deformed by movement associated with breathing and the forces acting upon the thorax, necessitating remedial surgery (often multiple times). Reconstruction following tumour-associated craniectomy automatically requires a second round of surgery, because the customized implants used for this are currently produced by external manufacturers (with a typical waiting time of six weeks). The potential benefits of customization in this case are, however, often forfeited due to changes to the shape of the cranial opening that occur during the wait for the implant.

Additive manufacturing using optimized, application-specific materials in the clinic itself would greatly improve overall outcomes for these patients. It would, moreover, significantly reduce overall patient stress by obviating the need for a second surgical intervention (and others necessitated by possible complications). Successful clinical implementation of AM will, however, necessitate the improvement and adaptation of key associated processes such as imaging, segmentation, material development and the selection, in each case, of the most appropriate AM technology. Subsequent developments will furthermore require effective integration of both the medical standpoint and the expertise of the (industrial) developers. The CAMed project shall bring together clinicians, medical and material scientists and industry partners in a tightly networked, focused cooperation to develop AM-based processes for the clinical manufacture of custom-fit implants for specified medical applications. Medical professionals and their patients will, as a result, begin to reap the enormous potential benefits of clinical 3D printing.
Funding sources
  • Medizinische Universität Graz
  • Steirische Wirtschaftsförderungsgesellschaft m.b.H. (SFG)
  • Österreichische Forschungsförderungsgesellschaft mbH (FFG)
External Partners
  • Medizinische Universität Graz
  • Montanuniversität Leoben, Institut für Werkstoffkunde und -prüfung der Kunststoffe
  • JOANNEUM RESEARCH Forschungsgesellschaft mbH, MATERIALS - Institut für Oberflächentechnologien und Photonik
  • Montanuniversität Leoben, Lehrstuhl für Kunststoffverarbeitung
  • Max-Planck-Institut für Intelligente Systeme
Start: 31.10.2018

MiReBooks: Mixed Reality Handbooks for Mining Education

MiReBooks produces a series of virtual and augmented reality based (together: mixed reality, MR) interactive mining handbooks as a new digital standard for higher mining education across Europe. Many current challenges in mining education will be met in an innovative new way: classical paper-based teaching materials are combined with MR materials and transformed into pedagogically and didactically coherent MR handbooks for integrated classroom use.

MiReBooks is a new digital learning experience that will change the way we teach, learn and subsequently apply mining. By taking traditional paper-based educational material and enriching it with virtual and augmented reality based experiences (mixed reality), teachers can convey, and students can experience, phenomena in the classroom that are usually not easily accessible in the real world. Complex issues of mining are no longer a challenging barrier to learning progress, and students complete their studies with a more thorough comprehension of their discipline. Through well-thought-out pedagogical inclusion in teaching plans, students will be able to take advantage of new ways of participation that are suitable for the needs of their generation. With MiReBooks, the way of teaching will change, as instructors will be able to engage their students more effectively and offer them an enriched content repertoire as well as heightened opportunities for comprehension.

The array of possible industrial mine environments that students can be immersed in becomes endless, and the industry will thus receive graduates who are familiar in depth with a holistic view of the industrial context. Students will enter the job market skilled as digital natives and will strongly influence the way the industry works and develops in the future. Mixed reality is a most promising way to enable users to make the most of their learning experience and thus leverage improvements in operational efficiency and innovation. The tool is hence also attractive for industrial application in professional training, to bring existing employees up to speed with the latest standards. MiReBooks will be the lubricant of change and innovation in the mining sector in terms of society and environment, safe and healthy working conditions, and mining processes and equipment.

External Partners

Montanuniversität Leoben, Austria (Lead Partner)
Epiroc Rock Drills AB, Sweden
KGHM Cuprum sp. z o.o. Centrum Badawczo-Rozwojowe (KGHM Cuprum Ltd. Research & Development Centre), Poland
LTU Business AB, Sweden
Luleå University of Technology (LTU), Sweden
Luossavaara-Kiirunavaara AB (LKAB), Sweden
Rheinisch-Westfälische Technische Hochschule Aachen (RWTH Aachen), Germany
TalTech University, Estonia
Technische Universität Graz (Graz University of Technology), Austria
Technische Universität Bergakademie Freiberg (TUBAF), Germany
Teknologian tutkimuskeskus VTT (Technical Research Centre of Finland Ltd. VTT), Finland
Università degli Studi di Trento, Italy
University of New South Wales (UNSW), Australia
Voest Alpine Erzberg (VA Erzberg), Austria

Funding sources

  • European Institute of Innovation and Technology RawMaterials

Start: 01.09.2018

enFaced - Virtual and Augmented Reality for 3D Reconstruction

This interdisciplinary research project between computer science and medicine targets the development of a comprehensive image-guided tool for head and neck surgery, with a main focus on mandibular and mid-facial fractures. The tool supports physicians in all treatment stages (diagnosis, planning, intervention and monitoring) and can also be used during clinical traineeship. A key point is the development of an algorithm for the semi-automatic and automatic segmentation of bone structures and soft tissue in CT and MRI acquisitions. A segmentation enables three-dimensional localization, quantification and visualization of biological structures in a very short time. Based on radiological images from the clinical routine, treatment diagnoses can be visualized, and surgical planning processes or alternative surgery options can be simulated. In addition, postoperative simulation results can be presented preoperatively in a photo-realistic manner. In particular, this will be applied to the reposition of bony structures, the reconstruction of facial defects and the removal of tumors in complex anatomical areas. Moreover, we plan the development of patient-individual three-dimensional implants, which will be 3D printed in-house, both for comparison with externally manufactured implants and for clinical usage. Furthermore, the medical personnel can be supported in real time during surgery by interactive navigation, where Augmented Reality integrates computer-generated objects into the operation field. The application reduces surgical complications and ensures a successful treatment result through pre- and intraoperative simulations. Hence, the number of necessary corrective surgeries and the operation time needed can be reduced, and a higher patient survival rate can be achieved. In addition, the combination with Virtual Reality glasses allows the virtual training of operations during medical education.

The objective of the project is the establishment of an open-source tool as a basis for further research and development at the participating universities. Currently, comparable tools are protected by strict licensing conditions and are not freely accessible for academic research. Existing software for clinical practice is not functionally stable enough and has error rates that are too high. Additionally, its usage imposes high financial costs, which prevents widespread application. Finally, Augmented Reality in the clinical routine is an important topic for current surgical research and cannot be ignored by modern medical universities.
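
As a point of reference for the segmentation task, the following Python sketch shows a deliberately simple CT bone-segmentation baseline: Hounsfield-unit thresholding plus largest connected component. The project targets far more capable semi-automatic and automatic methods; this only illustrates the classical starting point, and the threshold value is an assumption.

```python
import numpy as np
from scipy import ndimage

def segment_bone(ct_hu, threshold=300):
    """Coarse bone segmentation of a CT volume given in Hounsfield units (HU).

    Baseline sketch: threshold (cortical bone typically lies well above ~300 HU)
    and keep the largest 3D connected component to suppress small artifacts.
    """
    mask = ct_hu >= threshold
    labels, n = ndimage.label(mask)                      # 3D connected components
    if n == 0:
        return mask                                      # nothing above threshold
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))         # largest component only
```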
Funding sources
  • Österreichischer Wissenschaftsfonds (FWF)
External Partners
  • Medizinische Universität Graz, Institut für Zellbiologie, Histologie und Embryologie
Start: 30.06.2018

Virtual Welding NEXT - Integration of Simulated and Real Welding

Through the research and development of innovative and efficient methods, a novel training simulator for welding shall be developed, which enables a smooth transition between virtual training and the real environment, in order to improve the flexibility and usability of welding as a core technology in production in Industry 4.0. The system prototype (including housing, burner, helmet and table), together with the virtual welding processing unit, is to be modelled as realistically as possible. The concept includes the design and development of an intuitive graphical user interface to enable users to operate the welding machine in a fast and economic manner, at the same time enriching the know-how of the user and the understanding of safety and working concepts. The project Virtual Welding NEXT shall lower emotional barriers to using virtual and augmented reality technology and aims to increase enthusiasm for welding, enabling the actual application of acquired theoretical know-how in a production environment. Novel technologies in power supply, data communication and visualization improve user experience and efficiency, assisted by simulations and optimization procedures, likewise helping training personnel to plan and guide learning and teaching experiences and interactive operation.
Funding sources
  • FRONIUS International GmbH
  • Österreichische Forschungsförderungsgesellschaft mbH (FFG)
External Partners
  • FH JOANNEUM Gesellschaft mbH
  • FH Campus Wien
Start: 30.04.2017

MATAHARI - Maintenance through Assistive Telepresence And Human-centered Augmented Reality in Industry

Augmented Reality promises great advantages for empowering workers to perform increasingly complex jobs in the era of Industry 4.0. In this project, we focus on how Augmented Reality interfaces can be used for tele-assistance, letting a remote expert support a worker in an industrial facility in inspection, maintenance, construction and similar tasks. This encompasses efficient mobile 3D tracking and scanning, annotating 3D scans with augmented information, and transmitting the 3D data between mobile workers and remote experts.
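
To illustrate what "annotating 3D scans with augmented information" can look like at the data level, here is a minimal hypothetical message format in Python; the fields and wire encoding are assumptions for illustration, not the project's actual protocol.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Annotation:
    """A remote expert's note, anchored in the shared 3D scan's coordinate frame."""
    anchor_xyz: tuple   # 3D anchor position in the scan, in meters
    normal_xyz: tuple   # surface normal at the anchor (orients the AR label)
    text: str           # the expert's instruction for the on-site worker
    author: str

note = Annotation((1.20, 0.85, 2.10), (0.0, 0.0, 1.0), "Check this valve seal", "expert-1")
payload = json.dumps(asdict(note))   # what would be transmitted to the worker's device
```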
Funding sources
  • Österreichische Forschungsförderungsgesellschaft mbH (FFG)
Start: 31.03.2017

Fully Programmable GPU Pipelines

A modern computer system has a graphics processing unit (GPU) with enormous computational power. However, the overall hardware architecture of the GPU has not changed for almost 15 years: a fixed-function pipeline with a static sequence of stages supports a certain type of graphics application, primarily designed to deliver computer games. With GPU programming languages such as CUDA, we can use the GPU's power for computing simulations, but we cannot easily build new graphics pipelines, because certain parts of the GPU are not accessible from CUDA. In this project, we propose to overcome this restriction with a new software framework, which supplements the missing parts of a graphics pipeline as a framework for a GPU programming language. This is challenging, because the framework has to be very efficient, or it will not be able to compete with the standard graphics pipeline. However, we can exploit the high flexibility afforded by a software implementation to produce a competitive framework. More important than pure efficiency is that, with this software framework for graphics pipelines, we can overcome the restrictions of the conventional pipeline. For example, we can replace the rectangular dense framebuffer with a much more efficient irregular pixel structure. We can also replace uniform sampling patterns for pixels with non-uniform ones, feeding into new Virtual Reality devices such as the Oculus Rift.
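
As a toy illustration of non-uniform pixel sampling, the Python sketch below generates a foveated sample pattern that is dense near a gaze point and sparse in the periphery; a fully programmable software pipeline could rasterize directly to such irregular samples instead of a rectangular dense framebuffer. All parameters here are assumptions for illustration.

```python
import numpy as np

def foveated_samples(n, width, height, fovea, spread=0.25, seed=0):
    """Generate a non-uniform pixel sampling pattern for a software pipeline.

    Samples cluster around the gaze point `fovea` (dense foveal region) and
    thin out toward the periphery, unlike a regular dense pixel grid.
    """
    rng = np.random.default_rng(seed)
    scale = spread * np.array([width, height])
    pts = rng.normal(loc=fovea, scale=scale, size=(n, 2))   # Gaussian falloff
    pts[:, 0] = np.clip(pts[:, 0], 0, width - 1)            # keep samples on screen
    pts[:, 1] = np.clip(pts[:, 1], 0, height - 1)
    return pts

samples = foveated_samples(100_000, 1920, 1080, fovea=(960.0, 540.0))
```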
Funding sources
  • Österreichischer Wissenschaftsfonds (FWF)
External Partners
  • Max-Planck-Institut für Informatik
Start: 28.02.2017

ARxX - Augmented Reality by Example

Augmented Reality (AR) by Example is a novel approach to generate AR applications from existing video and image demonstrations. Previous techniques create AR applications in a time-consuming process, which requires skills with 3D modeling and animation software, in addition to knowledge of the technical components of an AR system. While any AR application will benefit from a simple authoring process, this project mainly targets the generation of AR applications for knowledge presentation in tutorials. While traditionally images have been used to provide visual instructions, with the success of video sharing platforms a large body of video tutorials is additionally available. Both image and video tutorials effectively convey complex motions, but are difficult to follow precisely because of their 2D nature. AR tutorials have been shown to be more effective. This research project brings the advantages of 2D and 3D instructions together by automatically creating three-dimensional AR tutorials from conventional 2D data. Unlike previous work, we will not simply overlay video, but synthesize 3D-registered motion from the input data. Since the information in the resulting AR tutorial is registered to 3D objects, the user can freely change the viewpoint without degrading the AR experience. This is achieved by investigating a number of different techniques. First, we have to extract the author's 3D environment and all 3D motion from the input data. Second, we need to provide tools for editing and retargeting the resulting 3D scene to the user's current environment (see the sketch below). Third, we will develop comprehensible visualization techniques, specific to instructions in AR environments, to effectively communicate the extracted instructions. The research is complemented by qualitative and quantitative evaluations of the effects the investigated techniques have on users. This research project has great potential in applications concerned with crowdsourced training and teaching.
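
The retargeting step can be pictured as re-anchoring captured motion through a reference object that exists in both the author's and the user's scene. The Python sketch below shows this rigid-transform chaining under that simplifying assumption; the project's actual retargeting must handle far more general cases.

```python
import numpy as np

def retarget_motion(points_author, T_obj_in_author, T_obj_in_user):
    """Re-anchor captured 3D motion from the author's scene to the user's scene.

    Both poses are homogeneous 4x4 matrices of the same reference object in the
    two scenes; motion points are mapped into object-relative coordinates and
    re-expressed in the user's scene. Illustrative simplification only.
    """
    pts_h = np.c_[points_author, np.ones(len(points_author))]   # (N, 4) homogeneous
    chain = T_obj_in_user @ np.linalg.inv(T_obj_in_author)      # author -> user frame
    return (pts_h @ chain.T)[:, :3]
```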
Funding sources
  • Österreichischer Wissenschaftsfonds (FWF)
Start: 28.02.2018

Aortic Dissection

In the Lead project, a consortium of scientists from biomechanical, civil, electrical and mechanical engineering, computer science, mathematics and physics at TU Graz has set itself the goal of unraveling the cause and the formation of the various stages of an aortic dissection (AD). Advanced computational tools and algorithms will be developed to assist clinicians with the diagnosis, treatment and management of AD patients. In addition, related topics such as the optimization of implants, better designs for tissue engineering, and coatings and stent platforms for drug delivery will be investigated.

In particular, new multiscale constitutive models that include innovative parameters and failure criteria will be developed, which will allow the simulation of the rupture of aortic tissue and the propagation of the false lumen. The development of thrombus in the false lumen will be modeled using the theory of porous media, while the blood will be modeled as a non-Newtonian fluid. The 3D geometry of patient-specific morphologies will be reconstructed from medical images of carefully selected AD patients. Finally, computational fluid-structure interaction simulations will be performed in order to investigate the wall stresses, the hemodynamics, the false lumen propagation, and the thrombus formation and growth. In addition, the 3D computational simulation results will be visualized by virtual reality techniques. We expect that this project will improve awareness of this life-threatening disease among the general public in Austria and beyond, and lead to its more effective treatment and control.
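
For concreteness, a widely used shear-thinning constitutive choice for blood (one candidate for the non-Newtonian model mentioned above; whether the project adopts this particular model is not stated here) is the Carreau model,

$$\mu(\dot\gamma) = \mu_\infty + (\mu_0 - \mu_\infty)\left[1 + (\lambda\dot\gamma)^2\right]^{\frac{n-1}{2}},$$

where $\dot\gamma$ is the shear rate, $\mu_0$ and $\mu_\infty$ are the zero- and infinite-shear viscosities, $\lambda$ is a relaxation time and $n < 1$ is the shear-thinning exponent; commonly cited parameter values for blood are roughly $\mu_0 \approx 0.056$ Pa·s and $\mu_\infty \approx 0.0035$ Pa·s.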

The Lead project will be carried out in the frame of the Graz Center of Computational Engineering (GCCE), which was founded in 2016 as an interdisciplinary cooperation platform for basic research in the realm of computational science and engineering. The mission of the GCCE is to improve computational techniques and their applicability by bringing together the expertise of leading scientists from different areas.

Funding sources

  • LEAD project of TU Graz

Start: 01.01.2018


Augur - Augmented Reality for Measurement Instruments

Development of augmented reality support for measurement instruments, with a focus on measuring distances.
Funding sources
  • Österreichische Forschungsförderungsgesellschaft mbH (FFG)
  • VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH (VRVis)
Start: 01.02.2015

Simplify - Creation of Augmented Reality Support for User Manuals

The goal of this project is the creation of AR-supported user manuals for products manufactured and marketed by AVL List GmbH. Existing printed manuals are scanned and automatically processed to create AR instructions. WP1 deals with the creation of the required infrastructure, WP2 with two manuals describing products of AVL List GmbH in detail, and WP3 with the creation of many more of these manuals.

Funding sources

  • VRVis Competence Center

Start: 01.07.2015