
Fire 2025

In Mixed Reality (MR), the physical world is enriched with digital 3D information by simultaneously determining the position of the headset and the 3D structure of the room (SLAM) using colour and depth cameras. With MR, emergency workers could navigate efficiently through a burning building despite heavy smoke (visibility < 1 m), because the 3D structure of the room is projected onto their visor. The merged depth images form a real-time 3D model of the building together with the positions of the emergency workers in it. Based on this information, intervention leaders can coordinate the operation better and locate the heat source more efficiently.
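The fusion step described above can be sketched minimally: each depth pixel is back-projected into camera space and transformed by the headset pose estimated by SLAM, so successive depth frames merge into one world point cloud. The camera intrinsics (fx, fy, cx, cy) and the pose values below are illustrative assumptions, not parameters of any particular headset.

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Pixel (u, v) with depth in metres -> 3D point in camera coordinates."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def to_world(point_cam, rotation, translation):
    """Apply the SLAM-estimated headset pose (R, t) to a camera-space point."""
    return rotation @ point_cam + translation

# Illustrative pose: camera axes aligned with the world, headset 1.5 m
# along the world z-axis.
R = np.eye(3)
t = np.array([0.0, 0.0, 1.5])

# The principal-point pixel at 2 m depth lands at world point [0, 0, 3.5].
p = backproject(320, 240, 2.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
print(to_world(p, R, t))
```

Repeating this for every pixel of every frame, and appending the results, yields the real-time point-cloud model of the building mentioned above.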

Recently, alongside the Microsoft HoloLens MR headset, several applications based on depth cameras have been developed. Robots and cars navigate autonomously using, for example, LiDAR and radar. The Microsoft Kinect recognizes human poses for entertainment via short-wave infrared projections. However, these depth sensors have their limitations: LiDAR and short-wave infrared are sensitive to disturbances such as smoke, and radar has a low resolution. Research into depth acquisition under smoke development has been limited to stereo matching on thermal images; however, the lack of texture in these images leads to poor results. The goal of our research is to improve depth acquisition by developing a sensor that introduces texture via the Time-of-Flight (ToF) and Structured Light (SL) principles at long-wave infrared wavelengths. In addition, we want to apply the latest computer vision algorithms to overcome the lack of texture. Based on these results, we want to test the efficiency of SLAM on these depth images and improve it where necessary.
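The Time-of-Flight principle named above reduces to a one-line relation: depth is the speed of light times half the round-trip travel time of an emitted pulse. A minimal sketch (the 13.34 ns example value is illustrative):

```python
# Time-of-Flight (ToF): depth from the round-trip travel time of a light pulse.
C = 299_792_458.0  # speed of light in m/s

def tof_depth(round_trip_time_s: float) -> float:
    """Depth in metres: the pulse travels to the surface and back."""
    return C * round_trip_time_s / 2.0

# A pulse returning after ~13.34 ns corresponds to roughly 2 m of depth.
print(tof_depth(13.34e-9))
```

In practice ToF cameras measure phase shifts of modulated light rather than raw pulse timings, but the depth relation is the same.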

Results and algorithms will be evaluated qualitatively, by visualizing the measured point clouds, and quantitatively, by measuring the impact of increasing smoke density on the measured depths.
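One hypothetical form the quantitative evaluation could take: compare depth maps captured at increasing smoke densities against a smoke-free reference of the same scene and report the mean absolute depth error. The function name and the sample depth values below are illustrative, not measured data.

```python
from statistics import mean

def mean_abs_depth_error(measured, reference):
    """Mean absolute difference (metres) between measured and reference depths."""
    return mean(abs(m - r) for m, r in zip(measured, reference))

reference  = [1.0, 2.0, 3.0]   # smoke-free depths at three pixels
with_smoke = [1.1, 2.3, 2.8]   # the same pixels measured under smoke

print(mean_abs_depth_error(with_smoke, reference))
```

Plotting this error against smoke density would quantify how quickly each sensing principle degrades.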