In 2020, 43.2 million people worldwide were affected by blindness and a further 295.3 million lived with mild to severe visual impairments. Impaired visual perception confronts those affected with a wide range of challenges, often limiting their participation in everyday life and reducing their quality of life.
The research project LOMOBI aims to develop an interactive assistance system for the everyday navigation of visually impaired people. In contrast to the previously developed demonstrator, the system covers both outdoor and indoor environments (see picture).
The aim of this work is to evaluate and implement a neural network for segmenting the environment into walkable, less walkable, and non-walkable surfaces, so that the navigation algorithm can plan a suitable path. To distinguish between these surface classes, corresponding training data must first be recorded and labeled, and a suitable segmentation network trained on it. The trained network is then to be integrated into ROS2 and deployed on a mobile computing unit (Jetson Xavier NX).
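As a rough illustration of how the segmentation output could feed the path planner, the per-pixel class decision can be converted into a traversal-cost map. This is only a sketch: the class indices and cost values below are assumptions, not part of the project's actual label scheme, which would be defined during data annotation.

```python
import numpy as np

# Hypothetical class indices for the three surface types
# (the real label scheme is fixed during data annotation).
WALKABLE, LESS_WALKABLE, NON_WALKABLE = 0, 1, 2

def logits_to_costmap(logits: np.ndarray) -> np.ndarray:
    """Convert per-pixel class scores of shape (H, W, 3) into a
    traversal-cost map of shape (H, W). Cost values are illustrative."""
    class_map = np.argmax(logits, axis=-1)       # per-pixel class decision
    cost = np.empty(class_map.shape, dtype=np.float32)
    cost[class_map == WALKABLE] = 0.0            # free to traverse
    cost[class_map == LESS_WALKABLE] = 0.5       # traversable at a penalty
    cost[class_map == NON_WALKABLE] = np.inf     # blocked for the planner
    return cost

# Tiny example: a 1x3 "image" with one pixel per class
logits = np.array([[[5.0, 1.0, 0.0],    # strongest score: walkable
                    [0.0, 4.0, 1.0],    # strongest score: less walkable
                    [0.0, 1.0, 6.0]]])  # strongest score: non-walkable
costmap = logits_to_costmap(logits)
print(costmap)
```

A planner such as A* could then treat infinite-cost cells as obstacles and weight paths across less-walkable cells accordingly.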
Notes / Requirements
Applicants should have basic knowledge of programming, deep learning, and image processing. The software module will be implemented in ROS2, so prior experience with C++ or Python and ROS is advantageous but not strictly required. The scope and objectives can be adapted to the applicant's prior knowledge.
Further information is available on request by email.