Visual disability is one of the most serious impairments that can afflict an individual. Although the World Health Organization (WHO) estimates that 80 percent of all visual impairment is preventable or even curable, blindness and partial sight remain a serious problem worldwide. Alongside the great efforts in medicine, neuroscience, and biotechnology to find an ultimate cure, technology can support visually impaired people with basic functionalities, such as the ability to navigate and recognize their surroundings independently, thereby improving their quality of life and allowing better integration into society. This objective is ambitious but not out of reach, thanks to recent technological advances.

Usually, blind people can move independently only along routes they have already learned with the help of a sighted guide, a white cane, or a guide dog. In recent years, efforts have been made to develop devices for assisting blind people.

Electronic support systems, offering different levels of environmental information and relying on different types of sensors, have been developed. These systems employ sensors for obstacle avoidance such as laser pointers, robots attached to the tip of a white cane, video cameras, or GPS-based navigation aids. While these devices can detect nearby obstacles and give partial instructions to help blind people avoid them, they have not been adequately oriented toward wayfinding. Moreover, these solutions have mainly been adopted for outdoor navigation.

Still, much remains to be done, since:

  1. Only a few, unconsolidated solutions are available worldwide.
  2. Such solutions are mostly oriented toward obstacle detection.
  3. They have been conceived for outdoor navigation.
  4. They do not provide the user with information about the surrounding environment.

In this context, we propose in this project a new design that incorporates guidance and recognition capabilities, which have very rarely been coupled in current navigation systems, into a single prototype. The tool is designed for indoor use and is fully based on computer-vision technologies. The system comprises a portable camera attached to a wearable jacket, a processing unit, and a headset for commands and feedback. The navigation system is launched as soon as the prototype is powered on, and keeps instructing the blind person as he/she moves across the indoor environment. To avoid information flooding, the recognition system is activated only upon the user's request. The prototype was implemented and tested in an indoor environment, showing good performance in terms of both navigation and recognition accuracy.

Our prototype helps blind people move autonomously and recognize objects and people in indoor environments. It accommodates two complementary units: (i) a guidance system and (ii) a recognition system. The former works online: it guides the blind person through the indoor environment from his/her current location to the desired destination while avoiding both static and moving obstacles. The latter, by contrast, works on demand. The whole prototype is based on computer-vision and machine-learning techniques.
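The interaction between the two units can be illustrated with a minimal control-flow sketch. This is not the project's actual implementation: all class and function names below are hypothetical, and the vision and audio processing is replaced by placeholder strings. The sketch only shows the stated design: guidance runs online on every camera frame, while recognition is invoked solely on user request, so the user is not flooded with information.

```python
# Hypothetical sketch of the prototype's control flow.
# Assumptions (not from the source): class names, method names, and the
# use of plain strings in place of real camera frames and audio feedback.

class GuidanceUnit:
    """Online unit: issues a navigation instruction for every frame."""
    def next_instruction(self, frame):
        # A real system would run computer-vision processing here;
        # a canned instruction stands in for it.
        return "move forward"

class RecognitionUnit:
    """On-demand unit: describes objects/people only when asked."""
    def describe(self, frame):
        # Placeholder for object/person recognition on the current frame.
        return "door ahead, person on the left"

def run(frames, recognition_requests):
    """Process a stream of frames; recognition_requests holds the frame
    indices at which the user pressed the 'describe' command."""
    guidance, recognition = GuidanceUnit(), RecognitionUnit()
    feedback = []
    for i, frame in enumerate(frames):
        # Guidance works online: one instruction per camera frame.
        feedback.append(guidance.next_instruction(frame))
        # Recognition works on demand, avoiding information flooding.
        if i in recognition_requests:
            feedback.append(recognition.describe(frame))
    return feedback

print(run(["f0", "f1", "f2"], {1}))
```

Keeping recognition behind an explicit request, as in the `if i in recognition_requests` branch, is what separates the continuous audio guidance stream from the occasional scene descriptions.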

The project involved research units from the University of Trento (Italy) and Ain Shams University (Egypt), as well as associations of visually impaired people. It lasted three years and was organized into four main work packages.