Problem / Objective

People with visual impairments face difficulties navigating the world, especially in public spaces, that are both obvious and often profound. Depending on others for assistance simply to get to work, go shopping, or meet friends seriously limits their lives and opportunities; what sighted people take for granted can become an enormous daily obstacle. Meanwhile, visual assistance systems for navigation remain fairly limited, ranging from Global Positioning System (GPS)-based, voice-assisted smartphone apps to camera-enabled smart walking sticks. These systems lack the depth perception necessary for independent navigation.

Solution / Approach

Artificial intelligence (AI) developer Jagadish K. Mahendran and his team designed an AI-powered, voice-activated backpack that can help the visually impaired navigate and perceive the world around them. The system is housed inside a small backpack containing a host computing unit, such as a laptop. A vest jacket conceals a camera, and a fanny pack is used to hold a pocket-size battery pack capable of providing approximately eight hours of use. A Luxonis OAK-D spatial AI camera can be affixed to either the vest or fanny pack, then connected to the computing unit in the backpack. Three tiny holes in the vest provide viewports for the OAK-D, which is attached to the inside of the vest.

The OAK-D unit is a versatile and powerful AI device built around an Intel Movidius VPU and the Intel Distribution of OpenVINO toolkit for on-chip edge AI inferencing. It can run advanced neural networks while providing accelerated computer-vision functions, a real-time depth map from its stereo camera pair, and colour information from a single 4K camera.
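A stereo depth map of the kind the OAK-D produces rests on a simple relationship: the nearer an object, the larger the pixel disparity between the left and right views. The sketch below illustrates that relationship only; the focal length and baseline are illustrative placeholder values, not the OAK-D's actual calibration.

```python
# Depth from stereo disparity: depth = focal_length_px * baseline_m / disparity_px
# Focal length and baseline below are illustrative, not real device calibration.

def depth_from_disparity(disparity_px, focal_length_px=800.0, baseline_m=0.075):
    """Return depth in metres for a given pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# A nearer object produces a larger disparity:
near = depth_from_disparity(60.0)  # ~1.0 m
far = depth_from_disparity(20.0)   # ~3.0 m
```

In a real device, per-pixel disparities come from matching the two camera views in hardware, and factory calibration supplies the focal length and baseline.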

A Bluetooth-enabled earphone lets the user interact with the system via voice queries and commands, and the system responds with verbal information. As the user moves through their environment, the system audibly conveys information about common obstacles including signs, tree branches and pedestrians. It also warns of upcoming crosswalks, curbs, staircases and entryways.
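The source does not describe the system's actual command set, but a voice interface of this kind typically maps a recognised phrase to a handler that produces a spoken reply. A minimal sketch, with entirely hypothetical command names and responses:

```python
# Hypothetical voice-command dispatcher. Command names, handlers, and
# responses are illustrative, not the actual interface of the system.

def describe_surroundings(_args):
    return "Person ahead, two metres. Crosswalk to your left."

def locate_saved_place(args):
    place = args or "home"
    return f"{place} is forty metres ahead."

COMMANDS = {
    "describe": describe_surroundings,
    "locate": locate_saved_place,
}

def handle_query(text):
    """Split a recognised utterance into verb + argument and dispatch it."""
    verb, _, args = text.strip().lower().partition(" ")
    handler = COMMANDS.get(verb)
    return handler(args) if handler else "Sorry, I did not understand."
```

The returned string would then be passed to a text-to-speech engine and played through the Bluetooth earphone.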

Impact / Implementation

The backpack helps detect common challenges such as traffic signs, hanging obstacles, crosswalks, moving objects and changing elevations, all while running on a low-power, interactive device. Visually impaired users can rely on the system for both indoor and outdoor navigation, as well as for understanding their local environment. Simple to put on and inconspicuous, it lets the wearer walk freely on public streets without attracting unwanted attention. Jagadish K. Mahendran won the grand prize in the OpenCV Spatial AI 2020 Competition, the world's largest spatial AI competition.
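One way such a system could turn a depth map into an audible warning is to find the nearest point closer than a threshold and report its rough direction. This is a simplified sketch under assumed values (the 1.5 m threshold, the grid, and the wording are all hypothetical); the real system fuses object detection with depth far more elaborately.

```python
# Sketch: warn about the nearest obstacle in a row-major depth map (metres).
# Threshold, grid, and message wording are illustrative assumptions.

WARN_DISTANCE_M = 1.5

def nearest_obstacle(depth_map_m):
    """Return (row, col, depth) of the closest valid point, or None."""
    best = None
    for r, row in enumerate(depth_map_m):
        for c, d in enumerate(row):
            if d > 0 and (best is None or d < best[2]):
                best = (r, c, d)
    return best

def proximity_warning(depth_map_m, width):
    """Produce a spoken-style warning for the nearest close obstacle."""
    hit = nearest_obstacle(depth_map_m)
    if hit is None or hit[2] > WARN_DISTANCE_M:
        return None  # nothing close enough to warn about
    _, col, depth = hit
    side = ("left" if col < width / 3
            else "right" if col >= 2 * width / 3
            else "centre")
    return f"Obstacle {depth:.1f} metres ahead, {side}."

# Example: a 2x3 depth grid with one close obstacle in the middle column.
grid = [[3.0, 1.2, 4.0],
        [2.5, 3.1, 0.0]]  # 0.0 marks an invalid depth reading
```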


Sources of Case study

Source: Intel



DISCLAIMER

The information provided on this page has been procured through secondary sources. In case you would like to suggest any update, please write to us at support.ai@mail.nasscom.in