Naimul Mefraz Khan
Software Systems
With the advent of AI, computer vision, and wearable devices, it is now possible to build a miniature wearable device (think Google Glass) that helps hearing-impaired people navigate day-to-day life more easily.
The objective is to create a Google Glass-style wearable augmented-reality device that transcribes and categorizes the sound picked up by a microphone and projects the result onto the display.
1. The device should have an on-board microphone to capture audio in real time.
2. The wearable computer should be able to perform two tasks: speech-to-text transcription and sound categorization.
3. The speech-to-text and categorization output should be shown on the AR glass display.
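The requirements above amount to a capture → analyze → display loop. A minimal sketch of that loop is shown below; all names here are illustrative placeholders, and the transcriber and classifier are assumed to be supplied as callables (e.g. wrappers around off-the-shelf models):

```python
from typing import Callable, Iterable, Tuple

def caption_stream(
    chunks: Iterable[bytes],
    transcribe: Callable[[bytes], str],   # hypothetical speech-to-text wrapper
    categorize: Callable[[bytes], str],   # hypothetical sound-category wrapper
) -> Iterable[Tuple[str, str]]:
    """Run both tasks on each microphone chunk and yield
    (caption, category) pairs, ready to render on the AR display."""
    for chunk in chunks:
        yield transcribe(chunk), categorize(chunk)
```

On the actual device the chunk source would be the on-board microphone and the consumer would be the display driver; the sketch only shows how the two analysis tasks share one audio stream.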
1. Raspberry Pi-based AR display (look for transparent OLED displays for the Pi)
2. Create a housing to hold the Pi, display, and microphone.
3. Machine learning/deep learning algorithms for sound categorization (look up "audio classification").
4. For speech-to-text, off-the-shelf solutions can be used.
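A production system would use a trained deep audio-classification model, but the core idea can be illustrated with a minimal sketch: reduce a waveform to an averaged spectrum feature vector and assign the nearest labelled prototype. The function names and the nearest-centroid approach below are illustrative assumptions, not a prescribed design:

```python
import numpy as np

def spectral_features(wave, n_fft=512):
    """Crude feature vector: windowed magnitude spectra averaged
    over overlapping frames, normalized to unit length."""
    window = np.hanning(n_fft)
    frames = [wave[i:i + n_fft] for i in range(0, len(wave) - n_fft, n_fft // 2)]
    spec = np.mean([np.abs(np.fft.rfft(f * window)) for f in frames], axis=0)
    return spec / (np.linalg.norm(spec) + 1e-9)

def classify(feature, centroids):
    """Nearest-centroid classification over labelled prototype features."""
    return min(centroids, key=lambda label: np.linalg.norm(feature - centroids[label]))
```

A real deployment would replace this with a learned classifier (e.g. a small CNN on log-mel spectrograms), but the interface stays the same: waveform in, category label out.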
See below
1. Study the OLED display and configure the housing
2. Sound categorization
3. Speech-to-text
4. Integration
NMK06: Augmented reality smart glass for the hearing impaired | Naimul Mefraz Khan | Thursday September 1st 2022 at 10:14 PM