Wondering if anyone has tips for building accessibility functionality for visually impaired users.
I’m aware of different ways to trigger audio playback of content, whether that’s through pre-canned text-to-speech menu readouts, or piping content through a text-to-speech model (note that OpenAI’s Whisper goes the other direction — it ingests speech and produces text, so it’s more useful for voice input than for readouts).
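For the pre-canned readout case, here’s a minimal sketch assuming a browser environment with the Web Speech API (`speechSynthesis`); the menu shape and function names are just illustrative:

```javascript
// Pure helper: turn a list of menu items into one readout string,
// announcing position so the listener knows where they are in the list.
function buildReadout(items) {
  return items
    .map((label, i) => `${label}, item ${i + 1} of ${items.length}`)
    .join('. ');
}

// Speak the readout, cancelling anything already queued so the audio
// always reflects the current screen state rather than stale text.
function speakReadout(text) {
  if (typeof speechSynthesis === 'undefined') return; // no-op outside browsers
  speechSynthesis.cancel();
  speechSynthesis.speak(new SpeechSynthesisUtterance(text));
}

// Usage: speakReadout(buildReadout(['Play', 'Settings', 'Quit']));
```

Cancelling before speaking matters more than it looks — without it, rapid navigation queues up a backlog of stale announcements.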
However, I’m looking for a bit more detail from anyone who has found success with:
- Integrating a third-party screen-reader service.
- Structuring content for voice navigation.
- Selecting items, buttons, or content on a screen, and moving focus from one item to the next.
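On the last point, one common approach (assuming a browser DOM — the element handling below is illustrative, though the index math is plain JavaScript) is the roving-tabindex pattern, where exactly one item in a group is tabbable and arrow keys move focus within it:

```javascript
// Pure helper: next index for ArrowUp/ArrowDown style navigation,
// wrapping around at both ends of the list.
function nextIndex(current, delta, count) {
  return ((current + delta) % count + count) % count;
}

// Roving tabindex: the active item gets tabindex=0, all others
// tabindex=-1, and focus follows the active index so screen readers
// announce the newly focused item automatically.
function moveFocus(items, current, delta) {
  const next = nextIndex(current, delta, items.length);
  items[current].tabIndex = -1;
  items[next].tabIndex = 0;
  items[next].focus();
  return next;
}
```

The payoff of this pattern is that Tab enters and leaves the whole group as a single stop, which keeps the tab order short on screens with many controls.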
Personally, I can think of lots of ways to do this, but I don’t want to reinvent the wheel. I’m curious whether anyone has designed this functionality, which methods worked best, and what pitfalls to watch out for. Thanks!