Earlier this month, Google’s I/O conference brought together developers from around the globe for talks and hands-on learning with experts at Google. It also gave a first look at Google’s latest products for developers.
In his keynote address, Sundar Pichai (CEO, Google) highlighted ongoing projects that would enable people with disabilities to live more independently. To this end, the tech giant is focusing on using advances in Artificial Intelligence (AI) and voice recognition to build increasingly accessibility-oriented products and apps.
These announcements really excited us. Here are Team Avaz’s top 3 picks from the I/O accessibility updates.
Most aspects of our lives involve communicating with others and, in turn, being understood by them. Most of us take this for granted, yet it can be extremely challenging for people with speech disabilities arising from various conditions. To support them, Google is working on technology that trains computers and mobile phones to understand their speech better. The project is being developed in partnership with the ALS Therapy Development Institute and the ALS Residence Initiative, and involves recording the voices of people who have ALS and using those recordings to train AI models.
The AI algorithms that are currently under development work only for English speakers and for challenges typically associated with ALS. However, Google hopes the research can be applied to larger groups of people with different speech impairments in the near future.
Live Transcribe is easily one of the best features Google is developing. It enables people who are deaf or hard of hearing to follow what is being said, in real time: the app transcribes everything it hears so the user can read along and reply. Live Transcribe will be made available in beta soon, first on Pixel 3 and then on other devices.
Live Relay enables people who are deaf or hard of hearing to make and receive phone calls. The feature uses on-device speech recognition and text-to-speech conversion, allowing the phone to listen and speak on the user’s behalf while they type. Because the responses are instant and use predictive writing suggestions, typing is fast enough to hold a synchronous phone call.
It can also be used by people who are in a meeting or otherwise unable to take a call, but are able to type instead.
To read about these and the other updates in greater detail, you can visit here. 2019 promises to be an exciting year for Google. The accessibility updates have given us a lot to innovate and build on.
What do you think of these updates and the possibilities they bring? Did any particular update catch your eye? We would love to hear from you at firstname.lastname@example.org or in the comments below!
Let us work together in Making Every Voice Heard!
Picture Credits – Google