The company has previewed a suite of features to enhance cognitive, vision and speech accessibility on the iPhone, iPad and Mac later this year
Trust Apple to raise the decibel levels months before its new range of products hits the market. The company has previewed a slew of features aimed at improving cognitive, vision and speech accessibility on its upcoming products, ranging from the iPhone 15 to the new iPads and MacBooks, expected in about three to four months' time.
The company also made a point of noting that the preview is intended to gather feedback from specially abled communities, especially since the headline feature, called Assistive Access, sits under the hood of both iOS and iPadOS. It is designed for people with cognitive disabilities and streamlines the interface on the iPhone and iPad.
Creating a world of easy communication
The focus is on simplifying the effort to speak with loved ones, share pictures and listen to music, and a good example is how Apple has merged the Phone and FaceTime apps into a single Calls experience. The design is also easier to consume, with large icons, increased contrast and clearer text labels – in fact, everything that should make a user more comfortable.
Of course, the company has kept its customization DNA intact by ensuring that these visual features can be tweaked to a user's preference and, once set, carried over to any app compatible with Assistive Access. Developers around the world also need to wake up to the needs of the specially abled, Apple seems to be saying.
Makes life easier for the visually impaired
The existing Magnifier app helps blind and low-vision users use their device to locate nearby doors, people or signs. Now the company is introducing a new feature called Point and Speak, which uses the camera and LiDAR scanner to help visually challenged users interact with physical objects that carry several text labels.
For example, if a visually impaired user needs to heat up food in a microwave, they could use this feature to distinguish between the various buttons and choose the right one for the dish they want heated. When the device identifies the text, it reads it out loud; for now it can read English, French, Italian, German, Spanish, Portuguese, Chinese, Cantonese, Korean, Japanese and Ukrainian.
Now create your own personal voice, not Siri
Another interesting feature is called Personal Voice, which creates an automated voice that sounds just like the user rather than the standard Siri. It is designed for people at risk of losing their ability to speak from conditions such as ALS. A user generates their Personal Voice by reading randomly chosen text prompts clearly into the microphone for about 15 minutes. Thereafter, machine learning takes over and the audio is processed locally on the Apple device to create the Personal Voice.
Though there is a fear that such a repository could spell disaster in the wrong hands, Apple is adamant that the custom voice data is never shared externally, not even with Apple itself. The company confirms that a Personal Voice is not even connected to one's Apple ID, which means a user can create it on one device and share it only with their other devices.
Another feature is Live Speech, available across Apple devices, which allows people to type what they want to say and have it spoken aloud. The tool is available on the lock screen and can also be used in apps such as FaceTime. There is also a facility to store preset phrases in Live Speech.
In addition to these new features, Apple is upgrading its speech-to-text tools: Voice Control now adds phonetic text editing, making it easier for those who type with their voice to quickly correct errors. This feature will be available in English, French, Spanish and German for now, the company says.