“I See [Online]” Said the Blind Man

Whether you’re aimlessly scrolling through Sarah Silverman’s Twitter feed or languidly thumbing through your childhood friend’s marriage proposal album, one thing you likely aren’t considering is your seamless intake of this visual information. A platform like Facebook, built around the sharing of millions of photos and countless options for interaction, is understandably complex to make accessible for a blind user. These users can rely on a screen reader, software that describes aloud the various elements displayed on a screen. Still, roadblocks appear in the form of website designs that keep the software from working; for instance, the reader may detect “button, button, button” instead of indicating what each button does. For those who can’t hear or see, a lack of effective tools for navigating the internet, particularly social media platforms, where countless interactions play out every second, can contribute to feelings of exclusion.

Each of the tech giants has recently rolled out new features in hopes of making the greater social conversation accessible to all.

[Image: Photo via Flickr]

Deep Learning

The accessibility horizon broadened with a feature that went live on Facebook for iOS on April 5th: imagine, a photo of an In-N-Out burger you shared to evoke jealousy in your East Coast friends can now be detected and described by an algorithm. More specifically, it’s an application of machine learning, in which algorithms learn to make predictions from data. Deep learning, the branch of machine learning behind this feature, relies on artificial neural networks (ANNs), which “are used to estimate or approximate functions that can depend on a large number of inputs and are generally unknown.” These networks are presented as “systems of interconnected ‘neurons’ which exchange messages between each other,” and they learn from experience.
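To make the “neurons that learn from experience” idea concrete, here is a minimal sketch, not Facebook’s actual model: a single artificial neuron (a logistic unit) trained by gradient descent on invented two-feature data, where label 1 stands in for “burger.” Real image classifiers stack millions of such units into deep networks.

```python
import math

# Toy illustration: one "neuron" learning to separate two classes.
# All data and labels below are invented for the example.
weights = [0.0, 0.0]
bias = 0.0
lr = 0.5

# Hypothetical training data: [feature1, feature2] -> label (1 = "burger")
data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]

def predict(x):
    """Sigmoid activation of a weighted sum: output between 0 and 1."""
    z = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 / (1 + math.exp(-z))

# Repeated exposure to examples is the "learning from experience" part.
for _ in range(1000):
    for x, y in data:
        err = predict(x) - y  # gradient of the log-loss w.r.t. z
        weights[0] -= lr * err * x[0]
        weights[1] -= lr * err * x[1]
        bias -= lr * err

print(round(predict([0.85, 0.9])))  # high-signal input -> 1
print(round(predict([0.15, 0.1])))  # low-signal input -> 0
```

After training, the neuron confidently separates the two toy classes; a deep network does the same thing at vastly larger scale, with raw pixels as input.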


This means that with enough exposure to photos of burgers, the “automatic alternative text” feature that Facebook’s accessibility team has developed will be able to identify burgers in future posted photos. The feature’s growth will be aided by Facebook’s enormous scale: users upload a combined 2 billion photos a day across all of its products. If the feature can identify the contents of a photo with at least 80 percent certainty, it suggests a tag and then taps into the iPhone’s VoiceOver feature to read the description aloud to users. At this stage the descriptions are limited to what the system is familiar with, and they are rather simplistic (think “sunset” or “smiling”), but through machine learning the feature will become more accurate and intelligent with time.
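The thresholding step described above can be sketched in a few lines. This is a hypothetical illustration, not Facebook’s code; the tag names and confidence scores are invented.

```python
# Only suggest tags whose model confidence clears 80 percent.
THRESHOLD = 0.80

def alt_text(predictions):
    """predictions: {tag: confidence score between 0 and 1}."""
    tags = [tag for tag, score in sorted(predictions.items(),
                                         key=lambda kv: -kv[1])
            if score >= THRESHOLD]
    if not tags:
        return "Image may contain: photo"  # generic fallback
    return "Image may contain: " + ", ".join(tags)

# Example scores a classifier might emit for the burger photo
print(alt_text({"food": 0.97, "burger": 0.91, "sunset": 0.12}))
# -> Image may contain: food, burger
```

The resulting string is what a screen reader like VoiceOver would speak aloud; low-confidence guesses are simply dropped rather than risked.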

Though deep learning can yield beneficial capabilities, every startlingly futuristic development (like the 6-foot-tall humanoid robot created by Google-owned Boston Dynamics) brings concerns from science and tech experts about AI’s potentially threatening implications. Stephen Hawking, along with Elon Musk, Steve Wozniak, and hundreds of others, signed an open letter presented at the International Joint Conference on Artificial Intelligence in July 2015 that warned of artificial intelligence being potentially more dangerous than nuclear weapons.

So How Does It Work?

Twitter is taking its approach one step further on iOS and Android by allowing users to add descriptions to their own tweeted images as they post them, like “braille for photos.” Rather than relying on an algorithm to interpret an image, putting the responsibility on the photo’s owner should ideally lead to more detailed descriptions (up to 420 characters) that are specific to the photo. The descriptions can be read like any other text post by the user’s assistive technology.
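A minimal sketch of the mechanic, assuming a simple dictionary stands in for a tweet’s image attachment; the function name and data shape are hypothetical, not Twitter’s actual API, but the 420-character ceiling matches the limit mentioned above.

```python
# Hypothetical client-side check before attaching a description.
ALT_TEXT_LIMIT = 420  # Twitter's stated cap for image descriptions

def attach_description(image, description):
    """Attach a human-written description, enforcing the length limit."""
    if len(description) > ALT_TEXT_LIMIT:
        raise ValueError(
            f"Description is {len(description)} characters; "
            f"limit is {ALT_TEXT_LIMIT}.")
    image["alt_text"] = description  # read aloud by screen readers
    return image

photo = attach_description(
    {}, "A golden retriever catching a frisbee mid-air in a sunny park.")
print(photo["alt_text"])
```

Because a person wrote the text, the screen reader output can carry detail no classifier would attempt, which is exactly the trade-off the article describes.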

Of course, while this may allow for more accuracy, the difficulty is in getting Twitter’s users to adopt the feature and take the time to add descriptions to all the images they upload. Twitter user Michelle Hackman, who describes herself as part of the “large but often neglected chunk of internet users” who would benefit from these new accessibility features, is skeptical that the greater Twitter community will use the feature enough for it to make a notable difference in the overall user experience. She points to the indirect process of enabling alt text as a deterrent for a casual user: users must seek out the feature through Twitter’s accessibility settings, rather than being prompted to enable it within the app.

[Image: Photo via @GoMedia Twitter]

One potential incentive for users to include alt text with images is that it can function as a kind of metadata, making it easier for search engines to index specific tweets. Image content is among the elements that major search engines consider, so descriptive alt text could help a tweet surface in search results. Still, it will most likely be developers and publishers, like the BBC and the New York Times, who make use of the feature on their tweets in order to reach a larger audience.
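The metadata point can be illustrated with a toy inverted index: once a description is attached, its words become searchable alongside the tweet’s text. The tweets below are invented for the example; real search engines are vastly more sophisticated, but the principle is the same.

```python
from collections import defaultdict

# Invented sample tweets; "alt_text" is the human-written description.
tweets = [
    {"id": 1, "text": "Game day!", "alt_text": "fans waving team flags"},
    {"id": 2, "text": "Lunch", "alt_text": "cheeseburger with fries"},
]

# Build an inverted index over tweet text AND image descriptions.
index = defaultdict(set)
for tweet in tweets:
    for word in (tweet["text"] + " " + tweet["alt_text"]).lower().split():
        index[word].add(tweet["id"])

# "cheeseburger" never appears in any tweet's text, only in alt text,
# yet the query still finds tweet 2.
print(sorted(index["cheeseburger"]))  # -> [2]
```

This is why alt text benefits publishers even aside from accessibility: it exposes image content to keyword search that would otherwise see only the caption.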

Another technological development that aims to promote inclusiveness, though currently only in a beta English release, is Google’s Voice Access app, which enables users to control their Android phones entirely through voice commands. Basic voice commands handle navigation, such as “Go home” or “Go back.” Important buttons are overlaid with numbered labels so that the user can “tap” on nearly everything with their voice. Similarly, while composing a Google Doc to share with coworkers, you can now use your voice to direct the program to type, edit, and format. This is a good example of a feature that is easily accessed and enabled by users for varied purposes: one user may want to jot down notes quickly using hands-free dictation in a one-off situation, while another may be physically unable to use a keyboard.
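The numbered-overlay idea reduces to a simple dispatch: every on-screen control gets a number, and a spoken phrase like “tap 3” maps back to that control’s action. The sketch below is a hypothetical simplification; the command strings and control labels are invented, not Voice Access’s actual grammar.

```python
# Hypothetical numbered overlay: control number -> visible label.
controls = {1: "Compose", 2: "Search", 3: "Settings"}

def handle(command):
    """Map a recognized voice command to a UI action description."""
    command = command.lower().strip()
    if command == "go home":
        return "navigating: home screen"
    if command == "go back":
        return "navigating: previous screen"
    if command.startswith("tap "):
        target = command[len("tap "):]
        # "tap 3" uses the numbered label painted over the control
        if target.isdigit() and int(target) in controls:
            return f"tapping: {controls[int(target)]}"
        # otherwise try matching a control by its visible name
        for label in controls.values():
            if label.lower() == target:
                return f"tapping: {label}"
    return "command not recognized"

print(handle("Tap 3"))    # -> tapping: Settings
print(handle("go home"))  # -> navigating: home screen
```

Numbering sidesteps the hardest speech-recognition problem, ambiguous or unlabeled controls, because a digit is easier to recognize and disambiguate than an arbitrary button name.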

The Future Is Now

As the internet and technology continue to develop at a rapid pace, it’s vital for tech companies to keep all types of users in mind and to integrate features that make accessibility natural and intuitive, rather than relegating them to a space where the user has to hunt for them. There’s incentive for the companies themselves: they can grow their reach and user base by bringing into the fold those who’ve previously experienced technological exclusion. The more these kinds of features become embedded in the programs and technologies people use day-to-day, the greater the interconnectivity that can be fostered between communities to create an inclusive space. Still, with technologies that rely on machine learning and enable the automation of certain jobs, it is worth emphasizing that we should continue to steer these advancements toward societal good.
