Technology Development: Making Accessibility More Accessible


By: Lana Wehbeh

The role of technology in accessibility is one of endless possibility, and one where we have already seen massive developments. Whether in improving the quality of life of disabled users or in reframing the limitations of disability, the social impact of these technologies cannot be ignored, and their development must continue as we push for a more equitable world.

Accessible technology covers a wide range of tools, so some narrowing of focus is necessary. With the rise of audiobooks and voice assistants, assistive devices in particular show strong potential for further improvement.

Devices that aim to support a person with hearing loss or with voice and speech disorders fall under the umbrella of assistive technology. The term can be, and often is, used to refer to tools that support other disabilities or disorders, but for the purposes of this discussion it will mainly refer to the narrower definition above. With Canadians spending an average of 4.4 hours a day on mobile apps in 2021, assistive technologies are no longer uncommon and have become standard in some areas (Somos, 2022).

Take, for example, standard word processors, which include features such as text-to-speech and speech-to-text. Found in Google Docs, Microsoft Word, and many other processors, these may not immediately stand out as assistive technology, but their underlying mechanisms, developed in the field of speech synthesis, are directly tied to those of Augmentative and Alternative Communication (AAC) devices, which serve as tools for many people with communication disorders (U.S. Department of Health and Human Services).
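To make the connection concrete, here is a minimal sketch of a "read aloud" feature of the kind a word processor might expose, using the open-source pyttsx3 Python library to drive the operating system's speech synthesizer. It is an illustration only; the function name, default rate, and sample sentence are hypothetical, and commercial word processors and AAC devices use far more sophisticated synthesis pipelines.

```python
# Minimal read-aloud sketch using the pyttsx3 library (an illustrative
# stand-in; real word processors and AAC devices use their own engines).
import pyttsx3


def read_aloud(text: str, words_per_minute: int = 150) -> None:
    """Speak the given text through the system's built-in speech synthesizer."""
    engine = pyttsx3.init()                        # selects a platform driver (SAPI5, NSSS, eSpeak)
    engine.setProperty("rate", words_per_minute)   # slower rates can aid comprehension
    engine.say(text)
    engine.runAndWait()                            # blocks until the utterance finishes


if __name__ == "__main__":
    read_aloud("Read-aloud features are built on the same speech synthesis as AAC devices.")
```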

Likewise, popular calling apps like Google Meet and Discord include the option of reducing background noise in calls, relying on digital noise reduction schemes whose development can be traced alongside that of hearing aids. Today's assistive listening devices (ALDs), which amplify specific sounds and minimize background noise, use the very same schemes that have become so helpful in online communication (Bentler & Chiou, 2006). That is to say, the development of these assistive devices is intrinsically linked to that of their online counterparts, and the connection between accessibility developments and an increasingly online-oriented society underlines the need to make accessibility a standard.
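As a rough illustration of what a "noise reduction scheme" involves, the Python sketch below applies classic spectral subtraction: it estimates the noise floor from a noise-only stretch of audio and subtracts it from every frame. This is a simplified example under my own assumptions (the function name, frame size, and the idea that the first half-second is pure noise), not the algorithm any particular hearing aid or calling app uses; modern devices rely on adaptive and machine-learned filters.

```python
# Spectral subtraction: a simplified, classic digital noise reduction scheme.
# Assumes the first `noise_seconds` of the recording contain background noise only.
import numpy as np
from scipy.signal import stft, istft


def reduce_noise(audio: np.ndarray, sample_rate: int,
                 noise_seconds: float = 0.5, frame_size: int = 512) -> np.ndarray:
    """Suppress stationary background noise in a mono signal."""
    # Short-time Fourier transform: the signal's frequency content over time.
    _, _, spectrum = stft(audio, fs=sample_rate, nperseg=frame_size)
    magnitude, phase = np.abs(spectrum), np.angle(spectrum)

    # Estimate the noise floor from the leading, noise-only frames.
    hop = frame_size // 2                                    # default stft hop length
    noise_frames = max(1, int(noise_seconds * sample_rate / hop))
    noise_floor = magnitude[:, :noise_frames].mean(axis=1, keepdims=True)

    # Subtract the estimate from every frame, clipping magnitudes at zero.
    cleaned = np.maximum(magnitude - noise_floor, 0.0)

    # Rebuild the waveform using the original phase.
    _, denoised = istft(cleaned * np.exp(1j * phase), fs=sample_rate, nperseg=frame_size)
    return denoised
```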

The main developments in this field, beyond the mechanical aspects, lie in natural language processing, a field that combines artificial intelligence, linguistics, and computer science to model human language on computers. Advances in natural language processing result in more accurate captioning software (think auto-generated captions), more fluid text-to-speech output, and better identification of the different elements of language, which in turn improves autocorrect. Linguistics is especially powerful in categorizing and contextualizing the elements of speech in a conversation to maximize understanding even when the input is difficult to decipher, such as when a person is speaking in a loud room (Bentler & Chiou, 2006).

Tools fine-tuned through improvements in this area are handy for many, regardless of ability. Auto-generated captions help even those who have no trouble hearing, and many people simply prefer to dictate rather than type. However, these seemingly minor differences in quality make a much larger impact for those who rely on such tools as their sole means of communication. Improved fluency in a text-to-speech reader, for example, is a huge quality-of-life gain for anyone who depends on it to communicate.
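For a sense of how short the path is from these models to an accessibility feature, here is a minimal auto-captioning sketch using the open-source SpeechRecognition Python package and its free Google Web Speech backend. The file name is hypothetical and the output is a plain transcript; production captioning systems add timing, punctuation restoration, and speaker labels on top of the raw recognition step.

```python
# Minimal auto-captioning sketch using the SpeechRecognition package
# (an illustration of speech-to-text, not any platform's actual pipeline).
import speech_recognition as sr


def caption_audio(path: str, language: str = "en-US") -> str:
    """Return a rough transcript of a WAV, AIFF, or FLAC file."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(path) as source:
        # Sample the ambient noise level so quieter speech is still picked up.
        recognizer.adjust_for_ambient_noise(source, duration=0.5)
        audio = recognizer.record(source)
    try:
        return recognizer.recognize_google(audio, language=language)
    except sr.UnknownValueError:
        return "[inaudible]"


if __name__ == "__main__":
    print(caption_audio("lecture_clip.wav"))   # hypothetical example file
```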

With these assistive devices’ presence and relevance established, the question arises: how can these accessibility features become a key consideration for technology developers in their designs?

This can be addressed using the development history of the most common assistive tools for those who struggle with hearing: closed captions and subtitles. It is first important to note that the two terms are not interchangeable. Subtitles are typically meant for viewers who are not native speakers of the language of the audio; the text is only a translation of the dialogue. Closed captions are a more comprehensive version, and the more helpful one for those with hearing difficulties: they include the dialogue as well as descriptions of environmental sounds, such as background noises and shifts in the music (“Captions/subtitles”).

The development of captions and subtitles was a natural advancement, stemming from how films began as silent movies. These first films, which had no sound and were thus accessible to hard-of-hearing audiences, used “intertitles”: text that preceded or followed a scene to convey dialogue or important information. With the rise of sound in movies in the 1920s, silent films began to fall out of fashion and accessibility became an issue again. Pushback against this change grew, but caption development was limited while the techniques to embed captions in film were still being invented. In the U.S., where many of these developments were centered, the Television Decoder Circuitry Act of 1990 marked the first requirement that televisions be able to decode captions (“The history of closed captioning”, 2022).

Now, digital captions are often auto-generated, and the past decade has seen marked improvements in the prevalence of captioning and subtitles, especially with the U.S. Federal Communications Commission mandating their inclusion in many online spaces. With many online platforms in the Western world headquartered in the U.S., subtitles and captions have become commonplace. Simply put, many assistive tools benefit the population as a whole, even if they are not as much of a necessity for most people as they are for those who rely on them in their day-to-day lives (Gernsbacher, 2015).

71% of students without hearing difficulties report using captions at least some of the time, a major motivation being that captions significantly improve focus and comprehension. A 2015 study further corroborates that comprehension increases significantly across children with hearing difficulties, children without, and non-native speakers of the video’s language (Gernsbacher, 2015). This pattern of assistive tools supporting the general population tends to be a huge motivator for their inclusion, and with studies showing that they improve engagement, their growing presence and ease of use on social media and other online platforms is natural, both from an accessibility-regulation standpoint and from a business standpoint (Klein, 2022).

However, though accessibility guidelines exist and inclusivity has undoubtedly become much more of a focus, it would still be difficult to say that the world, digital or physical, is fully accessible. Niche supports that offer little benefit to the general population beyond those with accessibility needs receive significantly less development and are therefore much more expensive, which puts them out of reach of the very people who need them most.

The past decade has seen a mass of technological advancement, and these advances should also contribute to social impact. Assistive technology makes a significant difference even to those who do not strictly need it. The push for accessibility standards has to come while universal standards in technology are still being developed, lest the opportunity to make accessibility truly accessible be missed.

Works Cited

Alternative text. (n.d.). WebAIM. Retrieved December 12, 2023, from https://webaim.org/techniques/alttext/

Bentler, R., & Chiou, L.-K. (2006, June). Digital noise reduction: An overview. Trends in Amplification. Retrieved December 19, 2023, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4111515/

Captions/subtitles. (n.d.). Web Accessibility Initiative (WAI). Retrieved December 19, 2023, from https://www.w3.org/WAI/media/av/captions/

Gernsbacher, M. A. (2015, October). Video captions benefit everyone. Policy Insights from the Behavioral and Brain Sciences. Retrieved December 19, 2023, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5214590/

Klein, R. (2022, November 5). How many people use captions and subtitles? 3Play Media. Retrieved December 13, 2023, from https://www.3playmedia.com/blog/who-uses-closed-captions-not-just-the-deaf-or-hard-of-hearing/

Somos, C. (2022, January 12). Canadians spent 4.4 hours on mobile apps a day in 2021: Report. CTVNews. Retrieved December 19, 2023, from https://www.ctvnews.ca/sci-tech/canadians-spent-4-4-hours-on-mobile-apps-a-day-in-2021-report-1.5737073

The history of closed captioning: The analog era to today. (2022, August 10). Rev. Retrieved December 14, 2023, from https://www.rev.com/blog/caption-blog/history-of-closed-captions

U.S. Department of Health and Human Services. (n.d.). Assistive devices for people with hearing, voice, speech, or language disorders. National Institute of Deafness and Other Communication Disorders. Retrieved December 12, 2023, from https://www.nidcd.nih.gov/health/assistive-devices-people-hearing-voice-speech-or-language-disorders

Assistive tech: A universal approach to design. (n.d.). Mailchimp Courier. Retrieved March 17, 2023, from https://mailchimp.com/courier/article/assistive-tech/
