Introduction to Creating Accessible Web Content
Posted October 04, 2016
Creating assistive technologies that work seamlessly with all computing systems has always been a daunting task: these technologies must keep pace with an ever-evolving set of systems. The task becomes even more complex when you factor in the dynamic nature of the internet and its hundreds of millions of content creators. To fully appreciate the challenge of creating accessible systems, it helps to see how we got to this point.
From The Terminal to a Window
Think back to the days of text-based operating systems such as DOS. With those systems, characters were written to a screen and a cursor maintained a pointer reference to a position in the screen’s output buffer. Assistive technologies had only to look at this buffer to understand and manipulate the outputs. Words could be read, fonts could be magnified, and user-driven inputs were minimal.
This all changed with the advent of graphical user interfaces like Windows. Information was no longer delivered to a text-based screen output buffer. Information was now represented as pictures drawn on the screen. Buttons, dropdowns, and toolbars now dominated the computer’s visual output. These new interfaces were often built from complex object libraries and interpreting these interfaces from any system other than the primary operating system was extremely difficult. Combine this with the prominence of a new interactive input device, the mouse, and it is quite easy to see the challenges that exist for creating effective assistive technologies.
In the 1990s, assistive technologies had a difficult time keeping up with these new dynamic interfaces. As more and more visual computing systems were introduced, assistive technologies faced the difficult task of interpreting the meaning behind visual object calls, a process fraught with error. Inevitably, during this time, assistive technologies were always one step behind emerging technology. Only after new technology was released could any attempt to interpret its information begin.
Enter The Accessibility API
By the turn of the century, operating systems were beginning to introduce accessibility APIs to assist accessibility engineers. These APIs allowed content creators and operating systems to pass the information required for interpretation directly to the assistive technology. While different operating systems implement different API structures, generally any accessibility API provides the following information about an object:
- Name (What is the accessible label for this object?)
- Role (What is the purpose of this object? Is it a selector, a button, a header?)
- State (Is the object active, disabled, hidden, focused?)
- Value (What is the current value of this object, such as the text in an input field?)
Even better, assistive technology engineers no longer had to wait for technology to be released before attempting to interpret the meaning of objects. Now, they could query the accessibility API and receive standards-based responses to those queries, and all new content could be interpreted using this standard.
While the accessibility API offered enormous improvement and advancement in the development of assistive technologies for visual computing systems, the popularity of the Internet and the World Wide Web presented new challenges for this technology.
Accessibility API and the Web
In the early days of the Internet, websites were static, built from HTML markup that was used to deliver content to a web browser. The browser would interpret the markup and draw the information in the browser window. At the time, markup was fairly limited; developers could add text, images, tables, and simple forms. Once these simple webpages were drawn, a full-page POST (or GET) request was required to obtain any new content. Much like the early days of text-based computing, assistive technologies placed web content into a virtual buffer on page load, allowing them to engage with the content.
Like operating systems and native applications, web browsers support the accessibility API, and that support goes a long way toward providing accessibility for static web content. After all, when developers use standard markup for buttons, tables, images, and inputs, assistive technologies can understand the role of each element. Attributes of these elements can define other characteristics like name, value, and state.
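For example, when a developer uses a native form control, the browser can derive the element's role, name, and state directly from standard markup, with no extra work from the developer. A simplified illustration:

```html
<!-- A native checkbox: the browser exposes this markup
     to the accessibility API automatically. -->
<label for="subscribe">Subscribe to the newsletter</label>
<input type="checkbox" id="subscribe" checked>

<!-- What the accessibility API can report:
     Role:  checkbox (from the input type)
     Name:  "Subscribe to the newsletter" (from the associated label)
     State: checked (from the checked attribute) -->
```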
But as the web grew more dynamic, with scripts updating page content without a full-page reload, that virtual buffer model began to break down. While the personal computer was getting better and better at supporting assistive technologies, support for the web was falling further and further behind. An accessibility API for the web was needed.
WAI-ARIA – An Accessibility API For The Web
To help resolve the issues created by the dynamic web, in March 2014 the World Wide Web Consortium (W3C) published version 1.0 of the Web Accessibility Initiative – Accessible Rich Internet Applications specification (WAI-ARIA) as a formal Recommendation.
WAI-ARIA defines a set of attributes (roles, states, and properties) that developers can use to describe web elements beyond their native semantic definitions. With WAI-ARIA, developers can define an anchor as being part of a menu, take a clickable AngularJS element and define it as a button, or drive screen reader focus directly to a pop-up modal.
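Those three scenarios can be sketched in markup as follows. This is a simplified illustration, not production code, and the `ng-click` handler and element names are hypothetical:

```html
<!-- An anchor defined as part of a menu -->
<ul role="menu">
  <li role="none"><a role="menuitem" href="/settings">Settings</a></li>
</ul>

<!-- A clickable AngularJS element defined as a button;
     tabindex makes it reachable from the keyboard -->
<div role="button" tabindex="0" ng-click="save()">Save</div>

<!-- A pop-up modal that script can move screen reader focus to -->
<div role="dialog" aria-labelledby="dlg-title" tabindex="-1" id="dlg">
  <h2 id="dlg-title">Confirm your changes</h2>
</div>
<!-- In script: document.getElementById('dlg').focus()
     moves focus into the dialog when it opens. -->
```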
WAI-ARIA provides developers a great deal of flexibility in creating dynamic web applications, but it is not a perfect solution, and it comes with challenges, including:
- It is an emerging technology: not all browser and assistive technology combinations handle WAI-ARIA the same way (see browser support for WAI-ARIA).
- Developers must learn this new system of tagging their content.
- Users must have assistive technology to take advantage of the benefits.
- Developers can still script sites in ways that prevent WAI-ARIA from being successful.
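That last point is worth illustrating: a role attribute changes only what assistive technology announces, not how an element behaves, so developers must still script the interactions the role implies. A hedged sketch, with a hypothetical `submitForm()` handler:

```html
<!-- Announced as a button, but the click handler only fires on mouse
     click, so keyboard users cannot activate it. The role alone
     cannot fix this. -->
<div role="button" tabindex="0" onclick="submitForm()">Submit</div>

<!-- Corrected: also handle keyboard activation (Enter and Space),
     matching the behavior a native <button> provides for free. -->
<div role="button" tabindex="0"
     onclick="submitForm()"
     onkeydown="if (event.key === 'Enter' || event.key === ' ') submitForm()">
  Submit
</div>
```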