Introduction to Creating Accessible Web Content
Creating assistive technologies that work seamlessly with all computing systems has always been a daunting task, as these technologies struggle to keep pace with an ever-evolving set of systems. The task becomes even more complex when you factor in the dynamic nature of the internet and its hundreds of millions of content creators. To fully appreciate the complexity of creating accessible systems, it helps to see how we got to this point.
From The Terminal to a Window
Think back to the days of text-based operating systems such as DOS. With those systems, characters were written to a screen and a cursor maintained a pointer reference to a position in the screen’s output buffer. Assistive technologies had only to look at this buffer to understand and manipulate the outputs. Words could be read, fonts could be magnified, and user-driven inputs were minimal.
This all changed with the advent of graphical user interfaces like Windows. Information was no longer delivered to a text-based screen output buffer; it was now represented as pictures drawn on the screen. Buttons, dropdowns, and toolbars dominated the computer's visual output. These new interfaces were often built from complex object libraries, and interpreting them from any system other than the primary operating system was extremely difficult. Combine this with the prominence of a new interactive input device, the mouse, and it is easy to see the challenges that existed in creating effective assistive technologies.
In the 1990s, assistive technologies had a difficult time keeping up with these new dynamic interfaces. As more and more visual computing systems were introduced, assistive technologies had the difficult task of interpreting the meaning behind visual object calls, a process fraught with error. Inevitably, assistive technologies during this period were always one step behind emerging technology: only after new technology was released could any attempt to interpret its information begin.
Enter The Accessibility API
By the turn of the century, operating systems began to introduce accessibility APIs to assist accessibility engineers. These APIs allowed content creators and operating systems to explicitly pass the information required for interpretation directly to assistive technology. While different operating systems implement different API structures, generally any accessibility API provides the following information about an object (a simple sketch follows the list):
- Name
- Value
- State (Is the object active, disabled, hidden, focused?)
- Role (What is the purpose of this object? Is it a selector, a button, a header?)
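As a rough sketch of what such an API conveys (the exact structure varies between platforms such as MSAA, UI Automation, and AT-SPI; the object below is purely illustrative), assistive technology might receive something like this for a checkbox:

```js
// Purely illustrative: real platform APIs (MSAA, UI Automation, AT-SPI, etc.)
// each define their own structures, but all expose roughly these four properties.
const accessibleObject = {
  name:  "Remember my settings",    // the human-readable label
  value: "",                        // the current value, if the control has one
  state: ["checked", "focusable"],  // active, disabled, hidden, focused, ...
  role:  "checkbox"                 // the purpose of the object
};
```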
Even better, assistive technology engineers no longer had to wait for technology to be released before attempting to interpret the meaning of objects. Now they could query the accessibility API and receive standards-based responses, and all new content could be interpreted against that standard.
While the accessibility API offered enormous improvement and advancement in the development of assistive technologies for visual computing systems, the popularity of the Internet and the World Wide Web presented new challenges for this technology.
Accessibility API and the Web
In the early days of the Internet, websites were static, built from HTML markup that delivered content to a web browser. The browser would interpret the markup and draw the information in the browser window. At the time, markup was fairly limited: developers could add text, images, tables, and simple forms. Once these simple webpages were drawn, a full-page POST (or GET) request was required to obtain any new content. Much like in the early days of text-based computing, assistive technologies placed web content into a virtual buffer on page load, allowing the assistive technology to engage with the content.
Like operating systems, programs, and their objects, web browsers support the accessibility API, and that support goes a long way toward making static web content accessible. After all, when developers use standard markup for buttons, tables, images, and inputs, assistive technologies can understand the role of each element, and the attributes of these elements can define other characteristics like name, value, and state.
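For example, in a hypothetical snippet like the one below, standard markup already carries most of what an accessibility API needs:

```html
<!-- Native HTML maps directly onto the accessibility API. -->
<img src="logo.png" alt="Company logo">       <!-- role: image; name comes from the alt attribute -->
<button disabled>Submit</button>              <!-- role: button; state: disabled -->
<label for="city">City</label>
<input id="city" type="text" value="Tucson">  <!-- role: textbox; name from the label; value from its contents -->
```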
Yet, just as traditional personal computers have evolved over the last several decades, so too has the complexity of the Internet. With the introduction of JavaScript and Cascading Style Sheets (CSS), the web has become far more advanced in its capabilities. Gone are the days of the static web; in its place exists a highly complex, dynamic, user-driven system where information is often tailored specifically for the individual requesting it. Small portions of page content can change simply because a user interacted with other content on the page. Virtual buffers were no longer the answer in this constantly evolving space.
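Consider a contrived example (not tied to any specific framework): a single click handler rewrites part of the page without any navigation, so a buffer captured at page load never reflects the new content.

```html
<button id="load-more">Load more results</button>
<div id="results"></div>

<script>
  // Replaces part of the page in response to user input; no page load occurs,
  // so a virtual buffer built when the page first rendered never sees this content.
  document.getElementById("load-more").addEventListener("click", () => {
    document.getElementById("results").innerHTML = "<p>15 new results loaded.</p>";
  });
</script>
```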
While the personal computer was getting better and better at supporting assistive technologies, support for the web was falling further and further behind. An accessibility API for the web was needed.
WAI-ARIA – An Accessibility API For The Web
To help resolve issues created by the dynamic web, in the spring of 2014 the World Wide Web Consortium (W3C) introduced the Web Accessibility Initiative – Accessible Rich Internet Applications (WAI-ARIA).
WAI-ARIA is a series of attributes that can be added to existing HTML elements to enhance accessibility.
Developers can use these attributes to define web elements beyond their native semantic definitions. With WAI-ARIA, developers can define an anchor as being part of a menu, take an AngularJS clickable element and define it as a button, or drive screen reader focus directly to a pop-up modal.
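A few hypothetical snippets illustrate the idea (element names and text are invented for the example):

```html
<!-- An anchor identified as part of a menu -->
<ul role="menu">
  <li role="none"><a role="menuitem" href="/reports">Reports</a></li>
</ul>

<!-- A scripted, clickable element defined as a button -->
<div role="button" tabindex="0">Mute notifications</div>

<!-- A pop-up modal announced as a dialog; when it opens, a script would move
     focus into it (for example, with element.focus()) -->
<div role="dialog" aria-modal="true" aria-labelledby="dialog-title" tabindex="-1">
  <h2 id="dialog-title">Your session is about to expire</h2>
</div>
```

In each case the underlying element is unchanged; the attributes simply tell assistive technology how to present it.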
WAI-ARIA provides developers a great deal of flexibility in creating dynamic web applications, but it is not a perfect solution, and it comes with challenges, including:
- It is an emerging technology: not all browser and assistive technology combinations handle WAI-ARIA the same way (see browser support for WAI-ARIA).
- Developers must learn this new system of tagging their content.
- Users must have assistive technology to take advantage of the benefits.
- Developers can still script sites in ways that prevent WAI-ARIA from being successful.
In this series, we will endeavor to introduce web developers to WAI-ARIA, simplify its complex documentation, explain how, when, and when not to leverage these techniques, and illustrate how plug-in technologies like AudioEye can bring these accessibility benefits to all users of the web, regardless of whether they have access to, and know how to use, assistive technologies.