I want to thank you all for coming. It is a pleasure to be back in the U.K., and I would like to thank the RNIB for inviting me to speak to you today. I come to you from the great state of Texas, where I have lived for roughly 13 years. I have been told that the United States is divided into two main areas: Texas and, well, everybody else. We also have great spicy food, which, a colleague from Sheffield tells me, England does not. He mentioned something about the national food being “bubble and squeak.”
He did say, however, that what I really needed to try was "mushy peas." That was not the type of food I would think a country should be bragging about. Well, I am happy to say I did try them and they were quite good. Although you might consider serving hot sauce with them.
Seriously though, all of us do have a lot in common. We work in one of the most challenging fields in the computer industry – accessibility.
Those of us working in the accessibility field sometimes compare it to the Greek myth of Sisyphus, a king cursed by the gods to push a huge boulder up a hill for all eternity, only to watch it roll down again. As you have no doubt experienced, though, persistence, passion, and determination can lead to tremendous rewards along the way and make pushing that rock just a bit easier. That is no different from how we are approaching the rapidly changing Web environment.
My career in accessibility began in 1990 in the halls of the IBM Watson Research Center. At that time industry was switching from text-based interfaces to graphical ones. Screen-reading technology could easily gain access to the text on the screen, but the concern was that once everything was converted to graphics it would all come to a halt. Just imagine the fear blind individuals faced at the prospect of losing their jobs, or one of their basic vehicles for communicating with friends and colleagues. In 1991, Joseph Lazzaro published an article called “Windows of Vulnerability” in Byte magazine that described this fear as the industry moved from DOS to graphical interfaces such as Windows and IBM’s OS/2. Quietly, I had been working with a team of engineers at IBM to create the first Screen Reader for graphical user interfaces (known in the industry as GUIs), which provided blind users with access to OS/2. In December 1991, I published an article called “Making the GUI Talk” in Byte magazine that showed the community how we made it work. That day the earth shifted to meet the needs of users with disabilities. I recall my boss, Jim Thatcher, walking into a government building in Washington to discuss the loss of computer access by blind users. When Jim approached the meeting room, stacks of the magazine containing the article greeted him at the door.
This early innovation was based on capturing what was drawn on the screen, using reverse engineering to guess what it represented, such as a menu item or a button, and producing a textual representation that could be read to the user. Since then, we have defined new standards that allow desktop application developers to supply that textual information, along with the semantics and structural information representing the rich desktop interface, directly to assistive technologies.
This resulted in a much more reliable and usable experience. The first of these standards were Microsoft Active Accessibility in 1995 and the Java Accessibility Application Programming Interface, a collaboration between IBM and Sun, in 1998. Looking back, both pieces of work were seminal: they formed the foundation on which applications have provided the information required by assistive technologies for over a decade. If a standard is implemented properly, an assistive technology can ask for the information it needs rather than guessing.
In 1999, the World Wide Web Consortium published the first Web Content Accessibility Guidelines, which became known as WCAG 1.0. At the time WCAG 1.0 was published, the Web consisted of static documents, and the idea of a dynamic page meant reloading a new one. Navigation of a web page by a person with a disability was archaic, slow, and tedious, requiring constant tabbing to get to anything on the page. Even worse, the markup language used to code Web pages, HTML, was not entirely keyboard accessible, largely because the user was expected to spend most of the time navigating with a mouse. It was a time when the concept of making something accessible was limited mostly to adding alternative text and the rudimentary semantics found in HTML; when usability for people with disabilities was an afterthought; and when authors restricted the technologies they used in order to meet compliance. The Web experience, for many, was a step back from what users experienced on the desktop. Yet the ability to access such a tremendous amount of information was so compelling and necessary that these drawbacks had to be accepted. That is changing. As in 1990, another paradigm shift is now under way. It is called Web 2.0.
Our answer to this shift rests on three things:
- new accessibility information designed to work with existing HTML markup and all browsers,
- open source software, and
- community collaboration among industry leaders, assistive technology vendors, and browser vendors.
Today, this new accessibility information is provided in a specification from the World Wide Web Consortium called the Web Accessibility Initiative's Accessible Rich Internet Applications, or WAI-ARIA. It is changing the way industry thinks about accessibility. WAI-ARIA is a way for authors to add semantic “sugar” to a web page, allowing a browser to support the accessibility services provided by an operating system, which in turn are used by assistive technologies. We call this “interoperability.” Prior to WAI-ARIA, industry’s notion of accessibility was in line with what you needed in static HTML documents: headers, alternative text, and structural elements. In Web 2.0 applications, authors create new user interface elements, such as folder trees and notebook panels, where information is hidden until you are able to use it and the accessibility of the page changes as you operate it. This semantic “sugar,” or metadata, allows the author to tell an assistive technology, such as a screen reader, that an object is a folder tree and whether or not it is expanded. It also uses the document structure and semantics of HTML to tell you where you are in the folder tree hierarchy. We call this the “user context.” With WAI-ARIA we can also tell an assistive technology and the web browser about the common landmarks on the page, such as the main content, the banner, a search region, or the navigation section.
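As a small sketch of what that “sugar” looks like in practice, the markup below uses real WAI-ARIA roles and states on plain HTML; the labels and structure are illustrative, not taken from any particular application.

```html
<!-- Landmark roles tell assistive technology where the major regions are -->
<div role="banner">Site header</div>
<div role="navigation">Site links</div>
<div role="search">Search form</div>
<div role="main">
  <!-- A folder tree built from ordinary lists, made self-describing:
       role="tree"/"treeitem" identify the widget, and aria-expanded
       tells a screen reader whether a branch is open -->
  <ul role="tree">
    <li role="treeitem" aria-expanded="true" tabindex="0">Projects
      <ul role="group">
        <li role="treeitem" tabindex="-1">Screen Reader</li>
        <li role="treeitem" tabindex="-1">Test Tools</li>
      </ul>
    </li>
  </ul>
</div>
```

A screen reader encountering this can announce “tree,” report the expanded state, and describe the user's position in the hierarchy, rather than guessing from pixels.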
Prior to WAI-ARIA, keyboard navigation was limited to form elements and anchors, which were all in the tab order. Everything else in HTML was not keyboard accessible. Imagine a mobility-impaired user having to tab through 1,000 links on a page to get to the last one. With WAI-ARIA, keyboard accessibility is similar to that of the desktop environment, where the Tab and arrow keys are used together to create a more efficient experience. Any HTML element can be keyboard accessible without being included in the tab order. This constitutes a paradigm shift: you tab to the significant elements on the page, such as a menu, which manage the information you see, and use the arrow keys to navigate within them. This is a huge step forward, as a Web page can now have the accessibility and usability normally found only on the desktop, benefiting all users.
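The Tab-plus-Arrow pattern is commonly implemented with a technique known as the “roving tabindex”: only one item in a widget sits in the tab order at a time, and the arrow keys move which item that is. The helper below is a minimal sketch of the index arithmetic; the function name is mine, and the DOM wiring is shown only as comments.

```javascript
// Roving tabindex sketch: given the currently focused item, a key press,
// and the number of items in the widget, compute which item should hold
// focus next. Arrow keys wrap around; other keys leave focus unchanged.
function nextFocusIndex(current, key, itemCount) {
  if (key === "ArrowDown") return (current + 1) % itemCount;
  if (key === "ArrowUp") return (current - 1 + itemCount) % itemCount;
  return current;
}

// In a browser you would apply the result roughly like this:
//   items[current].tabIndex = -1;   // old item leaves the tab order
//   items[next].tabIndex = 0;       // new item becomes the tab stop
//   items[next].focus();
```

The user tabs once to reach the widget, then arrows within it, instead of tabbing through every item on the page.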
A second challenge is better tooling, designed to test the accessibility of Web 2.0 applications. Today, our accessibility test tools are largely geared to:
- WCAG 1.0 restrictions
- Static web content, resulting in “snapshot” testing
- Basic HTML, with no semantic “sugar”
This, as we can all see, is quite a limitation. Instead, tooling must now consider:
- Taking a Functional Verification Approach where the tool repeatedly tests the page as the tester exercises the application
- Testing for proper implementation of WAI-ARIA
IBM has taken an open community approach to tooling. Through the Eclipse Foundation, we first contributed an Accessibility Tools Framework to that open source tool platform, providing accessibility tool plug-ins for Web and Java applications. Most recently, we have started a new tools effort in the OpenAjax Alliance called the Accessibility Tools Task Force. We have pulled together, in an open setting, many of the major tool providers, including Microsoft, Deque, Parasoft, Mozilla, and Adobe, along with educational institutions. Our intent is to deliver a collection of rules and reporting best practices to help Web application developers produce accessible solutions. In fact, we expect this work to be integrated not only into test tools but also into the development tools used to create Web applications.
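To make the idea of “testing for proper implementation of WAI-ARIA” concrete, here is a sketch of the kind of rule such a tool might run: a tree item with children should declare its expanded state so assistive technology can report it. The nodes here are plain objects rather than a real DOM, and the rule name and shape are invented for illustration, not taken from any shipping tool.

```javascript
// Hypothetical accessibility rule: walk a widget tree and flag any
// "treeitem" node that has children but no aria-expanded attribute.
// Returns a list of human-readable problem descriptions.
function checkTreeItems(node, problems = []) {
  const kids = node.children || [];
  if (node.role === "treeitem" && kids.length > 0 &&
      !("aria-expanded" in (node.attributes || {}))) {
    problems.push(`treeitem "${node.id}" has children but no aria-expanded`);
  }
  for (const child of kids) checkTreeItems(child, problems);
  return problems;
}
```

A real tool would run such rules repeatedly as the tester exercises the application, rather than once against a static snapshot.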
The third challenge comes with what is called “the programmable web” and the extensive use of rich graphical content; I believe the two go hand in hand. The notion of how a Web application is built is changing rapidly. Companies, and in fact individuals, are creating reusable web application fragments exposed through web services, the new Application Programming Interfaces, or APIs. These APIs, listed at programmableweb.com, totaled 1,145 at the time I wrote this presentation; a year ago there were only 600. Companies and individuals mash them up, hence the name “mashup,” into their own applications, leading to rapid application development. These services are then wired together in your web page. For example, you might choose the address of a customer contact on one side of your page and an airport location on another. The address information from both supplies the source and destination of a travel route, which is then used to configure a graphical map in a third region. In this case, the source and destination may be provided by web services that access a contact data store and a transportation data store respectively, and the combination may be wired up to configure a Google map using its web service. This highlights a number of problems for users:
- The services are created autonomously, and the combination may be unusable
- The services may not be accessible for a given set of users
- We don’t know the author and may be unable to get these problems fixed
- The industry needs to use complex visualizations to represent complex data
- For some users it may not be possible to make these services accessible in their current form
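The wiring step in the travel example above can be sketched in a few lines: two autonomously built services each supply an address, and a third consumes the combination. Every service shape and field name here is hypothetical; none of this comes from a real API.

```javascript
// Hypothetical mashup wiring: a contact service and a transportation
// service each return an address object, and the pair is composed into
// the request a mapping service would consume.
function buildRouteRequest(contact, airport) {
  return {
    source: `${contact.street}, ${contact.city}`,
    destination: `${airport.name} (${airport.code})`,
  };
}
```

Note that nothing in this composition step knows, or can ask, whether either service is accessible; that is precisely the gap described above.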
These issues require us to look at accessibility at a much higher level, at the basic fabric of how the Web is constructed. To solve this problem we need to step away from the one-size-fits-all approach, in which a single web page must be readily accessible to all users as-is. We must step outside our usual constraints and think of a Web that is flexible and personalized. Here we can look to the learning and educational space, which is already delivering solutions based on inclusive design.
A flexible and personalized web is one where users specify how they would like the Web delivered to them. Imagine a Web where accessibility becomes a preference rather than a fixed solution, and what is accessible depends on what the user deems accessible or usable. To do that, you would also need to know something about the capabilities of the resources forming your page.
For example, does the video on a page support closed captioning? Does the video support closed captioning in the language I speak? If not, do you have one that does? Great, give me that one!
Is your resource a complex visualization, such as a map? I prefer a text equivalent, such as the textual driving directions. Do you have one? Great! I will take that one and I can give it to my driver too.
Do you support large fonts? Great, please deliver the page in large fonts and in high contrast if you have it.
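The matching behind those three examples might be sketched as follows. The preference keys and variant shapes are invented for illustration; they are not the actual IMS Access For All vocabulary.

```javascript
// Hypothetical preference matching: given several variants of a resource
// and the user's declared needs, return the first variant that satisfies
// them, or null if none does.
function pickVariant(variants, prefs) {
  const satisfies = (v) =>
    (!prefs.captions ||
      (v.captions && v.captionLanguage === prefs.language)) &&
    (!prefs.textEquivalent || v.type === "text");
  return variants.find(satisfies) || null;
}
```

A user who needs captions in French gets the French-captioned video; a user who prefers text equivalents gets the driving directions instead of the map.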
These concepts are not fantasy. There is a specification, Access For All, from the IMS Global Learning Consortium, and it is used in new learning systems being developed by Angel Learning and Teachers’ Domain, and in a project called Fluid led by the University of Toronto. What is missing is a standard way to use these specifications to arbitrate personalized content on the Web. That work is now in its early stages: in the World Wide Web Consortium's Ubiquitous Web Applications Working Group we are harmonizing a core set of Access For All preferences into standards that define how content is delivered to all devices.
This is a first step toward making your mobile device a truly personalized digital assistant. We must then define standards for how we access that personalized information and how we tag the accessibility capabilities of resources, such as a map, so they can be delivered in a form consumable by the user. When that fabric is in place and widely adopted, companies like IBM will be able to deliver a personalized experience to all users.
The Web is, and continues to be, the most important vehicle for delivering information to all users. Its power is rooted in its open pipeline to the masses. Looking forward, I encourage all of you to become part of that open community. The ability to contribute to open standards and open source has allowed innovations like WAI-ARIA to move the Web ahead at tremendous speed and advance accessibility for everyone to new heights. I encourage you all to participate and set the course for a Web without barriers.
Working together, pushing that rock up the hill gets a lot easier!