OK so, during SXSW, I got to know one of the presenters at a panel discussion: Susann Keohane. Now, you have to understand, I'd heard of her before; she's a Master Inventor at IBM. What that means is she has at least 17 patents to her name at IBM. Not too shabby! And just from talking with her, I could tell she's highly intelligent.
She and Brian Cragun (also an IBMer) talked about the importance of making large quantities of data accessible to all. Like in previous panels on accessibility, they both stressed that disabilities come in various forms, including situational disabilities. I've covered this before in a previous blog post. To recap, a situational disability is when you want to do something, but your environment (or situation) prevents you from doing it. Things like texting while driving. A big no-no. But what if you need to send that text? Then you need assistive software to help you, like speech-to-text software.
What really amazed me is the notion that complex images had, in the past, been almost impossible to make accessible. Some images pack so much information into their visual presentation that it would be nearly impossible to convey in words everything being presented. However, products such as IBM Many Eyes allow users to attach the thoughts and impressions of multiple viewers to a complex image. Users can add their own interpretations to images that could never be fully understood through a basic alt text statement alone. Incredible!
Another interesting point both Susann and Brian made is how to navigate through large amounts of data. They pointed out that aggregating the data into zones, so that users can jump from zone to zone, is a powerful way for those who are visually challenged or blind to pinpoint the data they need. This brings to mind WAI-ARIA landmark roles and labels in web page coding, which work in essentially the same manner: they let screen reader users jump directly between labeled regions of a page instead of wading through everything in order.
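To give you an idea of what that zone-to-zone idea looks like in markup, here's a minimal sketch of a page using ARIA landmarks. The region names and file names are just made up for illustration; the roles and attributes themselves are standard WAI-ARIA.

```html
<!-- Each labeled landmark becomes a "zone" a screen reader
     user can jump to directly, skipping everything else. -->
<nav aria-label="Site navigation">
  <!-- ... navigation links ... -->
</nav>
<main>
  <section role="region" aria-label="Quarterly sales chart">
    <!-- alt text gives a basic summary; richer annotation
         tools can layer more interpretation on top -->
    <img src="sales-q1.png" alt="Bar chart of Q1 sales by region">
  </section>
  <section role="region" aria-label="Sales data table">
    <!-- ... the underlying data table ... -->
  </section>
</main>
```

With markup like this, a screen reader's landmark navigation works much like the zone-jumping Susann and Brian described: the user queries the list of labeled regions and lands on the one they care about.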