In reading Joe Dolson's recent piece on the intersection of AI and accessibility, I absolutely appreciated the skepticism that he has about AI in general, as well as for the ways that many have been using it. In fact, I'm very skeptical of AI myself, despite my role at Microsoft as an accessibility innovation strategist who helps run the AI for Accessibility grant program. As with any tool, AI can be used in very constructive, inclusive, and accessible ways; and it can also be used in destructive, exclusive, and harmful ones. And there are a ton of uses somewhere in the mediocre middle as well.
I'd like you to consider this a "yes… and" piece to complement Joe's post. I'm not trying to refute any of what he's saying but rather to provide some visibility to projects and opportunities where AI can make meaningful differences for people with disabilities. To be clear, I'm not saying that there aren't real risks or pressing issues with AI that need to be addressed (there are); this piece is more about what's possible in hopes that we'll get there one day.
Joe's piece spends a lot of time talking about computer-vision models generating alternative text. He highlights a ton of valid issues with the current state of things. And while computer-vision models continue to improve in the quality and richness of detail in their descriptions, their results aren't great. As he rightly points out, the current state of image analysis is pretty poor, especially for certain image types, in large part because current AI systems examine images in isolation rather than within the contexts that they're in (which is a consequence of having separate "foundation" models for text analysis and image analysis). Today's models aren't trained to distinguish between images that are contextually relevant (and that should probably have descriptions) and those that are purely decorative (and that might not need a description) either. Still, I think there's potential in this space.
As Joe mentions, human-in-the-loop authoring of alt text should absolutely be a thing. And if AI can pop in to offer a starting point for alt text (even if that starting point might be a prompt saying What is this? That's not right at all… Let me try to offer a starting point), I think that's a win.
Taking things a step further, if we can specifically train a model to analyze images in context, it could help us more quickly identify which images are likely to require a description. That will help reinforce which contexts call for image descriptions, and it'll improve authors' efficiency toward making their pages more accessible.
While complex images, like graphs and charts, are challenging to describe in any sort of succinct way (even for humans), the image example shared in the GPT-4 announcement points to an interesting opportunity as well. Let's suppose that you came across a chart whose description was simply the title of the chart and the kind of visualization it was, such as: Pie chart comparing smartphone usage to feature phone usage among US households making under $30,000 a year. (That would be a pretty awful alt text since it would tend to leave many questions about the data unanswered, but then again, let's suppose that that was the description available to you.) If your browser knew that that image was a pie chart (because an onboard model concluded this), imagine a world where users could ask questions like these about the graphic:
- Do more people use smartphones or feature phones?
- How many more?
- Is there a group of people that don’t fall into either of these buckets?
- How many is that?
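To make the idea concrete: once a model has extracted a chart's underlying data into a structured form, answering questions like the ones above becomes simple lookups and arithmetic. This is a hypothetical sketch; the data structure, function names, and all of the numbers are invented for illustration.

```python
# Invented structured data that a chart-analysis model might have extracted.
chart = {
    "title": "Phone usage among US households making under $30,000 a year",
    "segments": {"smartphone": 62, "feature phone": 29, "neither": 9},  # percentages
}

def larger_segment(data, a, b):
    """Answer 'Do more people use A or B?'"""
    return a if data["segments"][a] > data["segments"][b] else b

def difference(data, a, b):
    """Answer 'How many more?' (in percentage points)."""
    return abs(data["segments"][a] - data["segments"][b])

def other_segments(data, *named):
    """Answer 'Is there a group that doesn't fall into these buckets?'"""
    return {k: v for k, v in data["segments"].items() if k not in named}

print(larger_segment(chart, "smartphone", "feature phone"))   # smartphone
print(difference(chart, "smartphone", "feature phone"))       # 33
print(other_segments(chart, "smartphone", "feature phone"))   # {'neither': 9}
```

The hard part, of course, is the extraction itself; but it shows why structured data, rather than a prose description alone, is the more powerful target for these models.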
Setting aside the realities of large language model (LLM) hallucinations (where a model just makes up plausible-sounding "facts") for a moment, the opportunity to learn more about images and data in this way could be revolutionary for blind and low-vision folks as well as for people with various forms of color blindness, cognitive disabilities, and so on. It could also be useful in educational contexts to help people who can see these charts, as is, to understand the data in the charts.
Taking things a step further: What if you could ask your browser to simplify a complex chart? What if you could ask it to isolate a single line on a line graph? What if you could ask your browser to transform the colors of the different lines to work better for the form of color blindness you have? What if you could ask it to swap colors for patterns? Given these tools' chat-based interfaces and our existing ability to manipulate images in today's AI tools, that seems like a possibility.
Now imagine a purpose-built model that could extract the information from that chart and convert it to another format. For example, perhaps it could turn that pie chart (or better yet, a series of pie charts) into more accessible (and useful) formats, like spreadsheets. That would be amazing!
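The conversion step itself is the easy half of that equation. As a minimal sketch, assuming a model had already extracted the pie chart's segments (the data below is made up), serializing them into a spreadsheet-friendly format could look like this:

```python
import csv
import io

# Invented pie-chart data, as a model might have extracted it.
segments = {"smartphone": 62, "feature phone": 29, "neither": 9}

def pie_chart_to_csv(segments):
    """Serialize extracted pie-chart segments as CSV text that any
    spreadsheet application can open."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["category", "percentage"])
    for name, value in segments.items():
        writer.writerow([name, value])
    return buf.getvalue()

print(pie_chart_to_csv(segments))
```

A series of pie charts would simply add a column per chart (say, one per year), turning a visualization into a table that screen readers already handle well.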
Matching algorithms
Safiya Umoja Noble absolutely hit the nail on the head when she titled her book Algorithms of Oppression. While her book was focused on the ways that search engines reinforce racism, I think that it's equally true that all computer models have the potential to amplify conflict, bias, and intolerance. Whether it's Twitter always showing you the latest tweet from a bored billionaire, YouTube sending us into a Q-hole, or Instagram warping our ideas of what natural bodies look like, we know that poorly authored and maintained algorithms are incredibly harmful. A lot of this stems from a lack of diversity among the people who shape and build them. When these platforms are built with inclusion in mind, however, there's real potential for algorithm development to help people with disabilities.
Take Mentra, for example. They are an employment network for neurodivergent people. They use an algorithm to match job seekers with potential employers based on over 75 data points. On the job-seeker side of things, it considers each candidate's strengths, their necessary and preferred workplace accommodations, environmental sensitivities, and so on. On the employer side, it considers each work environment, communication factors related to each job, and the like. As a company run by neurodivergent folks, Mentra made the decision to flip the script when it came to typical employment sites. They use their algorithm to propose available candidates to companies, who can then connect with job seekers that they are interested in; this reduces the emotional and physical labor on the job-seeker side of things.
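To illustrate the general shape of such multi-factor matching (Mentra's actual algorithm and data points are not public, so everything here, including the function and field names, is an invented simplification), a sketch might treat required accommodations as hard constraints and shared strengths as the score:

```python
def match_score(candidate, job):
    """Score a candidate/job pairing: any unmet required accommodation
    disqualifies the match; otherwise, score by the fraction of the
    job's needed strengths that the candidate has."""
    if not candidate["required_accommodations"] <= job["accommodations_offered"]:
        return 0.0  # a hard constraint was not met
    shared = candidate["strengths"] & job["strengths_needed"]
    return len(shared) / len(job["strengths_needed"])

# Invented example data.
candidate = {
    "strengths": {"pattern recognition", "deep focus", "detail orientation"},
    "required_accommodations": {"written instructions", "quiet workspace"},
}
job = {
    "strengths_needed": {"pattern recognition", "detail orientation"},
    "accommodations_offered": {"written instructions", "quiet workspace",
                               "flexible hours"},
}

print(match_score(candidate, job))  # 1.0
```

The design choice worth noting is the asymmetry: accommodations act as a filter, not a weight, which mirrors the idea that access needs are non-negotiable rather than one factor to trade off among many.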
When more people with disabilities are involved in the creation of algorithms, that can reduce the chances that these algorithms will inflict harm on their communities. That's why diverse teams are so important.
Imagine that a social media company's recommendation engine was tuned to analyze who you're following and to prioritize follow recommendations for people who talked about similar things but who were different in some key ways from your existing sphere of influence. For example, if you were to follow a bunch of nondisabled white male academics who talk about AI, it could suggest that you follow disabled experts who also talk about AI. If you took its recommendations, perhaps you'd get a more holistic and nuanced understanding of the field. These same systems should also use their understanding of biases about particular communities, including, for instance, the disability community, to make sure that they aren't recommending accounts that perpetuate biases against (or, worse, spew hate toward) those groups.
Other ways that AI can help people with disabilities
If I weren't trying to put this together between other tasks, I'm sure that I could go on and on, providing all kinds of examples of how AI could be used to help people with disabilities, but I'm going to make this last section into a bit of a lightning round. In no particular order:
- Voice preservation. You may have seen the VALL-E paper or Apple's Global Accessibility Awareness Day announcement, or you may be familiar with the voice-preservation offerings from Microsoft, Acapela, or others. It's possible to train an AI model to replicate your voice, which can be a tremendous boon for people who have ALS (Lou Gehrig's disease) or other conditions that can lead to an inability to talk. This is, of course, the same tech that can also be used to create audio deepfakes, so it's something that we need to approach responsibly, but the tech has truly transformative potential.
- Voice recognition. Researchers like those in the Speech Accessibility Project are paying people with disabilities for their help in collecting recordings of people with atypical speech. As I type, they are actively recruiting people with Parkinson's and related conditions, and they have plans to expand this to other conditions as the project progresses. This research will result in more inclusive data sets that will let more people with disabilities use voice assistants, dictation software, and their computers and other devices more easily, using only their voice.
- Text transformation. The current generation of LLMs is quite capable of adjusting existing text content without injecting hallucinations. This is hugely empowering for people with cognitive disabilities, who may benefit from text summaries, simplified versions of text, or reading aids like Bionic Reading.
The importance of diverse teams and data
We need to recognize that our differences matter. Our lived experiences are influenced by the intersections of the identities that we exist in. These lived experiences, with all their complexities (and joys and pain), are valuable inputs to the software, services, and societies that we shape. Our differences need to be represented in the data that we use to train new models, and the folks who contribute that valuable information need to be compensated for sharing it with us. Inclusive data sets yield more robust models that foster more equitable outcomes.
Want a model that doesn't demean or patronize or objectify people with disabilities? Make sure that you have content about disabilities that's authored by people with a range of disabilities, and make sure that that's well represented in the training data.
Want a model that doesn't use ableist language? You may be able to use existing data sets to build a filter that can intercept and remediate ableist language before it reaches readers. That being said, when it comes to sensitivity reading, AI models won't be replacing human copy editors anytime soon.
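The simplest version of such an intercept-and-remediate filter is just a phrase list with preferred alternatives. This is a minimal sketch, assuming you've assembled a vetted data set of ableist terms (the two pairs below are placeholders, not such a list):

```python
import re

# Placeholder phrase-to-alternative pairs; a real filter would draw on a
# community-vetted data set, not a hardcoded dictionary.
REPLACEMENTS = {
    "wheelchair-bound": "wheelchair user",
    "suffers from": "has",
}

def remediate(text):
    """Replace known ableist phrases with preferred alternatives
    before text reaches readers."""
    for phrase, preferred in REPLACEMENTS.items():
        text = re.sub(re.escape(phrase), preferred, text, flags=re.IGNORECASE)
    return text

print(remediate("She suffers from migraines."))  # She has migraines.
```

This also illustrates the limits: blunt substitution ignores context and grammar, which is exactly why sensitivity reading still needs human copy editors rather than a regex or a model alone.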
Want a coding copilot that gives you accessible recommendations from the jump? Train it on code that you know to be accessible.
I have no doubt that AI can and will harm people… today, tomorrow, and well into the future. But I also believe that we can acknowledge that and, with an eye toward accessibility (and, more broadly, inclusion), make thoughtful, considerate, and intentional changes in our approaches to AI that reduce harm over time as well. Today, tomorrow, and well into the future.
Many thanks to Kartik Sawhney for helping me with the development of this piece, Ashley Bischoff for her invaluable editorial assistance, and, of course, Joe Dolson for the prompt.