It’s been sparse around here lately because the PC I’m using is finally showing its age – that, and all the writing assignments I’m trying to finish before it finally gives out. So I’m raising funds to get a new computer for work as well as grad school (one machine to do all of it), especially since Adobe Creative Cloud is nigh-unusable on this PC thanks to how much that suite hogs resources.
In the meantime, I’m learning a bit more about voice readers. The one I’m most familiar with, JAWS, is out of reach for many people: an individual non-commercial license starts at $900, and that doesn’t count the PC you’d already need to run it. While I use JAWS in my work, I’m looking into the other voice readers people use. NVDA and Apple VoiceOver seem to be the next most common, and the interesting thing about VoiceOver is that it’s built into the operating system, whereas JAWS and NVDA are third-party programs – NVDA in fact being an open-source project. NVDA only runs on Windows, though, which complicates the intertwined problems of operating systems, interoperability, affordability, and access.
However, voice readers can only parse so much, so when testing for accessibility I usually combine tests: if something isn’t parsed well by a voice reader, the problem may lie with the voice reader software or module rather than with the page itself. This is especially true when dealing with ARIA attributes, as JAWS has known issues parsing that information.
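One way to combine tests is to pair voice-reader sessions with a simple automated pass over the markup, so that an obvious authoring mistake isn’t misattributed to the reader. As a minimal, hypothetical sketch (not any tool I actually use), here’s a Python check built on the standard library’s `html.parser` that flags two common issues: images with no `alt` attribute, and a `role="checkbox"` widget missing its required `aria-checked` state.

```python
from html.parser import HTMLParser


class AriaAuditParser(HTMLParser):
    """Collect simple, automatable accessibility warnings from HTML.

    A hypothetical complement to manual voice-reader testing,
    not a replacement for it.
    """

    def __init__(self):
        super().__init__()
        self.warnings = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Images should carry an alt attribute for voice readers
        # (alt="" is still valid for purely decorative images).
        if tag == "img" and "alt" not in attrs:
            self.warnings.append("img missing alt attribute")
        # ARIA widget roles such as 'checkbox' require a state attribute.
        if attrs.get("role") == "checkbox" and "aria-checked" not in attrs:
            self.warnings.append("role=checkbox missing aria-checked")


def audit(html):
    """Return a list of warnings for the given HTML snippet."""
    parser = AriaAuditParser()
    parser.feed(html)
    return parser.warnings
```

So `audit('<img src="logo.png">')` would report the missing `alt`, while `audit('<img src="logo.png" alt="logo">')` would return nothing. A check like this can’t tell you how a page *sounds*, but when it comes back clean and the voice reader still stumbles, that points toward the reader’s own parsing as the culprit.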
I hope to explore more of Apple’s accessibility features, but that may take a while – plus iOS has slightly different features than macOS does, though interoperability between the two is much better.