A graphical user interface (GUI) is the way many people interact with computing devices, from smartphones and smart TVs to laptops, desktops and websites. Although smart speakers and the likes of Siri on iOS devices have given people an alternative user interface, where they can request information and carry out a limited range of tasks using their voice, computer-generated voice has been around for decades, powering screen reader software for blind and partially sighted people.

Along with his day job as CIO at Fire, a US organisation that defends the right to free speech, Suleyman Gokyigit spends some of his time testing the accessibility of software and websites for crowd-testing firm Applause. Computer Weekly recently spoke to Gokyigit about the areas software developers and website designers need to consider to provide greater accessibility.

One of the services Applause provides enables companies developing new software or websites to check whether they meet the design principles needed for accessibility. “Even if it’s not an accessibility feature, companies want to make sure somebody who is blind can use the software or visit the website,” says Gokyigit.

His work with Applause involves testing websites or software by performing a series of tasks with a screen reader while his actions are recorded. “This could be something like going to a web page, logging on, and then creating a new order. I’m providing feedback the whole time, which allows these companies to understand,” he says.

Accessibility, past and present

Discussing his personal journey with accessibility in software, Gokyigit, who is completely blind, says the technology has changed a lot over the years. He uses screen reading software. When he was in elementary school, Gokyigit used an Apple II, and the accessibility software available was a program that provided compatibility with just a handful of very specific applications. “It did truly basic things like typing. That’s how I learned to type,” he says.

When he started using PCs in 1991, at the age of 12, Gokyigit used a program called JAWS (Job Access With Speech), a screen reader for the MS-DOS operating system. However, since MS-DOS provided only a command-line interface, screen reading was far simpler than it is with the graphical user interfaces of modern operating systems on PCs, Macs and smartphones, and the applications built on top of them.

With Windows 3.0, Microsoft built a GUI on top of MS-DOS, but as Gokyigit recalls, from an accessibility perspective, “it was completely unusable initially”. The shift from a 100% text-based user interface to a graphical one meant accessibility dropped to zero, and it wasn’t a priority at the time, he adds. Accessible support tended to arrive years after a product shipped. “We started being able to use software two to three years after everybody else because it took time to make things accessible,” he says.

Almost three-and-a-half decades on, accessibility in software has vastly improved, but there are still areas where improvements can be made. “The goal in software should always be to enable accessibility immediately on release. There should be nothing special that somebody who is blind or has any kind of disability needs to do to get their software to work,” says Gokyigit.

While the clunky hardware-based voice synthesisers of the past have been replaced by software offering far more natural-sounding speech, and modern operating systems have an incredible amount of accessibility built in, there is still room for improvement.

“You can’t even compare where we were 30 years ago as far as accessibility goes to today,” says Gokyigit. “A lot of changes have been driven by the technology, but software developers and the companies they work for are now more aware of accessibility. There is a very large user base out there making use of things such as screen readers.”

Developers need to consider that a person using a screen reader does not use the mouse, so Gokyigit urges developers to consider how they implement keyboard navigation. “Software and websites should not be designed in a way that means something has to be clicked on with the mouse,” he says. “You have to be able to use a keyboard to move around and access full functionality.”
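In practice, that means custom controls must be reachable and operable from the keyboard. As a rough sketch of the idea in TypeScript (the element id and the action are hypothetical, not taken from any site Gokyigit tested), a div-based control can be wired up like this:

// A minimal sketch of making a custom, div-based control keyboard-operable.
// The element id and the action are hypothetical.
const control = document.getElementById("create-order-button");

if (control) {
  control.setAttribute("role", "button"); // announce it as a button to screen readers
  control.tabIndex = 0;                   // make it reachable with the Tab key

  const activate = () => {
    // Whatever a mouse click would have done, e.g. starting a new order.
    console.log("New order started");
  };

  control.addEventListener("click", activate);
  control.addEventListener("keydown", (event: KeyboardEvent) => {
    // Enter and Space are the keys keyboard users expect to activate a button.
    if (event.key === "Enter" || event.key === " ") {
      event.preventDefault();
      activate();
    }
  });
}

A native button element provides this behaviour automatically, which is why semantic HTML is usually the simplest route to keyboard accessibility.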

Looking at web pages, he adds: “All non-text elements should be described. But to this day you can visit a lot of websites where it says you have an unlabelled button or graphics image. This means there’s no description, yet it is something that will take just seconds to include.”
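Adding such a description is usually a one-line change. As an illustrative sketch, with hypothetical element names, image path and label text, an icon-only button and an image can be labelled like this:

// A minimal sketch of giving non-text elements accessible names.
// Element names, the image path and the label text are illustrative.
const searchButton = document.createElement("button");
searchButton.textContent = "🔍";                   // icon only, no visible text
searchButton.setAttribute("aria-label", "Search"); // what a screen reader announces

const logo = document.createElement("img");
logo.src = "/images/logo.png";
logo.alt = "Acme Corporation logo"; // replaces the unhelpful "graphics image"

document.body.append(searchButton, logo);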

The role of AI in accessibility

Artificial intelligence (AI) has the potential to read the computer screen and understand what the user is trying to achieve.

Looking at the potential for AI to improve accessibility, Gokyigit says: “That would be incredible, but we’re not there yet. Right now, AI is very helpful in doing things such as descriptions. Being able to describe what’s on the screen, or even just being able to take a photograph and ask the AI to describe the picture, was, until just very recently, not practical as it would hallucinate.” In other words, the AI would get confused and present an incorrect description of the image. Descriptions were also very short. “Now you can get paragraphs and paragraphs of descriptions that are very detailed and very accurate, so I can ask the AI today to describe what’s on the screen,” he adds.
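As an illustration of what such a request looks like in code, here is a minimal sketch using OpenAI’s Node SDK; the model name, prompt and file path are assumptions, and other vision-capable APIs work in much the same way:

// A minimal sketch of asking a vision-capable model to describe a screenshot.
// Assumes the "openai" npm package and an OPENAI_API_KEY environment variable;
// the model name, prompt and file path are illustrative.
import OpenAI from "openai";
import { readFileSync } from "node:fs";

const client = new OpenAI();

async function describeScreenshot(path: string): Promise<string | null> {
  const imageBase64 = readFileSync(path).toString("base64");
  const response = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{
      role: "user",
      content: [
        { type: "text", text: "Describe everything visible in this screenshot in detail." },
        { type: "image_url", image_url: { url: `data:image/png;base64,${imageBase64}` } },
      ],
    }],
  });
  return response.choices[0].message.content;
}

describeScreenshot("screenshot.png").then(console.log);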

But there are still gaps in software. Describing a recent situation, Gokyigit says: “I wanted to create a new shared folder on my network-attached storage device to include in our backup, but the admin interface was not accessible.” While the screen reader informed him that he needed to click on a checkbox, Gokyigit could not find it.

“I could tell that the folders were checked, but for the new folder I tried everything from clicking on it, trying random clicks a little bit to the left and a little bit to the right, hoping I’d hit a checkbox that the screen reader had not picked up.” But nothing he tried worked. “Ultimately, I had to bring in somebody who could see there was a checkbox and click on it. What would be amazing is for the AI to go ahead and click the checkbox for me.”
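A failure like this usually means the checkbox was drawn as a styled element without being exposed to the accessibility tree. As a hedged sketch of the ARIA wiring such a control needs (the element id is hypothetical, and a native checkbox input remains the better choice):

// A minimal sketch of exposing a custom-drawn checkbox to screen readers.
// The element id is hypothetical; a native <input type="checkbox"> provides
// all of this automatically.
const box = document.getElementById("include-new-folder");

if (box) {
  box.setAttribute("role", "checkbox");      // tells assistive tech what it is
  box.setAttribute("aria-checked", "false"); // and whether it is ticked
  box.tabIndex = 0;                          // reachable without a mouse

  const toggle = () => {
    const checked = box.getAttribute("aria-checked") === "true";
    box.setAttribute("aria-checked", String(!checked));
  };

  box.addEventListener("click", toggle);
  box.addEventListener("keydown", (event: KeyboardEvent) => {
    if (event.key === " ") { // Space is the expected toggle key for a checkbox
      event.preventDefault();
      toggle();
    }
  });
}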

Accessibility means usability for everyone

Gokyigit believes the challenges of software usability go beyond making products usable for people with disabilities.

“The ability to have an actual conversation, or to control your computer by speaking to it, makes a lot of sense,” he says. “Look at the old science fiction shows like Star Trek. Even in the 1960s, and certainly in the 1980s, people knew that the most natural user interface is simply to have a conversation with the machine and tell it what you want it to do.”
