Screen reader

Short description: Assistive technology that converts text or images to speech or Braille

[Video: An example of someone using a screen reader, showing documents that are inaccessible, readable and accessible (Accessible Books Consortium: "a digital file is not necessarily accessible").]

A screen reader is a form of assistive technology (AT)[1] that renders text and image content as speech or braille output. Screen readers are essential to blind people,[2] and are also useful to people who are visually impaired,[2] illiterate, or learning-disabled.[3] Screen readers are software applications that attempt to convey what people with normal eyesight see on a display to their users via non-visual means, like text-to-speech,[4] earcons,[5] or a braille device.[2] They do this by applying a wide variety of techniques that include, for example, interacting with dedicated accessibility APIs, using various operating system features (like inter-process communication and querying user interface properties), and employing hooking techniques.[6]

Microsoft Windows operating systems have included the Microsoft Narrator screen reader since Windows 2000, though separate products, such as Freedom Scientific's commercially available JAWS screen reader and ZoomText screen magnifier and the free and open-source NVDA screen reader from NV Access, are more popular for that operating system.[7] Apple Inc.'s macOS, iOS, and tvOS include VoiceOver as a built-in screen reader, while Google's Android provides the TalkBack screen reader and its ChromeOS can use ChromeVox.[8] Similarly, Android-based devices from Amazon provide the VoiceView screen reader. There are also free and open-source screen readers for Linux and Unix-like systems, such as Speakup and Orca.

History

Around 1978, Al Overby of IBM Raleigh developed a prototype of a talking terminal, known as SAID (Synthetic Audio Interface Driver), for the IBM 3270 terminal.[9] SAID read the ASCII values of the display in a stream and spoke them through a large vocal-tract synthesizer the size of a suitcase, which cost around $10,000.[10] Dr. Jesse Wright, a blind research mathematician, and Jim Thatcher, formerly his graduate student at the University of Michigan, both working as mathematicians for IBM, adapted this as an internal IBM tool for use by blind people. After the early IBM Personal Computer (PC) was released in 1981, Thatcher and Wright developed a software equivalent to SAID, called PC-SAID (Personal Computer Synthetic Audio Interface Driver). This was renamed and released in 1984 as IBM Screen Reader, which became the proprietary eponym for that general class of assistive technology.[10]

Types

Command-line (text)

In early operating systems, such as MS-DOS, which employed command-line interfaces (CLIs), the screen display consisted of characters mapping directly to a screen buffer in memory and a cursor position, and input came from the keyboard. All of this information could therefore be obtained from the system either by hooking the flow of information around the system and reading the screen buffer, or by using a standard hardware output socket[11] and communicating the results to the user.
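
To make this concrete, here is a minimal sketch in Python of how a text-mode screen reader could recover the line under the cursor from a snapshot of the screen buffer. It assumes the classic 80×25 DOS layout of interleaved character/attribute byte pairs; the buffer snapshot and the speech call are illustrative stand-ins for the real hooking and synthesizer interfaces:

    # Minimal sketch: recover text from a snapshot of a DOS-style text-mode
    # screen buffer (80x25 cells, 2 bytes per cell: character, then attribute).
    # The snapshot and the speech call are illustrative stand-ins.

    COLS, ROWS = 80, 25

    def row_text(buffer: bytes, row: int) -> str:
        """Extract the visible characters of one screen row."""
        start = row * COLS * 2
        cells = buffer[start:start + COLS * 2]
        chars = cells[0::2]                      # every other byte: skip attributes
        return chars.decode("cp437").rstrip("\x00 ")

    def speak_cursor_line(buffer: bytes, cursor_row: int) -> None:
        line = row_text(buffer, cursor_row)
        print("[TTS]", line or "blank line")     # stand-in for a speech synthesizer

    # Fake buffer with "C:\>DIR" on the top row, light grey on black (0x07).
    buf = bytearray(COLS * ROWS * 2)
    for i, ch in enumerate(b"C:\\>DIR"):
        buf[i * 2] = ch
        buf[i * 2 + 1] = 0x07
    speak_cursor_line(bytes(buf), cursor_row=0)  # -> [TTS] C:\>DIR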

In the 1980s, the Research Centre for the Education of the Visually Handicapped (RCEVH) at the University of Birmingham developed a Screen Reader for the BBC Micro and NEC Portable.[12][13]

Graphical

Off-screen models

With the arrival of graphical user interfaces (GUIs), the situation became more complicated. A GUI has characters and graphics drawn on the screen at particular positions, and therefore there is no purely textual representation of the graphical contents of the display. Screen readers were therefore forced to employ new low-level techniques, gathering messages from the operating system and using these to build up an "off-screen model", a representation of the display in which the required text content is stored.[14]

For example, the operating system might send messages to draw a command button and its caption. These messages are intercepted and used to construct the off-screen model. The user can switch between controls (such as buttons) available on the screen and the captions and control contents will be read aloud and/or shown on a refreshable braille display.
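
A toy sketch of the idea follows; the draw notification and its parameters are invented for illustration. Each intercepted "draw text" message is stored by screen position, so the caption under the focus can later be looked up in the model instead of on the screen:

    # Toy off-screen model: intercepted 'draw text' notifications are stored
    # by screen position so text can be looked up later without re-reading
    # the display. The message names and coordinates are illustrative.

    from dataclasses import dataclass, field

    @dataclass
    class OffScreenModel:
        cells: dict = field(default_factory=dict)   # (x, y) -> text drawn there

        def on_draw_text(self, x: int, y: int, text: str) -> None:
            """Called for every intercepted 'draw text' message."""
            self.cells[(x, y)] = text

        def text_at(self, x: int, y: int) -> str:
            return self.cells.get((x, y), "")

    model = OffScreenModel()
    model.on_draw_text(100, 200, "OK")        # the GUI draws a button caption
    # Focus later moves to (100, 200); the reader consults its model:
    print("[TTS] button:", model.text_at(100, 200))   # -> [TTS] button: OK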


Accessibility APIs

Operating system and application designers have attempted to address these problems by providing ways for screen readers to access the display contents without having to maintain an off-screen model. These involve the provision of alternative and accessible representations of what is being displayed on the screen, accessed through an API. Existing APIs include:

  * the Android Accessibility Framework[15]
  * the Apple Accessibility API[16]
  * the Java Access Bridge[17]

Screen readers can query the operating system or application for what is currently being displayed and receive updates when the display changes. For example, a screen reader can be told that the current focus is on a button, along with the button's caption, which it then communicates to the user. This approach is considerably easier for the developers of screen readers, but it fails when applications do not comply with the accessibility API. When the accessibility API is insufficient, one approach is to use available operating system messages and application object models to supplement accessibility APIs.
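
As a rough sketch of this approach (the Element type and the focus callback below are hypothetical stand-ins, not any real platform API such as MSAA or AT-SPI), the screen reader simply announces whatever the API reports for the newly focused control:

    # Hypothetical accessibility-API sketch: the platform reports the focused
    # element's role, name, and state; the reader announces them directly,
    # with no off-screen model required.

    from dataclasses import dataclass

    @dataclass
    class Element:
        role: str           # e.g. "button", "checkbox"
        name: str           # the accessible name / caption
        state: str = ""     # e.g. "checked", "unavailable"

    def on_focus_changed(element: Element) -> None:
        parts = [element.name, element.role]
        if element.state:
            parts.append(element.state)
        print("[TTS]", ", ".join(parts))

    # The toolkit reports that focus landed on a Save button:
    on_focus_changed(Element(role="button", name="Save"))   # -> [TTS] Save, button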

Screen readers can be assumed to be able to access all display content that is not intrinsically inaccessible. Web browsers, word processors, icons, windows, and email programs are just some of the applications used successfully by screen reader users. However, according to some users,[who?] using a screen reader is considerably more difficult than using a GUI, and many applications have specific problems resulting from the nature of the application (e.g. animations) or from failure to comply with accessibility standards for the platform.

Customization

Most screen readers allow the user to select whether most punctuation is announced or silently ignored. Some screen readers can be tailored to a particular application through scripting. One advantage of scripting is that it allows customizations to be shared among users, increasing accessibility for all. JAWS enjoys an active script-sharing community, for example.[18]
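
A small sketch of the announce-or-ignore punctuation setting (the level names and spoken forms are illustrative, not any particular product's configuration): each preset selects which marks are verbalized before the text reaches the synthesizer:

    # Sketch of a punctuation-verbosity setting: marks in the active level
    # are spoken by name; all others are silently dropped. Illustrative only.

    PUNCT_NAMES = {",": "comma", ":": "colon", "!": "exclamation", "?": "question mark"}
    LEVELS = {"none": set(), "some": {"!", "?"}, "all": set(PUNCT_NAMES)}

    def to_speech(text: str, level: str = "some") -> str:
        out = []
        for ch in text:
            if ch in PUNCT_NAMES:
                if ch in LEVELS[level]:
                    out.append(" " + PUNCT_NAMES[ch] + " ")   # announce the mark
                # otherwise: silently ignore it
            else:
                out.append(ch)
        return "".join(out)

    print(to_speech("Save failed: disk full!"))
    # -> "Save failed disk full exclamation "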

Verbosity

Verbosity is a feature of screen reading software that supports vision-impaired computer users. Speech verbosity controls enable users to choose how much speech feedback they wish to hear. Specifically, verbosity settings allow users to construct a mental model of web pages displayed on their computer screen. Based on verbosity settings, a screen-reading program informs users of certain formatting changes, such as when a frame or table begins and ends, where graphics have been inserted into the text, or when a list appears in the document. The verbosity settings can also control the level of descriptiveness of elements, such as lists, tables, and regions.[19] For example, JAWS provides low, medium, and high web verbosity preset levels. The high web verbosity level provides more detail about the contents of a webpage.[20]
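
A sketch in the spirit of those presets (the event names and level contents below are invented, not JAWS's actual configuration): the chosen level decides which structural changes are announced at all:

    # Sketch of verbosity presets: each level whitelists the structural
    # events that get announced. Event names and presets are illustrative.

    VERBOSITY = {
        "low":    set(),
        "medium": {"table", "list"},
        "high":   {"table", "list", "frame", "graphic", "region"},
    }

    def announce_structure(event: str, detail: str, level: str) -> None:
        """Announce a formatting change only if the preset includes it."""
        if event in VERBOSITY[level]:
            print("[TTS]", event, detail)

    announce_structure("table", "with 3 columns and 5 rows", level="high")  # spoken
    announce_structure("frame", "navigation", level="medium")               # silent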

Language

Some screen readers can read text in more than one language, provided that the language of the material is encoded in its metadata.[21]
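
For instance, a minimal sketch of metadata-driven voice switching (the voice table is invented): when a fragment is tagged with a language code, as with HTML's lang attribute, the reader hands it to a matching voice:

    # Sketch of language switching driven by document metadata: a tagged
    # language code selects the synthesizer voice. The voice table is invented.

    VOICES = {"en": "English voice", "fr": "French voice", "de": "German voice"}

    def speak(text: str, lang: str = "en") -> None:
        voice = VOICES.get(lang, VOICES["en"])   # fall back to the default voice
        print("[TTS/" + voice + "]", text)

    # e.g. for the fragment: <p>Hello <span lang="fr">bonjour</span></p>
    speak("Hello")
    speak("bonjour", lang="fr")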

References

  1. "Types of Assistive Technology Products". Microsoft Accessibility. https://www.microsoft.com/enable/at/types.aspx. 
  2. 2.0 2.1 2.2 "Screen reading technology". AFB. https://www.afb.org/blindness-and-low-vision/using-technology/assistive-technology-videos/screen-reading-technology. 
  3. "Screen Readers and how they work with E-Learning". Virginia.gov. http://www.vadsa.org/ace/reader.htm. 
  4. "Hear text read aloud with Narrator". Microsoft. http://windows.microsoft.com/en-us/windows/hear-text-read-aloud-narrator#1TC=windows-8. 
  5. "iCons and Earcons: Critical but often overlooked tech skills". Perkins School for the Blind. March 21, 2023. https://www.perkins.org/resource/icons-and-earcons-critical-often-overlooked-tech-skills/. 
  6. "What is a Screen Reader". Nomensa. https://www.nomensa.com/blog/2005/what-screen-reader. 
  7. "Screen Reader User Survey #9". WebAIM. https://webaim.org/projects/screenreadersurvey9/. 
  8. "ChromeVox". Google. http://www.chromevox.com/. 
  9. Cooke, Annemarie (March 2004). "A History of Accessibility at IBM". https://www.afb.org/aw/5/2/14760. 
  10. 10.0 10.1 "Making A Difference Award (2009) — Jim Thatcher (interview)". 2009. https://www.sigcas.org/2018/02/08/making-a-difference-award-2009-jim-thatcher-interview/. 
  11. "Talking Terminals. BYTE, September 1982". http://www.edstoffel.com/david/talkingterminals.html. 
  12. Paul Blenkhorn, "The RCEVH project on micro-computer systems and computer assisted learning", British Journal of Visual Impairment, 4/3, 101-103 (1986). Free HTML version at Visugate .
  13. "Access to personal computers using speech synthesis. RNIB New Beacon No.76, May 1992". March 3, 2014. http://www.rnib.org.uk/information-everyday-living-using-technology-beginners-guides/beginners-guide-assistive-technology. 
  14. According to "Making the GUI Talk " (by Richard Schwerdtfeger, BYTE December 1991, p. 118-128), the first screen reader to build an off-screen model was outSPOKEN.
  15. Implementing Accessibility on Android.
  16. Apple Accessibility API.
  17. "Oracle Technology Network for Java Developers – Oracle Technology Network – Oracle". http://java.sun.com/products/accessbridge/. 
  18. "An Introduction to JAWS Scripting". https://afb.org/aw/4/6/14806. 
  19. Zong, Jonathan; Lee, Crystal; Lundgard, Alan; Jang, JiWoong; Hajas, Daniel; Satyanarayan, Arvind (2022). "Rich Screen Reader Experiences for Accessible Data Visualization" (in en). Computer Graphics Forum 41 (3): 15–27. doi:10.1111/cgf.14519. ISSN 0167-7055. 
  20. "JAWS Web Verbosity". https://support.freedomscientific.com/SurfsUp/7-WebVerbosity.htm. 
  21. Chris Heilmann (March 13, 2008). "Yahoo! search results now with natural language support". Yahoo! Developer Network Blog. https://developer.yahoo.com/blogs/ydn/yahoo-search-results-now-natural-language-support-7318.html.