
speech recognition input #2976

Closed
nvaccessAuto opened this issue Feb 9, 2013 · 6 comments

@nvaccessAuto

Reported by zahari_bgr on 2013-02-09 22:02
Create a speech recognition input method.
Support different engines, starting with the Microsoft Speech Recognition engine that is included in SAPI.
An introduction to Microsoft Speech Recognition in SAPI 5.3 is available here:
http://msdn.microsoft.com/en-us/magazine/cc163663.aspx

@nvaccessAuto
Author

Comment 1 by jteh on 2013-02-09 23:15
Please be more specific about what you are requesting. Entering text via speech recognition is already handled by other software and is out of scope for a screen reader. Are you talking about NVDA commands? How do you envision this working?

@nvaccessAuto
Author

Comment 2 by zahari_bgr on 2013-02-11 16:31
Yes, I'm talking about NVDA commands.
According to the linked article on Microsoft Speech Recognition, programs can register grammars in several different ways.
I think NVDA could listen for a keyword, e.g. "NVDA", then for a valid NVDA command by its name, e.g. "NVDA menu" or "report title", and then for any additional parameters, if needed.
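For illustration only, here is a minimal sketch of registering a SAPI 5 command-and-control grammar from Python via pywin32 and reacting to recognized phrases. This is not NVDA code; the rule name, the fixed phrases, and the hard-coded enum values are assumptions for the example:

```python
# Minimal sketch: assumes pywin32 is installed and the Windows shared
# speech recognizer (SAPI 5) is available. Not how NVDA would necessarily do it.
import time

import pythoncom
import win32com.client

# SAPI 5 enum values (SpeechRuleAttributes / SpeechRuleState), hard-coded here.
SRATopLevel = 1
SRADynamic = 2
SGDSActive = 1


class RecoContextEvents:
    """Receives _ISpeechRecoContextEvents callbacks from the recognition context."""

    def OnRecognition(self, StreamNumber, StreamPosition, RecognitionType, Result):
        phrase = win32com.client.Dispatch(Result).PhraseInfo.GetText()
        # Here a screen reader would look up and run the matching command.
        print("Recognized command:", phrase)


recognizer = win32com.client.Dispatch("SAPI.SpSharedRecognizer")
context = win32com.client.WithEvents(recognizer.CreateRecoContext(), RecoContextEvents)

grammar = context.CreateGrammar()
rule = grammar.Rules.Add("NVDACommands", SRATopLevel | SRADynamic, 1)
# Each phrase becomes a complete path through the top-level rule.
for phrase in ("NVDA menu", "report title"):
    rule.InitialState.AddWordTransition(None, phrase)
grammar.Rules.Commit()
grammar.CmdSetRuleState("NVDACommands", SGDSActive)

# Pump COM messages so recognition events are delivered.
while True:
    pythoncom.PumpWaitingMessages()
    time.sleep(0.05)
```

A real implementation would of course generate the phrase list from NVDA's registered commands rather than hard-coding it, and would dispatch to the corresponding script instead of printing.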

@nvaccessAuto
Author

Comment 3 by zahari_bgr on 2014-03-08 19:51
By speech recognition input I mean something like keyboard input, mouse input, touch input, braille input, etc., so speech gestures could be mapped to different NVDA commands.
Maybe it should be modular, like the speech synthesizer system, so there could be different drivers: one for Microsoft Speech Recognition, another for Google Speech Recognition, and so on.
I suggested Microsoft Speech Recognition because it is integrated into Windows and the interfaces needed for this to work are defined in SAPI.
Speech-based interaction with computers and other smart devices is getting more popular every year. It could also help many people with limited mobility, and many people may find it better than a touch screen for use on devices without a physical keyboard.
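As a rough sketch of what such a modular, driver-based design might look like (all class and method names below are hypothetical, loosely modelled on how NVDA separates synthesizer drivers from the rest of the code; none of them exist in NVDA today):

```python
# Hypothetical sketch of a pluggable speech recognition "driver" layer.
from abc import ABC, abstractmethod
from typing import Callable, Dict


class SpeechRecognitionDriver(ABC):
    """Base class every recognition backend would implement."""

    name = "base"

    @abstractmethod
    def initialize(self, onCommand: Callable[[str], None]) -> None:
        """Start listening; call onCommand(phrase) whenever a command is heard."""

    @abstractmethod
    def registerCommands(self, phrases: Dict[str, Callable[[], None]]) -> None:
        """Register the phrases (speech gestures) the driver should recognize."""

    @abstractmethod
    def terminate(self) -> None:
        """Stop listening and release the engine."""


class Sapi5RecognitionDriver(SpeechRecognitionDriver):
    """Backend for the Microsoft Speech Recognition engine via SAPI 5."""

    name = "sapi5"

    def initialize(self, onCommand):
        self._onCommand = onCommand
        # ... create the shared recognizer, context and grammar, as in the earlier sketch

    def registerCommands(self, phrases):
        self._scripts = dict(phrases)
        # ... add one word transition per phrase and commit the grammar

    def terminate(self):
        pass  # ... deactivate the grammar and release the COM objects


# Usage idea: speech gestures map to existing NVDA scripts, just as keyboard
# gestures do, and another driver (e.g. a Google engine) could be swapped in.
def openNvdaMenu():
    print("NVDA menu would open here")


driver = Sapi5RecognitionDriver()
driver.registerCommands({"NVDA menu": openNvdaMenu})
driver.initialize(lambda phrase: driver._scripts[phrase]())
```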

@dkager
Collaborator

dkager commented Jul 12, 2017

@jcsteh Thoughts? Thinking about wontfix because AFAIK speech recognition software can emulate keyboard shortcuts and hence interact with NVDA. But it could be a very nice feature.

@jcsteh
Contributor

jcsteh commented Jul 12, 2017

Supporting voice commands is definitely in scope for NVDA. I'd say this is a valid (and potentially useful) feature. However, DictationBridge already supports this for Dragon and might some day support it for Windows Speech Recognition. While DictationBridge is a separate add-on, it's a much more compelling solution because it makes dictation more accessible as well, and duplicating effort here would be pretty wasteful. Closing.

@derekriemer
Collaborator

Covered by DictationBridge.
