Implement different profiles #3562
Comments
Comment 1 by jteh on 2013-10-07 22:21
Comment 2 by jorgtum on 2013-10-08 08:47 That does not cover everything: punctuation symbols must be read in the language of the speech instead of the interface language, but the announcements of links, headings, images, lists, tables, columns, etc. should also be in the speech language. When I am reading an English text with an English voice, I need everything to be read in English; for instance, a link in the middle of the text must be announced as "link" (English) and not as "vinclo" (Aragonese). The same applies to every language: if I am reading a French text, everything should be pronounced in French, and so on. Thanks, and sorry for repeating myself. Jorge.
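Below is a minimal Python sketch of what per-voice symbol tables could look like. `SYMBOL_TABLES`, its entries, and `describe_symbol` are illustrative assumptions, not NVDA's actual symbol-processing API.

```python
# Illustrative sketch only: per-language symbol tables keyed by the voice
# language instead of the interface language. SYMBOL_TABLES, its entries,
# and describe_symbol are assumptions, not NVDA's actual API.

SYMBOL_TABLES = {
    "en": {".": "dot", ",": "comma", "/": "slash"},
    "an": {".": "punto", ",": "coma", "/": "barra"},  # placeholder Aragonese entries
    "fr": {".": "point", ",": "virgule", "/": "barre oblique"},
}

def describe_symbol(symbol: str, voice_language: str, interface_language: str) -> str:
    # The core of the request: prefer the speech (voice) language, falling
    # back to the interface language only when no table is available.
    table = SYMBOL_TABLES.get(voice_language) or SYMBOL_TABLES.get(interface_language, {})
    return table.get(symbol, symbol)

# With an Aragonese interface but an English voice, "/" is read as "slash".
print(describe_symbol("/", voice_language="en", interface_language="an"))
```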
Comment 3 by leonarddr on 2013-10-08 08:52
Comment 4 by jteh on 2013-10-15 07:01
Comment 5 by jorgtum on 2013-10-15 08:10 I think the priority is that the punctuation symbols (colon, dot, semicolon, ...) are read in the same language as the speech; if they were in a separate file, this would be easier, and it should change on the fly when the voice changes. Announcing other messages such as links, headings, etc. has lower priority, but I think it is also important.
I think this could be accomplished with the speech refactor, if you set the same synth as a second synth to pronounce only the element types and states. Then you link the first synth with the second one, so that changing the voice language for synth 1 also changes the language for synth 2. Is this possible from a programming point of view? The most complicated part is managing the interaction between the voices: synth 1 should pause for x ms, during this pause synth 2 should report the element type, and afterwards synth 1 should continue reading the displayed text until the next element type or state is identified.
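As a rough illustration of that two-synth interaction, here is a self-contained Python sketch; the `Synth` class, the fixed pause length, and `read_with_element_types` are assumptions for demonstration and do not reflect NVDA's real speech architecture.

```python
# Rough sketch of the two-synth interaction described above. The Synth class,
# the fixed pause length, and read_with_element_types are illustrative
# assumptions and do not reflect NVDA's real speech architecture.
import time
from dataclasses import dataclass

@dataclass
class Synth:
    name: str
    language: str

    def speak(self, text: str) -> None:
        print(f"[{self.name}/{self.language}] {text}")

def read_with_element_types(segments, content_synth: Synth, meta_synth: Synth,
                            pause_ms: int = 200) -> None:
    # Lock synth 2's language to synth 1's, as suggested in the comment.
    meta_synth.language = content_synth.language
    for text, element_type in segments:
        if element_type:
            # Synth 1 pauses for pause_ms while synth 2 reports the element type.
            time.sleep(pause_ms / 1000)
            meta_synth.speak(element_type)
        # Synth 1 then continues with the displayed text.
        content_synth.speak(text)

segments = [
    ("Welcome to the site.", None),
    ("home", "link"),
    ("About us", "heading level 2"),
]
read_with_element_types(segments, Synth("synth1", "en"), Synth("synth2", "en"))
```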
I think another problem is separating the language used for reporting element types and states from the interface language.
Maybe the speech refactor is not required at all if the language for element types and states is separated from the interface language.
Maybe adding a dialect like en_element_type and using automatic dialect switching would work.
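A toy sketch of how such pseudo-dialect tags could drive automatic switching. The `("lang", ...)` tuples and `speak_sequence` helper are invented for illustration; NVDA's real speech sequences use dedicated command objects for language changes, so treat this purely as a sketch of the idea.

```python
# Toy sketch of the pseudo-dialect idea: element-type announcements carry a
# dialect tag derived from the content language, so automatic language
# switching keeps them in step with the text. The ("lang", ...) tuples and
# speak_sequence are invented for illustration.

def speak_sequence(sequence) -> None:
    current_lang = "default"
    for item in sequence:
        if isinstance(item, tuple) and item[0] == "lang":
            current_lang = item[1]  # switch the active language/dialect
        else:
            print(f"({current_lang}) {item}")

speak_sequence([
    ("lang", "fr"), "Bienvenue",           # document text in French
    ("lang", "fr_element_type"), "lien",   # element type in a matching pseudo dialect
    ("lang", "fr"), "accueil",
])
```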
Reported by jorgtum on 2013-10-07 19:04
I am a regular user of NVDA. I use the Aragonese language for the interface and the same voice, but sometimes I need to read texts in English or other languages. I can change the voice, the variant, the pitch, the volume and the speed, but the symbols are still pronounced from the Aragonese table, which is a problem for me. I don't want to enable automatic language switching because it does not work well for me.
For this reason, I suggest creating different customizable profiles, each with a voice, speed, variant, pitch, punctuation level, etc., and binding them to NVDA+Ctrl+arrow keys so they can be changed quickly, with the punctuation symbols (. , / - ... etc.) read in the language of the current voice. The same goes for the announcement of links, headings, etc.
Thanks,
Jorge.
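A minimal sketch of what the requested profiles might look like in Python. The `SpeechProfile` fields, the sample voices, and the cycling helper a hotkey (e.g. NVDA+Ctrl+arrow) might call are all assumptions, not NVDA's actual configuration-profile API.

```python
# Minimal sketch of the requested profiles. Field names, sample voices, and
# the hotkey binding point are illustrative assumptions only.
from dataclasses import dataclass
from itertools import cycle

@dataclass
class SpeechProfile:
    name: str
    voice: str
    language: str  # also governs punctuation and element-type announcements
    rate: int
    pitch: int
    punctuation_level: str

PROFILES = cycle([
    SpeechProfile("Aragonese", "voice-an", "an", 50, 50, "most"),
    SpeechProfile("English", "voice-en", "en", 55, 50, "some"),
    SpeechProfile("French", "voice-fr", "fr", 50, 45, "all"),
])

def next_profile() -> SpeechProfile:
    # Intended to be bound to a fast hotkey so switching needs no dialogs.
    profile = next(PROFILES)
    print(f"Switched to profile: {profile.name} ({profile.language})")
    return profile

next_profile()  # Aragonese
next_profile()  # English
```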