Replace `<key>` with the API key for your speech service. Replace `<location>` with the location you used when you created the speech service resource.
1. Add the following code to create a speech recognizer:
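    A minimal sketch of what this might look like, assuming the speech services Python SDK (the `azure-cognitiveservices-speech` pip package), with `<key>` and `<location>` being the values from your speech resource:

    ```python
    import azure.cognitiveservices.speech as speechsdk

    # Configure the speech service using the API key and location
    # of your speech resource
    speech_config = speechsdk.SpeechConfig(subscription='<key>', region='<location>')

    # Create a recognizer that listens to the default microphone
    recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
    ```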
1. The speech recognizer runs on a background thread, listening for audio and converting any speech in it to text. You can get the text using a callback function - a function you define and pass to the recognizer. Every time speech is detected, the callback is called. Add the following code to define a callback that prints the text to the console, and pass this callback to the recognizer:
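    A sketch of that callback, again assuming the speech SDK:

    ```python
    def recognized(args):
        # args.result.text contains the speech converted to text
        print(args.result.text)

    # Call the callback every time speech is recognized
    recognizer.recognized.connect(recognized)

    # Start listening on a background thread
    recognizer.start_continuous_recognition()
    ```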
The `number` entities will be an array of numbers. For example, if you said *"Set a four minute 17 second timer."*, then the `number` array will contain 2 integers - 4 and 17.
Once published, the LUIS model can be called from code.
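As a hedged sketch (assuming the LUIS Python SDK, where the prediction exposes entities as a dictionary, and with entity names taken from the description above), the relevant entities could be read from the prediction like this:

```python
# The entity names used here are assumptions for illustration
entities = prediction_response.prediction.entities
numbers = entities['number']        # e.g. [4, 17]
time_units = entities['time unit']  # e.g. ['minute', 'second']
total_seconds = 0
```

Each number can then be paired with its time unit in a loop, and converted to seconds: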
```python
if time_unit == 'minute':
    total_seconds += number * 60
else:
    total_seconds += number
```
1. Finally, outside this loop through the entities, log the total time for the timer:
    ```python
    logging.info(f'Timer required for {total_seconds} seconds')
    ```
1. Run the function app and speak into your IoT device. You will see the total time for the timer in the function app output:
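    For example, for a timer of 2 minutes and 37 seconds, you might see a log line something like:

    ```output
    Timer required for 157 seconds
    ```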
`6-consumer/lessons/2-language-understanding/assignment.md`
## Instructions
So far in this lesson you have trained a model to understand setting a timer. Another useful feature is cancelling a timer - maybe your bread is ready and can be taken out of the oven before the timer has elapsed.
Add a new intent to your LUIS app to cancel the timer. It won't need any entities, but will need some example sentences. Handle this in your serverless code if it is the top intent, logging that the intent was recognized.
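A minimal sketch of the serverless side, assuming you name the new intent `cancel timer`:

```python
# 'cancel timer' is an assumption - use whatever name you gave your intent
if prediction_response.prediction.top_intent == 'cancel timer':
    logging.info('Cancel timer intent recognized')
```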
`6-consumer/lessons/3-spoken-feedback/README.md`
## Text to speech
Text to speech, as the name suggests, is the process of converting text into audio that contains the text as spoken words. The basic principle is to break down the words in the text into their constituent sounds (known as phonemes), and stitch together audio for those sounds, either using pre-recorded audio or using audio generated by AI models.
*The process of converting text to speech*
Text to speech systems typically have 3 stages:
* Text analysis
* Linguistic analysis
* Wave-form generation
### Text analysis
Text analysis involves taking the text provided, and converting it into words that can be used to generate speech. For example, if you convert "Hello world", there is no text analysis needed; the two words can be converted to speech. If you have "1234" however, then this might need to be converted either into the words "One thousand, two hundred thirty four" or "One, two, three, four" depending on the context. For "I have 1234 apples", it would be "One thousand, two hundred thirty four", but for "The child counted 1234" it would be "One, two, three, four".
The words created vary not only with the language, but also with the locale of that language. For example, in American English, 120 would be "One hundred twenty"; in British English it would be "One hundred and twenty", with the use of "and" after the hundreds.
✅ Some other examples that require text analysis include "in" as a short form of inch, and "st" as a short form of saint and street. Can you think of other examples in your language of words that are ambiguous without context?
Once the words have been defined, they are sent for linguistic analysis.
### Linguistic analysis
Linguistic analysis breaks the words down into phonemes. Phonemes are based not just on the letters used, but also on the other letters in the word. For example, in English the 'a' sound in 'car' and 'care' is different. The English language has 44 different phonemes for the 26 letters in the alphabet, some shared by different letters, such as the same phoneme used at the start of 'circle' and 'serpent'.
✅ Do some research: What are the phonemes for your language?
Once the words have been converted to phonemes, these phonemes need additional data to support intonation, adjusting the tone or duration depending on the context. One example is in English, where pitch increases can be used to convert a sentence into a question: raising the pitch of the last word implies a question.
For example - the sentence "You have an apple" is a statement saying that you have an apple. If the pitch goes up at the end, increasing for the word apple, it becomes the question "You have an apple?", asking if you have an apple. The linguistic analysis needs to use the question mark at the end to decide to increase pitch.
Once the phonemes have been generated, they can be sent for wave-form generation to produce the audio output.
### Wave-form generation
The first electronic text to speech systems used single audio recordings for each phoneme, leading to very monotonous, robotic sounding voices. The linguistic analysis would produce phonemes; these would be loaded from a database of sounds and stitched together to make the audio.
✅ Do some research: Find some audio recordings from early speech synthesis systems. Compare them to modern speech synthesis, such as that used in smart assistants.
More modern wave-form generation uses ML models built using deep learning (very large neural networks that act in a similar way to neurons in the brain) to produce more natural sounding voices that can be indistinguishable from humans.
> 💁 Some of these ML models can be re-trained using transfer learning to sound like real people. This means using voice as a security system, something banks are increasingly trying to do, is no longer a good idea as anyone with a recording of a few minutes of your voice can impersonate you.
These large ML models are being trained to combine all three steps into end-to-end speech synthesizers.
## Set the timer
The timer can be set by sending a command from the serverless code, instructing the IoT device to set the timer. This command will contain the time in seconds till the timer needs to go off.
You will need to set up the connection string for the IoT Hub with the service policy (*NOT* the device) in your `local.settings.json` file and add the `azure-iot-hub` pip package to your `requirements.txt` file. The device ID can be extracted from the event.
1. The direct method you send needs to be called `set-timer`, and will need to send the length of the timer as a JSON property called `seconds`. Use the following code to build the `CloudToDeviceMethod` using the `total_seconds` calculated from the data extracted by LUIS:
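    A minimal sketch, assuming the `azure-iot-hub` pip package, with the connection string read from an `IOT_HUB_CONNECTION_STRING` app setting (the setting name is an assumption):

    ```python
    import json
    import os

    from azure.iot.hub import IoTHubRegistryManager
    from azure.iot.hub.models import CloudToDeviceMethod

    # Connect to the IoT Hub using the service policy connection string
    registry_manager = IoTHubRegistryManager(os.environ['IOT_HUB_CONNECTION_STRING'])

    # Build the direct method, sending the timer length as the 'seconds' property
    payload = json.dumps({'seconds': total_seconds})
    direct_method = CloudToDeviceMethod(method_name='set-timer', payload=payload)

    # Invoke the direct method on the device that sent the audio
    registry_manager.invoke_device_method(device_id, direct_method)
    ```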
> 💁 You can find this code in the [code-command/wio-terminal](code-command/wio-terminal), [code-command/virtual-device](code-command/virtual-device), or [code-command/pi](code-command/pi) folder.
## Convert text to speech
The same speech service you used to convert speech to text can be used to convert text back into speech, and this can be played through a speaker on your IoT device. The text to convert is sent to the speech service, along with the type of audio required (such as the sample rate), and binary data containing the audio is returned.
When you send this request, you send it using *Speech Synthesis Markup Language* (SSML), an XML-based markup language for speech synthesis applications. This defines not only the text to be converted, but also the language of the text and the voice to use, and can even be used to define speed, volume, and pitch for some or all of the words in the text.
For example, this SSML defines a request to convert the text "Your 3 minute 5 second time has been set" to speech using a British English voice called `en-GB-MiaNeural`:
```xml
<speak version='1.0' xml:lang='en-GB'>
    <voice xml:lang='en-GB' name='en-GB-MiaNeural'>
        Your 3 minute 5 second time has been set
    </voice>
</speak>
```
> 💁 Most text to speech systems have multiple voices for different languages, with relevant accents such as a British English voice with an English accent and a New Zealand English voice with a New Zealand accent.
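As an illustration, a sketch of sending such a request from Python using the REST version of the speech service (the output format chosen here is just one of the available options):

```python
import requests

# The values from your speech resource
key = '<key>'
location = '<location>'

# Exchange the API key for a short-lived access token
token_url = f'https://{location}.api.cognitive.microsoft.com/sts/v1.0/issuetoken'
token = requests.post(token_url, headers={'Ocp-Apim-Subscription-Key': key}).text

# Send the SSML document, asking for 16-bit mono PCM audio at 48KHz
tts_url = f'https://{location}.tts.speech.microsoft.com/cognitiveservices/v1'
headers = {
    'Authorization': 'Bearer ' + token,
    'Content-Type': 'application/ssml+xml',
    'X-Microsoft-OutputFormat': 'riff-48khz-16bit-mono-pcm',
}
ssml = "<speak version='1.0' xml:lang='en-GB'>...</speak>"  # the SSML shown above
response = requests.post(tts_url, headers=headers, data=ssml.encode('utf-8'))

audio = response.content  # binary audio data, ready to play through the speaker
```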
### Task - convert text to speech
Work through the relevant guide to convert text to speech using your IoT device.
## 🚀 Challenge
SSML has ways to change how words are spoken, such as adding emphasis to certain words, adding pauses, or changing pitch. Try some of these out, sending different SSML from your IoT device and comparing the output. You can read more about SSML, including how to change the way words are spoken, in the [Speech Synthesis Markup Language (SSML) Version 1.1 specification from the World Wide Web Consortium](https://www.w3.org/TR/speech-synthesis11/).
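For example, this variation on the earlier request adds a pause, emphasis, and a pitch change (the `break`, `emphasis`, and `prosody` elements come from the SSML specification):

```xml
<speak version='1.0' xml:lang='en-GB'>
    <voice xml:lang='en-GB' name='en-GB-MiaNeural'>
        Your timer has been set.
        <break time='500ms'/>
        <emphasis level='strong'>Don't forget it!</emphasis>
        <prosody pitch='+10%'>Happy baking.</prosody>
    </voice>
</speak>
```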
* Read more on speech synthesis on the [Speech synthesis page on Wikipedia](https://wikipedia.org/wiki/Speech_synthesis)
* Read more on ways criminals are using speech synthesis to steal on the [Fake voices 'help cyber crooks steal cash' story on BBC news](https://www.bbc.com/news/technology-48908736)
In the assignment for the last lesson, you added a cancel timer intent to LUIS. For this assignment you need to handle this intent in the serverless code, send a command to the IoT device, then cancel the timer.
| Criteria | Exemplary | Adequate | Needs Improvement |
| -------- | --------- | -------- | ----------------- |
| Handle the intent in serverless code and send a command | Was able to handle the intent and send a command to the device | Was able to handle the intent but was unable to send the command to the device | Was unable to handle the intent |
| Cancel the timer on the device | Was able to receive the command and cancel the timer | Was able to receive the command but not cancel the timer | Was unable to receive the command |