<?xml version='1.0'?>
<!DOCTYPE rfc SYSTEM 'rfc2629.dtd'>
<?rfc toc="yes" symrefs="yes" ?>

<rfc ipr="trust200902" category="std" docName="draft-ietf-codec-opus-03">

<front>
<title abbrev="Interactive Audio Codec">Definition of the Opus Audio Codec</title>


<author initials="JM" surname="Valin" fullname="Jean-Marc Valin">
<organization>Octasic Inc.</organization>
<address>
<postal>
<street>4101, Molson Street</street>
<city>Montreal</city>
<region>Quebec</region>
<code></code>
<country>Canada</country>
</postal>
<phone>+1 514 282-8858</phone>
<email>jean-marc.valin@octasic.com</email>
</address>
</author>

<author initials="K." surname="Vos" fullname="Koen Vos">
<organization>Skype Technologies S.A.</organization>
<address>
<postal>
<street>Stadsgarden 6</street>
<city>Stockholm</city>
<region></region>
<code>11645</code>
<country>SE</country>
</postal>
<phone>+46 855 921 989</phone>
<email>koen.vos@skype.net</email>
</address>
</author>


<date day="15" month="February" year="2011" />

<area>General</area>

<workgroup></workgroup>

<abstract>
<t>
This document describes the Opus codec, designed for interactive speech and audio 
transmission over the Internet.
</t>
</abstract>
</front>

<middle>

<section anchor="introduction" title="Introduction">
<t>
We propose the Opus codec, based on a linear prediction (LP) layer and an
MDCT-based enhancement layer. The main idea behind the proposal is that
the low frequencies of speech are usually coded more efficiently using
linear prediction codecs (such as CELP variants), while the higher frequencies
are coded more efficiently in the transform domain (e.g., MDCT). For low
sampling rates, the MDCT layer is not useful and only the LP-based layer is
used. On the other hand, non-speech signals are not always adequately coded
using linear prediction, so for music only the MDCT-based layer is used.
</t>

<t>
In this proposed prototype, the LP layer is based on the 
<eref target='http://developer.skype.com/silk'>SILK</eref> codec 
<xref target="SILK"></xref> and the MDCT layer is based on the 
<eref target='http://www.celt-codec.org/'>CELT</eref>  codec
 <xref target="CELT"></xref>.
</t>

<t>This is a work in progress.</t>
</section>

<section anchor="hybrid" title="Opus Codec">

<t>
In hybrid mode, each frame is coded first by the LP layer and then by the MDCT 
layer. In the current prototype, the cutoff frequency is 8 kHz. In the MDCT
layer, all bands below 8 kHz are discarded, such that there is no coding
redundancy between the two layers. Both layers also use the same instance of 
the range coder to encode the signal, which ensures that no "padding bits" are
wasted. The hybrid approach makes it easy to support both constant bit-rate
(CBR) and variable bit-rate (VBR) coding. Although the SILK layer used is VBR,
it is easy to make the bit allocation of the CELT layer produce a final stream
that is CBR by using all the bits left unused by the SILK layer.
</t>

<t>
In addition to their frame size, the SILK and CELT codecs require
a look-ahead of 5.2 ms and 2.5 ms, respectively. SILK's look-ahead is due to
noise shaping estimation (5 ms) and the internal resampling (0.2 ms), while
CELT's look-ahead is due to the overlapping MDCT windows. To compensate for the
difference, the CELT encoder input is delayed by 2.7 ms. This ensures that low
frequencies and high frequencies arrive at the same time.
</t>


<section title="Source Code">
<t>
The source code is currently available in a
<eref target='git://git.xiph.org/users/jm/ietfcodec.git'>Git repository</eref> 
which references two other
repositories (for SILK and CELT). Development snapshots are provided at 
<eref target='http://opus-codec.org/'/>.

</t>
</section>

</section>

<section anchor="modes" title="Codec Modes">
<t>
There are three possible operating modes for the proposed prototype:
<list style="numbers">
<t>A linear prediction (LP) mode for use in low bit-rate connections with up to 8 kHz audio bandwidth (16 kHz sampling rate)</t>
<t>A hybrid (LP+MDCT) mode for full-bandwidth speech at medium bitrates</t>
<t>An MDCT-only mode for very low delay speech transmission as well as music transmission.</t>
</list>
Each of these modes supports a number of different frame sizes and sampling
rates. In order to distinguish between the various modes and configurations,
we define a single-byte table-of-contents (TOC) header that can be used in the transport layer 
(e.g., RTP) to signal this information. The following describes the proposed
TOC byte.
</t>

<t>
The LP mode supports the following configurations (numbered from 0 to 11):
<list style="symbols">
<t>8 kHz:  10, 20, 40, 60 ms (0..3)</t>
<t>12 kHz: 10, 20, 40, 60 ms (4..7)</t>
<t>16 kHz: 10, 20, 40, 60 ms (8..11)</t>
</list>
for a total of 12 configurations.
</t>

<t>
The hybrid mode supports the following configurations (numbered from 12 to 15):
<list style="symbols">
<t>32 kHz: 10, 20 ms (12..13)</t>
<t>48 kHz: 10, 20 ms (14..15)</t>
</list>
for a total of 4 configurations.
</t>

<t>
The MDCT-only mode supports the following configurations (numbered from 16 to 31):
<list style="symbols">
<t>8 kHz:  2.5, 5, 10, 20 ms (16..19)</t>
<t>16 kHz: 2.5, 5, 10, 20 ms (20..23)</t>
<t>32 kHz: 2.5, 5, 10, 20 ms (24..27)</t>
<t>48 kHz: 2.5, 5, 10, 20 ms (28..31)</t>
</list>
for a total of 16 configurations.
</t>

<t>
There is thus a total of 32 configurations, encoded in 5 bits. One bit is used to signal mono vs stereo, which leaves 2 bits for the number of frames per packet (codes 0 to 3):
<list style="symbols">
<t>0:    1 frame in the packet</t>
<t>1:    2 frames in the packet, each with equal compressed size</t>
<t>2:    2 frames in the packet, with different compressed sizes</t>
<t>3:    arbitrary number of frames in the packet</t>
</list>
For code 2, the TOC byte is followed by the length of the first frame, encoded as described below.
For code 3, the TOC byte is followed by a byte encoding the number of frames in the packet, with the MSB indicating VBR. In the VBR case, the byte indicating the number of frames is followed by N-1 frame 
lengths encoded as described below. As an additional limit, the audio duration contained
within a packet may not exceed 120 ms.
</t>

<t>
When needed, the compressed size of a frame is indicated with one byte (or, rarely, two), with the following meaning:
<list style="symbols">
<t>0:          No frame (DTX or lost packet)</t>
<t>1-251:      Size of the frame in bytes</t>
<t>252-255:    A second byte is needed. The total size is (size[1]*4)+size[0]</t>
</list>
</t>
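
<t>
As a non-normative illustration, the following C sketch shows how a transport
layer could parse the TOC byte and the frame lengths described above. The
helper names, and the assumption that the configuration number occupies the
five most significant bits of the TOC byte (as in the examples of
<xref target="examples"></xref>), are for illustration only.
<figure>
<artwork><![CDATA[
#include <stdint.h>
#include <stddef.h>

/* Sketch: split the TOC byte into its three fields. The layout
   (5-bit configuration, one stereo bit, 2-bit frame-count code)
   is assumed from the examples below. */
static void parse_toc(uint8_t toc, int *config, int *stereo, int *code)
{
    *config = toc >> 3;       /* configuration number, 0..31  */
    *stereo = (toc >> 2) & 1; /* 0: mono, 1: stereo           */
    *code   = toc & 0x3;      /* frames-per-packet code, 0..3 */
}

/* Sketch: decode one frame length. Returns the number of length
   octets consumed (1 or 2), or 0 if the packet is truncated. */
static int parse_length(const uint8_t *p, size_t avail, int *size)
{
    if (avail < 1) return 0;
    if (p[0] < 252) {         /* 0: DTX/lost; 1-251: size     */
        *size = p[0];
        return 1;
    }
    if (avail < 2) return 0;
    *size = 4*p[1] + p[0];    /* 252-255: second octet needed */
    return 2;
}
]]></artwork>
</figure>
</t>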

<t>
The maximum size representable is 255*4+255=1275 bytes. For 20 ms frames, that 
represents a bit-rate of 510 kb/s, which is higher than any rate likely to be
useful in stereo mode (beyond that point, lossless codecs are more appropriate).
</t>

<section anchor="examples" title="Examples">
<t>
Simplest case: one narrowband mono 20-ms SILK frame:
</t>

<t>
<figure>
<artwork><![CDATA[
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|    1    |0|0|0|               compressed data...              |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
]]></artwork>
</figure>
</t>

<t>
Two 48 kHz mono 5 ms CELT frames of the same compressed size:
</t>

<t>
<figure>
<artwork><![CDATA[
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|    29   |0|0|1|               compressed data...              |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
]]></artwork>
</figure>
</t>

<t>
Two 48 kHz mono 20-ms hybrid frames of different compressed size:
</t>

<t>
<figure>
<artwork><![CDATA[
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|    15   |0|1|1|       2       |   frame size  |compressed data|
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                       compressed data...                      |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
]]></artwork>
</figure>
</t>

<t>
Four 48 kHz stereo 20-ms CELT frames of the same compressed size:

</t>

<t>
<figure>
<artwork><![CDATA[
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|    31   |1|1|0|       4       |      compressed data...       |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
]]></artwork>
</figure>
</t>
</section>


</section>

<section title="Opus Decoder">
<t>
The Opus decoder consists of two main blocks: the SILK decoder and the CELT decoder. 
The output of the Opus decoder is the sum of the outputs from the SILK and CELT decoders
with proper sample rate conversion and delay compensation as illustrated in the
block diagram below. At any given time, one or both of the SILK and CELT decoders
may be active. 
<figure>
<artwork>
<![CDATA[
                       +-------+    +----------+
                       | SILK  |    |  sample  |
                    +->|decoder|--->|   rate   |----+
bit-    +-------+   |  |       |    |conversion|    v
stream  | Range |---+  +-------+    +----------+  /---\  audio
------->|decoder|                                 | + |------>
        |       |---+  +-------+    +----------+  \---/
        +-------+   |  | CELT  |    | Delay    |    ^
                    +->|decoder|----| compens- |----+
                       |       |    | ation    |
                       +-------+    +----------+
]]>
</artwork>
</figure>
</t>

<section anchor="range-decoder" title="Range Decoder">
<t>
The range decoder extracts the symbols and integers encoded using the range encoder in
<xref target="range-encoder"></xref>. The range decoder maintains an internal
state vector composed of the two-tuple (dif,rng), representing the
difference between the high end of the current range and the actual
coded value, and the size of the current range, respectively. Both
dif and rng are 32-bit unsigned integer values. rng is initialized to
2^7. dif is initialized to rng minus the top 7 bits of the first
input octet. Then the range is immediately normalized, using the
procedure described in the following section.
</t>
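
<t>
A non-normative sketch of this initialization follows; the structure and
function names are illustrative, not the reference API. The last octet read
is kept so that its remaining low bit can be reused during normalization.
<figure>
<artwork><![CDATA[
#include <stdint.h>
#include <stddef.h>

typedef struct {
    const unsigned char *buf;   /* input octets                */
    size_t storage, offs;       /* input size and read offset  */
    uint32_t dif, rng, last;    /* state vector + last octet   */
} ec_dec_sketch;

static void ec_dec_init_sketch(ec_dec_sketch *d,
                               const unsigned char *buf, size_t len)
{
    d->buf  = buf; d->storage = len;
    d->last = len > 0 ? buf[0] : 0;
    d->offs = 1;
    d->rng  = 1u << 7;                 /* rng = 2^7                 */
    d->dif  = d->rng - (d->last >> 1); /* minus top 7 bits of octet */
    /* the range is then immediately normalized (see below) */
}
]]></artwork>
</figure>
</t>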

<section anchor="decoding-symbols" title="Decoding Symbols">
<t>
   Decoding symbols is a two-step process. The first step determines
   a value fs that lies within the range of some symbol in the current
   context. The second step updates the range decoder state with the
   three-tuple (fl,fh,ft) corresponding to that symbol, as defined in
   <xref target="encoding-symbols"></xref>.
</t>
<t>
   The first step is implemented by ec_decode() 
   (rangedec.c), 
   and computes fs = ft-min((dif-1)/(rng/ft)+1,ft), where ft is
   the sum of the frequency counts in the current context, as described
   in <xref target="encoding-symbols"></xref>. The divisions here are exact integer division. 
</t>
<t>
   In the reference implementation, a special version of ec_decode()
   called ec_decode_bin() (rangedec.c) is defined using
   the parameter ftb instead of ft. It is mathematically equivalent to
   calling ec_decode() with ft = (1&lt;&lt;ftb), but avoids one of the
   divisions.
</t>
<t>
   The decoder then identifies the symbol in the current context
   corresponding to fs; i.e., the one whose three-tuple (fl,fh,ft)
   satisfies fl &lt;= fs &lt; fh. This tuple is used to update the decoder
   state according to dif = dif - (rng/ft)*(ft-fh), and if fl is greater
   than zero, rng = (rng/ft)*(fh-fl), or otherwise rng = rng - (rng/ft)*(ft-fh). After this update, the range is normalized.
</t>
<t>
   To normalize the range, the following process is repeated until
   rng > 2^23. First, rng is set to (rng&lt;&lt;8)&amp;0xFFFFFFFF. Then the next
   8 bits of input are read into sym, using the remaining bit from the
   previous input octet as the high bit of sym, and the top 7 bits of the
   next octet for the remaining bits of sym. If no more input octets
   remain, zero bits are used instead. Then, dif is set to
   ((dif&lt;&lt;8)-sym)&amp;0xFFFFFFFF (i.e., using wrap-around if the subtraction
   overflows a 32-bit register). Finally, if dif is larger than 2^31,
   dif is then set to dif - 2^31. This process is carried out by
   ec_dec_normalize() (rangedec.c).
</t>
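<t>
Putting the above together, a non-normative sketch of symbol decoding and
normalization, reusing the ec_dec_sketch state from the previous sketch (the
reference implementation is structured differently):
<figure>
<artwork><![CDATA[
/* Normalization loop: repeat until rng > 2^23. */
static void ec_dec_normalize_sketch(ec_dec_sketch *d)
{
    while (d->rng <= (1u << 23)) {
        uint32_t next, sym;
        d->rng = (d->rng << 8) & 0xFFFFFFFFu;
        next = d->offs < d->storage ? d->buf[d->offs++] : 0;
        /* sym: leftover low bit of the previous octet, then the
           top 7 bits of the next octet */
        sym = ((d->last & 1) << 7) | (next >> 1);
        d->last = next;
        d->dif = ((d->dif << 8) - sym) & 0xFFFFFFFFu;
        if (d->dif > (1u << 31)) d->dif -= 1u << 31;
    }
}

/* Step 1: fs = ft - min((dif-1)/(rng/ft)+1, ft). */
static uint32_t ec_decode_sketch(ec_dec_sketch *d, uint32_t ft)
{
    uint32_t v = (d->dif - 1)/(d->rng/ft) + 1;
    return ft - (v < ft ? v : ft);
}

/* Step 2: update the state with the symbol's (fl,fh,ft). */
static void ec_dec_update_sketch(ec_dec_sketch *d,
                                 uint32_t fl, uint32_t fh, uint32_t ft)
{
    uint32_t s = d->rng/ft;
    d->dif -= s*(ft - fh);
    d->rng  = fl > 0 ? s*(fh - fl) : d->rng - s*(ft - fh);
    ec_dec_normalize_sketch(d);
}
]]></artwork>
</figure>
</t>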
</section>

<section anchor="decoding-ints" title="Decoding Uniformly Distributed Integers">
<t>
   Functions ec_dec_uint() or ec_dec_bits() are based on ec_decode() and
   decode one of N equiprobable symbols, each with a frequency of 1,
   where N may be as large as 2^32-1. Because ec_decode() is limited to
   a total frequency of 2^16-1, this is done by decoding a series of
   symbols in smaller contexts.
</t>
<t>
   ec_dec_bits() (entdec.c) is defined, like
   ec_decode_bin(), to take a single parameter ftb, with ftb &lt; 32,
   and produces an ftb-bit decoded integer value, t,
   initialized to zero. While ftb is greater than 8, it decodes the next
   8 most significant bits of the integer, s = ec_decode_bin(8), updates
   the decoder state with the 3-tuple (s,s+1,256), adds those bits to
   the current value of t, t = t&lt;&lt;8 | s, and subtracts 8 from ftb. Then
   it decodes the remaining bits of the integer, s = ec_decode_bin(ftb),
   updates the decoder state with the 3-tuple (s,s+1,1&lt;&lt;ftb), and adds
   those bits to the final value of t, t = t&lt;&lt;ftb | s.
</t>
<t>
   ec_dec_uint() (entdec.c) takes a single parameter,
   ft, which is not necessarily a power of two, and returns an integer,
   t, with a value between 0 and ft-1, inclusive, which is initialized to zero. Let
   ftb be the location of the highest 1 bit in the two's-complement
   representation of (ft-1), or -1 if no bits are set. If ftb>8, then
   the top 8 bits of t are decoded using t = ec_decode((ft-1>>ftb-8)+1),
   the decoder state is updated with the three-tuple
   (t,t+1,(ft-1>>ftb-8)+1), and the remaining bits are decoded with
   t = t&lt;&lt;ftb-8|ec_dec_bits(ftb-8). If, at this point, t >= ft, then
   the current frame is corrupt, and decoding should stop. If the
   original value of ftb was not greater than 8, then t is decoded with
   t = ec_decode(ft), and the decoder state is updated with the
   three-tuple (t,t+1,ft).
</t>
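<t>
A non-normative sketch of this procedure, built on the decoding sketches of
the previous section and on a hypothetical ec_dec_bits_sketch() implementing
the raw-bits procedure described above:
<figure>
<artwork><![CDATA[
/* Hypothetical raw-bits reader, per the ec_dec_bits() text above. */
static uint32_t ec_dec_bits_sketch(ec_dec_sketch *d, int ftb);

/* Returns the decoded integer in [0,ft-1], or (uint32_t)-1 when a
   corrupt frame is detected. */
static uint32_t ec_dec_uint_sketch(ec_dec_sketch *d, uint32_t ft)
{
    uint32_t t, m = ft - 1;
    int ftb = -1;
    while (m) { ftb++; m >>= 1; }     /* highest set bit of ft-1 */
    if (ftb > 8) {
        uint32_t top = ((ft - 1) >> (ftb - 8)) + 1;
        t = ec_decode_sketch(d, top);
        ec_dec_update_sketch(d, t, t + 1, top);
        t = (t << (ftb - 8)) | ec_dec_bits_sketch(d, ftb - 8);
        if (t >= ft) return (uint32_t)-1;   /* corrupt frame */
    } else {
        t = ec_decode_sketch(d, ft);
        ec_dec_update_sketch(d, t, t + 1, ft);
    }
    return t;
}
]]></artwork>
</figure>
</t>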
</section>

<section anchor="decoder-tell" title="Current Bit Usage">
<t>
   The bit allocation routines in CELT need to be able to determine a
   conservative upper bound on the number of bits that have been used
   to decode the current frame thus far. This drives allocation
   decisions that must match those made in the encoder. In the
   reference implementation, this is computed to fractional-bit
   precision by the function ec_dec_tell() (rangedec.c). Like all
   operations in the range decoder, it must be implemented in a
   bit-exact manner, and must produce exactly the same value returned by
   ec_enc_tell() after encoding the same symbols.
</t>
</section>

</section>

      <section anchor='outline_decoder' title='SILK Decoder'>
        <t>
          At the receiving end, the range decoder splits each received packet into its constituent frames. Each frame contains the information necessary to reconstruct 20 ms of the output signal.
        </t>
        <section title="Decoder Modules">
          <t>
            An overview of the decoder is given in <xref target="decoder_figure" />.
            <figure align="center" anchor="decoder_figure">
              <artwork align="center">
                <![CDATA[
   
   +---------+    +------------+    
-->| Range   |--->| Decode     |---------------------------+
 1 | Decoder | 2  | Parameters |----------+       5        |
   +---------+    +------------+     4    |                |
                       3 |                |                |
                        \/               \/               \/
                  +------------+   +------------+   +------------+
                  | Generate   |-->| LTP        |-->| LPC        |-->
                  | Excitation |   | Synthesis  |   | Synthesis  | 6
                  +------------+   +------------+   +------------+

1: Range encoded bitstream
2: Coded parameters
3: Pulses and gains
4: Pitch lags and LTP coefficients
5: LPC coefficients
6: Decoded signal
]]>
              </artwork>
              <postamble>Decoder block diagram.</postamble>
            </figure>
          </t>

          <section title='Range Decoder'>
            <t>
              The range decoder decodes the encoded parameters from the received bitstream. Its output includes the pulses and gains used to generate the excitation signal, as well as the LTP and LSF codebook indices needed to decode the LTP and LPC coefficients used by the LTP and LPC synthesis filters, respectively.
            </t>
          </section>

          <section title='Decode Parameters'>
            <t>
              Pulses and gains are decoded from the parameters that were decoded by the range decoder.
            </t>

            <t>
              When a voiced frame is decoded and the LTP codebook selection and indices are received, the LTP coefficients are decoded by choosing, from the selected codebook, the vector that corresponds to the given codebook index. This is done for each of the four subframes.
              The LPC coefficients are decoded from the LSF codebook by first adding the chosen vectors, one vector from each stage of the codebook. The resulting LSF vector is stabilized using the same method that was used in the encoder, see
              <xref target='lsf_stabilizer_overview_section' />. The LSF coefficients are then converted to LPC coefficients, and passed on to the LPC synthesis filter.
            </t>
          </section>

          <section title='Generate Excitation'>
            <t>
              The pulse signal is multiplied by the quantization gain to create the excitation signal.
            </t>
          </section>

          <section title='LTP Synthesis'>
            <t>
              For voiced speech, the excitation signal e(n) is input to an LTP synthesis filter that recreates the long-term correlation removed by the LTP analysis filter and generates an LPC excitation signal e_LPC(n), according to
              <figure align="center">
                <artwork align="center">
                  <![CDATA[
                   d
                  __
e_LPC(n) = e(n) + \  e_LPC(n - L - i) * b_i,
                  /_
                 i=-d
]]>
                </artwork>
              </figure>
              using the pitch lag L, and the decoded LTP coefficients b_i.

              For unvoiced speech, the output signal is simply a copy of the excitation signal, i.e., e_LPC(n) = e(n).
            </t>
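            <t>
              A non-normative sketch of this filter (assuming the taps b_-d..b_d are stored as b[0..2*d], that L is larger than d, and that e_LPC[] provides at least L+d samples of history before index 0; the names are illustrative):
              <figure align="center">
                <artwork align="center">
                  <![CDATA[
static void ltp_synthesis_sketch(float *e_lpc, const float *e,
                                 int n, int L, const float *b, int d)
{
    for (int j = 0; j < n; j++) {
        float acc = e[j];
        for (int i = -d; i <= d; i++)  /* sum over the LTP taps */
            acc += e_lpc[j - L - i] * b[i + d];
        e_lpc[j] = acc;
    }
}
]]>
                </artwork>
              </figure>
            </t>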
          </section>

          <section title='LPC Synthesis'>
            <t>
              In a similar manner, the short-term correlation that was removed in the LPC analysis filter is recreated in the LPC synthesis filter. The LPC excitation signal e_LPC(n) is filtered using the LPC coefficients a_i, according to
              <figure align="center">
                <artwork align="center">
                  <![CDATA[
                 d_LPC
                  __
y(n) = e_LPC(n) + \  y(n - i) * a_i,
                  /_
                  i=1
]]>
                </artwork>
              </figure>
              where d_LPC is the LPC synthesis filter order, and y(n) is the decoded output signal.
            </t>
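            <t>
              A corresponding non-normative sketch (assuming y[] provides at least d_LPC samples of history before index 0):
              <figure align="center">
                <artwork align="center">
                  <![CDATA[
static void lpc_synthesis_sketch(float *y, const float *e_lpc,
                                 int n, const float *a, int d_lpc)
{
    for (int j = 0; j < n; j++) {
        float acc = e_lpc[j];
        for (int i = 1; i <= d_lpc; i++)  /* recursive IIR part */
            acc += y[j - i] * a[i - 1];
        y[j] = acc;
    }
}
]]>
                </artwork>
              </figure>
            </t>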
          </section>
        </section>
      </section>


<section title="CELT Decoder">

<t>
Insert decoder figure.
502

</t>

<texttable anchor='table_example'>
<ttcol align='center'>Symbol(s)</ttcol>
<ttcol align='center'>PDF</ttcol>
<ttcol align='center'>Condition</ttcol>
<c>silence</c>      <c>[32767, 1]/32768</c> <c></c>
<c>post-filter</c>  <c>[1, 1]/2</c> <c></c>
<c>octave</c>       <c>uniform (6)</c><c>post-filter</c>
<c>period</c>       <c>raw bits (4+octave)</c><c>post-filter</c>
<c>gain</c>         <c>raw bits (3)</c><c>post-filter</c>
<c>tapset</c>       <c>[2, 1, 1]/4</c><c>post-filter</c>
<c>transient</c>    <c>[7, 1]/8</c><c></c>
<c>intra</c>        <c>[7, 1]/8</c><c></c>
<c>coarse energy</c><c><xref target="energy-decoding"/></c><c></c>
<c>tf_change</c>    <c><xref target="transient-decoding"/></c><c></c>
<c>tf_select</c>    <c>[1, 1]/2</c><c><xref target="transient-decoding"/></c>
<c>spread</c>       <c>[7, 2, 21, 2]/32</c><c></c>
<c>dyn. alloc.</c>  <c><xref target="allocation"/></c><c></c>
<c>alloc. trim</c>  <c>[2, 2, 5, 10, 22, 46, 22, 10, 5, 2, 2]/128</c><c></c>
<c>skip (*)</c>     <c>[1, 1]/2</c><c><xref target="allocation"/></c>
<c>intensity (*)</c><c>uniform</c><c><xref target="allocation"/></c>
<c>dual (*)</c>     <c>[1, 1]/2</c><c></c>
<c>fine energy</c>  <c><xref target="energy-decoding"/></c><c></c>
<c>residual</c>     <c><xref target="PVQ-decoder"/></c><c></c>
<c>anti-collapse</c><c>[1, 1]/2</c><c>transient, 4-8 blocks</c>
<c>finalize</c>     <c><xref target="energy-decoding"/></c><c></c>
<postamble>Order of the symbols in the CELT section of the bit-stream</postamble>
</texttable>

<t>
The decoder extracts information from the range-coded bit-stream in the order
described in the table above. In some circumstances, it is 
possible for a decoded value to be out of range due to a very small amount of redundancy
in the encoding of large integers by the range coder.
In that case, the decoder should assume there has been an error in the coding, 
decoding, or transmission and SHOULD take measures to conceal the error and/or report
to the application that a problem has occurred.
</t>

<section anchor="transient-decoding" title="Transient Decoding">
<t>
The <spanx style="emph">transient</spanx> flag encoded in the bit-stream has a
probability of 1/8. When it is set, then the MDCT coefficients represent multiple 
short MDCTs in the frame. When not set, the coefficients represent a single
long MDCT for the frame. In addition to the global transient flag, a per-band
binary flag signals a change of the time-frequency (tf) resolution independently in each band. The 
change in tf resolution is defined in tf_select_table[][] in celt.c and depends
on the frame size, whether the transient flag is set, and the value of tf_select.
The tf_select flag uses a 1/2 probability, but is only decoded 
if it can have an impact on the result knowing the value of all per-band
tf_change flags. 
</t>
</section>

<section anchor="energy-decoding" title="Energy Envelope Decoding">

<t>
It is important to quantize the energy with sufficient resolution because
any energy quantization error cannot be compensated for at a later
stage. Regardless of the resolution used for encoding the shape of a band,
it is perceptually important to preserve the energy in each band. CELT uses a 
three-step coarse-fine-fine strategy for encoding the energy in the base-2 log
domain, as implemented in quant_bands.c.</t>

<section anchor="coarse-energy-decoding" title="Coarse energy decoding">
<t>
Coarse quantization of the energy uses a fixed resolution of 6 dB
(integer part of base-2 log). To minimize the bitrate, prediction is applied
both in time (using the previous frame) and in frequency (using the previous
bands). The part of the prediction that is based on the
previous frame can be disabled, creating an "intra" frame where the energy
is coded without reference to prior frames. The decoder first reads the intra flag
to determine what prediction is used.
The 2-D z-transform of
the prediction filter is: A(z_l, z_b)=(1-a*z_l^-1)*(1-z_b^-1)/(1-b*z_b^-1)
where b is the band index and l is the frame index. The prediction coefficients
applied depend on the frame size when inter-frame (non-intra) coding is used; when intra
energy coding is used, a=0 and b=4915/32768.
The time-domain prediction is based on the final fine quantization of the previous
frame, while the frequency domain (within the current frame) prediction is based
on coarse quantization only (because the fine quantization has not been computed
yet). The prediction is clamped internally so that fixed-point implementations with
limited dynamic range do not suffer desynchronization.
We approximate the ideal
probability distribution of the prediction error using a Laplace distribution
with separate parameters for each frame size in intra and inter-frame modes. The
coarse energy quantization is performed by unquant_coarse_energy() and 
unquant_coarse_energy_impl() (quant_bands.c). The decoding of the Laplace-distributed values is
implemented in ec_laplace_decode() (laplace.c).
</t>

</section>

<section anchor="fine-energy-decoding" title="Fine energy quantization">
<t>
The number of bits assigned to fine energy quantization in each band is determined
by the bit allocation computation described in <xref target="allocation"></xref>. 
Let B_i be the number of fine energy bits 
for band i; the refinement is an integer f in the range [0,2^B_i-1]. The mapping between f
and the correction applied to the coarse energy is equal to (f+1/2)/2^B_i - 1/2. Fine
energy quantization is implemented in quant_fine_energy() (quant_bands.c). 
</t>
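<t>
As a non-normative illustration, the correction applied for a decoded
refinement f in a band with B fine energy bits is:
<figure>
<artwork><![CDATA[
/* Map f in [0, 2^B - 1] to a base-2 log-domain energy correction
   in (-1/2, 1/2), per (f+1/2)/2^B - 1/2. */
static float fine_energy_correction_sketch(unsigned f, unsigned B)
{
    return (f + 0.5f)/(float)(1u << B) - 0.5f;
}
]]></artwork>
</figure>
</t>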
<t>
When some bits are left "unused" after all other flags have been decoded, these bits
are assigned to a "final" step of fine allocation. In effect, these bits are used
to add one extra fine energy bit per band per channel. The allocation process 
determines two <spanx style="emph">priorities</spanx> for the final fine bits. 
Any remaining bits are first assigned only to bands of priority 0, starting 
from band 0 and going up. If all bands of priority 0 have received one bit per
channel, then bands of priority 1 are assigned an extra bit per channel, 
starting from band 0. Any bits left after this are unused.
This is implemented in unquant_energy_finalise() (quant_bands.c).
</t>

</section> <!-- fine energy -->

</section> <!-- Energy decode -->



<section anchor="allocation" title="Bit allocation">
<t>Bit allocation is performed based only on information available to both
the encoder and decoder. The same calculations are performed in a bit-exact
manner in both the encoder and decoder to ensure that the result is always
exactly the same. Any mismatch causes corruption of the decoded output.
The allocation is computed by compute_allocation() (rate.c),
which is used in both the encoder and the decoder.</t>

<t>For a given band, the bit allocation is nearly constant across
frames that use the same number of bits for Q1, yielding a 
pre-defined signal-to-mask ratio (SMR) for each band. Because the
bands each have a width of one Bark, this is equivalent to modeling the
masking occurring within each critical band, while ignoring inter-band
masking and tone-vs-noise characteristics. While this is not an
optimal bit allocation, it provides good results without requiring the
transmission of any allocation information. Additionally, the encoder
is able to signal alterations to the implicit allocation via
two means: an entropy-coded trim parameter that can be used to tilt the
allocation to favor low or high frequencies, and a boost parameter
which can be used to shift large amounts of additional precision into
individual bands.
</t>


<t>
For every encoded or decoded frame, a target allocation must be computed
using the projected allocation. In the reference implementation this is
performed by compute_allocation() (rate.c).
The target computation begins by calculating the available space as the
number of eighth-bits which can fit in the frame after Q1 is stored according
to the range coder (ec_tell_frac()) and reserving one eighth-bit.
Then the two projected prototype allocations whose sums multiplied by 8 are nearest
to that value are determined. These two prototype allocations are then interpolated
by finding the highest integer interpolation coefficient in the range 0-63
such that the sum of the higher prototype times the coefficient, plus the
sum of the lower prototype times (64 minus the coefficient), divided by 64,
is less than or equal to the available eighth-bits. During the interpolation a maximum allocation
in each band is imposed along with a threshold hard minimum allocation for
each band.
Starting from the last coded band, a binary decision is coded for each
band over the minimum threshold to determine whether that band should instead
receive only the minimum allocation. This process stops at the first
non-minimum band, the first band to receive an explicitly coded boost,
or the first band in the frame, whichever comes first.
The reference implementation performs this step in interp_bits2pulses()
(rate.c), using a binary search for the interpolation.
</t>

<t>
Because the computed target will sometimes be somewhat smaller than the
available space, the excess space is divided by the number of bands, and this amount
is added equally to each band which was not forced to the minimum value.
</t>

<t>
The allocation target is separated into a portion used for fine energy
and a portion used for the Spherical Vector Quantizer (PVQ). The fine energy
quantizer operates in whole-bit steps and is allocated based on an offset
fraction of the total usable space. Excess bits above the maximums are
left unallocated and placed into the rolling balance maintained during
the quantization process.
</t>

</section>

<section anchor="PVQ-decoder" title="Spherical VQ Decoder">
<t>
In order to correctly decode the PVQ codewords, the decoder must perform exactly the same
bits-to-pulses conversion as the encoder.
</t>
<section anchor="bits-pulses" title="Bits to Pulses">
<t>
Although the allocation is performed in 1/8th bit units, the quantization requires
an integer number of pulses K. To do this, the encoder searches for the value
of K that produces the number of bits nearest to the allocated value
(rounding down if exactly half-way between two values), subject to not exceeding
the total number of bits available. For efficiency reasons, the search is performed against a
precomputed allocation table which only permits some K values for each N. The number of
codebook entries can be computed as explained in <xref target="cwrs-encoding"></xref>. The difference
between the number of bits allocated and the number of bits used is accumulated in a
<spanx style="emph">balance</spanx> (initialized to zero) that helps adjust the
allocation for the next bands. One third of the balance is applied to the
bit allocation of each band to help achieve the target allocation. The only
exceptions are the band before the last and the last band, for which half the balance
and the whole balance are applied, respectively.
</t>
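<t>
A non-normative sketch of this balance mechanism follows;
bits_to_pulses_tbl() and pulses_to_bits_tbl() stand in for the precomputed
table lookups described above and are illustrative, not the reference API.
<figure>
<artwork><![CDATA[
/* alloc[] holds the per-band allocation; K_out[] receives the
   chosen number of pulses for each of the n_bands bands. */
static void apply_balance_sketch(const int *alloc, int *K_out,
                                 int n_bands,
                                 int (*bits_to_pulses_tbl)(int band, int bits),
                                 int (*pulses_to_bits_tbl)(int band, int K))
{
    int balance = 0;
    for (int band = 0; band < n_bands; band++) {
        int remaining = n_bands - band;  /* bands left, incl. this one */
        int bits = alloc[band];
        if (remaining > 2)       bits += balance/3;  /* one third  */
        else if (remaining == 2) bits += balance/2;  /* half       */
        else                     bits += balance;    /* everything */
        K_out[band] = bits_to_pulses_tbl(band, bits);
        /* carry the surplus or deficit over to the next bands */
        balance += alloc[band] - pulses_to_bits_tbl(band, K_out[band]);
    }
}
]]></artwork>
</figure>
</t>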
</section>

<section anchor="cwrs-decoder" title="Index Decoding">
<t>
The decoding of the codeword from the index is performed as specified in 
<xref target="PVQ"></xref>, as implemented in function
decode_pulses() (cwrs.c).
</t>
</section>

<section anchor="normalised-decoding" title="Normalised Vector Decoding">
<t>
The spherical codebook is decoded by alg_unquant() (vq.c).
The index of the PVQ entry is obtained from the range coder and converted to 
a pulse vector by decode_pulses() (cwrs.c).
</t>

<t>The decoded normalized vector for each band is equal to</t>
<t>X' = y/||y||.</t>

<t>
This operation is implemented in mix_pitch_and_residual() (vq.c), 
which is the same function as used in the encoder.
</t>
</section>


</section>

<section anchor="anti-collapse" title="Anti-collapse processing">
<t>
When the frame has the transient bit set...
</t>
</section>

<section anchor="denormalization" title="Denormalization">
<t>
Just like each band was normalized in the encoder, the last step of the decoder before
the inverse MDCT is to denormalize the bands. Each decoded normalized band is
multiplied by the square root of the decoded energy. This is done by denormalise_bands()
(bands.c).
</t>
</section>

<section anchor="inverse-mdct" title="Inverse MDCT">
<t>The inverse MDCT implementation has no special characteristics. The
input is N frequency-domain samples and the output is 2*N time-domain 
samples, while scaling by 1/2. The output is windowed using the same window 
as the encoder. The IMDCT and windowing are performed by mdct_backward
(mdct.c). If a time-domain pre-emphasis 
window was applied in the encoder, the (inverse) time-domain de-emphasis window
is applied on the IMDCT result. 
</t>

<section anchor="post-filter" title="Post-filter">
<t>
The output of the inverse MDCT (after weighted overlap-add) is sent to the
post-filter. Although the post-filter is applied at the end, the post-filter
parameters are encoded at the beginning, just after the silence flag.
The post-filter can be switched on or off using one bit (logp=1).
If the post-filter is enabled, then the octave is decoded as an integer value
between 0 and 5 (inclusive) of uniform probability. Once the octave is known, the fine pitch
within the octave is decoded using 4+octave raw bits. The final pitch period
is equal to (16&lt;&lt;octave)+fine_pitch-1, so it is bounded between 15 and 1022,
inclusive. Next, the gain is decoded as three raw bits and is equal to 
G=3*(int_gain+1)/32. The set of post-filter taps is decoded last using 
a pdf equal to [2, 1, 1]/4. Tapset zero corresponds to the filter coefficients
g0 = 0.3066406250, g1 = 0.2170410156, g2 = 0.1296386719. Tapset one
corresponds to the filter coefficients g0 = 0.4638671875, g1 = 0.2680664062,
g2 = 0, and tapset two uses filter coefficients g0 = 0.7998046875,
g1 = 0.1000976562, g2 = 0.
</t>

<t>
The post-filter response is thus computed as:
              <figure align="center">
                <artwork align="center">
                  <![CDATA[
   y(n) = x(n) + G*(g0*y(n-T) + g1*(y(n-T+1)+y(n-T-1)) 
                               + g2*(y(n-T+2)+y(n-T-2)))
]]>
                </artwork>
              </figure>

During a transition between different gains, a smooth transition is calculated
using the square of the MDCT window. It is important that the values of y(n) be
computed one at a time, so that each past value of y(n) used by the filter has
already had the interpolated gain applied.
</t>
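<t>
A non-normative sketch of this filter, ignoring the gain-transition
interpolation and assuming y[] provides at least T+2 samples of history
before index 0 (names illustrative):
<figure>
<artwork><![CDATA[
static void post_filter_sketch(float *y, const float *x, int n,
                               int T, float G,
                               float g0, float g1, float g2)
{
    /* T >= 15, so all taps reference already-computed output */
    for (int j = 0; j < n; j++)
        y[j] = x[j] + G*(g0*y[j-T]
                       + g1*(y[j-T+1] + y[j-T-1])
                       + g2*(y[j-T+2] + y[j-T-2]));
}
]]></artwork>
</figure>
</t>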
</section>

<section anchor="deemphasis" title="De-emphasis">
<t>
After the post-filter, 
the signal is de-emphasized using the inverse of the pre-emphasis filter 
used in the encoder: 1/A(z)=1/(1-alpha_p*z^-1), where alpha_p=0.8500061035.
</t>
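<t>
A non-normative sketch, where mem carries the previous output sample across
calls:
<figure>
<artwork><![CDATA[
static void deemphasis_sketch(float *y, const float *x, int n,
                              float *mem)
{
    const float alpha_p = 0.8500061035f;
    for (int j = 0; j < n; j++) {
        y[j] = x[j] + alpha_p*(*mem); /* y(n) = x(n) + alpha_p*y(n-1) */
        *mem = y[j];
    }
}
]]></artwork>
</figure>
</t>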
</section>

</section>

<section anchor="Packet Loss Concealment" title="Packet Loss Concealment (PLC)">
<t>
Packet loss concealment (PLC) is an optional decoder-side feature which 
SHOULD be included when transmitting over an unreliable channel. Because 
PLC is not part of the bit-stream, there are several possible ways to 
implement PLC with different complexity/quality trade-offs. The PLC in
the reference implementation finds a periodicity in the decoded
signal and repeats the windowed waveform using the pitch offset. The windowed
waveform is overlapped in such a way as to preserve the time-domain aliasing
cancellation with the previous frame and the next frame. This is implemented 
in celt_decode_lost() (mdct.c).
</t>
</section>

</section>

</section>


<!--  ******************************************************************* -->
<!--  **************************   OPUS ENCODER   *********************** -->
<!--  ******************************************************************* -->

<section title="Codec Encoder">
<t>
Opus encoder block diagram.
<figure>
<artwork>
<![CDATA[
         +----------+    +-------+
         |  sample  |    | SILK  |
      +->|   rate   |--->|encoder|--+
      |  |conversion|    |       |  |
audio |  +----------+    +-------+  |    +-------+
------+                             +--->| Range |
      |  +-------+                       |encoder|---->
      |  | CELT  |                  +--->|       | bit-stream
      +->|encoder|------------------+    +-------+
         |       |
         +-------+
]]>
</artwork>
</figure>
</t>

<section anchor="range-encoder" title="Range Coder">
<t>
Opus uses an entropy coder based upon <xref target="range-coding"></xref>, 
which is itself a rediscovery of the FIFO arithmetic code introduced by <xref target="coding-thesis"></xref>.
It is very similar to arithmetic encoding, except that encoding is done with
digits in any base instead of with bits, 
so it is faster when using larger bases (i.e., an octet). All of the
calculations in the range coder must use bit-exact integer arithmetic.
</t>

<t>
The range coder also acts as the bit-packer for Opus. It is
used in three different ways, to encode:
<list style="symbols">
<t>entropy-coded symbols with a fixed probability model using ec_encode(), (rangeenc.c)</t>
<t>integers from 0 to 2^M-1 using ec_enc_uint() or ec_enc_bits(), (entenc.c)</t>
<t>integers from 0 to N-1 (where N is not a power of two) using ec_enc_uint(). (entenc.c)</t>
</list>
</t>

<t>
The range encoder maintains an internal state vector composed of the
four-tuple (low,rng,rem,ext), representing the low end of the current
range, the size of the current range, a single buffered output octet,
and a count of additional carry-propagating output octets. Both rng
and low are 32-bit unsigned integer values, rem is an octet value or
the special value -1, and ext is an integer with at least 16 bits.
This state vector is initialized at the start of each frame to
the value (0,2^31,-1,0).
</t>

<t>
Each symbol is drawn from a finite alphabet and coded in a separate
context which describes the size of the alphabet and the relative
frequency of each symbol in that alphabet. Opus only uses static
contexts; they are not adapted to the statistics of the data that is
coded.
</t>

<section anchor="encoding-symbols" title="Encoding Symbols">
<t>
   The main encoding function is ec_encode() (rangeenc.c),
   which takes as an argument a three-tuple (fl,fh,ft)
   describing the range of the symbol to be encoded in the current
   context, with 0 &lt;= fl &lt; fh &lt;= ft &lt;= 65535. The values of this tuple
   are derived from the probability model for the symbol. Let f(i) be
   the frequency of the ith symbol in the current context. Then the
   three-tuple corresponding to the kth symbol is given by
   <![CDATA[
fl=sum(f(i),i<k), fh=fl+f(k), and ft=sum(f(i)).
]]>
</t>
<t>
   ec_encode() updates the state of the encoder as follows. If fl is
   greater than zero, then low = low + rng - (rng/ft)*(ft-fl) and 
   rng = (rng/ft)*(fh-fl). Otherwise, low is unchanged and
   rng = rng - (rng/ft)*(ft-fh). The divisions here are exact integer
   division. After this update, the range is normalized.
</t>
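<t>
A non-normative sketch of this update (the state structure and names are
illustrative; normalization, described below, is factored out):
<figure>
<artwork><![CDATA[
#include <stdint.h>

typedef struct { uint32_t low, rng; } ec_enc_sketch;

static void ec_encode_sketch(ec_enc_sketch *e,
                             uint32_t fl, uint32_t fh, uint32_t ft)
{
    uint32_t s = e->rng/ft;          /* exact integer division */
    if (fl > 0) {
        e->low += e->rng - s*(ft - fl);
        e->rng  = s*(fh - fl);
    } else {
        e->rng -= s*(ft - fh);
    }
    /* the range is then normalized as described below */
}
]]></artwork>
</figure>
</t>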
<t>
   To normalize the range, the following process is repeated until
   rng > 2^23. First, the top 9 bits of low, (low>>23), are placed into
   a carry buffer. Then, low is set to <![CDATA[(low << 8 & 0x7FFFFFFF) and rng
   is set to (rng<<8)]]>. This process is carried out by
   ec_enc_normalize() (rangeenc.c).
</t>
<t>
   The 9 bits produced in each iteration of the normalization loop
   consist of 8 data bits and a carry flag. The final value of the
   output bits is not determined until carry propagation is accounted
   for. Therefore the reference implementation buffers a single
   (non-propagating) output octet and keeps a count of additional
   propagating (0xFF) output octets. An implementation MAY choose to use
   any mathematically equivalent scheme to perform carry propagation.
</t>
<t>
   The function ec_enc_carry_out() (rangeenc.c) performs
   this buffering. It takes a 9-bit input value, c, from the normalization:
   8 bits of output and a carry bit. If c is 0xFF, then ext is incremented
   and no octets are output. Otherwise, if rem is not the special value
   -1, then the octet (rem+(c>>8)) is output. Then ext octets are output
   with the value 0 if the carry bit is set, or 0xFF if it is not, and
   rem is set to the lower 8 bits of c. After this, ext is set to zero.
</t>
<t>
   In the reference implementation, a special version of ec_encode()
   called ec_encode_bin() (rangeenc.c) is defined to
   take a two-tuple (fl,ftb), where <![CDATA[0 <= fl < 2^ftb and ftb < 16. It is
   mathematically equivalent to calling ec_encode() with the three-tuple
   (fl,fl+1,1<<ftb)]]>, but avoids using division.

</t>
</section>

<section anchor="encoding-ints" title="Encoding Uniformly Distributed Integers">
<t>
   Functions ec_enc_uint() or ec_enc_bits() are based on ec_encode() and 
   encode one of N equiprobable symbols, each with a frequency of 1,
   where N may be as large as 2^32-1. Because ec_encode() is limited to
   a total frequency of 2^16-1, this is done by encoding a series of
   symbols in smaller contexts.
</t>
<t>
   ec_enc_bits() (entenc.c) is defined, like
   ec_encode_bin(), to take a two-tuple (fl,ftb), with <![CDATA[0 <= fl < 2^ftb
   and ftb < 32. While ftb is greater than 8, it encodes bits (ftb-8) to
   (ftb-1) of fl, e.g., (fl>>ftb-8&0xFF) using ec_encode_bin() and
   subtracts 8 from ftb. Then, it encodes the remaining bits of fl, e.g.,
   (fl&(1<<ftb)-1)]]>, again using ec_encode_bin().
</t>
<t>
   ec_enc_uint() (entenc.c) takes a two-tuple (fl,ft),
   where ft is not necessarily a power of two. Let ftb be the location
   of the highest 1 bit in the two's-complement representation of
   (ft-1), or -1 if no bits are set. If ftb>8, then the top 8 bits of fl
   are encoded using ec_encode() with the three-tuple
   (fl>>ftb-8,(fl>>ftb-8)+1,(ft-1>>ftb-8)+1), and the remaining bits
   are encoded with ec_enc_bits using the two-tuple
   <![CDATA[(fl&(1<<ftb-8)-1,ftb-8). Otherwise, fl is encoded with ec_encode()
   directly using the three-tuple (fl,fl+1,ft)]]>.
</t>
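<t>
   The description above can be transcribed literally as in the sketch
   below, which builds on the earlier sketches. ec_encode_sketch() is
   the generic update from the beginning of this section, and
   ilog_m1() is an assumed helper returning the 0-based position of
   the highest set bit, or -1 when no bit is set.
</t>
<figure align="center">
<artwork align="left">
<![CDATA[
static int ilog_m1(unsigned v)
{
    int pos = -1;
    while (v) { pos++; v >>= 1; }
    return pos;
}

/* Generic update: ec_encode() with the three-tuple (fl,fh,ft);
 * renormalization omitted. */
static void ec_encode_sketch(ec_enc_sketch *e, unsigned fl,
                             unsigned fh, unsigned ft)
{
    unsigned r = e->rng/ft;
    if (fl > 0) {
        e->low += e->rng - r*(ft - fl);
        e->rng  = r*(fh - fl);
    } else {
        e->rng -= r*(ft - fh);
    }
}

static void enc_uint_sketch(ec_enc_sketch *e, unsigned fl,
                            unsigned ft)
{
    int ftb = ilog_m1(ft - 1);
    if (ftb > 8) {
        int s = ftb - 8;             /* width of the raw-bit part */
        ec_encode_sketch(e, fl >> s, (fl >> s) + 1,
                         ((ft - 1) >> s) + 1);
        enc_bits_sketch(e, fl & ((1u << s) - 1), s);
    } else {
        ec_encode_sketch(e, fl, fl + 1, ft);
    }
}
]]>
</artwork>
</figure>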
</section>

<section anchor="encoder-finalizing" title="Finalizing the Stream">
<t>
   After all symbols are encoded, the stream must be finalized by
   outputting a value inside the current range. Let end be the integer
   in the interval [low,low+rng) with the largest number of trailing
   zero bits. Then, while end is not zero, the top 9 bits of end, i.e.,
   <![CDATA[(end>>23), are sent to the carry buffer, and end is replaced by
   (end<<8&0x7FFFFFFF). Finally, if the value in the carry buffer, rem, is]]>
   neither zero nor the special value -1, or the carry count, ext, is
   greater than zero, then 9 zero bits are sent to the carry buffer.
   After the carry buffer finishes outputting octets, the rest of the
   output buffer is padded with zero octets. Finally, rem is set to the
   special value -1. This process is implemented by ec_enc_done()
   (rangeenc.c).
</t>
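<t>
   For illustration, the search for end might be sketched as below,
   using 64-bit arithmetic to sidestep overflow; this is a model of
   the selection rule, not the reference ec_enc_done().
</t>
<figure align="center">
<artwork align="left">
<![CDATA[
#include <stdint.h>

/* Value in [low, low+rng) with the most trailing zero bits. */
static uint32_t end_value_sketch(uint32_t low, uint32_t rng)
{
    int t;
    for (t = 32; t > 0; t--) {
        uint64_t step = (uint64_t)1 << t;
        /* smallest multiple of 2^t that is >= low */
        uint64_t end = ((uint64_t)low + step - 1) & ~(step - 1);
        if (end < (uint64_t)low + rng)
            return (uint32_t)end;    /* coarsest alignment wins */
    }
    return low;                      /* t == 0 always fits      */
}
]]>
</artwork>
</figure>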
</section>

<section anchor="encoder-tell" title="Current Bit Usage">
<t>
   The bit allocation routines in Opus need to be able to determine a
   conservative upper bound on the number of bits that have been used
   to encode the current frame thus far. This drives allocation
   decisions and ensures that the range coder will not overflow the
   output buffer. In the reference implementation this is computed to
   fractional-bit precision by the function ec_enc_tell()
   (rangeenc.c).
   Like all operations in the range encoder, it must
   be implemented in a bit-exact manner.
</t>
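<t>
   Conceptually, the whole-bit part of this count is the number of
   bits already output or buffered, plus the bits of coder state,
   minus one bit for each bit of freedom left in rng. A hypothetical
   whole-bit model (the reference function additionally resolves
   fractional bits) is sketched below; all names are illustrative.
</t>
<figure align="center">
<artwork align="left">
<![CDATA[
/* octets_out: octets already written; rem_valid: 1 if an octet is
 * buffered in rem; ext: buffered propagating octets. */
static int tell_sketch(int octets_out, int rem_valid, int ext,
                       unsigned rng)
{
    int nbits = (octets_out + rem_valid + ext)*8 + 32;
    int lg = 0;                  /* 1 + position of rng's top bit */
    while (rng) { lg++; rng >>= 1; }
    return nbits - lg;           /* conservative whole-bit count  */
}
]]>
</artwork>
</figure>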
</section>

</section>

        <section title='SILK Encoder'>
          <t>
            In the following, we focus on the core encoder and describe its components. For simplicity, we will refer to the core encoder simply as the encoder in the remainder of this document. An overview of the encoder is given in <xref target="encoder_figure" />.
          </t>

          <figure align="center" anchor="encoder_figure">
            <artwork align="center">
              <![CDATA[
                                                              +---+
                               +----------------------------->|   |
        +---------+            |     +---------+              |   |
        |Voice    |            |     |LTP      |              |   |
 +----->|Activity |-----+      +---->|Scaling  |---------+--->|   |
 |      |Detector |  3  |      |     |Control  |<+  12   |    |   |
 |      +---------+     |      |     +---------+ |       |    |   |
 |                      |      |     +---------+ |       |    |   |
 |                      |      |     |Gains    | |  11   |    |   |
 |                      |      |  +->|Processor|-|---+---|--->| R |
 |                      |      |  |  |         | |   |   |    | a |
 |                     \/      |  |  +---------+ |   |   |    | n |
 |                 +---------+ |  |  +---------+ |   |   |    | g |
 |                 |Pitch    | |  |  |LSF      | |   |   |    | e |
 |              +->|Analysis |-+  |  |Quantizer|-|---|---|--->|   |
 |              |  |         |4|  |  |         | | 8 |   |    | E |->
 |              |  +---------+ |  |  +---------+ |   |   |    | n |14
 |              |              |  |   9/\  10|   |   |   |    | c |
 |              |              |  |    |    \/   |   |   |    | o |
 |              |  +---------+ |  |  +----------+|   |   |    | d |
 |              |  |Noise    | +--|->|Prediction|+---|---|--->| e |
 |              +->|Shaping  |-|--+  |Analysis  || 7 |   |    | r |
 |              |  |Analysis |5|  |  |          ||   |   |    |   |
 |              |  +---------+ |  |  +----------+|   |   |    |   |
 |              |              |  |       /\     |   |   |    |   |
 |              |    +---------|--|-------+      |   |   |    |   |
 |              |    |        \/  \/            \/  \/  \/    |   |
 |  +---------+ |    |      +---------+       +------------+  |   |
 |  |High-Pass| |    |      |         |       |Noise       |  |   |
-+->|Filter   |-+----+----->|Prefilter|------>|Shaping     |->|   |
1   |         |      2      |         |   6   |Quantization|13|   |
    +---------+             +---------+       +------------+  +---+

1:  Input speech signal
2:  High passed input signal
3:  Voice activity estimate
4:  Pitch lags (per 5 ms) and voicing decision (per 20 ms)
5:  Noise shaping quantization coefficients
  - Short term synthesis and analysis 
    noise shaping coefficients (per 5 ms)
  - Long term synthesis and analysis noise 
    shaping coefficients (per 5 ms and for voiced speech only)
  - Noise shaping tilt (per 5 ms)
  - Quantizer gain/step size (per 5 ms)
6:  Input signal filtered with analysis noise shaping filters
7:  Short and long term prediction coefficients
    LTP (per 5 ms) and LPC (per 20 ms)
8:  LSF quantization indices
9:  LSF coefficients
10: Quantized LSF coefficients 
11: Processed gains, and synthesis noise shape coefficients
12: LTP state scaling coefficient. Controlling error propagation
   / prediction gain trade-off
13: Quantized signal
14: Range encoded bitstream

]]>
            </artwork>
            <postamble>Encoder block diagram.</postamble>
          </figure>

          <section title='Voice Activity Detection'>
            <t>
              The input signal is processed by a VAD (Voice Activity Detector) to produce a measure of voice activity, and also spectral tilt and signal-to-noise estimates, for each frame. The VAD uses a sequence of half-band filterbanks to split the signal into four subbands: 0 - Fs/16, Fs/16 - Fs/8, Fs/8 - Fs/4, and Fs/4 - Fs/2, where Fs is the sampling frequency (8, 12, 16, or 24 kHz). The lowest subband, from 0 - Fs/16, is high-pass filtered with a first-order MA (Moving Average) filter (with transfer function H(z) = 1-z^(-1)) to reduce the energy at the lowest frequencies. For each frame, the signal energy per subband is computed. In each subband, a noise level estimator tracks the background noise level and an SNR (Signal-to-Noise Ratio) value is computed as the logarithm of the ratio of energy to noise level; a sketch of this computation follows the list below. Using these intermediate variables, the following parameters are calculated for use in other SILK modules:
              <list style="symbols">
                <t>
                  Average SNR. The average of the subband SNR values.
                </t>

                <t>
                  Smoothed subband SNRs. Temporally smoothed subband SNR values.
                </t>

                <t>
                  Speech activity level. Based on the average SNR and a weighted average of the subband energies.
                </t>

                <t>
                  Spectral tilt. A weighted average of the subband SNRs, with positive weights for the low subbands and negative weights for the high subbands.
                </t>
              </list>
            </t>
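            <t>
              For illustration, the per-subband SNR computation might look
              like the sketch below. The noise tracker and its constants
              are assumptions made for the sketch, not values taken from
              the reference implementation.
            </t>
            <figure align="center">
              <artwork align="left">
                <![CDATA[
#include <math.h>

/* Crude illustrative noise tracker: follows drops in energy quickly
 * and rises slowly, then reports SNR in dB. */
static float subband_snr_db(float energy, float *noise_level)
{
    const float eps = 1e-9f;             /* guards log/divide by 0 */
    if (energy < *noise_level)
        *noise_level += 0.5f *(energy - *noise_level);
    else
        *noise_level += 0.01f*(energy - *noise_level);
    return 10.0f*log10f((energy + eps)/(*noise_level + eps));
}
]]>
              </artwork>
            </figure>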
          </section>

          <section title='High-Pass Filter'>
            <t>
              The input signal is filtered by a high-pass filter to remove the lowest part of the spectrum that contains little speech energy and may contain background noise. This is a second order ARMA (Auto Regressive Moving Average) filter with a cut-off frequency around 70 Hz.
            </t>
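            <t>
              A direct-form second-order ARMA (biquad) filter of the kind
              described above is sketched below. The coefficient values
              for a cut-off around 70 Hz depend on the sampling rate and
              are not reproduced here, so b0...a2 are placeholders.
            </t>
            <figure align="center">
              <artwork align="left">
                <![CDATA[
typedef struct {
    float b0, b1, b2, a1, a2;   /* placeholder coefficients */
    float x1, x2, y1, y2;       /* filter state             */
} hp_biquad;

static float hp_process(hp_biquad *f, float x)
{
    float y = f->b0*x + f->b1*f->x1 + f->b2*f->x2
                      - f->a1*f->y1 - f->a2*f->y2;
    f->x2 = f->x1; f->x1 = x;   /* shift input history      */
    f->y2 = f->y1; f->y1 = y;   /* shift output history     */
    return y;
}
]]>
              </artwork>
            </figure>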
            <t>
              In the future, a music detector may also be used to lower the cut-off frequency when the input signal is detected to be music rather than speech.
            </t>
          </section>

          <section title='Pitch Analysis' anchor='pitch_estimator_overview_section'>
            <t>
              The high-passed input signal is processed by the open loop pitch estimator shown in <xref target='pitch_estimator_figure' />.
              <figure align="center" anchor="pitch_estimator_figure">
                <artwork align="center">
                  <![CDATA[
                                 +--------+  +----------+     
                                 |2 x Down|  |Time-     |      
                              +->|sampling|->|Correlator|     |
                              |  |        |  |          |     |4
                              |  +--------+  +----------+    \/
                              |                    | 2    +-------+
                              |                    |  +-->|Speech |5
    +---------+    +--------+ |                   \/  |   |Type   |->
    |LPC      |    |Down    | |              +----------+ |       |
 +->|Analysis | +->|sample  |-+------------->|Time-     | +-------+
 |  |         | |  |to 8 kHz|                |Correlator|----------->
 |  +---------+ |  +--------+                |__________|          6
 |       |      |                                  |3
 |      \/      |                                 \/ 
 |  +---------+ |                            +----------+
 |  |Whitening| |                            |Time-     |    
-+->|Filter   |-+--------------------------->|Correlator|----------->
1   |         |                              |          |          7
    +---------+                              +----------+ 
                                            
1: Input signal
2: Lag candidates from stage 1
3: Lag candidates from stage 2
4: Correlation threshold
5: Voiced/unvoiced flag
6: Pitch correlation
7: Pitch lags 
]]>
                </artwork>
                <postamble>Block diagram of the pitch estimator.</postamble>
              </figure>
              The pitch analysis finds a binary voiced/unvoiced classification, and, for frames classified as voiced, four pitch lags per frame - one for each 5 ms subframe - and a pitch correlation indicating the periodicity of the signal. The input is first whitened using a Linear Prediction (LP) whitening filter, where the coefficients are computed through standard Linear Prediction Coding (LPC) analysis. The order of the whitening filter is 16 for best results, but is reduced to 12 for medium complexity and 8 for low complexity modes. The whitened signal is analyzed to find pitch lags for which the time correlation is high. The analysis consists of three stages for reducing the complexity:
              <list style="symbols">
                <t>In the first stage, the whitened signal is downsampled to 4 kHz (from 8 kHz) and the current frame is correlated to a signal delayed by a range of lags, starting from a shortest lag corresponding to 500 Hz, to a longest lag corresponding to 56 Hz.</t>

                <t>
                  The second stage operates on an 8 kHz signal (downsampled from 12, 16, or 24 kHz) and measures time correlations only near the lags corresponding to those that had sufficiently high correlations in the first stage. The resulting correlations are adjusted for a small bias towards short lags to avoid ending up with a multiple of the true pitch lag. The highest adjusted correlation is compared to a threshold depending on:
                  <list style="symbols">
                    <t>
                      Whether the previous frame was classified as voiced
                    </t>
                    <t>
                      The speech activity level
                    </t>
                    <t>
                      The spectral tilt.
                    </t>
                  </list>
                  If the threshold is exceeded, the current frame is classified as voiced and the lag with the highest adjusted correlation is stored for a final pitch analysis of the highest precision in the third stage.
                </t>
                <t>
                  The last stage operates directly on the whitened input signal to compute time correlations for each of the four subframes independently in a narrow range around the lag with highest correlation from the second stage.
                </t>
              </list>
            </t>
          </section>

          <section title='Noise Shaping Analysis' anchor='noise_shaping_analysis_overview_section'>
            <t>
              The noise shaping analysis finds gains and filter coefficients used in the prefilter and noise shaping quantizer. These parameters are chosen such that they will fulfil several requirements:
              <list style="symbols">
                <t>Balancing quantization noise and bitrate. The quantization gains determine the step size between reconstruction levels of the excitation signal. Therefore, increasing the quantization gain amplifies quantization noise, but also reduces the bitrate by lowering the entropy of the quantization indices.</t>
                <t>Spectral shaping of the quantization noise; the noise shaping quantizer is capable of reducing quantization noise in some parts of the spectrum at the cost of increased noise in other parts without substantially changing the bitrate. By shaping the noise such that it follows the signal spectrum, it becomes less audible. In practice, best results are obtained by making the shape of the noise spectrum slightly flatter than the signal spectrum.</t>
                <t>Deemphasizing spectral valleys; by using different coefficients in the analysis and synthesis part of the prefilter and noise shaping quantizer, the levels of the spectral valleys can be decreased relative to the levels of the spectral peaks such as speech formants and harmonics. This reduces the entropy of the signal, which is the difference between the coded signal and the quantization noise, thus lowering the bitrate.</t>
                <t>Matching the levels of the decoded speech formants to the levels of the original speech formants; an adjustment gain and a first order tilt coefficient are computed to compensate for the effect of the noise shaping quantization on the level and spectral tilt.</t>
              </list>
            </t>
            <t>
              <figure align="center" anchor="noise_shape_analysis_spectra_figure">
                <artwork align="center">
                  <![CDATA[
  / \   ___
   |   // \\
   |  //   \\     ____
   |_//     \\___//  \\         ____
   | /  ___  \   /    \\       //  \\
 P |/  /   \  \_/      \\_____//    \\
 o |  /     \     ____  \     /      \\
 w | /       \___/    \  \___/  ____  \\___ 1
 e |/                  \       /    \  \    
 r |                    \_____/      \  \__ 2
   |                                  \     
   |                                   \___ 3
   |
   +---------------------------------------->
                    Frequency

1: Input signal spectrum
2: Deemphasized and level matched spectrum
3: Quantization noise spectrum
]]>
                </artwork>
                <postamble>Noise shaping and spectral de-emphasis illustration.</postamble>
              </figure>
              <xref target='noise_shape_analysis_spectra_figure' /> shows an example of an input signal spectrum (1). After de-emphasis and level matching, the spectrum has deeper valleys (2). The quantization noise spectrum (3) more or less follows the input signal spectrum, while having slightly less pronounced peaks. The entropy, which provides a lower bound on the bitrate for encoding the excitation signal, is proportional to the area between the deemphasized spectrum (2) and the quantization noise spectrum (3). Without de-emphasis, the entropy is proportional to the area between input spectrum (1) and quantization noise (3) - clearly higher.
            </t>

            <t>
              The transformation from input signal to deemphasized signal can be described as a filtering operation with a filter
              <figure align="center">
                <artwork align="center">
                  <![CDATA[
                                     Wana(z)
H(z) = G * ( 1 - c_tilt * z^(-1) ) * -------
                                     Wsyn(z),
            ]]>
                </artwork>
              </figure>
              having an adjustment gain G, a first order tilt adjustment filter with
              tilt coefficient c_tilt, and where
              <figure align="center">
                <artwork align="center">
                  <![CDATA[
               16                                 d
               __                                __
Wana(z) = (1 - \ (a_ana(k) * z^(-k))*(1 - z^(-L) \ b_ana(k)*z^(-k)),
               /_                                /_  
               k=1                               k=-d
            ]]>
                </artwork>
              </figure>
              is the analysis part of the de-emphasis filter, consisting of the short-term shaping filter with coefficients a_ana(k), and the long-term shaping filter with coefficients b_ana(k) and pitch lag L. The parameter d determines the number of long-term shaping filter taps.
            </t>

            <t>
              Similarly, but without the tilt adjustment, the synthesis part can be written as
              <figure align="center">
                <artwork align="center">
                  <![CDATA[
               16                                 d
               __                                __
Wsyn(z) = (1 - \ (a_syn(k) * z^(-k))*(1 - z^(-L) \ b_syn(k)*z^(-k)).
               /_                                /_  
               k=1                               k=-d
            ]]>
                </artwork>
              </figure>
            </t>
            <t>
              All noise shaping parameters are computed and applied per subframe of 5 milliseconds. First, an LPC analysis is performed on a windowed signal block of 15 milliseconds. The signal block has a look-ahead of 5 milliseconds relative to the current subframe, and the window is an asymmetric sine window. The LPC analysis is done with the autocorrelation method, with an order of 16 for best quality or 12 in low complexity operation. The quantization gain is found as the square-root of the residual energy from the LPC analysis, multiplied by a value inversely proportional to the coding quality control parameter and the pitch correlation.
            </t>
            <t>
              Next we find the two sets of short-term noise shaping coefficients a_ana(k) and a_syn(k), by applying different amounts of bandwidth expansion to the coefficients found in the LPC analysis. This bandwidth expansion moves the roots of the LPC polynomial towards the origin, using the formulas
              <figure align="center">
                <artwork align="center">
                  <![CDATA[
 a_ana(k) = a(k)*g_ana^k, and
 a_syn(k) = a(k)*g_syn^k,
            ]]>
                </artwork>
              </figure>
              where a(k) is the k'th LPC coefficient and the bandwidth expansion factors g_ana and g_syn are calculated as
              <figure align="center">
                <artwork align="center">
                  <![CDATA[
g_ana = 0.94 - 0.02*C, and
g_syn = 0.94 + 0.02*C,
            ]]>
                </artwork>
              </figure>
              where C is the coding quality control parameter between 0 and 1. Applying more bandwidth expansion to the analysis part than to the synthesis part gives the desired de-emphasis of spectral valleys in between formants.
            </t>
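            <t>
              A sketch of this bandwidth expansion, using the 1-based
              indexing of the formulas above, is given below (illustrative
              code, not the reference implementation):
            </t>
            <figure align="center">
              <artwork align="left">
                <![CDATA[
/* Scale the k'th LPC coefficient by g^k (g = g_ana or g_syn),
 * pulling the roots of the polynomial toward the origin. */
static void bwexpand(float a[], int order, float g)
{
    float gk = g;               /* g^k, starting at k = 1 */
    int k;
    for (k = 0; k < order; k++) {
        a[k] *= gk;             /* a[k] holds a(k+1)      */
        gk   *= g;
    }
}
]]>
              </artwork>
            </figure>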

            <t>
              The long-term shaping is applied only during voiced frames. It uses three filter taps, described by
              <figure align="center">
                <artwork align="center">
                  <![CDATA[
b_ana = F_ana * [0.25, 0.5, 0.25], and
b_syn = F_syn * [0.25, 0.5, 0.25].
            ]]>
                </artwork>
              </figure>
              For unvoiced frames these coefficients are set to 0. The multiplication factors F_ana and F_syn are chosen between 0 and 1, depending on the coding quality control parameter, as well as the calculated pitch correlation and smoothed subband SNR of the lowest subband. By having F_ana less than F_syn, the pitch harmonics are emphasized relative to the valleys in between the harmonics.
            </t>

            <t>
              The tilt coefficient c_tilt is for unvoiced frames chosen as
              <figure align="center">
                <artwork align="center">
                  <![CDATA[
c_tilt = 0.4, and as
c_tilt = 0.04 + 0.06 * C
            ]]>
                </artwork>
              </figure>
              for voiced frames, where C again is the coding quality control parameter and is between 0 and 1.
            </t>
            <t>
              The adjustment gain G serves to correct any level mismatch between original and decoded signal that might arise from the noise shaping and de-emphasis. This gain is computed as the ratio of the prediction gain of the short-term analysis and synthesis filter coefficients. The prediction gain of an LPC synthesis filter is the square-root of the output energy when the filter is excited by a unit-energy impulse on the input. An efficient way to compute the prediction gain is by first computing the reflection coefficients from the LPC coefficients through the step-down algorithm, and extracting the prediction gain from the reflection coefficients as
              <figure align="center">
                <artwork align="center">
                  <![CDATA[
               K
              ___
 predGain = ( | | 1 - (r_k)^2 )^(-0.5),
              k=1
            ]]>
                </artwork>
              </figure>
              where r_k is the k'th reflection coefficient.
            </t>
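            <t>
              The formula above translates directly into code; the sketch
              below assumes reflection coefficients of magnitude below one,
              as produced by the step-down algorithm for a stable filter.
            </t>
            <figure align="center">
              <artwork align="left">
                <![CDATA[
#include <math.h>

static float pred_gain_sketch(const float r[], int K)
{
    float prod = 1.0f;
    int k;
    for (k = 0; k < K; k++)
        prod *= 1.0f - r[k]*r[k];   /* product of (1 - r_k^2) */
    return 1.0f/sqrtf(prod);        /* ( ... )^(-0.5)         */
}
]]>
              </artwork>
            </figure>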

            <t>
              Initial values for the quantization gains are computed as the square-root of the residual energy of the LPC analysis, adjusted by the coding quality control parameter. These quantization gains are later adjusted based on the results of the prediction analysis.
            </t>
          </section>

          <section title='Prefilter'>
            <t>
              In the prefilter the input signal is filtered using the spectral valley de-emphasis filter coefficients from the noise shaping analysis, see <xref target='noise_shaping_analysis_overview_section' />. Applying only the analysis part of the noise shaping filter to the input signal produces the input to the noise shaping quantizer.
            </t>
          </section>
          <section title='Prediction Analysis' anchor='pred_ana_overview_section'>
            <t>
              The prediction analysis is performed in one of two ways depending on how the pitch estimator classified the frame. The processing for voiced and unvoiced speech are described in <xref target='pred_ana_voiced_overview_section' /> and <xref target='pred_ana_unvoiced_overview_section' />, respectively. Inputs to this function include the pre-whitened signal from the pitch estimator, see <xref target='pitch_estimator_overview_section' />.
            </t>

            <section title='Voiced Speech' anchor='pred_ana_voiced_overview_section'>
              <t>
                For a frame of voiced speech the pitch pulses will remain dominant in the pre-whitened input signal. Further whitening is desirable as it leads to higher quality at the same available bit-rate. To achieve this, a Long-Term Prediction (LTP) analysis is carried out to estimate the coefficients of a fifth order LTP filter for each of four sub-frames. The LTP coefficients are used to find an LTP residual signal with the simulated output signal as input to obtain better modelling of the output signal. This LTP residual signal is the input to an LPC analysis where the LPCs are estimated using Burg's method, such that the residual energy is minimized. The estimated LPCs are converted to a Line Spectral Frequency (LSF) vector, and quantized as described in <xref target='lsf_quantizer_overview_section' />. After quantization, the quantized LSF vector is converted back to LPC coefficients, so that, by using the quantized coefficients, the encoder remains fully synchronized with the decoder. The LTP coefficients are quantized using the method described in <xref target='ltp_quantizer_overview_section' />. The quantized LPC and LTP coefficients are then used to filter the high-pass filtered input signal and measure a residual energy for each of the four subframes.
              </t>
            </section>
            <section title='Unvoiced Speech' anchor='pred_ana_unvoiced_overview_section'>
              <t>
                For a speech signal that has been classified as unvoiced, there is no need for LTP filtering, as it has already been determined that the pre-whitened input signal is not periodic enough within the allowed pitch period range for an LTP analysis to be worth the cost in terms of complexity and rate. The pre-whitened input signal is therefore discarded, and instead the high-pass filtered input signal is used for LPC analysis using Burg's method. The resulting LPC coefficients are converted to an LSF vector, quantized as described in the following section, and transformed back to obtain quantized LPC coefficients. The quantized LPC coefficients are used to filter the high-pass filtered input signal and measure a residual energy for each of the four subframes.
              </t>
            </section>
          </section>

          <section title='LSF Quantization' anchor='lsf_quantizer_overview_section'>
            <t>The purpose of quantization in general is to significantly lower the bit rate at the cost of some introduced distortion. A higher rate should always result in lower distortion, and lowering the rate will generally lead to higher distortion. A commonly used but generally sub-optimal approach is to use a quantization method with a constant rate where only the error is minimized when quantizing.</t>
            <section title='Rate-Distortion Optimization'>
              <t>Instead, we minimize an objective function that consists of a weighted sum of rate and distortion, and use a codebook with an associated non-uniform rate table. Thus, we take into account that the probability mass function for selecting the codebook entries is by no means guaranteed to be uniform in our scenario. The advantage of this approach is that it ensures that rarely used codebook vector centroids, which model statistical outliers in the training set, can be quantized with a low error but at a relatively high cost in terms of rate. At the same time, frequently used centroids are modelled with low error and a relatively low rate. This approach will lead to equal or lower distortion than a fixed-rate codebook at any given average rate, provided that the data is similar to the data used for training the codebook.</t>
            </section>

            <section title='Error Mapping' anchor='lsf_error_mapping_overview_section'>
              <t>
                Instead of minimizing the error in the LSF domain, we map the errors to better approximate spectral distortion by applying an individual weight to each element in the error vector. The weight vectors are calculated for each input vector using the Inverse Harmonic Mean Weighting (IHMW) function proposed by Laroia et al., see <xref target="laroia-icassp" />.
                Consequently, we solve the following minimization problem, i.e.,
                <figure align="center">
                  <artwork align="center">
                    <![CDATA[
LSF_q = argmin { (LSF - c)' * W * (LSF - c) + mu * rate },
        c in C
            ]]>
                  </artwork>
                </figure>
                where LSF_q is the quantized vector, LSF is the input vector to be quantized, and c is the quantized LSF vector candidate taken from the set C of all possible outcomes of the codebook.
              </t>
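              <t>
                For illustration, an exhaustive weighted rate-distortion
                search over one codebook stage might be sketched as below,
                with the IHMW weights treated as a diagonal W; the array
                layout and names are assumptions of the sketch.
              </t>
              <figure align="center">
                <artwork align="left">
                  <![CDATA[
/* Return the index minimizing (LSF-c)'*W*(LSF-c) + mu*rate. */
static int lsf_rd_search(const float *lsf, const float *w,
                         const float *cb,   /* M rows of dim values  */
                         const float *rate, /* rate table, M entries */
                         int M, int dim, float mu)
{
    int i, j, best = 0;
    float best_cost = 1e30f;
    for (i = 0; i < M; i++) {
        float cost = mu*rate[i];
        for (j = 0; j < dim; j++) {
            float e = lsf[j] - cb[i*dim + j];
            cost += w[j]*e*e;      /* diagonal-W quadratic form */
        }
        if (cost < best_cost) { best_cost = cost; best = i; }
    }
    return best;
}
]]>
                </artwork>
              </figure>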
            </section>
            <section title='Multi-Stage Vector Codebook'>
              <t>
                We arrange the codebook in a multiple stage structure to achieve a quantizer that is both memory efficient and highly scalable in terms of computational complexity, see e.g. <xref target="sinervo-norsig" />. In the first stage the input is the LSF vector to be quantized, and in any other stage s > 1, the input is the quantization error from the previous stage, see <xref target='lsf_quantizer_structure_overview_figure' />.
                <figure align="center" anchor="lsf_quantizer_structure_overview_figure">
                  <artwork align="center">
                    <![CDATA[
      Stage 1:           Stage 2:                Stage S:
    +----------+       +----------+            +----------+
    |  c_{1,1} |       |  c_{2,1} |            |  c_{S,1} | 
LSF +----------+ res_1 +----------+  res_{S-1} +----------+
--->|  c_{1,2} |------>|  c_{2,2} |--> ... --->|  c_{S,2} |--->
    +----------+       +----------+            +----------+ res_S =
        ...                ...                     ...      LSF-LSF_q
    +----------+       +----------+            +----------+ 
    |c_{1,M1-1}|       |c_{2,M2-1}|            |c_{S,MS-1}|
    +----------+       +----------+            +----------+     
    | c_{1,M1} |       | c_{2,M2} |            | c_{S,MS} |
    +----------+       +----------+            +----------+
]]>
                  </artwork>
                  <postamble>Multi-Stage LSF Vector Codebook Structure.</postamble>
                </figure>
              </t>

              <t>
                By storing a total of M codebook vectors, i.e.,
                <figure align="center">
                  <artwork align="center">
                    <![CDATA[
     S
    __
M = \  Ms,
    /_
    s=1
]]>
                  </artwork>
                </figure>
                where M_s is the number of vectors in stage s, we obtain a total of
                <figure align="center">
                  <artwork align="center">
                    <![CDATA[
     S
    ___
T = | | Ms
    s=1
]]>
                  </artwork>
                </figure>
                possible combinations for generating the quantized vector. It is for example possible to represent 2^36 uniquely combined vectors using only 216 vectors in memory, as done in SILK for voiced speech at all sample frequencies above 8 kHz.
              </t>
            </section>
            <section title='Survivor Based Codebook Search'>
              <t>
                This number of possible combinations is far too high for a full search to be carried out for each frame, so for all stages but the last, i.e., s smaller than S, only the best min( L, Ms ) centroids are carried over to stage s+1. In each stage the objective function, i.e., the weighted sum of accumulated bit-rate and distortion, is evaluated for each codebook vector entry and the results are sorted. Only the best paths and the corresponding quantization errors are considered in the next stage. In the last stage S, the single best path through the multistage codebook is determined. By varying L, the maximum number of survivors from each stage to the next, the complexity can be adjusted in real time, at the cost of a potential increase in the objective function for the resulting quantized vector. This approach scales all the way between the two extremes: L=1 is a greedy search, and L=T/M_S is the desirable but infeasible full search. In fact, performance almost as good as that of the infeasible full search can be obtained at a substantially lower complexity by using this approach, see e.g. <xref target='leblanc-tsap' />.
              </t>
            </section>
            <section title='LSF Stabilization' anchor='lsf_stabilizer_overview_section'>
              <t>If the input is stable, finding the best candidate usually results in a stable quantized vector as well. Due to the multi-stage approach, however, the best quantization candidate can in theory be unstable, so the quantized vectors must be explicitly checked for stability. Therefore we apply an LSF stabilization method which ensures that the LSF parameters are within their valid range, increasingly sorted, and have minimum distances between each other and the border values; these minimum distances have been pre-determined as the 0.01 percentile distance values from a large training set.</t>
            </section>
            <section title='Off-Line Codebook Training'>
              <t>
                The vectors and rate tables for the multi-stage codebook have been trained by minimizing the average of the objective function for LSF vectors from a large training set.
              </t>
            </section>
          </section>

          <section title='LTP Quantization' anchor='ltp_quantizer_overview_section'>
            <t>
              For voiced frames, the prediction analysis described in <xref target='pred_ana_voiced_overview_section' /> resulted in four sets (one set per subframe) of five LTP coefficients, plus four weighting matrices. The LTP coefficients for each subframe are quantized using entropy constrained vector quantization. A total of three vector codebooks are available for quantization, with different rate-distortion trade-offs. The three codebooks have 10, 20, and 40 vectors and average rates of about 3, 4, and 5 bits per vector, respectively. Consequently, the first codebook has larger average quantization distortion at a lower rate, whereas the last codebook has smaller average quantization distortion at a higher rate. Given the weighting matrix W_ltp and LTP vector b, the weighted rate-distortion measure for a codebook vector cb_i with rate r_i is given by
              <figure align="center">
                <artwork align="center">
                  <![CDATA[
 RD = u * (b - cb_i)' * W_ltp * (b - cb_i) + r_i,
]]>
                </artwork>
              </figure>
              where u is a fixed, heuristically-determined parameter balancing the distortion and rate. Which codebook gives the best performance for a given LTP vector depends on the weighting matrix for that LTP vector. For example, for a low valued W_ltp, it is advantageous to use the codebook with 10 vectors as it has a lower average rate. For a large W_ltp, on the other hand, it is often better to use the codebook with 40 vectors, as it is more likely to contain the best codebook vector.
              The weighting matrix W_ltp depends mostly on two aspects of the input signal. The first is the periodicity of the signal; the more periodic the signal, the larger W_ltp. The second is the change in signal energy in the current subframe, relative to the signal one pitch lag earlier. A decaying energy leads to a larger W_ltp than an increasing energy. Neither aspect fluctuates very quickly, so the W_ltp matrices for different subframes of one frame are often similar. As a result, one of the three codebooks typically gives good performance for all subframes. Therefore the codebook search for the subframe LTP vectors is constrained to allow codebook vectors to be chosen from the same codebook only, resulting in a rate reduction.
            </t>

            <t>
              To find the best codebook, each of the three vector codebooks is used to quantize all subframe LTP vectors and produce a combined weighted rate-distortion measure for each vector codebook and the vector codebook with the lowest combined rate-distortion over all subframes is chosen. The quantized LTP vectors are used in the noise shaping quantizer, and the index of the codebook plus the four indices for the four subframe codebook vectors are passed on to the range encoder.
            </t>
          </section>


          <section title='Noise Shaping Quantizer'>
            <t>
              The noise shaping quantizer independently shapes the signal and coding noise spectra to obtain a perceptually higher quality at the same bitrate.
            </t>
            <t>
              The prefilter output signal is multiplied with a compensation gain G computed in the noise shaping analysis. Then the output of a synthesis shaping filter is added, and the output of a prediction filter is subtracted, to create a residual signal. The residual signal is multiplied by the inverse of the quantized quantization gain from the noise shaping analysis, and input to a scalar quantizer. The quantization indices of the scalar quantizer represent a signal of pulses that is input to the pyramid range encoder. The scalar quantizer also outputs a quantization signal, which is multiplied by the quantized quantization gain from the noise shaping analysis to create an excitation signal. The output of the prediction filter is added to the excitation signal to form the quantized output signal y(n). The quantized output signal y(n) is input to the synthesis shaping and prediction filters.
            </t>

          </section>

          <section title='Range Encoder'>
            <t>
              Range encoding is a well known method for entropy coding in which a bitstream sequence is continually updated with every new symbol, based on the probability for that symbol. It is similar to arithmetic coding but rather than being restricted to generating binary output symbols, it can generate symbols in any chosen number base. In SILK all side information is range encoded. Each quantized parameter has its own cumulative density function based on histograms for the quantization indices obtained by running a training database.
            </t>

            <section title='Bitstream Encoding Details'>
              <t>
                TBD.
              </t>
            </section>
          </section>
        </section>


<section title="CELT Encoder">
<t>
Copy from CELT draft.
</t>

<section anchor="prefilter" title="Pre-filter">
<t>
Inverse of the post-filter
</t>
</section>


<section anchor="forward-mdct" title="Forward MDCT">

<t>The MDCT implementation has no special characteristics. The
input is a windowed signal (after pre-emphasis) of 2*N samples and the output is N
frequency-domain samples. A <spanx style="emph">low-overlap</spanx> window is used to reduce the algorithmic delay. 
It is derived from a basic (full overlap) window that is the same as the one used in the Vorbis codec: W(n)=[sin(pi/2*sin(pi/2*(n+.5)/L))]^2. The low-overlap window is created by zero-padding the basic window and inserting ones in the middle, such that the resulting window still satisfies power complementarity. The MDCT is computed in mdct_forward() (mdct.c), which includes the windowing operation and a scaling of 2/N.
</t>
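<t>
For reference, the basic window formula above can be transcribed
directly as below; this is an illustrative transcription only (the
reference implementation precomputes the window).
</t>
<figure align="center">
<artwork align="left">
<![CDATA[
#include <math.h>

/* W(n) = [sin(pi/2*sin(pi/2*(n+.5)/L))]^2, n = 0..2*L-1 */
static float basic_window(int n, int L)
{
    double inner = sin(M_PI/2*(n + 0.5)/L);
    double s     = sin(M_PI/2*inner);
    return (float)(s*s);
}
]]>
</artwork>
</figure>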
</section>

<section anchor="normalization" title="Bands and Normalization">
<t>
The MDCT output is divided into bands that are designed to match the ear's critical 
bands for the smallest (2.5ms) frame size. The larger frame sizes use integer
multiples of the 2.5ms layout. For each band, the encoder
computes the energy that will later be encoded. Each band is then normalized by the 
square root of the <spanx style="strong">non-quantized</spanx> energy, such that each band now forms a unit vector X.
The energy and the normalization are computed by compute_band_energies()
and normalise_bands() (bands.c), respectively.
</t>
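<t>
A float model of this normalization is sketched below. The reference
implementation is fixed-point and bit-exact; this sketch only
illustrates the computation.
</t>
<figure align="center">
<artwork align="left">
<![CDATA[
#include <math.h>

/* Normalize one band of MDCT output, [start, end), to unit norm. */
static void normalise_band_sketch(const float *freq, float *X,
                                  int start, int end)
{
    float E = 1e-27f;            /* tiny floor avoids divide-by-0 */
    int j;
    for (j = start; j < end; j++)
        E += freq[j]*freq[j];    /* non-quantized band energy     */
    for (j = start; j < end; j++)
        X[j] = freq[j]/sqrtf(E); /* unit vector X                 */
}
]]>
</artwork>
</figure>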
</section>

<section anchor="energy-quantization" title="Energy Envelope Quantization">

<t>
It is important to quantize the energy with sufficient resolution because
any energy quantization error cannot be compensated for at a later
stage. Regardless of the resolution used for encoding the shape of a band,
it is perceptually important to preserve the energy in each band. CELT uses a
coarse-fine strategy for encoding the energy in the base-2 log domain, 
as implemented in quant_bands.c</t>

<section anchor="coarse-energy" title="Coarse energy quantization">
<t>
The coarse quantization of the energy uses a fixed resolution of 6 dB.
To minimize the bitrate, prediction is applied both in time (using the previous frame)
and in frequency (using the previous bands). The prediction using the
previous frame can be disabled, creating an "intra" frame where the energy
is coded without reference to prior frames. An encoder is able to choose the
mode used at will based on both loss robustness and efficiency
considerations.
The 2-D z-transform of
the prediction filter is: A(z_l, z_b)=(1-a*z_l^-1)*(1-z_b^-1)/(1-b*z_b^-1)
where b is the band index and l is the frame index. When not using intra energy,
the prediction coefficients applied depend on the frame size in use; when using
intra energy, a=0 and b=4915/32768.
The time-domain prediction is based on the final fine quantization of the previous
frame, while the frequency-domain (within the current frame) prediction is based
on coarse quantization only (because the fine quantization has not been computed
yet). The prediction is clamped internally so that fixed-point implementations with
limited dynamic range do not suffer desynchronization.  Identical prediction
clamping must be implemented in all encoders and decoders.
We approximate the ideal
probability distribution of the prediction error using a Laplace distribution
with separate parameters for each frame size in intra and inter-frame modes. The
coarse energy quantization is performed by quant_coarse_energy() and
unquant_coarse_energy() (quant_bands.c). The encoding of the Laplace-distributed values is
implemented in ec_laplace_encode() (laplace.c).
</t>
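<t>
A float model of this predictor is sketched below; it illustrates how
a simple recursion realizes A(z_l, z_b). The reference implementation
is fixed-point and bit-exact, so this code is illustrative only.
</t>
<figure align="center">
<artwork align="left">
<![CDATA[
#include <math.h>

/* eBands: current log2 band energies; oldEBands: previous frame's
 * decoded energies (updated in place); a, b: prediction coefficients. */
static void coarse_energy_sketch(const float *eBands, float *oldEBands,
                                 int nbBands, float a, float b)
{
    float prev = 0.0f;
    int i;
    for (i = 0; i < nbBands; i++) {
        float f  = eBands[i] - a*oldEBands[i] - prev; /* residual */
        int   qi = (int)floorf(0.5f + f);   /* one unit == 6 dB   */
        /* qi is the value coded by ec_laplace_encode()           */
        oldEBands[i] = a*oldEBands[i] + prev + qi; /* decoded     */
        prev += (1.0f - b)*qi;              /* in-frame IIR part  */
    }
}
]]>
</artwork>
</figure>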

<!-- FIXME: bit budget consideration -->
</section> <!-- coarse energy -->

<section anchor="fine-energy" title="Fine energy quantization">
<t>
After the coarse energy quantization and encoding, the bit allocation is computed 
(<xref target="allocation"></xref>) and the number of bits to use for refining the
energy quantization is determined for each band. Let B_i be the number of fine energy bits 
for band i; the refinement is an integer f in the range [0,2^B_i-1]. The mapping between f
and the correction applied to the coarse energy is equal to (f+1/2)/2^B_i - 1/2. Fine
energy quantization is implemented in quant_fine_energy() 
(quant_bands.c).
</t>
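<t>
The refinement mapping above is a one-liner; the sketch below states
it directly (illustrative, not the reference code).
</t>
<figure align="center">
<artwork align="left">
<![CDATA[
/* With B fine-energy bits, integer f in [0, 2^B - 1] maps to a
 * correction of (f + 1/2)/2^B - 1/2 coarse (6 dB) units. */
static float fine_correction(int f, int B)
{
    return (f + 0.5f)/(float)(1 << B) - 0.5f;
}
]]>
</artwork>
</figure>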

<t>
If any bits are unused at the end of the encoding process, these bits are used to
increase the resolution of the fine energy encoding in some bands. Priority is given
to the bands for which the allocation (<xref target="allocation"></xref>) was rounded
down. At the same level of priority, lower bands are encoded first. Refinement bits
are added until there is no more room for fine energy or until each band
has gained an additional bit of precision or has the maximum fine
energy precision. This is implemented in quant_energy_finalise()
(quant_bands.c).
</t>

</section> <!-- fine energy -->


</section> <!-- Energy quant -->


<section anchor="pvq" title="Spherical Vector Quantization">
<t>CELT uses a Pyramid Vector Quantization (PVQ) <xref target="PVQ"></xref>
codebook for quantizing the details of the spectrum in each band that have not
been predicted by the pitch predictor. The PVQ codebook consists of all sums
of K signed pulses in a vector of N samples, where two pulses at the same position
are required to have the same sign. Thus the codebook includes 
all integer codevectors y of N dimensions that satisfy sum(abs(y(j))) = K.
</t>

<t>
In bands where sufficient bits are allocated, the PVQ is used to encode
the unit vector that results from the normalization in 
<xref target="normalization"></xref> directly. Given a PVQ codevector y, 
the unit vector X is obtained as X = y/||y||, where ||.|| denotes the 
L2 norm.
</t>


<section anchor="pvq-search" title="PVQ Search">

<t>
The search for the best codevector y is performed by alg_quant()
(vq.c). There are several possible approaches to the 
search with a tradeoff between quality and complexity. The method used in the reference
implementation computes an initial codeword y0 by projecting the residual signal 
R = X - p' onto the codebook pyramid of K-1 pulses:
</t>
<t>
y0 = round_towards_zero( (K-1) * R / sum(abs(R)))
</t>

Jean-Marc Valin's avatar
Jean-Marc Valin committed
1613
1614
1615
1616
1617
1618
<t>
Depending on N, K and the input data, the initial codeword y0 may contain from 
0 to K-1 non-zero values. All the remaining pulses, with the exception of the last one, 
are found iteratively with a greedy search that minimizes the normalized correlation
between y and R:
</t>
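<t>
For illustration, the projection step above might be sketched as
follows (illustrative code; C's integer cast performs the truncation
toward zero):
</t>
<figure align="center">
<artwork align="left">
<![CDATA[
#include <math.h>

/* Project R onto the pyramid of K-1 pulses. */
static void pvq_project_sketch(const float *R, int *y, int N, int K)
{
    float s = 1e-15f;                 /* guard against R == 0 */
    int j;
    for (j = 0; j < N; j++)
        s += (float)fabs(R[j]);       /* sum(abs(R))          */
    for (j = 0; j < N; j++)
        y[j] = (int)((K - 1)*R[j]/s); /* round towards zero   */
}
]]>
</artwork>
</figure>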

<t>