- Aug 02, 2011
Gregory Maxwell authored

- Jul 31, 2011
non-ascii characters from the source.

- Jul 29, 2011
Jean-Marc Valin authored

- Feb 10, 2011
Jean-Marc Valin authored
Got authorization from all copyright holders.

- Jan 24, 2011
Timothy B. Terriberry authored
di_max was counting the _number_ of code-points remaining, not the largest one that could be used.
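A small illustration of the off-by-one described above (the function and variable names here are placeholders, not the ones in the decoder): if four code-points remain, the valid indices are 0..3, so clamping against the count itself lets one out-of-range value through.

```c
/* Hypothetical sketch of the off-by-one: clamping a decoded index against the
 * count of remaining code-points instead of the largest valid index. */
int clamp_index_buggy(int di, int remaining)
{
    return di < remaining ? di : remaining;   /* allows di == remaining */
}

int clamp_index_fixed(int di, int remaining)
{
    int di_max = remaining - 1;               /* largest index that can be used */
    return di < di_max ? di : di_max;
}
```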

- Nov 09, 2010
This tunes the entropy model for coarse energy introduced in commit c1c40a76. It uses a constant set of parameters, tuned from about an hour and a half of randomly selected test data encoded for each frame size, prediction type (inter/intra), and band number. These will be slightly sub-optimal for different frame sizes, but should be better than what we were using. For inter, this saves an average of 2.8, 5.2, 7.1, and 6.7 bits/frame for frame sizes of 120, 240, 480, and 960, respectively. For intra, this saves an average of 1.5, 3.0, 4.5, and 5.3 bits/frame (for the same frame sizes, respectively).
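As a rough illustration of what such a constant parameter set can look like in code (the table name, layout, and numbers below are placeholders, not the tuned values from this commit), the model can be stored as one (p0, decay) pair per frame size, prediction type, and band:

```c
/* Hypothetical layout for a per-(frame size, inter/intra, band) model of
 * coarse-energy deltas; all values are placeholders, not tuned data. */
#define NB_FRAME_SIZES 4   /* 120, 240, 480, 960 samples */
#define NB_BANDS       3   /* shortened for the example   */

typedef struct {
    unsigned char p0;      /* probability of a zero delta, Q8 */
    unsigned char decay;   /* geometric decay of the tail, Q8 */
} energy_model_entry;

/* Indexed as [frame size][intra ? 1 : 0][band]. */
static const energy_model_entry energy_model[NB_FRAME_SIZES][2][NB_BANDS] = {
    { /* 120-sample frames */
        { {80, 128}, {72, 120}, {64, 112} },   /* inter (placeholders) */
        { {64, 112}, {56, 104}, {48,  96} },   /* intra (placeholders) */
    },
    /* remaining frame sizes would follow the same pattern */
};

energy_model_entry get_energy_model(int lm, int intra, int band)
{
    return energy_model[lm][intra ? 1 : 0][band];
}
```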

- Aug 12, 2010
This changes how the PDF used to code coarse energy is built. New features:
1) The probability of 0 (p0) is now independent of the decay rate of the remaining values; this additional flexibility will allow us to model the actual distribution better, though that improvement is not part of this patch.
2) There is a guaranteed minimum number of encodable energy deltas. This ensures that even the most extreme sudden volume changes can be accurately represented.
3) The tail end of the distribution has an adjustable (through a constant in the code) minimum probability. This allows us to lower the worst-case bit cost of a single delta.
4) The codebook is interleaved as 0, -1, +1, -2, +2, ... instead of the 0, +1, -1, +2, -2, ... order used before (see 5).
5) There is no restriction that p0 be even. Any remaining, unused part of the code is assigned to an additional negative value (collected inter data suggests that very large negative deltas are more common than very large positive ones). If the minimum probability is greater than 1, then an additional positive delta with a smaller probability may also be added.
6) Once the tail of the distribution is reached, the energy delta is computed directly, instead of continuing to loop through the codebook. This reduces the worst-case computational cost.
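A compact sketch of the interleaving, decay, and minimum-probability ideas above. The constants, names, and handling of leftover probability are simplifications, not the actual CELT implementation, which also feeds the result into its range coder and assigns any unused probability to extra tail deltas as described in point 5.

```c
#include <stdio.h>

#define TOTAL_FREQ 32768   /* total probability budget for one delta        */
#define MIN_FREQ   1       /* floor so no coded delta gets probability zero */

/* Position of a signed delta in the interleaved order 0, -1, +1, -2, +2, ... */
int delta_to_index(int v)
{
    if (v == 0) return 0;
    return v > 0 ? 2 * v : -2 * v - 1;
}

/* Fill freq[0..n-1] for the interleaved symbols: p0 for zero, then a
 * geometrically decaying share for each +/- magnitude, never below MIN_FREQ.
 * Any budget left over is simply unused here; the real coder hands it to
 * additional tail values. */
void build_freqs(unsigned *freq, int n, unsigned p0, unsigned decay_q15)
{
    unsigned remaining = TOTAL_FREQ - p0;
    unsigned fs = (remaining * (32768 - decay_q15)) >> 16;  /* share for |v| = 1 */
    int i;

    freq[0] = p0;
    for (i = 1; i < n; i += 2) {
        unsigned f = fs > MIN_FREQ ? fs : MIN_FREQ;
        freq[i] = f;                        /* -k comes before +k */
        if (i + 1 < n) freq[i + 1] = f;     /* +k */
        fs = (fs * decay_q15) >> 15;        /* decay toward the tail */
    }
}

int main(void)
{
    unsigned freq[11];
    int v;
    build_freqs(freq, 11, 16384, 24576 /* decay = 0.75 in Q15 */);
    for (v = -5; v <= 5; v++)
        printf("delta %+d -> freq %u\n", v, freq[delta_to_index(v)]);
    return 0;
}
```

A real encoder would turn these frequencies into cumulative counts for the range coder; once the per-magnitude frequency decays down to the floor, point 6 above computes the remaining magnitude directly instead of walking further through the table.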

- Oct 24, 2009
Jean-Marc Valin authored

- Oct 18, 2009
Jean-Marc Valin authored

- Jul 26, 2009
Jean-Marc Valin authored
original ec_encode_bin()/ec_decode_bin() to optimize performance when ft is a power of two.
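For context on the power-of-two case mentioned here, a toy range-coder update shows why a total of ft == 1 << ftb matters: the per-symbol scaling of the range becomes a shift instead of an integer division. This is a simplified sketch with no carry propagation or renormalization, and the struct and field names are illustrative, not the actual entropy coder internals.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy range-encoder state; the real coder also handles carries and
 * renormalization, which are omitted here. */
typedef struct {
    uint32_t low;   /* low end of the current interval */
    uint32_t rng;   /* width of the current interval   */
} toy_enc;

/* General case: cumulative frequencies fl < fh out of an arbitrary total ft. */
void toy_encode(toy_enc *e, unsigned fl, unsigned fh, unsigned ft)
{
    uint32_t r = e->rng / ft;     /* one integer division per symbol */
    e->low += r * fl;
    e->rng  = r * (fh - fl);
}

/* Power-of-two total: ft == 1 << ftb, so the division becomes a shift. */
void toy_encode_bin(toy_enc *e, unsigned fl, unsigned fh, unsigned ftb)
{
    uint32_t r = e->rng >> ftb;   /* shift replaces the division */
    e->low += r * fl;
    e->rng  = r * (fh - fl);
}

int main(void)
{
    toy_enc a = {0, 1u << 31}, b = {0, 1u << 31};
    toy_encode(&a, 3, 5, 16);     /* symbol occupying [3, 5) out of 16 */
    toy_encode_bin(&b, 3, 5, 4);  /* same symbol, ft written as 1 << 4 */
    printf("div:   low=%u rng=%u\n", (unsigned)a.low, (unsigned)a.rng);
    printf("shift: low=%u rng=%u\n", (unsigned)b.low, (unsigned)b.rng);
    return 0;
}
```

In a hot per-symbol loop, this divide-to-shift substitution is the main benefit of restricting ft to a power of two.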

- Jul 05, 2009
Jean-Marc Valin authored

- May 24, 2009
Jean-Marc Valin authored
(rounding towards zero).

- May 23, 2009
Jean-Marc Valin authored
wider range of values.

Jean-Marc Valin authored
thus save a few divisions.

- May 27, 2008
Jean-Marc Valin authored
represented by the Laplace encoder (would have a probability of zero due to finite precision).

- Apr 23, 2008
Jean-Marc Valin authored

- Mar 02, 2008
Jean-Marc Valin authored

- Mar 01, 2008
Jean-Marc Valin authored

- Feb 29, 2008
Jean-Marc Valin authored

Jean-Marc Valin authored

- Feb 25, 2008
Jean-Marc Valin authored

- Feb 20, 2008
Jean-Marc Valin authored

Jean-Marc Valin authored

Jean-Marc Valin authored
to make the code more C89-friendly.

- Jan 28, 2008
Jean-Marc Valin authored
especially now that we have a custom version of that code anyway. Moved the test code to tests/

- Dec 08, 2007
Jean-Marc Valin authored
infinite loop in Laplace decoding when the data is corrupted.

- Dec 07, 2007
Jean-Marc Valin authored

Jean-Marc Valin authored

- Dec 06, 2007
Jean-Marc Valin authored
me what I expect

Jean-Marc Valin authored

Jean-Marc Valin authored

Jean-Marc Valin authored

- Dec 05, 2007
Jean-Marc Valin authored

- Nov 30, 2007
Jean-Marc Valin authored

Jean-Marc Valin authored

- Nov 29, 2007
Jean-Marc Valin authored