1. 21 Dec, 2012 1 commit
  2. 20 Dec, 2012 1 commit
    • Deb Mukherjee's avatar
      New previous coef context experiment · 08f0c7cc
      Deb Mukherjee authored
      Adds an experiment to derive the previous context of a coefficient
      not just from the previous coefficient in the scan order but from a
      combination of several neighboring coefficients previously encountered
      in scan order.  A precomputed table of neighbors for each location
      for each scan type and block size is used. Currently 5 neighbors are
      used.
      
      Results are about 0.2% positive using a strategy where the max coef
      magnitude from the 5 neighbors is used to derive the context.
      
      Change-Id: Ie708b54d8e1898af742846ce2d1e2b0d89fd4ad5
      08f0c7cc
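A minimal sketch of the neighbor-max strategy this commit describes (illustrative names, not the libvpx code): the context for a coefficient position comes from the maximum magnitude among its precomputed neighbors rather than from the single previous coefficient in scan order.

```c
#include <assert.h>

/* Minimal sketch of the neighbor-based context, not the libvpx code:
 * token_cache holds the magnitudes of already-coded coefficients, and
 * neighbors[] holds precomputed indices (up to 5 per position, per scan
 * type and block size). The 0/1/2 clamp mirrors the usual three
 * coefficient contexts; all names here are illustrative assumptions. */
static int neighbor_context(const int *token_cache,
                            const int *neighbors, int num_neighbors) {
  int i, max_mag = 0;
  for (i = 0; i < num_neighbors; i++) {
    if (token_cache[neighbors[i]] > max_mag)
      max_mag = token_cache[neighbors[i]];
  }
  return max_mag > 2 ? 2 : max_mag;  /* contexts: 0, 1, or "greater than 1" */
}
```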
  3. 19 Dec, 2012 2 commits
    • John Koleszar's avatar
      make: fix dependency generation · de529486
      John Koleszar authored
      Remove an extra level of escaping around the $@ variable to get valid output.
      Prior to this change, modifying header files did not trigger a rebuild of
      sources dependent on them.
      
      Change-Id: I93ecc60371b705b64dc8a2583a5d31126fe3f851
      de529486
    • John Koleszar's avatar
      Use boolcoder API instead of inlining · 05ec800e
      John Koleszar authored
      This patch changes the token packing to call the bool encoder API rather
      than inlining it into the token packing function, and similarly removes
      a special get_signed case from the detokenizer. This allows easier
      experimentation with changing the bool coder as a whole.
      
      Change-Id: I52c3625bbe4960b68cfb873b0e39ade0c82f9e91
      05ec800e
  4. 18 Dec, 2012 11 commits
  5. 17 Dec, 2012 1 commit
  6. 13 Dec, 2012 7 commits
    • Yaowu Xu's avatar
      fixed an encoder/decoder mismatch · 2b9ec585
      Yaowu Xu authored
      The mismatch was caused by an improper merge of cleanup code around
      tokenize_b() and stuff_b() with TX32X32 experiment.
      
      Change-Id: I225ae62f015983751f017386548d9c988c30664c
      2b9ec585
    • Yaowu Xu's avatar
      fixed build issue with round() · c6818876
      Yaowu Xu authored
      round() is not defined in MSVC.
      
      Change-Id: I8fe8462a0c2f636d8b43c0243832ca67578f3665
      c6818876
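A hedged sketch of the kind of fallback such a fix typically adds (assumed, not copied from the patch): MSVC before Visual Studio 2013 does not ship round() in <math.h>, so a local replacement is defined for those compilers.

```c
#include <assert.h>
#include <math.h>

/* Illustrative fallback, an assumption rather than the exact libvpx patch:
 * MSVC prior to VS2013 (_MSC_VER < 1800) lacks round() in <math.h>,
 * so define a half-away-from-zero replacement there. */
#if defined(_MSC_VER) && _MSC_VER < 1800
static double round(double x) {
  return x < 0 ? ceil(x - 0.5) : floor(x + 0.5);
}
#endif
```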
    • Deb Mukherjee's avatar
      Build fixes with the super blocks and 32x32 expts · 7fa3deb1
      Deb Mukherjee authored
      Change-Id: I3c751f8d57ac7d3b754476dc6ce144d162534e6d
      7fa3deb1
    • Deb Mukherjee's avatar
    • Deb Mukherjee's avatar
      Further improvements on the hybrid dwt/dct expt · 210dc5b2
      Deb Mukherjee authored
      Modifies the scanning pattern and uses a floating point 16x16
      dct implementation for now to handle scaling better.
      Also experiments are in progress with 2/6 and 9/7 wavelets.
      
      Results have improved to within ~0.25% of 32x32 dct for std-hd
      and about 0.03% for derf. This difference can probably be bridged by
      re-optimizing the entropy stats for these transforms. Currently
      the stats used are common between 32x32 dct and dwt/dct.
      
      Experiments are in progress with various scan-pattern/wavelet
      combinations.
      
      Ideally the subbands should be tokenized separately, and an
      experiment on that will be conducted next.
      
      Change-Id: Ia9cbfc2d63cb7a47e562b2cd9341caf962bcc110
      210dc5b2
    • Ronald S. Bultje's avatar
    • Ronald S. Bultje's avatar
      New default coefficient/band probabilities. · 5a5df19d
      Ronald S. Bultje authored
      Gives 0.5-0.6% improvement on derf and stdhd, and 1.1% on hd. The
      old tables basically derive from times that we had only 4x4 or
      only 4x4 and 8x8 DCTs.
      
      Note that some values are filled with 128, because e.g. ADST only
      ever occurs as Y-with-DC, as does 32x32; 16x16 only ever occurs as
      Y-with-DC or as UV (as the complement of 32x32 Y); and 8x8 Y2 only
      ever has at most 4 coefficients. If preferred, I can put values from
      other tables in their place (e.g. use 4x4 2nd-order high-frequency
      probabilities for 8x8 2nd order), so that they make at least some
      sense if we ever implement a larger 2nd-order transform for the
      8x8 DCT (etc.); please let me know.
      
      Change-Id: I917db356f2aff8865f528eb873c56ef43aa5ce22
      5a5df19d
  7. 12 Dec, 2012 2 commits
    • Ronald S. Bultje's avatar
    • Ronald S. Bultje's avatar
      Consistently use get_prob(), clip_prob() and newly added clip_pixel(). · 4d0ec7aa
      Ronald S. Bultje authored
      Add a function clip_pixel() to clip a pixel value to the [0,255] range
      of allowed values, and use this where-ever appropriate (e.g. prediction,
      reconstruction). Likewise, consistently use the recently added function
      clip_prob(), which calculates a binary probability in the [1,255] range.
      If possible, try to use get_prob() or its sister get_binary_prob() to
      calculate binary probabilities, for consistency.
      
      Since in some places this means that binary probability calculations
      change (we use {255,256}*count0/total in a range of places, and all
      of these now use (256*count0 + (total>>1))/total), this patch changes
      the encoding result and so warrants some extensive testing.
      
      Change-Id: Ibeeff8d886496839b8e0c0ace9ccc552351f7628
      4d0ec7aa
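A compact sketch of the three helpers as the commit message lays them out (signatures and bodies are assumptions based on the description, not copied from libvpx):

```c
#include <assert.h>

/* Illustrative sketches of the consolidated helpers; names follow the
 * commit description, the exact libvpx definitions may differ. */
typedef unsigned char vp9_prob;

static unsigned char clip_pixel(int val) {
  /* Clamp a predicted/reconstructed sample to the legal [0, 255] range. */
  return val < 0 ? 0 : (val > 255 ? 255 : (unsigned char)val);
}

static vp9_prob clip_prob(int p) {
  /* Binary probabilities live in [1, 255]; 0 and 256 are not representable. */
  return p < 1 ? 1 : (p > 255 ? 255 : (vp9_prob)p);
}

static vp9_prob get_prob(int count0, int total) {
  /* Rounded probability of the 0-branch: the (total>>1) term is the
   * rounding offset the commit message describes. */
  if (total == 0) return 128;  /* no evidence: neutral probability */
  return clip_prob((256 * count0 + (total >> 1)) / total);
}
```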
  8. 11 Dec, 2012 3 commits
  9. 10 Dec, 2012 3 commits
  10. 08 Dec, 2012 7 commits
    • John Koleszar's avatar
    • Yaowu Xu's avatar
      experiment with CONTEXT conversion · ab480ced
      Yaowu Xu authored
      This commit changed the ENTROPY_CONTEXT conversion between MBs that
      have different transform sizes.
      
      In addition, this commit also includes a number of cleanups/bug fixes:
      1. removed the duplicate function vp9_fix_contexts() and changed to use
      vp8_reset_mb_token_contexts() for both encoder and decoder
      2. fixed a bug in stuff_mb_16x16 where the wrong context was used for
      the UV planes
      3. changed to reset all contexts to 0 if a MB is skipped, to simplify
      the logic
      
      Change-Id: I7bc57a5fb6dbf1f85eac1543daaeb3a61633275c
      ab480ced
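The skip-case simplification this commit describes can be sketched as follows (illustrative, not the libvpx code): when a macroblock is skipped, the above/left entropy contexts are simply zeroed rather than converted between transform sizes.

```c
#include <string.h>

/* Sketch of the skip-case reset described above (names are assumptions):
 * a zero context means "this neighbor contributed no nonzero coefficients". */
typedef unsigned char ENTROPY_CONTEXT;

static void reset_mb_token_contexts(ENTROPY_CONTEXT *above,
                                    ENTROPY_CONTEXT *left, size_t n) {
  memset(above, 0, n);
  memset(left, 0, n);
}
```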
    • John Koleszar's avatar
      libvpx_test: ensure rtcd init functions are called · 6f014dc5
      John Koleszar authored
      In addition to allowing tests to use the RTCD-enabled functions (perhaps transitively)
      without having run a full encode/decode test yet, this fixes a linking issue with
      Apple's G++ whereby the Common symbols (the function pointers themselves) wouldn't
      be resolved. Fixing this linking issue is the primary impetus for this patch, as none
      of the tests exercise the RTCD functionality except through the main API.
      
      Change-Id: I12aed91ca37a707e5309aa6cb9c38a649c06bc6a
      6f014dc5
    • Jim Bankoski's avatar
      Merge "Fix implicit cast." into vp9-preview · fccebcba
      Jim Bankoski authored
      fccebcba
    • Jim Bankoski's avatar
      26a49182
    • Ronald S. Bultje's avatar
      Clean up 4x4 coefficient decoding code. · fbf052df
      Ronald S. Bultje authored
      Don't use vp9_decode_coefs_4x4() for 2nd order DC or luma blocks. The
      code introduces some overhead which is unnecessary for these cases.
      Also, remove variable declarations that are only used once, remove
      magic offsets into the coefficient buffer (use xd->block[i].qcoeff
      instead of xd->qcoeff + magic_offset), and fix a few Google Style
      Guide violations.
      
      Change-Id: I0ae653fd80ca7f1e4bccd87ecef95ddfff8f28b4
      fbf052df
    • Ronald S. Bultje's avatar
      Introduce vp9_coeff_probs/counts/stats/accum types. · 885cf816
      Ronald S. Bultje authored
      Use these, instead of the 4/5-dimensional arrays, to hold statistics,
      counts, accumulations and probabilities for coefficient tokens. This
      commit also re-allows ENTROPY_STATS to compile.
      
      Change-Id: If441ffac936f52a3af91d8f2922ea8a0ceabdaa5
      885cf816
  11. 07 Dec, 2012 2 commits