1. 19 Apr, 2013 3 commits
  2. 18 Apr, 2013 2 commits
  3. 17 Apr, 2013 1 commit
    • Recursive partition syntax coding · 90a91cc6
      Jingning Han authored
      Enable recursive partition information coding from SB64X64 down to
      MB16X16. The bit-stream syntax now supports rectangular block
      sizes. Coding starts from SB64X64 and recursively describes the
      partition type of the current block. If the partition type is
      PARTITION_NONE, the block is coded as a single unit; if it is
      PARTITION_HORZ or PARTITION_VERT, the block is segmented into two
      independently coded rectangular units, with no further
      partitioning; otherwise (the PARTITION_SPLIT case), the block is
      segmented into four square blocks, each of which can potentially
      be further partitioned.
      
      Forward adaptive probability modeling is used for the partition
      information coding, conditioned on the current block size.
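      
      As an illustration of the scheme (a sketch only: read_partition_type
      and decode_block below are hypothetical stand-ins, not the actual
      vp9 functions), a decoder-side walk could look like:
      
        #include <stdio.h>
        
        typedef enum {
          PARTITION_NONE,
          PARTITION_HORZ,
          PARTITION_VERT,
          PARTITION_SPLIT
        } PARTITION_TYPE;
        
        /* Hypothetical stubs: the real coder reads an arithmetic-coded
         * symbol whose probabilities depend on the current block size. */
        static PARTITION_TYPE read_partition_type(int block_size) {
          return block_size > 16 ? PARTITION_SPLIT : PARTITION_NONE;
        }
        static void decode_block(int row, int col, int w, int h) {
          printf("block (%d,%d) %dx%d\n", row, col, w, h);
        }
        
        /* Recursively decode the partition tree from 64x64 down to 16x16. */
        static void decode_partition(int row, int col, int bs) {
          const PARTITION_TYPE p = read_partition_type(bs);
          const int half = bs / 2;
          switch (p) {
            case PARTITION_NONE:  /* coded as a single unit */
              decode_block(row, col, bs, bs);
              break;
            case PARTITION_HORZ:  /* two independently coded rectangles */
              decode_block(row, col, bs, half);
              decode_block(row + half, col, bs, half);
              break;
            case PARTITION_VERT:
              decode_block(row, col, half, bs);
              decode_block(row, col + half, half, bs);
              break;
            case PARTITION_SPLIT: /* four squares, each possibly split */
              decode_partition(row, col, half);
              decode_partition(row, col + half, half);
              decode_partition(row + half, col, half);
              decode_partition(row + half, col + half, half);
              break;
          }
        }
        
        int main(void) {
          decode_partition(0, 0, 64);  /* start at the SB64X64 root */
          return 0;
        }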
      
      Change-Id: I499365fb547839d555498e3bcc0387d8a3587d87
  4. 16 Apr, 2013 4 commits
  5. 15 Apr, 2013 1 commit
    • Initial addition of multiple ARF frames · c2876cf0
      Adrian Grange authored
      This is work in progress; it implements multiple ARF
      encoding behind an experimental flag.
      
      It adds the ability to insert multiple ARF frames into a
      single ARF group. This patch implements the reordering
      of the coded frames and a fixed-length coding pattern,
      and applies a fixed quantizer strategy based on
      where the frame is in the coding sequence.
      
      Further work to modify the rate control strategy is
      ongoing and will be submitted via a set of future patches.
      
      In this first step, each ARF group is recursively
      bisected and an ARF frame is added at that position in
      the sequence. The recursion continues until the ARF
      frames are within MIN_GF_INTERVAL frames of each other.
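      
      A minimal sketch of that bisection (the value of MIN_GF_INTERVAL
      and the helper name are assumptions for illustration):
      
        #include <stdio.h>
        
        #define MIN_GF_INTERVAL 4  /* assumed value, for illustration */
        
        /* Recursively bisect (start, end) and mark an ARF at each
         * midpoint, stopping once a sub-interval is shorter than
         * MIN_GF_INTERVAL. */
        static void place_arfs(int start, int end, int *is_arf) {
          const int mid = (start + end) / 2;
          if (end - start < MIN_GF_INTERVAL) return;
          is_arf[mid] = 1;
          place_arfs(start, mid, is_arf);
          place_arfs(mid, end, is_arf);
        }
        
        int main(void) {
          int is_arf[17] = { 0 };    /* e.g. a 16-frame ARF group */
          place_arfs(0, 16, is_arf);
          for (int i = 0; i <= 16; ++i)
            if (is_arf[i]) printf("ARF at position %d\n", i);
          return 0;
        }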
      
      The code sits behind the "multiple-arf" experimental
      flag ("CONFIG_MULTIPLE_ARF"). The experimental flag
      "oneshotq" ("CONFIG_ONESHOTQ") also needs to be enabled
      for this patch to work correctly.
      
      Change-Id: Ie473b05ebb43ac473c0cfb659b2b8042823085e2
  6. 12 Apr, 2013 1 commit
  7. 11 Apr, 2013 4 commits
  8. 09 Apr, 2013 1 commit
  9. 03 Apr, 2013 1 commit
  10. 28 Mar, 2013 1 commit
    • Framework changes in nzc to allow more flexibility · fe9b5143
      Deb Mukherjee authored
      The patch adds the flexibility to use standard EOB-based coding
      on smaller block sizes and nzc-based coding on larger block sizes.
      The tx sizes that use nzc-based coding and those that use EOB-based
      coding are controlled by a function get_nzc_used().
      By default, this function uses nzc-based coding for 16x16 and 32x32
      transform blocks, which seems to bridge the performance gap
      substantially.
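      
      A plausible shape for that selector, matching the stated default
      (the enum ordering mirrors vp9's transform sizes; the body is an
      assumption):
      
        typedef enum { TX_4X4, TX_8X8, TX_16X16, TX_32X32 } TX_SIZE;
        
        /* nzc-based coding for the larger transforms, EOB-based
         * coding otherwise. */
        static int get_nzc_used(TX_SIZE tx_size) {
          return tx_size >= TX_16X16;
        }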
      
      All sets are now lower by 0.5% to 0.7%, as opposed to ~1.8% before.
      
      Change-Id: I06abed3df57b52d241ea1f51b0d571c71e38fd0b
  11. 27 Mar, 2013 1 commit
  12. 26 Mar, 2013 3 commits
    • Use above/left (instead of previous in scan-order) as token context. · 790fb132
      Ronald S. Bultje authored
      Pearson correlation for above or left is significantly higher than for
      previous-in-scan-order (absolute values depend on position in scan, but
      in general, we gain about 0.1-0.2 by using either above or left; using
      both basically just makes this even better). For eob branch skipping,
      we continue to use the previous token in scan order.
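      
      Schematically, the context switch looks like the following
      (token_energy and the combining rule are illustrative assumptions,
      not the trained vp9 mapping):
      
        /* Stand-in: the real code maps tokens to small energy classes. */
        static int token_energy(int token) {
          return token > 2 ? 2 : token;
        }
        
        /* Context from the above and left neighbors instead of the
         * previous token in scan order; using both neighbors works a
         * little better than using either one alone. */
        static int get_coef_context(int above_token, int left_token) {
          return (token_energy(above_token) +
                  token_energy(left_token) + 1) >> 1;
        }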
      
      This helps about 0.9% on derf after re-training on a limited data set.
      Full re-training and results on larger-resolution clips are pending.
      
      Note that this commit breaks trellis, so we can probably get further
      gains out of it by fixing trellis at some later point.
      
      Change-Id: Iead68e296fc3a105cca746b5e3da9555d6010cfe
    • Add an in-loop deringing experiment · 441e2eab
      John Koleszar authored
      Adds a per-frame, strength adjustable, in loop deringing filter. Uses
      the existing vp9_post_proc_down_and_across 5 tap thresholded blur
      code, with a brute force search for the threshold.
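      
      In outline, the brute-force pick could look like this (blur_frame
      is a stand-in for vp9_post_proc_down_and_across, and the candidate
      range is an assumption):
      
        #include <stdint.h>
        
        /* No-op stand-in for the 5-tap thresholded blur. */
        static void blur_frame(const uint8_t *src, uint8_t *dst,
                               int w, int h, int threshold) {
          (void)threshold;
          for (int i = 0; i < w * h; ++i) dst[i] = src[i];
        }
        
        static int64_t frame_sse(const uint8_t *a, const uint8_t *b,
                                 int w, int h) {
          int64_t sse = 0;
          for (int i = 0; i < w * h; ++i) {
            const int d = a[i] - b[i];
            sse += (int64_t)d * d;
          }
          return sse;
        }
        
        /* Try each candidate strength; keep the one with the lowest
         * error against the source frame. */
        static int pick_dering_threshold(const uint8_t *recon,
                                         const uint8_t *source,
                                         uint8_t *tmp, int w, int h) {
          int best_t = 0;
          int64_t best_err = INT64_MAX;
          for (int t = 0; t < 16; ++t) {
            blur_frame(recon, tmp, w, h, t);
            const int64_t err = frame_sse(tmp, source, w, h);
            if (err < best_err) {
              best_err = err;
              best_t = t;
            }
          }
          return best_t;
        }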
      
      Results almost strictly positive on the YT HD set, either having no
      effect or helping PSNR in the range of 1-3% (overall average 0.8%).
      Results more mixed for the CIF set, (-0.5 min, 1.4 max, 0.1 avg).
      This has an almost strictly negative impact on SSIM, so examining a
      different filter or a more balanced search heuristic is in order.
      
      Other test set results pending.
      
      Change-Id: I5ca6ee8fe292dfa3f2eab7f65332423fa1710b58
    • Modeling default coef probs with distribution · fd18d5df
      Deb Mukherjee authored
      Replaces the default tables for single coefficient magnitudes with
      those obtained from an appropriate distribution. The EOB node
      is left unchanged. The model is represented as a 256-size codebook
      where the index corresponds to the probability of the Zero or the
      One node. Two variations are implemented corresponding to whether
      the Zero node or the One node is used as the peg. The main advantage
      is that the default prob tables will become considerably smaller and
      more manageable. Besides, there is substantially less risk of
      over-fitting to a training set.
      
      Various distributions are tried and the one that gives the best
      results is the family of Generalized Gaussian distributions with
      shape parameter 0.75. The results are within about 0.2% of fully
      trained tables for the Zero peg variant, and within 0.1% for the
      One peg variant.
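      
      For reference, the generalized Gaussian family has density (scale
      α, shape β; here β = 0.75):
      
        f(x) = β / (2αΓ(1/β)) · exp(−(|x|/α)^β)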
      
      The forward updates are optionally (controlled by a macro)
      model-based, i.e. restricted to only convey probabilities from the
      codebook. Backward updates can also optionally (controlled by
      another macro) be model-based, but this is turned off by default. Currently
      model-based forward updates work about the same as unconstrained
      updates, but there is a drop in performance with backward-updates
      being model based.
      
      The model based approach also allows the probabilities for the key
      frames to be adjusted from the defaults based on the base_qindex of
      the frame. Currently the adjustment function is a placeholder that
      adjusts the prob of EOB and Zero node from the nominal one at higher
      quality (lower qindex) or lower quality (higher qindex) ends of the
      range. The rest of the probabilities are then derived based on the
      model from the adjusted prob of zero.
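      
      A sketch of what such a placeholder adjustment could look like
      (the pivot, step size and clamp below are illustrative
      assumptions):
      
        typedef unsigned char vp9_prob;
        
        static vp9_prob clip_prob(int p) {
          return (vp9_prob)(p < 1 ? 1 : (p > 255 ? 255 : p));
        }
        
        /* Nudge a nominal default prob up at high quality (low qindex)
         * and down at low quality (high qindex); the rest of the table
         * is then re-derived from the model given this adjusted prob. */
        static vp9_prob adjust_prob_for_q(vp9_prob nominal,
                                          int base_qindex) {
          const int delta = (128 - base_qindex) >> 4;
          return clip_prob(nominal + delta);
        }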
      
      Change-Id: Iae050f3cbcc6d8b3f204e8dc395ae47b3b2192c9
  13. 20 Mar, 2013 1 commit
  14. 18 Mar, 2013 1 commit
    • Replace scaling byte with explicit display size · 8a3f55f2
      John Koleszar authored
      If the intended display size is different than the size the frame is
      coded at, then send that size explicitly in the bitstream. Adds a new
      bit to the frame header to indicate whether the extra size fields
      are present.
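      
      Schematically (the writer and field handling below are simplified
      stand-ins, not the exact vp9 layout):
      
        #include <stdio.h>
        
        struct frame_hdr {
          int coded_w, coded_h;      /* size the frame is coded at */
          int display_w, display_h;  /* intended display size */
        };
        
        /* One flag bit says whether explicit display-size fields
         * follow the coded size. */
        static void write_display_size(const struct frame_hdr *h) {
          const int explicit_size =
              h->display_w != h->coded_w || h->display_h != h->coded_h;
          printf("flag bit: %d\n", explicit_size);
          if (explicit_size)
            printf("display size: %dx%d\n", h->display_w, h->display_h);
        }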
      
      Change-Id: I525c66f22d207efaf1e5f903c6a2a91b80245854
  15. 11 Mar, 2013 1 commit
  16. 10 Mar, 2013 1 commit
    • Optimize vp9_tree_probs_from_distribution · bd84685f
      John Koleszar authored
      The previous implementation visited each node in the tree multiple times
      because it used each symbol's encoding to revisit the branches taken and
      increment its count. Instead, we can traverse the tree depth first and
      calculate the probabilities and branch counts as we walk back up. The
      complexity goes from somewhere between O(nlogn) and O(n^2) (depending on
      how balanced the tree is) to O(n).
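      
      A sketch of that one-pass walk, using the usual vpx tree
      convention (non-negative entries index a child node pair,
      non-positive entries are leaf tokens stored negated);
      get_binary_prob and the types are simplified stand-ins:
      
        #include <stdint.h>
        
        typedef int8_t tree_index;  /* leaves stored as -token */
        typedef uint8_t prob_t;
        
        static prob_t get_binary_prob(unsigned n0, unsigned n1) {
          const unsigned den = n0 + n1;
          unsigned p;
          if (den == 0) return 128;  /* no data: assume an even split */
          p = (256 * n0 + den / 2) / den;
          if (p < 1) p = 1;
          if (p > 255) p = 255;
          return (prob_t)p;
        }
        
        /* Depth-first walk of the subtree at node index i: fill branch
         * counts and probabilities on the way back up and return the
         * subtree's total symbol count. Each node is visited exactly
         * once, hence O(n). */
        static unsigned probs_from_counts(const tree_index *tree,
                                          tree_index i,
                                          const unsigned *token_counts,
                                          unsigned (*branch_ct)[2],
                                          prob_t *probs) {
          const tree_index l = tree[i], r = tree[i + 1];
          const unsigned n0 =
              l <= 0 ? token_counts[-l]
                     : probs_from_counts(tree, l, token_counts,
                                         branch_ct, probs);
          const unsigned n1 =
              r <= 0 ? token_counts[-r]
                     : probs_from_counts(tree, r, token_counts,
                                         branch_ct, probs);
          branch_ct[i >> 1][0] = n0;
          branch_ct[i >> 1][1] = n1;
          probs[i >> 1] = get_binary_prob(n0, n1);
          return n0 + n1;
        }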
      
      Only tested one clip (256kbps, CIF), saw 13% decoding perf improvement.
      
      Note that this optimization should port trivially to VP8 as well. In VP8,
      the decoder doesn't use this function, but it does routinely show up
      on the profile for realtime encoding.
      
      Change-Id: I4f2848e4f41dc9a7694f73f3e75034bce08d1b12
  17. 09 Mar, 2013 1 commit
    • Continued experiment with nonzero count · a28139c8
      Deb Mukherjee authored
      Adds probability updates for extra bits for the nzcs, code for
      getting nzc stats, plus some minor cleanups and fixes.
      
      Change-Id: If2814e7f04fb52f5025ad9f400f3e6c50a00b543
  18. 07 Mar, 2013 1 commit
    • Coding non-zero count rather than EOB for coeffs · eb6ef241
      Deb Mukherjee authored
      This patch revamps the entropy coding of coefficients to code first
      a non-zero count per coded block and correspondingly remove the EOB
      token from the token set.
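      
      The quantity being coded is simply the block's count of non-zero
      quantized coefficients, e.g.:
      
        #include <stdint.h>
        
        /* Count the non-zero quantized coefficients in a block; this
         * count is sent in place of an EOB token. */
        static int count_nzc(const int16_t *qcoeff, int n) {
          int nzc = 0;
          for (int i = 0; i < n; ++i) nzc += qcoeff[i] != 0;
          return nzc;
        }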
      
      STATUS:
      Main encode/decode code achieving encode/decode sync - done.
      Forward and backward probability updates to the nzcs - done.
      Rd costing updates for nzcs - done.
      Note: The dynamic programming approach used in trellis quantization
      is not exactly compatible with nzcs. A suboptimal approach has been
      used instead where branch costs are updated to account for changes
      in the nzcs.
      
      TODO:
      Training the default probs/counts for nzcs
      
      Change-Id: I951bc1e22f47885077a7453a09b0493daa77883d
  19. 05 Mar, 2013 1 commit
    • Make superblocks independent of macroblock code and data. · 111ca421
      Ronald S. Bultje authored
      Split macroblock and superblock tokenization and detokenization
      functions and coefficient-related data structs so that the bitstream
      layout and related code of superblock coefficients looks less like it's
      a hack to fit macroblocks in superblocks.
      
      In addition, derive the chroma transform size from the luma transform
      size (i.e. always use the same size, as long as it fits the predictor);
      in practice, this means 32x32 and 64x64 superblocks using the 16x16 luma
      transform will now use the 16x16 (instead of the 8x8) chroma transform,
      and 64x64 superblocks using the 32x32 luma transform will now use the
      32x32 (instead of the 16x16) chroma transform.
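      
      The unified rule amounts to capping the luma transform size at the
      largest transform that fits the half-sized (4:2:0) chroma block;
      a sketch (the helper name is illustrative):
      
        typedef enum { TX_4X4, TX_8X8, TX_16X16, TX_32X32 } TX_SIZE;
        
        /* Chroma reuses the luma transform size, capped at the largest
         * size that fits the chroma block (half the superblock size). */
        static TX_SIZE get_uv_tx_size(TX_SIZE luma_tx, int sb_size) {
          const TX_SIZE max_uv_tx = sb_size == 64 ? TX_32X32 : TX_16X16;
          return luma_tx < max_uv_tx ? luma_tx : max_uv_tx;
        }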
      
      Lastly, add a trellis optimize function for 32x32 transform blocks.
      
      HD gains about 0.3%, STDHD about 0.15% and derf about 0.1%. There
      are a few negative points here and there that I might want to
      analyze a little closer.
      
      Change-Id: Ibad7c3ddfe1acfc52771dfc27c03e9783e054430
  20. 04 Mar, 2013 1 commit
    • Support 16K sequence coding · 5957b2b5
      Jingning Han authored
      Fixed a couple of variable/function definitions, as well as header
      handling to support 16K sequence coding at high bit-rates.
      
      The width and height are each specified by two bytes in the header.
      Use an extra byte to explicitly indicate the scaling factors in
      both directions, each ranging from 0 to 15.
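      
      Schematically, the size fields might be packed as follows (byte
      order and the exact packing are assumptions):
      
        #include <stdint.h>
        
        /* 16-bit width and height, then one byte carrying two 4-bit
         * scaling factors (0-15 in each direction). Returns the
         * advanced write pointer. */
        static uint8_t *write_size_fields(uint8_t *p, int width,
                                          int height, int scale_x,
                                          int scale_y) {
          *p++ = (uint8_t)(width & 0xff);
          *p++ = (uint8_t)((width >> 8) & 0xff);
          *p++ = (uint8_t)(height & 0xff);
          *p++ = (uint8_t)((height >> 8) & 0xff);
          *p++ = (uint8_t)((scale_x & 0x0f) | ((scale_y & 0x0f) << 4));
          return p;
        }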
      
      Tested coding up to 16400x16400 dimension.
      
      Change-Id: Ibc2225c6036620270f2c0cf5172d1760aaec10ec
  21. 23 Feb, 2013 2 commits
    • Split coefficient token tables intra vs. inter. · 0c9e2e9a
      Ronald S. Bultje authored
      Change-Id: I5416455f8f129ca0f450d00e48358d2012605072
    • Further changes to coefficient contexts. · c17672a3
      Paul Wilkins authored
      This patch alters the balance of context between the
      coefficient bands (reflecting the position of coefficients
      within a transform block) and the energy of the previous
      token (or tokens) within a block.
      
      In this case the number of coefficient bands is reduced
      but more previous token energy bands are supported.
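      
      In outline, the context index combines the two factors; the
      counts below are placeholders, not the actual table sizes:
      
        #define COEF_BANDS 6           /* fewer position bands */
        #define PREV_ENERGY_CLASSES 6  /* more energy classes */
        
        /* Context combines the coefficient's band with the energy
         * class of the preceding token(s). */
        static int coef_context(int band, int prev_energy) {
          return band * PREV_ENERGY_CLASSES + prev_energy;
        }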
      
      Some initial rebalancing of the default tables has been done
      by running multiple derf clips at multiple data rates using
      the ENTROPY_STATS macro. Further balancing needs to be
      done using larger image formats, especially in regard to
      the bigger transform sizes, which are not as well represented
      in encodings of smaller image formats.
      
      Change-Id: If9736e95c391e711b04aef6393d26f60f36e1f8a
  22. 20 Feb, 2013 2 commits
  23. 19 Feb, 2013 1 commit
    • Use lossless for Q0 · 93d6b86c
      Yaowu Xu authored
      The commit changes the coding mode to lossless whenever the lowest
      quantizer is chosen.
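      
      The rule itself is a one-liner, in the spirit of:
      
        /* Code the frame losslessly whenever the lowest quantizer
         * (qindex 0) is selected. */
        static int use_lossless_mode(int base_qindex) {
          return base_qindex == 0;
        }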
      
      As expected, test results showed no difference for the cif and std-hd
      sets, where Q0 is rarely used. For the yt and yt-hd sets, Q0 is used
      for a number of clips, and this commit helped a lot at the high end.
      
      Average over all clips in the sets:
      yt: 2.391%  1.017%  1.066%
      hd: 1.937%  0.764%  0.787%
      
      Change-Id: I9fa9df8646fd70cb09ffe9e4202b86b67da16765
  24. 15 Feb, 2013 3 commits
  25. 13 Feb, 2013 1 commit
    • Add support for tile rows. · 89a206ef
      Ronald S. Bultje authored
      These allow sending partial bitstream packets over the network
      before encoding of a complete frame has finished, thus lowering
      end-to-end latency. The tile rows are not independent.
      
      Change-Id: I99986595cbcbff9153e2a14f49b4aa7dee4768e2