1. 18 Apr, 2013 1 commit
    • Jingning Han's avatar
      Make the use of pred buffers consistent in MB/SB · 6f43ff58
      Jingning Han authored
      Use in-place buffers (dst of MACROBLOCKD) for macroblock prediction.
      This makes macroblock buffer handling consistent with that of
      superblocks. Remove the predictor buffer from MACROBLOCKD.
      
      Change-Id: Id1bcd898961097b1e6230c10f0130753a59fc6df
      6f43ff58
  2. 17 Apr, 2013 2 commits
  3. 16 Apr, 2013 2 commits
  4. 15 Apr, 2013 2 commits
  5. 11 Apr, 2013 3 commits
    • Jingning Han's avatar
      Make intra predictor support rectangular blocks · 815e95fb
      Jingning Han authored
      The intra predictor now supports configurable block sizes; it can
      handle intra prediction for all sizes down to 4x4, as enabled in
      BLOCK_SIZE_TYPE.
      
      Change-Id: I7399ec2512393aa98aadda9813ca0c83e19af854
      815e95fb
    • John Koleszar's avatar
      Remove unused vp9_recon_mb{y,uv}_s · c382ed09
      John Koleszar authored
      These functions are now handled through the common superblock code.
      
      Change-Id: Ib6688971bae297896dcec42fae1d3c79af7a611c
      c382ed09
    • Scott LaVarnway's avatar
      WIP: removing predictor buffer usage from decoder · 6189f2bc
      Scott LaVarnway authored
      This patch uses the dest buffer instead of the predictor buffer.
      This will allow us in future commits to remove the extra mem copy
      that occurs in the dequant functions when eob == 0. We should also
      be able to remove extra params that are passed into the dequant
      functions.
      
      Change-Id: I7241bc1ab797a430418b1f3a95b5476db7455f6a
      6189f2bc
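The payoff of predicting straight into the dest buffer can be sketched abstractly: once the prediction already lives in dest, the eob == 0 case needs no work at all, where a separate predictor buffer would cost a copy. A minimal Python sketch (function and names are hypothetical, not VP9 code):

```python
def reconstruct_block(eob, dest_pred, residual):
    """dest_pred is the dest buffer, already holding the prediction.

    With in-place prediction, eob == 0 (no non-zero coefficients)
    means reconstruction is already done -- no memcpy from a
    separate predictor buffer is needed.
    """
    if eob == 0:
        return dest_pred          # dest already holds the final pixels
    return [p + r for p, r in zip(dest_pred, residual)]
```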
  6. 10 Apr, 2013 2 commits
    • Ronald S. Bultje's avatar
      Make RD superblock mode search size-agnostic. · b4f6098e
      Ronald S. Bultje authored
      Merge the various super_block_yrd and super_block_uvrd versions into
      one common function that works for all sizes. Make transform size
      selection size-agnostic as well. This fixes a slight bug in the intra
      UV superblock code where it used the wrong transform size for
      txsz > 8x8, and properly stores the txsz selection for superblocks
      (instead of forgetting it). Lastly, remove the trellis search that
      was done for 16x16 intra predictors, since trellis is relatively
      expensive and should thus only be done after RD mode selection.
      
      Gives basically identical results on derf (+0.009%).
      
      Change-Id: If4485c6f0a0fe4038b3172f7a238477c35a6f8d3
      b4f6098e
    • Ronald S. Bultje's avatar
      Make SB coding size-independent. · a3874850
      Ronald S. Bultje authored
      Merge sb32x32 and sb64x64 functions; allow for rectangular sizes. Code
      gives identical encoder results before and after. There are a few
      macros for rectangular block sizes under the sbsegment experiment; this
      experiment is not yet functional and should not yet be used.
      
      Change-Id: I71f93b5d2a1596e99a6f01f29c3f0a456694d728
      a3874850
  7. 04 Apr, 2013 1 commit
  8. 27 Mar, 2013 1 commit
    • Yunqing Wang's avatar
      Optimize 32x32 idct function · 21a718d9
      Yunqing Wang authored
      Wrote an SSE2 version of the vp9_short_idct_32x32 function. Compared
      to the C version, the SSE2 version is 5X faster.
      
      Change-Id: I071ab7378358346ab4d9c6e2980f713c3c209864
      21a718d9
  9. 26 Mar, 2013 1 commit
    • Deb Mukherjee's avatar
      Implicit weighted prediction experiment · 23144d23
      Deb Mukherjee authored
      Adds an experiment to use a weighted prediction of two INTER
      predictors, where the weight is one of (1/4, 3/4), (3/8, 5/8),
      (1/2, 1/2), (5/8, 3/8) or (3/4, 1/4), and is chosen implicitly
      based on the consistency of each predictor with the already
      reconstructed pixels above and to the left of the current
      macroblock or superblock.
      
      Currently the weighting is not applied to SPLITMV modes, which
      default to the usual (1/2, 1/2) weighting; however, the code is in
      place, controlled by a macro. The same weighting is used for the Y
      and UV components, with the weight derived from analyzing the Y
      component only.
      
      Results (over compound inter-intra experiment)
      derf: +0.18%
      yt: +0.34%
      hd: +0.49%
      stdhd: +0.23%
      
      The experiment suggests bigger benefit for explicitly signaled weights.
      
      Change-Id: I5438539ff4485c5752874cd1eb078ff14bf5235a
      23144d23
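The weight-selection idea described above reads naturally as a small sketch. This is a hypothetical simplification for illustration (the function names and the exact mismatch-to-weight mapping are invented, not the experiment's actual code); only the five candidate weights come from the commit message:

```python
def pick_implicit_weight(pred0_edge, pred1_edge, recon_edge):
    """Choose the weight applied to predictor 0 from the five candidates,
    based on how well each predictor's border pixels match the already
    reconstructed neighbours (better match -> larger weight)."""
    sad0 = sum(abs(a - b) for a, b in zip(pred0_edge, recon_edge))
    sad1 = sum(abs(a - b) for a, b in zip(pred1_edge, recon_edge))
    weights = [1 / 4, 3 / 8, 1 / 2, 5 / 8, 3 / 4]
    if sad0 + sad1 == 0:
        return 1 / 2                      # equally consistent: even split
    ratio = sad1 / (sad0 + sad1)          # high ratio -> pred0 fits better
    return weights[min(4, int(ratio * 5))]

def weighted_pred(p0, p1, w):
    """Blend the two INTER predictors with weight w on p0."""
    return [round(w * a + (1 - w) * b) for a, b in zip(p0, p1)]
```

No explicit weight is signalled in the bitstream; both encoder and decoder derive it from pixels they both already have, which is what "implicit" means here.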
  10. 21 Mar, 2013 2 commits
    • Yunqing Wang's avatar
      Optimize 16x16 idct10 function · 869d6c05
      Yunqing Wang authored
      Wrote an SSE2 version of the vp9_short_idct10_16x16 function.
      Compared to the C version, the SSE2 version is 2.3X faster.
      
      Change-Id: I314c4f09369648721798321eeed6f58e38857f26
      869d6c05
    • Yunqing Wang's avatar
      Optimize 16x16 idct function · ec310066
      Yunqing Wang authored
      Wrote an SSE2 version of the vp9_short_idct16x16 function. Compared
      to the C version, the SSE2 version is over 2.5X faster.
      
      Change-Id: I38536e2b846427a2cc5c5423aaf305fd0e605d61
      ec310066
  11. 18 Mar, 2013 1 commit
    • Yunqing Wang's avatar
      Optimize 8x8 idct function · 6344c84c
      Yunqing Wang authored
      Wrote SSE2 versions of the vp9_short_idct8x8 and vp9_short_idct10_8x8
      functions. Compared to the C version, the SSE2 versions are 2X
      faster. The decoder test didn't show a noticeable gain, since the
      8x8 idct takes little of the decoding time (less than 1% in my test).
      
      Change-Id: I56313e18cd481700b3b52c4eda5ca204ca6365f3
      6344c84c
  12. 15 Mar, 2013 1 commit
    • Christian Duvivier's avatar
      Faster vp9_short_fdct16x16. · 4418b790
      Christian Duvivier authored
      Scalar path is about 1.5x faster (3.1% overall encoder speedup).
      SSE2 path is about 7.2x faster (7.8% overall encoder speedup).
      
      Change-Id: I06da5ad0cdae2488431eabf002b0d898d66d8289
      4418b790
  13. 13 Mar, 2013 1 commit
    • Yaowu Xu's avatar
      removed reference to "LLM" and "x8" · 00555263
      Yaowu Xu authored
      This commit changes the names of files and functions to remove
      obsolete references to LLM and x8.
      
      Change-Id: I973b20fc1a55149ed68b5408b3874768e6f88516
      00555263
  14. 08 Mar, 2013 1 commit
    • Yunqing Wang's avatar
      Add vp9_idct4_1d_sse2 · 11ca81f8
      Yunqing Wang authored
      Added an SSE2 idct4_1d, which is called by vp9_short_iht4x4. Also
      modified the parameter type passed to the vp9_short_iht functions
      to make them work with the rtcd prototype.
      
      Change-Id: I81ba7cb4db6738f1923383b52a06deb760923ffe
      11ca81f8
  15. 07 Mar, 2013 1 commit
  16. 06 Mar, 2013 1 commit
    • Yunqing Wang's avatar
      Optimize add_residual function · 943c6d71
      Yunqing Wang authored
      Optimized adding the diff to the predictor, which gave a 0.8%
      decoder performance gain.
      
      Change-Id: Ic920f0baa8cbd13a73fa77b7f9da83b58749f0f8
      943c6d71
  17. 05 Mar, 2013 1 commit
    • Ronald S. Bultje's avatar
      Make superblocks independent of macroblock code and data. · 111ca421
      Ronald S. Bultje authored
      Split macroblock and superblock tokenization and detokenization
      functions and coefficient-related data structs so that the bitstream
      layout and related code of superblock coefficients looks less like it's
      a hack to fit macroblocks in superblocks.
      
      In addition, derive the chroma transform size from the luma transform
      size (i.e. always use the same size, as long as it fits the predictor);
      in practice, this means 32x32 and 64x64 superblocks using the 16x16 luma
      transform will now use the 16x16 (instead of the 8x8) chroma transform,
      and 64x64 superblocks using the 32x32 luma transform will now use the
      32x32 (instead of the 16x16) chroma transform.
      
      Lastly, add a trellis optimize function for 32x32 transform blocks.
      
      HD gains about 0.3%, STDHD about 0.15% and derf about 0.1%. There
      are a few negative points here and there that I might want to
      analyze a little closer.
      
      Change-Id: Ibad7c3ddfe1acfc52771dfc27c03e9783e054430
      111ca421
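The chroma transform-size rule above can be sketched as follows. The transform sizes and the 4:2:0 halving of chroma dimensions are real VP9 facts, but the helper itself is a hypothetical simplification, not code from the patch:

```python
TX_SIZES = [4, 8, 16, 32]          # transform edge lengths

def uv_tx_size(luma_tx, sb_size):
    """Chroma uses the same transform size as luma, capped so it still
    fits the chroma block (half the luma dimensions under 4:2:0)."""
    uv_block = sb_size // 2
    return max(t for t in TX_SIZES if t <= min(luma_tx, uv_block))
```

Under this rule, a 32x32 superblock with a 16x16 luma transform gets a 16x16 chroma transform (previously 8x8), and a 64x64 superblock with a 32x32 luma transform gets a 32x32 chroma transform, matching the commit message.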
  18. 04 Mar, 2013 1 commit
  19. 02 Mar, 2013 1 commit
  20. 01 Mar, 2013 1 commit
    • Yunqing Wang's avatar
      Add eob<=10 case in idct32x32 · c550bb3b
      Yunqing Wang authored
      Simplified the idct32x32 calculation for the case where a 32x32
      block has only 10 or fewer non-zero coefficients. This helps
      decoder performance.
      
      Change-Id: If7f8893d27b64a9892b4b2621a37fdf4ac0c2a6d
      c550bb3b
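The shortcut rests on a general property of linear transforms: zero coefficients contribute nothing, so a small eob lets the inverse loop stop early. A 1-D sketch with an orthonormal inverse DCT-II (plain Python for illustration, not the codec's fixed-point 32-point idct):

```python
import math

def idct1d(coeffs, eob=None):
    """Inverse DCT-II, summing only the first `eob` coefficients.
    When the remaining coefficients are zero, this equals the full
    inverse at a fraction of the work -- the idea behind the
    eob <= 10 fast path."""
    N = len(coeffs)
    k_max = N if eob is None else eob
    out = []
    for n in range(N):
        s = coeffs[0] / math.sqrt(N) if k_max > 0 else 0.0
        for k in range(1, k_max):
            s += (coeffs[k] * math.sqrt(2.0 / N)
                  * math.cos(math.pi * (2 * n + 1) * k / (2 * N)))
        out.append(s)
    return out
```

In the 2-D case the saving compounds: with few coefficients, most rows of the 32x32 block are entirely zero and their row transforms can be skipped outright.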
  21. 28 Feb, 2013 4 commits
  22. 27 Feb, 2013 2 commits
    • John Koleszar's avatar
      Remove unused vp9_copy32xn · 7ad8dbe4
      John Koleszar authored
      This function was part of an optimization used in VP8 that required
      caching two macroblocks. It is unused in VP9 and might not survive
      the refactoring to support superblocks, so it is removed for now.
      
      Change-Id: I744e585206ccc1ef9a402665c33863fc9fb46f0d
      7ad8dbe4
    • Yunqing Wang's avatar
      Optimize vp9_dc_only_idct_add_c function · 35bc02c6
      Yunqing Wang authored
      Wrote an SSE2 version of the vp9_dc_only_idct_add_c function. To
      improve performance, clipped the absolute diff values to [0, 255],
      which allowed us to keep the additions/subtractions in 8 bits.
      Testing showed an over 2% decoder performance increase.
      
      Change-Id: Ie1a236d23d207e4ffcd1fc9f3d77462a9c7fe09d
      35bc02c6
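The dc-only path adds one constant to the whole block and clamps to 8 bits, which is what makes byte-wide SIMD arithmetic possible. A hypothetical scalar sketch (the rounding constant is illustrative, not VP9's exact scaling):

```python
def clamp255(v):
    """Clamp to the 8-bit pixel range."""
    return 0 if v < 0 else 255 if v > 255 else v

def dc_only_idct_add(dc_coeff, pred, out, stride, rows, cols):
    """Add a single rounded DC term to every predictor pixel, clamped
    to [0, 255] so the result fits in 8 bits."""
    dc = (dc_coeff + 4) >> 3          # illustrative scale/round of the DC term
    for r in range(rows):
        for c in range(cols):
            i = r * stride + c
            out[i] = clamp255(pred[i] + dc)
```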
  23. 25 Feb, 2013 1 commit
    • Jingning Han's avatar
      clean up forward and inverse hybrid transform · 77a3becf
      Jingning Han authored
      Rebased.
      
      Remove the old matrix-multiplication transform computation. The
      16x16 ADST/DCT can be switched on/off for evaluation by setting
      ACTIVE_HT16 to 300/0 in vp9/common/vp9_blockd.h.
      
      Change-Id: Icab2dbd18538987e1dc4e88c45abfc4cfc6e133f
      77a3becf
  24. 23 Feb, 2013 1 commit
  25. 22 Feb, 2013 1 commit
    • Jingning Han's avatar
      Forward butterfly hybrid transform · babbd5d1
      Jingning Han authored
      This patch includes the 4x4, 8x8, and 16x16 forward butterfly
      ADST/DCT hybrid transforms. The kernel of the 4x4 ADST is
      sin(pi*(2k+1)*(n+1)/(2N+1)). The kernel of the 8x8/16x16 ADST is
      of the form sin(pi*(2k+1)*(2n+1)/(4N)).
      
      Change-Id: I8f1ab3843ce32eb287ab766f92e0611e1c5cb4c1
      babbd5d1
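The 4x4 kernel above is the DST-VII basis; with the right scale its rows are orthonormal, which is what makes it a valid transform. A quick pure-Python check (the scale factor 2/sqrt(2N+1) is the standard DST-VII normalisation, not a constant taken from the codec's fixed-point implementation):

```python
import math

def adst4_basis():
    """Rows b[k][n] = scale * sin(pi*(2k+1)*(n+1)/(2N+1)) with N = 4,
    scaled so the basis is orthonormal."""
    N = 4
    scale = 2.0 / math.sqrt(2 * N + 1)
    return [[scale * math.sin(math.pi * (2 * k + 1) * (n + 1) / (2 * N + 1))
             for n in range(N)] for k in range(N)]
```

Pairwise dot products of the rows give the identity matrix, so forward transform followed by its transpose reconstructs the input exactly.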
  26. 21 Feb, 2013 1 commit
  27. 20 Feb, 2013 1 commit
  28. 19 Feb, 2013 1 commit
    • Jingning Han's avatar
      16x16 butterfly inverse ADST/DCT hybrid transform · cd907b16
      Jingning Han authored
      Rebased.
      
      This patch includes the 16x16 butterfly inverse ADST/DCT hybrid
      transform. It uses the variant ADST kernel
          sin(pi*(2k+1)*(2n+1)/(4N)),
      which allows a butterfly implementation.
      
      The coding gains compared to DCT 16x16 are about 0.1% for both
      derf and std-hd. Notably, in the std-hd set many sequences gain
      about 0.5% and some 0.2%; there are also a few points that lose
      1% to 3%, hence the average comes to about 0.1%.
      
      Change-Id: Ie80ac84cf403390f6e5d282caa58723739e5ec17
      cd907b16
  29. 15 Feb, 2013 1 commit