- 21 Feb, 2018 1 commit
Sebastien Alaiwan authored
This experiment has been adopted; we can simplify the code by dropping the associated preprocessor conditionals. Change-Id: Ic3438799335c6cd05f170302f49bd47e1f705c24
- 16 Feb, 2018 1 commit
Angie Chiang authored
Change-Id: I103547d5bd4c9537cc0640a10d50f5c3cb6b7771
- 15 Feb, 2018 1 commit
Yaowu Xu authored
The experiment is fully adopted. Change-Id: I6cc80a2acf0c93c13b0e36e6f4a2378fe5ce33c3
- 10 Feb, 2018 1 commit
Johann authored
Change-Id: I86befaf7aa35f3f9b18618db1a27d191c1f7af36
- 17 Jan, 2018 1 commit
Frederic Barbier authored
Change-Id: Ic357ba36525101310fa612916e04b5d46b513d54
- 16 Jan, 2018 1 commit
Frederic Barbier authored
Change-Id: I545f126f6ba724ff4e41294353c4f11a47c6e853
- 11 Jan, 2018 1 commit
Sebastien Alaiwan authored
This experiment has been adopted; we can simplify the code by dropping the associated preprocessor conditionals. Change-Id: I3e08eec385c40360e3934fa4f66f7c671e860517
- 09 Jan, 2018 1 commit
Hui Su authored
Change-Id: I66a542da0c7b8fdb9ac2d4efee73aa62414c10f9
- 29 Dec, 2017 1 commit
Dake He authored
1. Train and initialize CDFs directly. 2. Use Laplace probability estimates in aom_entropy_optimizer to avoid zero probabilities. Change-Id: I878fc0a306cbffe3eb51c5b86d5872459b6705c5
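The Laplace-smoothing step described above can be sketched as follows. This is a minimal illustration of add-one smoothing applied to symbol counts before building a CDF; the function name, the 15-bit range, and the rounding policy are assumptions for the sketch, not the actual aom_entropy_optimizer code.

```python
def counts_to_cdf(counts, precision_bits=15):
    """Turn raw symbol counts into a CDF, using a Laplace (add-one)
    estimate so that no symbol ends up with zero probability."""
    total_range = 1 << precision_bits
    smoothed = [c + 1 for c in counts]  # Laplace: add one to every count
    total = sum(smoothed)
    cdf, cum = [], 0
    for c in smoothed:
        cum += c
        cdf.append(cum * total_range // total)
    cdf[-1] = total_range  # guard against rounding shortfall at the top
    return cdf
```

Even a symbol never seen in training (count 0) gets a non-empty slot, which is the point of the change: a zero probability would make that symbol uncodable.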
- 27 Dec, 2017 2 commits
Yaowu Xu authored
Change-Id: I523d9a89493895eb6a7af1df30a39f36ca9f159f
Frederic Barbier authored
This experiment has been adopted; we can simplify the code by dropping the associated preprocessor conditionals. Change-Id: Idf52f49d953b422f7789247df966d238fc34299b
- 23 Dec, 2017 1 commit
Yue Chen authored
Change-Id: I70ebb6ada7ec4a975a8984a2e1ea2fa51664a786
- 19 Dec, 2017 1 commit
Sebastien Alaiwan authored
This experiment has been abandoned for AV1. Change-Id: Ib3bb07d62f2025ce50dc9bc1b3f9fc22488519a7
- 11 Dec, 2017 1 commit
Linfeng Zhang authored
Change-Id: Ia7d69d8582f8c37ad4b413ccd7e24711b8c3e005
- 02 Dec, 2017 2 commits
Debargha Mukherjee authored
Adds various tables, scan patterns, etc. for 16x64 and 64x16 transforms. Also adds scan tables for the previously missing 4:1 intra transforms, and the missing CDFs for filterintra with tx64x64. Change-Id: I8b16e749741f503f13319e7b7b9685128b723956
Dake He authored
Multisymbol BR coding is simplified as follows: (1) remove the computation of level counts that used a template of size 8; (2) derive the context from a template of size 3; (3) train the lps and eob probabilities; (4) share contexts between TX_16X16 and above. The number of probability values used in BR coding is reduced from 1152 to 378. Change-Id: I0419127e871f9e566c2489aa4b1825c5364aec5a
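The size-3 template context derivation mentioned above can be sketched like this. The neighbor offsets, the clamp value, and the rounded-average rule are illustrative assumptions for the sketch, not the codec's exact tables.

```python
def br_context(levels, row, col, stride, max_ctx=6):
    """Sketch: derive a BR coding context from a 3-neighbor template.
    `levels` is a flat row-major array of already-decoded magnitudes;
    the (right, below, below-right) offsets are an assumption."""
    template = ((0, 1), (1, 0), (1, 1))
    total = 0
    for dr, dc in template:
        total += levels[(row + dr) * stride + (col + dc)]
    # Rounded average of the neighborhood, clamped to a small range.
    return min((total + 1) >> 1, max_ctx)
```

Shrinking the template from 8 taps to 3 is what removes the per-coefficient level-count bookkeeping: the context is now a direct function of three neighbors.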
- 01 Dec, 2017 1 commit
Debargha Mukherjee authored
Change-Id: I496c2a83b227ec88a6cd59a4c227b59eeeb0fc86
- 30 Nov, 2017 1 commit
Debargha Mukherjee authored
The change makes the entropy context for transforms use the same mechanism with and without lv_map. For the non-lv-map case, the context is now based on the larger transform dimension for 2:1 rect transforms. For 4:1 rect transforms, the context is now the average, in both the lv-map and non-lv-map cases. There is also one small fix to level map to get the correct rate when skip is set. BDRATE: lowres, 30 frames, speed 1: -0.15% gain for the non-lv-map case on the baseline. Change-Id: I06a583d33bef68202d72a88e077f8d31cc5e7fe4
- 29 Nov, 2017 1 commit
Frederic Barbier authored
This experiment has been abandoned for AV1. Change-Id: I83fb51a17d67df6713308665d2626c232376d25a
- 20 Nov, 2017 1 commit
Dake He authored
At eob-1, the coefficient must be non-zero. As such, this CL changes the alphabet for base levels at eob-1 from size 4 to size 3. A minor performance improvement is observed. In addition, the changes in 33462 made by Ola Hugosson were also incorporated, now with trained initial probability distributions. Change-Id: Id6b5d0908b5ff186ed88ab0733ce7cc0c4a468d5
- 17 Nov, 2017 1 commit
Ola Hugosson authored
The EOB coefficient cannot be 0, and for that reason it has special base_cdf contexts. Before this commit there were two contexts (DC and AC). This commit adds two additional contexts to separate the AC positions into 3 bands (i<=N/8, i<=N/4, i<=N/2). Change-Id: If088b20fd891920b7ea7fc988d29bf6d86d93bfc
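The context split described above amounts to a small band-selection function. This sketch uses the band edges quoted in the message; the function name and the handling of positions beyond N/2 (folded into the last band here) are assumptions.

```python
def eob_base_ctx(coeff_idx, n):
    """Sketch: pick a base_cdf context for the EOB coefficient.
    Context 0 is DC; AC positions fall into three bands by scan
    index i relative to the number of coefficients N."""
    if coeff_idx == 0:
        return 0           # DC EOB
    if coeff_idx <= n // 8:
        return 1           # early AC band
    if coeff_idx <= n // 4:
        return 2           # middle AC band
    return 3               # i <= N/2 band (and beyond, assumed here)
```

Splitting AC by position lets the model learn that an EOB landing early in the scan implies different level statistics than one landing late.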
- 15 Nov, 2017 2 commits
Frederic Barbier authored
This experiment has been adopted; we can simplify the code by dropping the associated preprocessor conditionals. Change-Id: I6ac62c2825eabcba8f854cfa25c84638d9a73872
Debargha Mukherjee authored
Removes the previous experiment and reuses its name for a simpler experiment that only enables 4:1 transforms for 4:1 partitions when ext_partition_types is on, i.e. what was previously enabled with the USE_RECT_TX_EXT macro. Change-Id: Iccc35744bd292abf3c187da6f23b787692d50296
- 14 Nov, 2017 3 commits
Ola Hugosson authored
Replaces the br_cdf and lps_cdf with a new 4-state br_cdf symbol. The br symbol indicates whether the level is k, k+1, k+2, or >k+2; in the latter case, another br symbol is read. Up to 4 br symbols are read, reaching level 14 at most; levels greater than 14 are golomb coded. The adapted symbol count is reduced further by this commit: e.g. for the I-frame of ducks_take_off at cq=12, the number of adapted symbols drops from 4.27M to 3.85M, about a 10% reduction. Gains seem about neutral on a limited subset. Change-Id: I294234dbd63fb0fa26aef297a371cba80bd67383
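The BR loop described above can be sketched as a decode-side routine. The reader callbacks are stand-ins for the entropy decoder, and all names are illustrative; the structure (at most 4 symbols covering levels 3..14, then a Golomb suffix) follows the message.

```python
def make_reader(symbols):
    """Tiny stand-in for the entropy decoder: returns queued symbols."""
    it = iter(symbols)
    return lambda: next(it)

def decode_br_level(read_br_symbol, read_golomb_suffix):
    """Sketch: each 4-state br symbol says whether the level is k, k+1,
    k+2, or beyond. At most 4 symbols are read, covering levels 3..14;
    anything larger falls through to a Golomb-coded suffix."""
    level = 3  # reached only when the base symbol signalled level > 2
    for _ in range(4):
        sym = read_br_symbol()  # 0, 1, 2, or 3 ('greater than k+2')
        level += sym
        if sym < 3:
            return level
    return level + read_golomb_suffix()  # levels above 14
```

Four 4-state symbols replace a longer run of binary decisions, which is where the adapted-symbol-count reduction comes from.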
Ola Hugosson authored
This experiment modifies lv_map to make use of multi-symbol coding. The nz_map and coeff_base binary CDFs are replaced with a new multi-symbol CDF of size 4; the new base_cdf indicates, for each coefficient, whether the level is 0, 1, 2, or >2. Two new special contexts are added for the last (EOB) coefficient, which is already known to be non-zero: one for DC EOB and one for AC EOB (this can potentially be refined more). The new symbol is read/written by special bitreader/bitwriter functions, which reduce the probability precision from 15 bits to 9 bits before invoking the arithmetic coding engine. The adapted symbol count is significantly reduced by this experiment: e.g. for the I-frame of ducks_take_off at cq=12, the number of adapted symbols drops from 6.7M to 4.3M. Change-Id: Ifc3927d81ad044fb9b0733f1e54d713cb71a1572
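The 15-bit-to-9-bit precision reduction mentioned above can be sketched as a simple rescale that keeps every symbol codable. The monotonicity fixup and rounding policy here are assumptions for the sketch, not the exact rule in the bitreader/bitwriter functions.

```python
def reduce_cdf_precision(cdf, from_bits=15, to_bits=9):
    """Sketch: scale a high-precision CDF down before handing it to
    the arithmetic coder, forcing strict monotonicity so that no
    symbol's probability collapses to zero in the coarser scale."""
    shift = from_bits - to_bits
    out = []
    for v in cdf:
        scaled = v >> shift
        floor = (out[-1] + 1) if out else 1  # keep every slot non-empty
        out.append(max(scaled, floor))
    out[-1] = 1 << to_bits  # top of the CDF must equal the full range
    return out
```

The coder then only ever works with 9-bit probabilities, which shrinks the multiply width in the arithmetic coding engine.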
Yue Chen authored
Change-Id: I96e5ff72caee8935efb7535afa3a534175bc425c
- 03 Nov, 2017 1 commit
Dake He authored
Per the codec WG call today, turn on Plan B for level map by default. Change-Id: Iae885b38917cf79e4f0b290cc2d73ac28321710f
- 02 Nov, 2017 2 commits
Sebastien Alaiwan authored
This experiment has been adopted; we can simplify the code by dropping the associated preprocessor conditionals. Change-Id: I02ed47186bbc32400ee9bfadda17659d859c0ef7
Dake He authored
This CL simplifies context derivation for the nz and base level flags in level map: (1) reduce SIG_COEF_CONTEXTS from 58 to 42; (2) NZ and base level flags share the same context offsets, derived from a template of size 5 (down from 7). In limited runs, compression performance seems neutral if not better. Encoding time for a key frame on a local linux machine is reduced by about 25% or more. Change-Id: Ibd93b21c839154bc5ae26b993f9e66537cbf5942
- 01 Nov, 2017 1 commit
Jingning Han authored
Further reduce the context model size needed for base levels down to 25 per transform size. Change-Id: I9df4870d2b027cdb1356de0fc4d5bcc22155319e
- 28 Oct, 2017 2 commits
Jingning Han authored
Account for 1-D/2-D transform kernels in the eob modeling. To maintain a smaller context cardinality, place the two 1-D transform kernels in the same category; the difference in directions should be largely covered by the scan order. This and the previous CLs on nz_map context modeling together improve the compression performance of the level-map coefficient coding system by 0.4% for lowres. Change-Id: I8c4f03ca01ce3d248950d04bd1266f445b4227a0
Jingning Han authored
Account for the rectangular transform block sizes in the non-zero map context model. Change-Id: I16cf21a4120c10c213df10950aeb4ef0ea40c477
- 24 Oct, 2017 1 commit
Sebastien Alaiwan authored
This experiment has been adopted. Change-Id: Ife4c18a59791268b7ac0de5a8a08e762a042cae2
- 21 Oct, 2017 1 commit
Yushin Cho authored
Change-Id: Id377c68e30031ad4697ca1ba311487b803a8af4c
- 17 Oct, 2017 1 commit
Sebastien Alaiwan authored
Change-Id: I5bff0a68602a89ce480fec049c8b2c4bce44f6bb
- 16 Oct, 2017 1 commit
Debargha Mukherjee authored
BUG=aomedia:907 Change-Id: Ibe367eb34596e2d34d8c059e083b083e702c225e
- 01 Oct, 2017 1 commit
Debargha Mukherjee authored
Change-Id: Ifa983d83a509cdfad78f6400df7d60c8f5b4f68c
- 29 Sep, 2017 1 commit
Urvang Joshi authored
Coefficient prob model was removed earlier in https://aomedia-review.googlesource.com/c/aom/+/17062, so these were unused and updating them was a wasted effort. Change-Id: Ibd5fd975134de8eb3d363c500cb0f07c4658efd1
- 28 Sep, 2017 1 commit
Angie Chiang authored
Observed a 0.1% gain on lowres without optimize_b before rebase. Change-Id: I0cb5b5e4be2563093efb2f6dfbefdce9b554e910
- 10 Sep, 2017 1 commit
Jingning Han authored
Replace the truncated geometric distribution model with the grouped leaves structure for more efficient probability modeling; each group has its own geometric distribution. This gives us a 0.2% gain on lowres. Change-Id: If5c73dd429bd5183a8aa81042f8f56937b1d8a6a
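The grouped-leaves idea above can be illustrated with a toy cost model: code the group index geometrically, then the offset within the chosen group. The group sizes, the stop probability, and the flat within-group cost are all illustrative assumptions (the commit uses a per-group geometric distribution within each group).

```python
import math

def grouped_level_cost_bits(level, group_sizes=(1, 2, 4, 8), p_stop=0.5):
    """Sketch: estimated bit cost of a level under a grouped model.
    The group index follows a geometric distribution across groups;
    the offset within the group is costed flat at log2(group size)."""
    start = 0
    for g, size in enumerate(group_sizes):
        if level < start + size:
            # g 'continue' events, then one 'stop' event.
            group_bits = -(g * math.log2(1 - p_stop) + math.log2(p_stop))
            return group_bits + math.log2(size)
        start += size
    raise ValueError("level outside the grouped range")
```

Compared with one long truncated geometric tail, grouping lets each band of levels carry its own parameter, so common mid-range levels are not forced onto the same decay curve as rare large ones.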