- 14 Jun, 2017 5 commits
-
-
Jingning Han authored
Validate the provided coefficient location with respect to the height and width of the transform block size. Change-Id: Id4f10052141fd914f5aea5ae4202cf35d3e63867
-
Jingning Han authored
This commit makes the level map coding system support the transform coefficients from rectangular transform block sizes. Change-Id: I5cd6c71d12e41938f942adc98cc1e1f286336f12
-
Jingning Han authored
Change-Id: I44bea2cda7c57d82a79a906f52c18e188f1fedea
-
Jingning Han authored
Explicitly use the transform block size to determine the coeff band array. Remove the assumption of a square transform block size. Change-Id: I18e285130465a5eced49304a27a6cb617e297760
-
Jingning Han authored
Map the rectangular transform block size into the bigger square transform block size as the context for the level map probability model. Change-Id: I20cf2b16daec16172855a78a201b670ff0547bf5
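A minimal, self-contained sketch of the mapping described above: a rectangular transform size selects the enclosing square size, and that square size picks the probability model. The enum values and helper name are illustrative, not the libaom definitions.

```c
#include <stdio.h>

typedef enum {
  SK_TX_4X4, SK_TX_8X8, SK_TX_16X16, SK_TX_32X32,
  SK_TX_4X8, SK_TX_8X4, SK_TX_8X16, SK_TX_16X8, SK_TX_16X32, SK_TX_32X16
} SketchTxSize;

static SketchTxSize square_ctx_tx_size(SketchTxSize tx_size) {
  switch (tx_size) {
    case SK_TX_4X8:
    case SK_TX_8X4:   return SK_TX_8X8;    /* 4x8 / 8x4 use the 8x8 model */
    case SK_TX_8X16:
    case SK_TX_16X8:  return SK_TX_16X16;  /* 8x16 / 16x8 use the 16x16 model */
    case SK_TX_16X32:
    case SK_TX_32X16: return SK_TX_32X32;  /* 16x32 / 32x16 use the 32x32 model */
    default:          return tx_size;      /* square sizes map to themselves */
  }
}

int main(void) {
  printf("context size index for 8x16 -> %d (16x16 is %d)\n",
         (int)square_ctx_tx_size(SK_TX_8X16), (int)SK_TX_16X16);
  return 0;
}
```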
-
- 08 Jun, 2017 1 commit
-
-
Angie Chiang authored
Change-Id: I0512000554ef74d397332e5ed135fe20e2c4a37e
-
- 02 Jun, 2017 6 commits
-
-
Angie Chiang authored
This is to facilitate the lv_map experiment. Change-Id: Ife779b172c4b81a9b2b4640464163300996e3969
-
Angie Chiang authored
Change-Id: Ieae4c1a1c932d375b4577c7e42a9764e5f9cd16a
-
Angie Chiang authored
This function will check whether downgrading each coeff by one level reduces the overall rd cost. If so, it will downgrade the coeff, update the affected state, and then move on to the next coeff. In general, we found that updating according to the coding order provides better coding performance. The optimization order is as follows: 1) forward, optimize coeffs == 1 or -1 only; 2) backward, optimize all coeffs. Change-Id: Ic0fd4d44d11878258e09d4fa87a8b48b397a10a8
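A standalone sketch of the two-pass order described above (hypothetical names, not the libaom code): a forward pass that only touches coefficients with magnitude 1, then a backward pass over all coefficients. rd_diff_of_level_down() stands in for the real rate-distortion delta and is a stub here so the example compiles.

```c
#include <stdlib.h>

typedef struct {
  int *qcoeff;  /* quantized coefficients in scan order */
  int eob;      /* index one past the last non-zero coefficient */
} SketchTxb;

/* Stub: RD cost change of lowering qcoeff[idx] by one level.
 * Negative means the change pays off. */
static int rd_diff_of_level_down(const SketchTxb *txb, int idx) {
  (void)txb; (void)idx;
  return -1;  /* illustration only */
}

static void greedy_optimize_sketch(SketchTxb *txb) {
  /* Pass 1: forward over the scan order, +1/-1 coefficients only. */
  for (int i = 0; i < txb->eob; ++i) {
    if (abs(txb->qcoeff[i]) == 1 && rd_diff_of_level_down(txb, i) < 0)
      txb->qcoeff[i] = 0;  /* lowering a +/-1 by one level zeroes it */
  }
  /* Pass 2: backward over the scan order, all remaining coefficients. */
  for (int i = txb->eob - 1; i >= 0; --i) {
    if (txb->qcoeff[i] != 0 && rd_diff_of_level_down(txb, i) < 0)
      txb->qcoeff[i] += (txb->qcoeff[i] > 0) ? -1 : 1;
    /* the real code also updates the cached counts/magnitudes here */
  }
}

int main(void) {
  int coeffs[] = { 5, 3, 1, 0, -1, 2 };
  SketchTxb txb = { coeffs, 6 };
  greedy_optimize_sketch(&txb);
  return 0;
}
```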
-
Angie Chiang authored
This function will update the txb_cache and txb_info entries affected by downgrading the coeff by one level. Change-Id: I57f9377eb7fb94b4244e677704b33c5eece83133
-
Angie Chiang authored
Change-Id: I82e12e312e6685c3801b243196af2570d3793aac
-
Angie Chiang authored
This function will be applied to the last non-zero coeff to calculate the cost difference including the eob change. Change-Id: I471aa74600c41fd371447ec121d113c79bd767b8
-
- 01 Jun, 2017 4 commits
-
-
Angie Chiang authored
This function computes the overall (i.e., its own and its neighbors') cost difference caused by downgrading a coefficient by one level at a specific location. Change-Id: I1b7b6acfe06ed06b9a2ff48b5bb11527646d1aa8
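A sketch of the decomposition described above (all names hypothetical): the total cost change of lowering one coefficient is its own cost change plus the cost change of every neighbor whose level-map context depends on this position. The helpers and the neighbor template are stubs so the snippet stays self-contained; the real context model may use different positions.

```c
#include <stdio.h>

static int self_cost_diff(int row, int col) { (void)row; (void)col; return -2; }
static int neighbor_cost_diff(int row, int col) { (void)row; (void)col; return 1; }

static int overall_cost_diff(int row, int col, int rows, int cols) {
  /* illustrative neighbor template */
  static const int offsets[][2] = { { 0, 1 }, { 1, 0 }, { 1, 1 } };
  int diff = self_cost_diff(row, col);  /* this coefficient's own change */
  for (int i = 0; i < 3; ++i) {
    const int r = row + offsets[i][0], c = col + offsets[i][1];
    if (r < rows && c < cols) diff += neighbor_cost_diff(r, c);
  }
  return diff;
}

int main(void) {
  printf("overall cost diff at (2, 3): %d\n", overall_cost_diff(2, 3, 8, 8));
  return 0;
}
```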
-
Angie Chiang authored
try_self_level_down() computes the cost difference when the coeff is downgraded by one level. get_level_prob() computes the probability of level_map coding at a specific position and level. Change-Id: Iaa9d40477aaf798993c2d5d26341551db665902b
-
Angie Chiang authored
This function pre-generates the counts/magnitudes of each level map so that we don't have to re-calculate them while doing the optimization. Change-Id: Ifdfc89522cf2f2b9f3734d451324081f42b47cb0
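A self-contained sketch of the caching idea above (not the libaom data layout): for every position, pre-compute the count of significant neighbors and the sum of their magnitudes once, so the optimization loop can read them instead of rescanning the block. The neighbor template is illustrative.

```c
#include <stdlib.h>

#define MAX_TXB_DIM 32  /* sketch assumes rows, cols <= MAX_TXB_DIM */

typedef struct {
  int nz_count[MAX_TXB_DIM * MAX_TXB_DIM];  /* # of non-zero neighbors */
  int mag_sum[MAX_TXB_DIM * MAX_TXB_DIM];   /* sum of neighbor magnitudes */
} SketchTxbCache;

static void fill_txb_cache(const int *qcoeff, int rows, int cols,
                           SketchTxbCache *cache) {
  static const int offsets[][2] = { { 0, 1 }, { 1, 0 }, { 1, 1 } };
  for (int r = 0; r < rows; ++r) {
    for (int c = 0; c < cols; ++c) {
      int count = 0, mag = 0;
      for (int i = 0; i < 3; ++i) {
        const int nr = r + offsets[i][0], nc = c + offsets[i][1];
        if (nr < rows && nc < cols) {
          const int level = abs(qcoeff[nr * cols + nc]);
          count += (level != 0);
          mag += level;
        }
      }
      cache->nz_count[r * cols + c] = count;
      cache->mag_sum[r * cols + c] = mag;
    }
  }
}
```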
-
Angie Chiang authored
Change-Id: I085f2bc706fde41afbee5ff48b56acc095f804c2
-
- 21 May, 2017 2 commits
-
-
Timothy B. Terriberry authored
reset_skip_context() was always clearing the entropy contexts for all three color planes, using a block size that corresponded with the luma plane. However, when chroma_2x2 is disabled, then for sub-8x8 luma block sizes, the corresponding chroma block size is always 4x4, and the skip flag only affects the chroma blocks corresponding to the upper-left luma block. This patch makes reset_skip_context() reset the contexts that actually correspond to the chroma blocks that are skipped (if any).

It also moves reset_skip_context() to av1_reset_skip_context() in blockd.c, because blockd.h gets included before onyx_int.h, which declares the required is_chroma_reference() function. reset_skip_context() was too large and used in too many places to be a reasonable candidate for inlining, anyway.

AWCY results on objective-1-fast: cb4x4-fix-base@2017-05-11T06:26:50.159Z -> cb4x4-fix-reset_skip@2017-05-11T06:28:45.482Z

PSNR   | PSNR Cb | PSNR Cr | PSNR HVS | SSIM   | MS SSIM | CIEDE 2000
0.0301 | 0.1068  | 0.1463  | 0.0359   | 0.0260 | 0.0347  | 0.0479

A regression (near the noise range), but without this fix, the line buffer size required by the entropy contexts will be doubled.

Change-Id: I12fa6e60d9c1c7c85927742775a346ea22b3193f
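A simplified, self-contained sketch of the fix described above (the struct, helper names, and parameters are stand-ins, not the libaom ones): reset the above/left entropy contexts per plane using the chroma extent actually covered by the skip flag, instead of always using the luma block size for every plane.

```c
#include <string.h>

typedef struct {
  int subsampling_x, subsampling_y;     /* 0 for luma, typically 1 for chroma */
  unsigned char *above_ctx, *left_ctx;  /* entropy context row / column */
} SketchPlane;

static void reset_skip_context_sketch(SketchPlane *planes, int num_planes,
                                      int bw4, int bh4, /* luma size in 4x4 units */
                                      int has_chroma_ref) {
  for (int p = 0; p < num_planes; ++p) {
    SketchPlane *const pd = &planes[p];
    /* Sub-8x8 blocks that are not the upper-left of their group carry no
     * chroma, so their skip flag resets no chroma contexts. */
    if (p > 0 && !has_chroma_ref) continue;
    /* Chroma extent is the luma extent scaled by the subsampling factors,
     * never smaller than one 4x4 unit. */
    int pw = bw4 >> pd->subsampling_x;
    int ph = bh4 >> pd->subsampling_y;
    if (pw < 1) pw = 1;
    if (ph < 1) ph = 1;
    memset(pd->above_ctx, 0, (size_t)pw);
    memset(pd->left_ctx, 0, (size_t)ph);
  }
}
```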
-
Jingning Han authored
Use recursive transform block coding order. Change-Id: I6d4671fe669e8a1897e034973de181078272cbfd
-
- 19 May, 2017 1 commit
-
-
Jingning Han authored
Use a single struct for tokenization and level map coding. Change-Id: Id685992b7db5964ee204c4b0b90379df50c56546
-
- 04 May, 2017 1 commit
-
-
Angie Chiang authored
This will guarantee that av1_optimize_b will be turned off when lossless mode is on. Remove the heuristic lossless check in optimize_b_greedy. Change-Id: I636c776f3f6b632eb03bc57a470ea43aae4fe0f6
-
- 24 Apr, 2017 1 commit
-
-
Yaowu Xu authored
BUG=aomedia:448 Change-Id: Ieff977fca8a5033ddef2871a194870f59301ad8f
-
- 18 Apr, 2017 2 commits
-
-
Angie Chiang authored
Change-Id: I622d499187f3881b274ca6cf3745f51fa0103b18
-
Angie Chiang authored
This will separate the transform kernel selection from the lv_map experiment so that we can evaluate each feature's performance separately. Note that txk_sel is built on top of lv_map. Change-Id: I5bd1ea99be30000efcdc2bcd42de002b78b1c3c8
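One possible way to express the stated dependency at configure/build time; the exact flag names below follow the usual CONFIG_<experiment> convention and are an assumption, not taken from the source.

```c
#if CONFIG_TXK_SEL && !CONFIG_LV_MAP
#error "txk_sel is built on top of lv_map; enable lv_map as well"
#endif
```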
-
- 16 Apr, 2017 2 commits
-
-
Angie Chiang authored
Change-Id: I585999b1709303dee8d1c7bf626b5cd0ef36341c
-
Angie Chiang authored
Change-Id: Ia5e565f910c6d0c0bc6b0dc62f72a5df1346d06e
-
- 15 Apr, 2017 1 commit
-
-
Angie Chiang authored
This is for the lv_map experiment. Change-Id: Ie000f7850efac32ffb46b9a4679cff2814c6246a
-
- 14 Apr, 2017 1 commit
-
-
Angie Chiang authored
Change-Id: I052721017cddd57ff9995e8dd442e4b3436a0b48
-
- 13 Apr, 2017 1 commit
-
-
Angie Chiang authored
This fixes the invalid tx_type error that happened when mb_to_right_edge is negative. The invalid tx_type error causes a bitstream error and then makes the decoder hang in the while loop of read_golomb(). Change-Id: Ide6c3497cdd5b69b20b4b093241ed89ccc1b0f00
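A generic, self-contained illustration of why a corrupted bitstream can hang an Exp-Golomb reader (this is a sketch, not the libaom read_golomb() or its bit reader): the leading-zero loop only stops when it sees a 1 bit, so a reader that keeps returning 0 past the end of the buffer never terminates unless the loop is bounded.

```c
#include <stdint.h>

typedef struct {
  const uint8_t *buf;
  int bit_pos;
  int num_bits;
} SketchBitReader;

static int sketch_read_bit(SketchBitReader *r) {
  if (r->bit_pos >= r->num_bits) return 0;  /* past the end: 0 forever */
  const int bit = (r->buf[r->bit_pos >> 3] >> (7 - (r->bit_pos & 7))) & 1;
  r->bit_pos++;
  return bit;
}

static unsigned sketch_read_golomb(SketchBitReader *r) {
  int length = 0;
  /* With garbage input this loop would spin forever on an unbounded
   * zero run; that is the hang described in the commit message. */
  while (!sketch_read_bit(r)) {
    ++length;
    if (length > 32) return 0;  /* defensive bound added for the sketch */
  }
  unsigned value = 1;
  for (int i = 0; i < length; ++i) value = (value << 1) | sketch_read_bit(r);
  return value - 1;  /* Exp-Golomb: 2^length - 1 + suffix bits */
}
```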
-
- 12 Apr, 2017 2 commits
-
-
Angie Chiang authored
allow_txk_type doesn't contain all the logic for deciding between using a pre-assigned tx_type and doing the tx_type search. Here we use get_tx_type to avoid a redundant tx_type search. Change-Id: I09b6bcc60fbe15f0d78689b22d834f95b62bd99a
-
Angie Chiang authored
Change-Id: Ie388218b2202ee2f63b90c67a059cbfe54fd4a4e
-
- 11 Apr, 2017 2 commits
-
-
Angie Chiang authored
This is part of the tx kernel selection feature. Change-Id: I822e5a46d39c1fd525c911fc2a06e1be041d8ec8
-
Angie Chiang authored
Change-Id: I50493fa9daf2de8859608d57f8d2010842c9eb07
-
- 05 Apr, 2017 2 commits
-
-
Jingning Han authored
This commit integrates the level map coding within cb4x4 framework. Change-Id: Ied9721df0a7ffd21d1d69d68759d91b6c320c179
-
Jingning Han authored
Change-Id: I215c4bed9ba5c7f4fc93533249610217de14ce54
-
- 27 Mar, 2017 2 commits
-
-
Angie Chiang authored
1) Add txb_entropy_ctx into MACROBLOCK_PLANE and PICK_MODE_CONTEXT. 2) Add av1_get_txb_entropy_context() to compute the entropy context. 3) Compute and store the entropy context before av1_xform_quant() returns. Change-Id: Ia2170523af3163b9456f7c6a305c1e77ad2b23be
-
Angie Chiang authored
1) Move the original implementation of av1_cost_coeffs() to cost_coeffs() and let av1_cost_coeffs() become a switch that chooses between the original coeff cost and lv_map's coeff cost. 2) Change get_txb_ctx's naming: use plane_bsize instead of bsize to make the intention clear. 3) Remove the txb context computation from get_txb_ctx. Change-Id: I17e3d39d796e051d1c90f0a0c5d7d0888b9ca292
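A sketch of the dispatch described in 1) above (signatures are simplified placeholders, not the real libaom prototypes): the public entry point only selects between the original coeff cost and the lv_map coeff cost; in the real code the choice presumably keys off the lv_map experiment flag rather than a runtime argument.

```c
static int cost_coeffs_sketch(const void *ctx) { (void)ctx; return 0; }
static int lv_map_cost_coeffs_sketch(const void *ctx) { (void)ctx; return 0; }

/* Public entry point: pure dispatch, no cost logic of its own. */
static int av1_cost_coeffs_sketch(const void *ctx, int use_lv_map) {
  return use_lv_map ? lv_map_cost_coeffs_sketch(ctx) : cost_coeffs_sketch(ctx);
}
```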
-
- 23 Mar, 2017 2 commits
-
-
Angie Chiang authored
Change-Id: I6bedc3a1a40e551ce4b3989382b7706a589c08f2
-
Angie Chiang authored
Change-Id: Icbf9a2f31eb7f6c385266a0236d2ef266f43e409
-
- 22 Mar, 2017 1 commit
-
-
Angie Chiang authored
The feature is implemented in the following two functions: av1_write_txb_probs and av1_read_txb_probs. Change-Id: I0b646e17ec54d7a10a77a6853439217091455af1
-
- 21 Mar, 2017 1 commit
-
-
Angie Chiang authored
Change-Id: I497221e91c576bc684ee65bcdbab1469b8821fe1
-