Commit 79a37242 authored by Alexander Bokov, committed by Hui Su

Introducing a model for pruning the TX size search

Use a neural-network-based binary classifier to predict the first split
decision at the highest level of the TX size RD search tree. Depending
on how confident we are in the prediction, we either keep the full,
unmodified TX size search or use the largest possible TX size and stop
any further search.
Average speed-up: 3-4%
Quality loss (lowres): 0.062%
Quality loss (midres): 0.018%

Change-Id: I64c0317db74cbeddfbdf772147c43e99e275891f
parent 7ac01f8f
@@ -397,6 +397,7 @@ void av1_set_speed_features_framesize_independent(AV1_COMP *cpi) {
   sf->alt_ref_search_fp = 0;
   sf->partition_search_type = SEARCH_PARTITION;
   sf->tx_type_search.prune_mode = PRUNE_2D_ACCURATE;
+  sf->tx_type_search.use_tx_size_pruning = 1;
   sf->tx_type_search.use_skip_flag_prediction = 1;
   sf->tx_type_search.fast_intra_tx_type_search = 0;
   sf->tx_type_search.fast_inter_tx_type_search = 0;
@@ -207,6 +207,12 @@ typedef struct {
   // Use a skip flag prediction model to detect blocks with skip = 1 early
   // and avoid doing full TX type search for such blocks.
   int use_skip_flag_prediction;
+  // Use a model to predict TX block split decisions at the highest level
+  // of the TX partition tree and apply adaptive pruning based on that to
+  // speed up RD search (currently works only when prune_mode is
+  // PRUNE_2D_ACCURATE or PRUNE_2D_FAST).
+  int use_tx_size_pruning;
 typedef enum {