[FFmpeg-devel] [PATCH] blockdsp: remove high bit depth parameter

Christophe Gisquet christophe.gisquet at gmail.com
Thu Oct 1 15:26:41 CEST 2015


2015-10-01 14:40 GMT+02:00 Ronald S. Bultje <rsbultje at gmail.com>:
> bash-3.2$ grep ff_pixblockdsp_init ../libavcodec/*
> ../libavcodec/asvenc.c:    ff_pixblockdsp_init(&a->pdsp, avctx);
> ../libavcodec/avdct.c:        ff_pixblockdsp_init(&pdsp, avctx);
> ../libavcodec/dnxhdenc.c:    ff_pixblockdsp_init(&ctx->m.pdsp, avctx);
> ../libavcodec/dvenc.c:    ff_pixblockdsp_init(&pdsp, avctx);
> ../libavcodec/mpegvideo_enc.c:    ff_pixblockdsp_init(&s->pdsp, avctx);
>
> bash-3.2$ grep AV_PIX_FMT ../libavcodec/dvenc.c ../libavcodec/dnxhdenc.c ../libavcodec/mpegvideo_enc.c ../libavcodec/asvenc.c
> ../libavcodec/dnxhdenc.c:    case AV_PIX_FMT_YUV422P10:
> ../libavcodec/dnxhdenc.c:        AV_PIX_FMT_YUV422P10,
>
> bash-3.2$ grep clear_block ../libavcodec/dnxhdenc.c
>             ctx->bdsp.clear_block(ctx->blocks[4]);
>             ctx->bdsp.clear_block(ctx->blocks[5]);
>             ctx->bdsp.clear_block(ctx->blocks[6]);
>             ctx->bdsp.clear_block(ctx->blocks[7]);
>
> So this may break 10bit dnxhd encoding.

Well, that's part of the routine I've been testing for a while now,
and it doesn't seem to break. But that's not a very convincing
argument on its own.

But I'm not sure what your greps are meant to show. That dnxhdenc
actually uses a clear_block that depends on the high-bit-depth
parameter? In dnxhdenc, ctx->blocks are still 16-bit DCT coefficient
blocks, as in all of those codecs. 12-bit content and support are
incoming, but even there, coefficients fit in 15 bits (the DC
coefficient does).
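
To put a number on that 15-bit claim, here is the worst case as a
standalone check (assuming the usual orthonormal 8x8 DCT scaling,
where DC is the sample sum divided by 8; the file below is purely
illustrative, not codec code):

#include <assert.h>
#include <stdint.h>

int main(void)
{
    /* 12-bit samples: max value 4095. With orthonormal 8x8 DCT
     * scaling, DC = (sum of 64 samples) / 8, so the worst case is
     * 64 * 4095 / 8 = 32760: 15 bits of magnitude, which still fits
     * in the int16_t coefficient blocks these codecs use. */
    int dc_max = 64 * 4095 / 8;
    assert(dc_max == 32760 && dc_max <= INT16_MAX);
    return 0;
}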

Also, from the patch, you can notice that in the old code:
- clear_block is by default set to clear_block_8_c (and clears 128
bytes of data); the equivalent holds for clear_blocks
- some archs actually don't care about the parameter and always set
clear_block* regardless (MIPS MSA/MMI); see the sketch below
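
As a minimal self-contained sketch of that pattern (simplified, not
the actual libavcodec code; blockdsp_init and the struct here are
illustrative stand-ins):

#include <stdint.h>
#include <string.h>

typedef struct BlockDSPContext {
    void (*clear_block)(int16_t *block);
    void (*clear_blocks)(int16_t *blocks);
} BlockDSPContext;

static void clear_block_8_c(int16_t *block)
{
    memset(block, 0, sizeof(int16_t) * 64);       /* always 128 bytes */
}

static void clear_blocks_8_c(int16_t *blocks)
{
    memset(blocks, 0, sizeof(int16_t) * 6 * 64);  /* 6 blocks of 128 bytes */
}

void blockdsp_init(BlockDSPContext *c, unsigned high_bit_depth)
{
    /* The flag is accepted but never consulted for clear_block*,
     * mirroring how e.g. the MIPS MSA/MMI inits ignore it. */
    (void)high_bit_depth;
    c->clear_block  = clear_block_8_c;
    c->clear_blocks = clear_blocks_8_c;
}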

Furthermore:
$ grep -rn 'clear_block *=' ../libavcodec/
../libavcodec/arm/blockdsp_init_neon.c:34:        c->clear_block  = ff_clear_block_neon;
../libavcodec/blockdsp.c:62:    c->clear_block  = clear_block_8_c;
../libavcodec/mips/blockdsp_init_mips.c:28:    c->clear_block = ff_clear_block_msa;
../libavcodec/mips/blockdsp_init_mips.c:40:    c->clear_block = ff_clear_block_mmi;
../libavcodec/ppc/blockdsp.c:167:        c->clear_block = clear_block_altivec;
../libavcodec/x86/blockdsp_init.c:42:            c->clear_block  = ff_clear_block_mmx;
../libavcodec/x86/blockdsp_init.c:51:            c->clear_block  = ff_clear_block_sse;

I've checked that all these implementations always clear 64*2 bytes,
and there's no clear_block implementation that clears a different
amount. Clearly, if any code expected clear_block to do otherwise,
there would already be failures.

I don't know when that parameter became unused, but it really is
unused: for DSP functions that do depend on the sample size, there is
instead the fill_block_tab array, indexed by pixel byte width. I don't
think anybody uses it for >16-bit samples.

Digressing from this patch a bit, it is even more incorrect that the
clear_block* functions live in a "pixel block" context, when they are
only ever applied to DCT blocks.

They belong more in the IDCT context, which also uses another
high-bit-depth parameter, based solely on raw bits per sample; that is
the value which should actually be used:
$ grep -rn bits_per_raw_sample ../libavcodec/ | grep -i idct
../libavcodec/idctdsp.c:243:    const unsigned high_bit_depth = avctx->bits_per_raw_sample > 8;
../libavcodec/idctdsp.c:261:        if (avctx->bits_per_raw_sample == 10 || avctx->bits_per_raw_sample == 9) {
../libavcodec/idctdsp.c:266:        } else if (avctx->bits_per_raw_sample == 12) {
../libavcodec/mips/idctdsp_init_mips.c:29:        (avctx->bits_per_raw_sample != 10) &&
../libavcodec/mips/idctdsp_init_mips.c:30:        (avctx->bits_per_raw_sample != 12) &&
../libavcodec/mips/idctdsp_init_mips.c:49:        (avctx->bits_per_raw_sample != 10) &&
../libavcodec/mips/idctdsp_init_mips.c:50:        (avctx->bits_per_raw_sample != 12) &&
../libavcodec/xvididct.c:335:    const unsigned high_bit_depth = avctx->bits_per_raw_sample > 8;
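
In other words, the selection has roughly this shape (a hedged
standalone sketch, not the real idctdsp code; CodecCtx and pick_idct
are placeholder names):

typedef struct CodecCtx { int bits_per_raw_sample; } CodecCtx;

static const char *pick_idct(const CodecCtx *avctx)
{
    const unsigned high_bit_depth = avctx->bits_per_raw_sample > 8;

    if (!high_bit_depth)
        return "8-bit default";
    if (avctx->bits_per_raw_sample == 9 || avctx->bits_per_raw_sample == 10)
        return "10-bit simple idct";
    if (avctx->bits_per_raw_sample == 12)
        return "12-bit simple idct";
    return "generic high-bit-depth fallback";
}

Note how the decision rests solely on bits_per_raw_sample, which is
why that is the value clear_block* should key off, if it ever needs to
key off anything.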

At the very worst, if something requires more than 16-bit DCT
coefficients, then another bit-depth parameter is needed, but that's a
different issue.

-- 
Christophe

