[FFmpeg-devel] [PATCH 2/2] lavf/vf_scale_amf: AMF scaler/colorspace converter filter implementation

Mark Thompson sw at jkqxz.net
Sun Jun 24 22:08:06 EEST 2018


On 19/06/18 15:34, Alexander Kravchenko wrote:
> Hi Mark.
> Thanks for your review and comments.
> See my comments and questions below.
> 
>> -----Original Message-----
>> From: ffmpeg-devel [mailto:ffmpeg-devel-bounces at ffmpeg.org] On Behalf Of Mark Thompson
>> Sent: Tuesday, June 19, 2018 2:20 AM
>> To: ffmpeg-devel at ffmpeg.org
>> Subject: Re: [FFmpeg-devel] [PATCH 2/2] lavf/vf_scale_amf: AMF scaler/colorspace converter filter implementation
>>
> 
>>
>>> +    { AV_PIX_FMT_NV12,       AMF_SURFACE_NV12 },
>>> +    { AV_PIX_FMT_BGR0,       AMF_SURFACE_BGRA },
>>> +    { AV_PIX_FMT_RGB0,       AMF_SURFACE_RGBA },
>>
>> What is happening to alpha here - should these be RGBA/BGRA?
>>
> Alpha will be ignored in the case of RGB->NV12/YUV conversions.
> In the case of conversion between RGBA variations, the channels will be reordered.
> RGBA/BGRA should probably also be added as input formats.

That sounds correct.  It was only BGR0/RGB0 in the encoder part because that doesn't support the alpha plane at all.

>>> +    int err;
>>> +    int i;
>>> +    static const enum AVPixelFormat input_pix_fmts[] = {
>>> +        AV_PIX_FMT_NV12,
>>> +        AV_PIX_FMT_0RGB,
>>> +        AV_PIX_FMT_BGR0,
>>> +        AV_PIX_FMT_RGB0,
>>> +        AV_PIX_FMT_GRAY8,
>>> +        AV_PIX_FMT_YUV420P,
>>> +        AV_PIX_FMT_YUYV422,
>>> +        AV_PIX_FMT_NONE,
>>> +    };
>>> +    static const enum AVPixelFormat output_pix_fmts_default[] = {
>>> +        AV_PIX_FMT_D3D11,
>>> +        AV_PIX_FMT_DXVA2_VLD,
>>> +        AV_PIX_FMT_NONE,
>>> +    };
>>> +    output_pix_fmts = output_pix_fmts_default;
>>> +
>>> +    // If hw_device_ctx is set to DXVA2, reorder the pixel formats so that DXVA2 is chosen by default.
>>> +    // The order is ignored if hw_frames_ctx is not NULL at the config_output stage.
>>> +    if (avctx->hw_device_ctx) {
>>> +        device_ctx = (AVHWDeviceContext*)avctx->hw_device_ctx->data;
>>> +
>>> +        if (device_ctx->type == AV_HWDEVICE_TYPE_DXVA2){
>>> +            static const enum AVPixelFormat output_pix_fmts_dxva2[] = {
>>> +                AV_PIX_FMT_DXVA2_VLD,
>>> +                AV_PIX_FMT_D3D11,
>>> +                AV_PIX_FMT_NONE,
>>> +            };
>>> +            output_pix_fmts = output_pix_fmts_dxva2;
>>
>> This feels dubious.  Can you explain exactly what you want to happen here?
>>
>> I suspect you might be better exposing only the pixfmt associated with the device if one is provided.  (E.g. if you have software input
>> and a D3D11 device then you don't want to expose DXVA2 output at all.)
>>
> There are two initialization cases here:
> 1) from avctx->hw_device_ctx, available in 'query_output' (the input frame is in host memory; we use the device from the -filter_hw_device option).
> 2) from avctx->inputs[0]->hw_frames_ctx, not available in 'query_output' (the input frame is a hw frame; we use the device context from it) - this has higher priority.
> 
> The frames context has higher priority, but it is not set at this stage (query_formats).
> 
> So the following scenario is possible:
> *) avctx->hw_device_ctx is set to DX9. In this case we can only set the output format to DX9 in 'query_output'.
> *) avctx->inputs[0]->hw_frames_ctx is set to DX11. But we only learn about this in the config_output function, and we cannot change the output format to DX11.

Under what circumstances would a user actually want to do that?  I think a mix of DX9 and DX11 would be a strong indication that something is wrong.

>>> +    AMF_ASSIGN_PROPERTY_BOOL(res, ctx->converter, AMF_VIDEO_CONVERTER_KEEP_ASPECT_RATIO, ctx->keep_aspect_ratio);
>>> +    AMFAV_RETURN_IF_FALSE(avctx, res == AMF_OK, AVERROR(ENOMEM),
>>> +                          "AMFConverter-SetProperty() failed with error %d\n", res);
>>
>> What effect with this have?  The w/h calculation already deals with aspect ratio preservation (640:-1, etc.) in a common way across all
>> scale filters.
> 
> This is required if you want to fit, for example, a square video into a rectangular frame and fill the empty space with a colour.

That's doing something quite different to the usual way of preserving aspect ratio in scale filters, so the documentation probably needs to be clear on how it works.
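
A documentation example could contrast the two behaviours. The option names below follow the patch's field names (keep_aspect_ratio, fill) and are assumptions about the final AVOption spelling; the first form is the standard scale-filter convention:

```shell
# Standard scale-filter way: the output height is computed from the
# input aspect ratio, so no padding is ever added.
ffmpeg -i in.mp4 -vf "scale_amf=w=1280:h=-1" out.mp4

# AMF converter way (assumed option names): fixed output size, the
# picture is letterboxed/pillarboxed and the bars are filled.
ffmpeg -i in.mp4 -vf "scale_amf=w=1280:h=720:keep_aspect_ratio=1:fill=1" out.mp4
```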

>>> +
>>> +    AMF_ASSIGN_PROPERTY_BOOL(res, ctx->converter, AMF_VIDEO_CONVERTER_FILL, ctx->fill);
>>> +    AMFAV_RETURN_IF_FALSE(avctx, res == AMF_OK, AVERROR(ENOMEM),
>>> +                          "AMFConverter-SetProperty() failed with error %d\n", res);
>>
>> Is it valid not to set this?  I'm guessing from the API, but it looks like if this isn't set when you have an output rectangle then the rest of
>> the surface will be left with undefined contents because it's a newly-allocated surface.
>>
> 
> This is an option to fill (or not fill) the empty space when the input stream does not cover the whole output.
> This can happen if the user sets AMF_VIDEO_CONVERTER_KEEP_ASPECT_RATIO and/or AMF_VIDEO_CONVERTER_OUTPUT_RECT.

That sounds like it will never be valid not to set it, since otherwise some of the output surface will contain undefined contents and therefore invoke undefined behaviour later?

>>> +
>>> +    if(ctx->color_profile != AMF_VIDEO_CONVERTER_COLOR_PROFILE_UNKNOWN) {
>>> +        AMF_ASSIGN_PROPERTY_INT64(res, ctx->converter,
>>> +                                  AMF_VIDEO_CONVERTER_COLOR_PROFILE, (amf_int32)ctx->color_profile);
>>
>> What does this profile do?
> This selects the conversion matrix for RGB<->YUV conversions.
> 
>>
>> The input properties should be set from the input AVFrame (see color_range/color_primaries/color_trc/colorspace/chroma_location -
>> you'll need all of those to support JPEG vs. normal video).
>>
>> If it's setting the output properties then you will also need to set the AVFrame fields correctly rather than copying them from the input
>> as you do below (copy_props does this).
>>
> You mean I need an option to select the conversion matrix according to the input frame properties, and if the user wants to change it manually I need to update the output frame properties?
> The frame->colorspace property?

You should start by taking the input properties from the input frame.  You could then add an option to override them with manual values if you want (e.g. see how vf_colorspace does it).

The output colour properties should probably default to matching the input ones (suitably changed if you have YUV<->RGB conversion).  If you are able to set different input and output properties then you can also add a manual setting for output so that you can do 601->709 and similar.

Given the set of values you have here (including JPEG), you probably need to look at all of color_range/color_primaries/color_trc/colorspace/chroma_location in the input AVFrame.

>>> +
>>> +    res = ctx->converter->pVtbl->QueryOutput(ctx->converter, &data_out);
>>> +    AMFAV_RETURN_IF_FALSE(avctx, res == AMF_OK, AVERROR(ENOMEM),
>>> +                          "QueryOutput() failed with error %d\n", res);
>>
>> Does this have the expected pipelining effect?  (The input is only submitted here, the operation doesn't need to finish until someone
>> actually reads the output.)
> 
> It puts the processing in the hw queue and does not wait until it has finished. The next consumer of the frame (the encoder, for example) will use it when it is ready.

Ok, good.

>>> +
>>> +    if (data_out) {
>>> +        AMFGuid guid = IID_AMFSurface();
>>> +        data_out->pVtbl->QueryInterface(data_out, &guid, (void**)&surface_out); // query for buffer interface
>>> +        data_out->pVtbl->Release(data_out);
>>> +    }
>>> +
>>> +    out = amf_amfsurface_to_avframe(avctx, surface_out);
>>
>> How many frames is the following component allowed to hold on to?  If arbitrarily many, good.  If it's limited, can this limit be
>> configured?  (See extra_hw_frames.)
>>
> 
> arbitrarily many

Yay!  (So we don't need to mess around with that stuff here :)

Thanks,

- Mark
