[FFmpeg-devel] Added HW H.264 and HEVC encoding for AMD GPUs based on AMF SDK

Mironov, Mikhail Mikhail.Mironov at amd.com
Tue Nov 14 01:00:07 EET 2017


> > +    res = ctx->factory->pVtbl->CreateContext(ctx->factory, &ctx->context);
> > +    AMF_RETURN_IF_FALSE(ctx, res == AMF_OK, AVERROR_UNKNOWN, "CreateContext() failed with error %d\n", res);
> > +    // try to reuse existing DX device
> > +    if (avctx->hw_frames_ctx) {
> > +        AVHWFramesContext *device_ctx = (AVHWFramesContext*)avctx->hw_frames_ctx->data;
> > +        if (device_ctx->device_ctx->type == AV_HWDEVICE_TYPE_D3D11VA){
> > +            if (amf_av_to_amf_format(device_ctx->sw_format) == AMF_SURFACE_UNKNOWN) {
> 
> This test is inverted.
> 
> Have you actually tested this path?  Even with that test fixed, I'm unable to
> pass the following initialisation test with an AMD D3D11 device.
> 

Yes, the condition should be inverted. To test this path I had to add
"-hwaccel d3d11va -hwaccel_output_format d3d11" to the command line.

> > +
> > +    // Dynamic
> > +    /// Rate Control Method
> > +    { "rc",             "Rate Control Method",
> OFFSET(rate_control_mode),  AV_OPT_TYPE_INT,   { .i64 =
> AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_PEAK_CONSTRAINED_VB
> R    }, AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_CONSTANT_QP,
> AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_LATENCY_CONSTRAINED
> _VBR, VE, "rc" },
> > +    { "cqp",            "Constant Quantization Parameter",      0,
> AV_OPT_TYPE_CONST, { .i64 =
> AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_CONSTANT_QP             },
> 0, 0, VE, "rc" },
> > +    { "cbr",            "Constant Bitrate",                     0,
> AV_OPT_TYPE_CONST, { .i64 =
> AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_CBR                     }, 0, 0,
> VE, "rc" },
> > +    { "vbr_peak",       "Peak Contrained Variable Bitrate",     0,
> AV_OPT_TYPE_CONST, { .i64 =
> AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_PEAK_CONSTRAINED_VB
> R    }, 0, 0, VE, "rc" },
> > +    { "vbr_latency",    "Latency Constrained Variable Bitrate", 0,
> AV_OPT_TYPE_CONST, { .i64 =
> AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_LATENCY_CONSTRAINED
> _VBR }, 0, 0, VE, "rc" },
> 
> I think the default for this option needs to be decided dynamically.  Just
> setting "-b:v" is a not-unreasonable thing to do, and currently the choice of
> PEAK_CONSTRAINED_VBR makes it then complain that maxrate isn't set.
> Similarly, if the only setting is some constant-quality option (-q/-
> global_quality, or your private ones below), it ignores that and uses the
> default 2Mbps instead.
> 
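
(For illustration only, one way to pick the rate-control default dynamically along
the lines suggested; the -1 sentinel and the exact precedence in this sketch are
assumptions, not necessarily what the updated patch does.)

    // hypothetical sketch: derive a rate control default from the other options
    if (ctx->rate_control_mode == -1) {   // -1 assumed to mean "not set by the user"
        if (ctx->qp_i != -1 || ctx->qp_p != -1 || ctx->qp_b != -1 || avctx->global_quality > 0) {
            ctx->rate_control_mode = AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_CONSTANT_QP;
        } else if (avctx->rc_max_rate > 0) {
            ctx->rate_control_mode = AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_PEAK_CONSTRAINED_VBR;
        } else {
            ctx->rate_control_mode = AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_CBR;
        }
    }
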
> > +    /// Enforce HRD, Filler Data, VBAQ, Frame Skipping
> > +    { "enforce_hrd",    "Enforce HRD",                          OFFSET(enforce_hrd),
> AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
> 
> Does this option work?  I don't seem to be able to push it into generating
> HRD information with any combination of options.
> 

Fixed.

> > +    { "filler_data",    "Filler Data Enable",                   OFFSET(filler_data),
> AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
> > +    { "vbaq",           "Enable VBAQ",                          OFFSET(enable_vbaq),
> AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
> > +    { "frame_skipping", "Rate Control Based Frame Skip",
> OFFSET(skip_frame),         AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
> > +
> > +    /// QP Values
> > +    { "qp_i",           "Quantization Parameter for I-Frame",   OFFSET(qp_i),
> AV_OPT_TYPE_INT, { .i64 = -1 }, -1, 51, VE },
> > +    { "qp_p",           "Quantization Parameter for P-Frame",   OFFSET(qp_p),
> AV_OPT_TYPE_INT, { .i64 = -1 }, -1, 51, VE },
> > +    { "qp_b",           "Quantization Parameter for B-Frame",   OFFSET(qp_b),
> AV_OPT_TYPE_INT, { .i64 = -1 }, -1, 51, VE },
> > +
> > +    /// Pre-Pass, Pre-Analysis, Two-Pass
> > +    { "preanalysis",    "Pre-Analysis Mode",                    OFFSET(preanalysis),
> AV_OPT_TYPE_BOOL,{ .i64 = 0 }, 0, 1, VE, NULL },
> > +
> > +    /// Maximum Access Unit Size
> > +    { "max_au_size",    "Maximum Access Unit Size for rate control (in bits)",
> OFFSET(max_au_size),        AV_OPT_TYPE_INT, { .i64 = 0 }, 0, INT_MAX, VE },
> 
> Can you explain more about what this option does?  I don't seem to be able
> to get it to do anything - e.g. setting -max_au_size 80000 with 30fps CBR 1M
> (which should be easily achievable) still makes packets of more than 80000
> bits.
> 

It means the maximum frame size in bits, and it should be used together
with enforce_hrd enabled. I tested it, and it works after the related fix for
enforce_hrd. I have added handling for this dependency.
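
Roughly, the dependency handling looks like this sketch (field names as in the
option table above; the exact message and placement in the updated patch may differ):

    // sketch: max_au_size only takes effect when HRD enforcement is enabled
    if (ctx->max_au_size != 0 && !ctx->enforce_hrd) {
        av_log(avctx, AV_LOG_ERROR,
               "max_au_size requires enforce_hrd to be enabled\n");
        return AVERROR(EINVAL);
    }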

> 
> And some thoughts on the stream it makes:
> 
> "ffmpeg_g -report -y -f lavfi -i testsrc -an -c:v h264_amf -bsf:v trace_headers -
> frames:v 1000 out.mp4"
> 
> [AVBSFContext @ 000000000049b9c0] Sequence Parameter Set
> [AVBSFContext @ 000000000049b9c0] 40          max_num_ref_frames
> 00101 = 4
> [AVBSFContext @ 000000000049b9c0] 206         max_dec_frame_buffering
> 00101 = 4
> 
> Where did 4 come from?  It never uses more than 1 reference in the stream.

According to the codec team, this field is filled in by the hardware and represents
how many frames can be stored in the DPB. In reality the HW encoder will reference
only one frame at a time.
 
> 
> [AVBSFContext @ 000000000049b9c0] Access Unit Delimiter
> 
> It seems to put AUDs in every packet by default.  Is there a way to turn that
> off?  (It messes with sending over RTP by forcing a useless extra packet
> because they can't be combined with fragmentation units.)
> 

I've added an option for this, with handling in the send() code. The property
existed in the AMF API, but it is a per-frame property and it was missing from
the initial integration.
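
As an illustration only (the helper macro, option field and exact property usage
below are assumptions about the AMF API, not a quote from the updated patch), the
per-frame handling in the send path would look roughly like:

    // sketch: request an AUD for this frame before submitting the surface
    if (ctx->aud) {
        AMF_ASSIGN_PROPERTY_BOOL(res, surface, AMF_VIDEO_ENCODER_INSERT_AUD, 1);
    }
    res = ctx->encoder->pVtbl->SubmitInput(ctx->encoder, (AMFData*)surface);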

> 
> Thanks,
> 
> - Mark

I fixed and tested the rest of the problems.

Thanks,
Mikhail

