[FFmpeg-devel] [PATCH 2/3] libavcodec: v4l2m2m: output AVDRMFrameDescriptor

Mark Thompson sw at jkqxz.net
Wed May 9 17:11:46 EEST 2018


On 09/05/18 08:57, Jorge Ramirez-Ortiz wrote:
> On 05/09/2018 01:33 AM, Mark Thompson wrote:
>> diff --git a/libavcodec/v4l2_m2m_dec.c b/libavcodec/v4l2_m2m_dec.c
>> index ed5193ecc1..2b33badb08 100644
>> --- a/libavcodec/v4l2_m2m_dec.c
>> +++ b/libavcodec/v4l2_m2m_dec.c
>> @@ -23,12 +23,18 @@
>>
>>  #include <linux/videodev2.h>
>>  #include <sys/ioctl.h>
>> +
>> +#include "libavutil/hwcontext.h"
>> +#include "libavutil/hwcontext_drm.h"
>>  #include "libavutil/pixfmt.h"
>>  #include "libavutil/pixdesc.h"
>>  #include "libavutil/opt.h"
>>  #include "libavcodec/avcodec.h"
>>  #include "libavcodec/decode.h"
>>
>> +#include "libavcodec/hwaccel.h"
>> +#include "libavcodec/internal.h"
>> +
>>  #include "v4l2_context.h"
>>  #include "v4l2_m2m.h"
>>  #include "v4l2_fmt.h"
>> @@ -183,6 +189,15 @@ static av_cold int v4l2_decode_init(AVCodecContext *avctx)
>>      capture->av_codec_id = AV_CODEC_ID_RAWVIDEO;
>>      capture->av_pix_fmt = avctx->pix_fmt;
>>
>> +    /* the client requests the codec to generate DRM frames:
>> +     *   - data[0] will therefore point to the returned AVDRMFrameDescriptor
>> +     *       check the ff_v4l2_buffer_to_avframe conversion function.
>> +     *   - the DRM frame format is passed in the DRM frame descriptor layer.
>> +     *       check the v4l2_get_drm_frame function.
>> +     */
>> +    if (avctx->pix_fmt == AV_PIX_FMT_DRM_PRIME)
>> +        s->output_drm = 1;
>> +
>>      ret = ff_v4l2_m2m_codec_init(avctx);
>>      if (ret) {
>>          av_log(avctx, AV_LOG_ERROR, "can't configure decoder\n");
>> @@ -202,6 +217,11 @@ static const AVOption options[] = {
>>      { NULL},
>>  };
> 
> As a follow-up to your comment on pixel format negotiation (AVCodecContext.get_format), note that this is a tentative request from the user to select a pixel format.
> The actual pixel format negotiation, where the decoder will select the pixel format, will happen later during v4l2_try_start.

Indeed.  get_format() will have to be called during the pixel format negotiation so that the user can pick between whatever the supported software format is (NV12, NV21, YUV420P, P010, whatever) and the DRM-PRIME object hardware format (if supported).

AVCodecContext.pix_fmt is meant to be set by the decoder to indicate which pix_fmt it intends to produce (though even in that role it's highly dubious given threaded decoders and stream changes).  For historical reasons it is also allowed to be set externally (because of libavformat interactions), but it shouldn't be used for configuration.
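
To make that concrete, the caller's side of the negotiation could look something like the untested sketch below (the callback name is made up, not part of this patch): prefer DRM PRIME when the decoder offers it, otherwise take the first software format.

#include <libavcodec/avcodec.h>
#include <libavutil/pixdesc.h>
#include <libavutil/pixfmt.h>

/* Hypothetical user callback: prefer DRM PRIME when the decoder offers it,
 * otherwise fall back to the first software format in the list. */
static enum AVPixelFormat get_format_drm(AVCodecContext *avctx,
                                         const enum AVPixelFormat *pix_fmts)
{
    const enum AVPixelFormat *p;

    for (p = pix_fmts; *p != AV_PIX_FMT_NONE; p++) {
        if (*p == AV_PIX_FMT_DRM_PRIME)
            return *p;
    }

    /* No DRM PRIME offered: pick the first software format (NV12, YUV420P, ...). */
    for (p = pix_fmts; *p != AV_PIX_FMT_NONE; p++) {
        if (!(av_pix_fmt_desc_get(*p)->flags & AV_PIX_FMT_FLAG_HWACCEL))
            return *p;
    }

    return AV_PIX_FMT_NONE;
}

The callback would be installed with avctx->get_format = get_format_drm before avcodec_open2(), rather than poking avctx->pix_fmt directly.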

> This change enables the v4l2m2m decoder to output either dmabuf descriptors (to be consumed by a DRM application) or ordinary video frames (to be consumed by SDL, for instance).
> As an example, these changes have been tested with ffplay (SDL-based display) and a simple DRM application [1].
> 
> Lukas tested with other tools.
> 
> [1]https://github.com/baylibre/ffmpeg-drm
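
On the consumer side, getting at the dmabufs is just a cast of data[0]; roughly the untested sketch below (display hand-off omitted, function name made up):

#include <libavutil/frame.h>
#include <libavutil/hwcontext_drm.h>

/* With pix_fmt == AV_PIX_FMT_DRM_PRIME, frame->data[0] points to an
 * AVDRMFrameDescriptor describing the dmabuf object(s) and plane layout. */
static void walk_drm_frame(const AVFrame *frame)
{
    const AVDRMFrameDescriptor *desc =
        (const AVDRMFrameDescriptor *)frame->data[0];
    int i, j;

    for (i = 0; i < desc->nb_layers; i++) {
        const AVDRMLayerDescriptor *layer = &desc->layers[i];
        /* layer->format is a DRM fourcc, e.g. DRM_FORMAT_NV12. */
        for (j = 0; j < layer->nb_planes; j++) {
            const AVDRMPlaneDescriptor *plane = &layer->planes[j];
            const AVDRMObjectDescriptor *obj =
                &desc->objects[plane->object_index];
            /* obj->fd is the dmabuf; plane->offset and plane->pitch locate
             * the plane within it, which is what a display path needs. */
            (void)obj;
        }
    }
}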

We should make this usable in the ffmpeg application too.  The DRM object format already works fine in ffmpeg with the Rockchip decoder (consumed by the hwmap/hwdownload filters, or by mapping to OpenCL), but that doesn't need the format selection part.  (There are also kmsgrab and VAAPI, but those aren't making DRM PRIME frames directly at a decoder.)  I believe this should be straightforward with a small modification to get_format() in ffmpeg.c to accept AV_CODEC_HW_CONFIG_METHOD_INTERNAL; I can look at this once we have a codec which needs it.
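
Roughly the shape I have in mind there, as an untested sketch only (helper name made up, and the hook-up into ffmpeg.c's get_format() omitted): accept a hardware pix_fmt without any device/frames context setup when the decoder declares a matching hw config with the INTERNAL method.

#include <libavcodec/avcodec.h>

/* Does the decoder itself implement this hardware format internally, i.e.
 * without needing hw_device_ctx/hw_frames_ctx from the caller? */
static int hw_format_is_internal(const AVCodecContext *avctx,
                                 enum AVPixelFormat fmt)
{
    int i;

    for (i = 0;; i++) {
        const AVCodecHWConfig *config = avcodec_get_hw_config(avctx->codec, i);
        if (!config)
            return 0;
        if (config->pix_fmt == fmt &&
            (config->methods & AV_CODEC_HW_CONFIG_METHOD_INTERNAL))
            return 1;
    }
}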

- Mark

