[FFmpeg-devel] [PATCH] avcodec: Allow user applications to provide non-padded input buffers

wm4 nfxjfg at googlemail.com
Wed Jan 14 17:58:04 CET 2015


On Wed, 14 Jan 2015 17:23:08 +0100
Michael Niedermayer <michaelni at gmx.at> wrote:

> This can be extended easily to skip the buffer growing for codecs which do
> not need it.
> 
> Signed-off-by: Michael Niedermayer <michaelni at gmx.at>
> ---
>  libavcodec/avcodec.h |    8 +++++++-
>  libavcodec/utils.c   |   12 ++++++++++--
>  2 files changed, 17 insertions(+), 3 deletions(-)
> 
> diff --git a/libavcodec/avcodec.h b/libavcodec/avcodec.h
> index 99467bb..ec95194 100644
> --- a/libavcodec/avcodec.h
> +++ b/libavcodec/avcodec.h
> @@ -4128,9 +4128,15 @@ int avcodec_decode_audio4(AVCodecContext *avctx, AVFrame *frame,
>   * Some decoders may support multiple frames in a single AVPacket, such
>   * decoders would then just decode the first frame.
>   *
> - * @warning The input buffer must be FF_INPUT_BUFFER_PADDING_SIZE larger than
> + * @warning If you use non-AVBuffer-based AVPackets, then the input buffer must
> + * be FF_INPUT_BUFFER_PADDING_SIZE larger than
>   * the actual read bytes because some optimized bitstream readers read 32 or 64
>   * bits at once and could read over the end.
> + * If you do use AVBuffer-based AVPackets and the allocated space as indicated
> + * by the AVBuffer does not include FF_INPUT_BUFFER_PADDING_SIZE padding, then
> + * many decoders will reallocate the buffer.
> + * Most AVPackets created by the FFmpeg APIs will already include the needed
> + * padding.
>   *
>   * @warning The end of the input buffer buf should be set to 0 to ensure that
>   * no overreading happens for damaged MPEG streams.
> diff --git a/libavcodec/utils.c b/libavcodec/utils.c
> index 1ec5cae..eaa0431 100644
> --- a/libavcodec/utils.c
> +++ b/libavcodec/utils.c
> @@ -2332,7 +2332,8 @@ int attribute_align_arg avcodec_decode_video2(AVCodecContext *avctx, AVFrame *pi
>      AVCodecInternal *avci = avctx->internal;
>      int ret;
>      // copy to ensure we do not change avpkt
> -    AVPacket tmp = *avpkt;
> +    AVPacket tmp;
> +    int did_split = 0;
>  
>      if (!avctx->codec)
>          return AVERROR(EINVAL);
> @@ -2347,8 +2348,15 @@ int attribute_align_arg avcodec_decode_video2(AVCodecContext *avctx, AVFrame *pi
>  
>      av_frame_unref(picture);
>  
> +    if (avpkt->buf && avpkt->size && avpkt->buf->size - avpkt->size < FF_INPUT_BUFFER_PADDING_SIZE) {
> +        if ((ret = av_grow_packet(avpkt, FF_INPUT_BUFFER_PADDING_SIZE)) < 0)
> +            goto fail;
> +        avpkt->size -= FF_INPUT_BUFFER_PADDING_SIZE;
> +    }
> +    tmp = *avpkt;
> +

This assumes that an AVBuffer is actually always padded. Is that the case?
The user can create custom AVBuffers, with custom allocation and a custom
free function.
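
To make that concrete, here is a minimal sketch (my own code, not from the
patch; the helper name and error handling are made up) of a refcounted
AVBuffer wrapping an exactly-sized allocation, with no
FF_INPUT_BUFFER_PADDING_SIZE tail at all:

    #include <stdlib.h>
    #include <string.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/buffer.h>

    static void custom_free(void *opaque, uint8_t *data)
    {
        free(data); /* custom allocator, custom free function */
    }

    /* Hypothetical helper: builds an AVPacket whose AVBuffer is unpadded. */
    static int make_unpadded_packet(AVPacket *pkt, const uint8_t *src, int size)
    {
        uint8_t *data = malloc(size); /* exactly `size` bytes, nothing extra */
        if (!data)
            return AVERROR(ENOMEM);
        memcpy(data, src, size);

        av_init_packet(pkt);
        pkt->buf = av_buffer_create(data, size, custom_free, NULL, 0);
        if (!pkt->buf) {
            free(data);
            return AVERROR(ENOMEM);
        }
        pkt->data = pkt->buf->data;
        pkt->size = size;
        /* pkt->buf->size - pkt->size == 0 here, so the new check would
         * trigger av_grow_packet() on every such packet. */
        return 0;
    }

And if I read av_buffer_realloc() correctly, growing such a buffer means
copying the data into a new, default-allocated buffer and unreffing the
user's one, which may not be what the caller expects.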

>      if ((avctx->codec->capabilities & CODEC_CAP_DELAY) || avpkt->size || (avctx->active_thread_type & FF_THREAD_FRAME)) {
> -        int did_split = av_packet_split_side_data(&tmp);
> +        did_split = av_packet_split_side_data(&tmp);
>          ret = apply_param_change(avctx, &tmp);
>          if (ret < 0) {
>              av_log(avctx, AV_LOG_ERROR, "Error applying parameter changes.\n");

What's the point of the did_split change?
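
(Guessing at the answer: the new goto fail above can now fire before the
point where did_split used to be declared, so the variable has to be
declared and zero-initialized before any such jump, or the side-data
cleanup at the fail: label would read an uninitialized variable. A
standalone sketch of that scoping pitfall, with made-up names:)

    #include <stdio.h>

    int main(void)
    {
        int fail_early = 1;
        int did_split  = 0; /* hoisted: declared and zeroed before any goto */

        if (fail_early)
            goto fail;      /* with a later, block-local did_split, this
                               jump would bypass its initialization */

        did_split = 1;      /* stands in for av_packet_split_side_data() */

    fail:
        if (did_split)      /* only safe to read here because of the hoist */
            printf("freeing split side data\n");
        return 0;
    }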

