[FFmpeg-devel] [PATCH] libavcodec/utils: Simplify get_buffer compatibility wrapper.

wm4 nfxjfg at googlemail.com
Sun Feb 16 23:57:23 CET 2014

On Sun, 16 Feb 2014 23:43:44 +0100
Reimar Döffinger <Reimar.Doeffinger at gmx.de> wrote:

> On Sun, Feb 16, 2014 at 11:34:23PM +0100, Reimar Döffinger wrote:
> > On Sun, Feb 16, 2014 at 11:17:05PM +0100, wm4 wrote:
> > > The assumption of the AVFrame API is that all memory ranges referenced
> > > are covered by buffer refs. You seem to think that it's ok to have
> > > merely an abstract reference for the entire frame, but apparently this
> > > is not enough. The code you remove is an attempt to account for all
> > > data correctly. Since you don't know whether all the planes are
> > > allocated with a single memory allocation or whether they're separate
> > > allocations, you have to assume the worst case, and use a buffer per
> > > plane.
> > 
> > The problem is the only thing in the documentation hinting at this is:
> > > every single data plane must be contained in one of the buffers
> > 
> > However a single buffer starting at address 0 and going all the way up
> > to the highest addressable address would fulfill this, which is
> > almost what my patch does.

This is probably not legal. Code can assume that it can access the
whole referenced buffer, not just the part where the plane data is.
Look for example at what vf_pad.c does: it tries to use "unused" space
in the buffer's data area to extend the planes without a copy. But I'm
not sure if I read that code right.

Basically, forget about buffer refs as abstract refcounting: they
really refer to memory, and code is free to make use of all memory that
is covered by a reference. (Of course, for hwaccel it's all different.)

I agree that these semantics are a bit weird and confusing.

> > If there are other assumptions this documentation needs to describe them
> > properly instead of it being magic that nobody can use except by
> > copy-pasting code from FFmpeg.


> Also, more specifically this might reveal other ways to solve this that
> does not require so much messy code.
> Also slightly related, can someone explain to me these code snippets which
> I've seen in multiple places (e.g. at the end of av_frame_make_writable in libavutil/frame.c)
> >    *frame = tmp;
> >    if (tmp.data == tmp.extended_data)
> >        frame->extended_data = frame->data;
> Because either the "if" part
> a) is a completely obvious and stupid no-op, but I can't understand how this was missed, in multiple places to top it off
> b) isn't a no-op and I don't understand how I can't see it

It's not a no-op. AVFrame was probably designed for video data, but
then someone came along and thought it was a good idea to reuse it for
audio. Video has at most 4 planes, but audio (ever since "planar audio"
was invented) can have many more planes, one per channel. Since the
plane array is fixed-size, and someone wanted to avoid making every
AVFrame user heap-allocate the data array in the video case,
extended_data was introduced to handle the needs of audio in case there
are more channels than the data array can hold. In the common case
extended_data just points at the struct's own data array, so after the
struct copy *frame = tmp; it still points into the old struct and has
to be re-aimed at the copy's array. The ffmpeg API is fun.
