[Ffmpeg-devel] New/extended filter API discussion

Michael Niedermayer michaelni
Sat Jan 6 00:12:59 CET 2007


On Fri, Jan 05, 2007 at 10:26:06PM +0100, Luca Abeni wrote:
> Hi Michael,
> On Fri, 2007-01-05 at 20:52 +0100, Michael Niedermayer wrote:
> [...]
> > to repeat again some goals of a video filter API
> > * well documented (see mplayer for how not to do it)
> > * writing a filter should not require complete knowledge of all
> >   (undocumented) internals of the video filter system
> > * direct rendering (using a buffer provided by the next filter)
> While trying to put some order in my ideas about video filters, I am
> thinking about the problem of allocating the frames' buffers.
> My initial idea was that the buffer must always be provided by the next
> filter in the graph. So, the typical "push" filter would have done
> something like
> next->get_buffer()
> /* Do something, and render an image in the buffer */
> next->push_buffer()
> /* ... */
> next->release_buffer()		/* When the frame is not needed anymore */
> But then I realized that some filters can be more efficient (require
> less copies) if they allocate the buffer by themselves (I am thinking
> about a crop filter, for example).
> So, I am thinking about adding a capability (let's say
> FILTER_CAP_PROVIDE_BUFFER) to the filter that means "I can be more
> efficient if I allocate the frame's buffer by myself, instead of getting
> them from the next filter". When the filter graph is built, based on the
> capabilities of each filter it is decided if a filter gets the buffer
> from the next one, or allocates it by itself. This is notified by
> setting a flag (let's say FILTER_FLAG_PROVIDE_BUFFER) in the filter
> context.
> So, some filters in the graph call next->get_buffer(), and some others
> do not.
> Example:
> Decoder ---> Fcrop ---> Output
> Let's assume (as an example) that the decoder does not do direct
> rendering.
> The decoder and the crop filter have FILTER_CAP_PROVIDE_BUFFER set, so
> when the filter graph is built the decoder context and the crop filter
> context are created with FILTER_FLAG_PROVIDE_BUFFER set.
> The decoder allocates a buffer, decodes a video frame in it, and then
> calls Fcrop->push_buffer(), which can change the buffer pointers in the
> AVFrame (without doing any copy) and call Output->push_buffer().
> As a second example, let's consider a chain like
> Fcrop ---> Fscale ---> Fpad
> the crop filter context will be created with FILTER_FLAG_PROVIDE_BUFFER
> set, while the scale filter context will not have such flag set. So,
> Fcrop will allocate a buffer (or receive it from the previous filter, as
> in the first example), and call Fscale->push_buffer(). Fscale will call
> Fpad->get_buffer(), scale the image into it, and then call
> Fpad->push_buffer()...
> Does this make sense, or am I confused about buffer allocation? And,
> most important, is this a good idea? If not, I'll throw away my notes
> and restart studying the problem from the beginning :)

it makes sense, and it's somewhat similar to what mplayer does. mplayer has
no FILTER_CAP/FLAG though, but rather passes such flags as arguments to its
"get_buffer", which then returns either a buffer provided by the filter core
or one provided by the next filter, which itself could pull the buffer from the

Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

I have never wished to cater to the crowd; for what I know they do not
approve, and what they approve I do not know. -- Epicurus
