[Ffmpeg-devel] New/extended filter API discussion

Luca Abeni lucabe72
Fri Jan 5 22:26:06 CET 2007


Hi Michael,

On Fri, 2007-01-05 at 20:52 +0100, Michael Niedermayer wrote:
[...]
> to repeat again some goals of a video filter API
> * well documented (see mplayer for how not to do it)
> * writing a filter should not require complete knowledge of all
>   (undocumented) internals of the video filter system
> * direct rendering (using a buffer provided by the next filter)
While trying to put some order in my ideas about video filters, I have
been thinking about the problem of allocating the frame buffers.
My initial idea was that the buffer must always be provided by the next
filter in the graph. So, the typical "push" filter would have done
something like
next->get_buffer()
/* Do something, and render an image in the buffer */
next->push_buffer()
/* ... */
next->release_buffer()		/* When the frame is not needed anymore */
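The calls above could be fleshed out as a minimal C skeleton. To be clear, everything here (the Filter and Buffer structs, the function-pointer layout, the dummy sink) is a hypothetical sketch of the proposal, not an existing API:

```c
#include <stddef.h>

typedef struct Buffer {
    unsigned char *data;
    int linesize;
} Buffer;

typedef struct Filter {
    struct Filter *next;
    Buffer *(*get_buffer)(struct Filter *f, int w, int h);
    void (*push_buffer)(struct Filter *f, Buffer *buf);
    void (*release_buffer)(struct Filter *f, Buffer *buf);
} Filter;

/* A push-style filter step: obtain a buffer from the next filter,
 * render into it, then push it downstream. */
static void process_frame(Filter *self, int w, int h)
{
    Buffer *buf = self->next->get_buffer(self->next, w, h);
    /* ... render an image into buf->data here ... */
    self->next->push_buffer(self->next, buf);
}

/* Called later, when the frame is not needed anymore. */
static void done_with_frame(Filter *self, Buffer *buf)
{
    self->next->release_buffer(self->next, buf);
}

/* A dummy "next" filter, used only to exercise the pattern. */
static int n_get, n_push, n_release;
static Buffer dummy_buf;

static Buffer *sink_get(Filter *f, int w, int h)
{
    (void)f; (void)w; (void)h;
    n_get++;
    return &dummy_buf;
}
static void sink_push(Filter *f, Buffer *b)    { (void)f; (void)b; n_push++; }
static void sink_release(Filter *f, Buffer *b) { (void)f; (void)b; n_release++; }
```

The point of the pattern is that buffer ownership stays with the downstream filter, which is what makes direct rendering possible.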

But then I realized that some filters can be more efficient (require
fewer copies) if they allocate the buffer by themselves (I am thinking
about a crop filter, for example).

So, I am thinking about adding a capability (let's say
FILTER_CAP_PROVIDE_BUFFER) to the filter that means "I can be more
efficient if I allocate the frame's buffer myself, instead of getting
it from the next filter". When the filter graph is built, it is decided,
based on the capabilities of each filter, whether a filter gets the
buffer from the next one or allocates it by itself. This is signalled by
setting a flag (let's say FILTER_FLAG_PROVIDE_BUFFER) in the filter
context.
So, some filters of the graph call next->get_buffer(), and some others
do not.
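The graph-build decision could be as simple as copying the capability bit into the context flag. Again a sketch, with the names as proposed above and the FilterContext struct assumed:

```c
#define FILTER_CAP_PROVIDE_BUFFER  0x1
#define FILTER_FLAG_PROVIDE_BUFFER 0x1

typedef struct FilterContext {
    unsigned caps;   /* fixed capabilities declared by the filter */
    unsigned flags;  /* decided when the filter graph is built */
} FilterContext;

/* Graph-build time: decide whether this filter allocates its own
 * buffers or requests them from the next filter. */
static void decide_buffer_source(FilterContext *ctx)
{
    if (ctx->caps & FILTER_CAP_PROVIDE_BUFFER)
        ctx->flags |= FILTER_FLAG_PROVIDE_BUFFER;
    /* a filter without the flag calls next->get_buffer() instead */
}
```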

Example:
Decoder ---> Fcrop ---> Output
Let's assume (as an example) that the decoder does not do direct
rendering.
The decoder and the crop filter have FILTER_CAP_PROVIDE_BUFFER set, so
when the filter graph is built the decoder context and the crop filter
context are created with FILTER_FLAG_PROVIDE_BUFFER set.
The decoder allocates a buffer, decodes a video frame in it, and then
calls Fcrop->push_buffer(), which can change the buffer pointers in the
AVFrame (without doing any copy) and call Output->push_buffer().
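The zero-copy crop could look like the sketch below: instead of rendering into a new buffer, it just offsets the data pointers of the incoming frame. The Frame struct is a simplified stand-in for AVFrame, and the code assumes a planar 8-bit format with full-resolution chroma (e.g. YUV444) to keep the arithmetic simple:

```c
typedef struct Frame {
    unsigned char *data[3];
    int linesize[3];
    int width, height;
} Frame;

/* Crop a w x h region whose top-left corner is at (x, y), without
 * copying any pixel data: only the pointers and dimensions change. */
static void crop_in_place(Frame *f, int x, int y, int w, int h)
{
    for (int i = 0; i < 3; i++)
        f->data[i] += y * f->linesize[i] + x;
    f->width  = w;
    f->height = h;
}
```

With subsampled chroma the offsets for the chroma planes would have to be divided by the subsampling factors, but the idea is the same.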

As a second example, let's consider a chain like
Fcrop ---> Fscale ---> Fpad
the crop filter context will be created with FILTER_FLAG_PROVIDE_BUFFER
set, while the scale filter context will not have that flag set. So,
Fcrop will allocate a buffer (or receive it from the previous filter, as
in the first example), and call Fscale->push_buffer(). Fscale will call
Fpad->get_buffer(), scale the image into it, and then call
Fpad->push_buffer()...
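So at run time each filter would pick its buffer source from the flag decided at graph-build time. A sketch of that dispatch (obtain_output_buffer() and the structs around it are hypothetical; the self-allocated case is kept trivial here):

```c
#include <stddef.h>

#define FILTER_FLAG_PROVIDE_BUFFER 0x1

typedef struct Buffer {
    unsigned char *data;
} Buffer;

typedef struct Filter {
    struct Filter *next;
    unsigned flags;
    Buffer *(*get_buffer)(struct Filter *f, int w, int h);
    Buffer own;   /* storage for the self-allocated case, kept trivial */
} Filter;

/* A dummy downstream filter, used only to exercise the helper. */
static Buffer sink_buf;
static Buffer *sink_get(Filter *f, int w, int h)
{
    (void)f; (void)w; (void)h;
    return &sink_buf;
}

/* Pick the buffer source according to the flag set at graph-build time. */
static Buffer *obtain_output_buffer(Filter *f, int w, int h)
{
    if (f->flags & FILTER_FLAG_PROVIDE_BUFFER)
        return &f->own;                         /* e.g. Fcrop above */
    return f->next->get_buffer(f->next, w, h);  /* e.g. Fscale above */
}
```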


Does this make sense, or am I confused about buffer allocation? And,
most importantly, is this a good idea? If not, I'll throw away my notes
and restart studying the problem from the beginning :)



			Thanks,
				Luca




