[Ffmpeg-devel] New/extended filter API discussion

Alexander Chemeris ipse.ffmpeg
Mon Jan 8 11:53:28 CET 2007


On 1/6/07, Luca Abeni <lucabe72 at email.it> wrote:
> On Sat, 2007-01-06 at 03:57 +0300, Alexander Chemeris wrote:
> > I have some thoughts about video filter API, because I'm thinking about
> > very similar subsystem for sipX media processing library.
> Well, I do not know when (and if :) libavfilter will be ready, but it
> will be designed to be as generic as possible. So maybe we can work
> together on it, and you can use it in sipX.
Yeah, it would be great, but it is hardly possible to use libavfilter directly.
sipX is written in C++, compiles with a wide range of compilers and runs on
many OSes - currently *nix/WinXP/WinCE/Mac OS X/VxWorks are supported, and
this list will grow, I hope. ffmpeg, meanwhile, is tied to gcc and may not
be available on some of these OSes. However, a good libavfilter design will
greatly simplify wrapping it and linking it into sipX and any other project.

> I have some questions and comments, for clarifying my design ideas and
> comparing them with other people who are interested in the design of a
> filter layer.
> > So, when I thought
> > about the benefits and drawbacks of the "push", "pull" and "process" models
> > of operation, I came to the view that the "process" approach is the simplest
> > of the three, while almost as powerful as "push" and "pull". By the "process"
> > model I mean the following -- each filter has a 'process()' function which
> > simply takes input frames, processes them and pushes them into an output
> > buffer. Most work in this approach is done by the flowgraph class itself -
> > it knows how the filters are connected and can take output frames from the
> > preceding filter and pass them to the subsequent filter.
> How does this satisfy the "the number of frames consumed by a filter
> does not have to match the number output ((inverse)telecine, ...)"
> requirement?
There can be a frame buffer between filters, which will cache unprocessed
frames. Moreover, it makes it possible to refer to "past" and "future"
frames without recalculating them.
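To make the idea concrete, here is a minimal sketch of such a "process" model
(all names - Frame, FrameQueue, the process function - are hypothetical, since
no libavfilter API exists yet): a small FIFO sits between filters, so a filter
can consume a different number of frames than it produces, and unprocessed
frames stay cached in the queue.

```c
#include <assert.h>
#include <stddef.h>

#define QUEUE_SIZE 16

typedef struct Frame { int pts; } Frame;

/* FIFO of frame pointers placed between two filters by the flowgraph. */
typedef struct FrameQueue {
    Frame *buf[QUEUE_SIZE];
    int head, count;
} FrameQueue;

static int fq_push(FrameQueue *q, Frame *f)
{
    if (q->count == QUEUE_SIZE)
        return -1;                       /* queue full: caller must drain it */
    q->buf[(q->head + q->count++) % QUEUE_SIZE] = f;
    return 0;
}

static Frame *fq_pop(FrameQueue *q)
{
    Frame *f;
    if (q->count == 0)
        return NULL;                     /* no cached frame yet */
    f = q->buf[q->head];
    q->head = (q->head + 1) % QUEUE_SIZE;
    q->count--;
    return f;
}

/* Example filter: drop every second frame (2 frames in, 1 frame out),
 * the kind of N-in/M-out behaviour (inverse)telecine needs.  The
 * flowgraph calls process() and shuttles frames between the queues. */
static void drop_half_process(FrameQueue *in, FrameQueue *out)
{
    while (in->count >= 2) {
        Frame *keep = fq_pop(in);
        fq_pop(in);                      /* discard the second frame */
        fq_push(out, keep);
    }
}
```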

> > One drawback of the "push" and "pull" models is that they cannot have
> > multiple inputs (for "push") or outputs (for "pull"). Let's consider the
> > "push" model.
> > If a filter graph has two inputs and one of them pushes a frame, the
> > behaviour of the second input is ill-defined -- it could be ignored, or
> > pulled, or something else, but I do not see a good approach here.
> I might be naive here, but I do not see the problem.
> The filter knows what to do with its inputs (maybe it can generate a
> frame as soon as an input frame arrives, or maybe it needs a frame from
> each input before generating an output frame, or... It depends on the
> filtering algorithm).
> Now, let's consider a "Picture-in-Picture" filter Fpip as an example:
> let's assume that it has 2 inputs I1 and I2, and it puts an image coming
> from I2 (maybe rescaled) in the upper right corner of an image coming
> from I1.
> When the first input filter pushes a frame to Fpip, it calls
> Fpip->push_buffer(Fpip, f, 0). Fpip checks if a frame from I2 is already
> available; if yes, it produces an output picture; if not, it just stores
> the received frame f and returns.
> When the second input filter pushes a frame to Fpip, it calls
> Fpip->push_buffer(Fpip, f, 1). Now Fpip knows that a frame from I1 is
> already available, and can render an output frame and push it to the
> next filter.
> The trick is in the third parameter of push_buffer().
I already wrote about this in response to Michael's mail, but:
1) The filter should be aware that frames may come at different
framerates or with gaps (due to network loss, for example).
2) The filter should somehow avoid blocking when one input suddenly
stops producing frames.

There may be more issues, though. I'm sure that all of them can be solved,
but it may not be so easy.
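Luca's Fpip example can be sketched roughly as below (push_buffer and the
struct layout are assumptions for illustration, not an actual libavfilter
API). It also makes the blocking problem from point 2 visible: the stored
frame in pending[] is held forever if the other input stalls.

```c
#include <assert.h>
#include <stddef.h>

typedef struct Frame { int pts; } Frame;

/* Hypothetical push-model picture-in-picture filter with two inputs:
 * input 0 is the main picture, input 1 the inset. */
typedef struct PipFilter {
    Frame *pending[2];           /* one slot per input */
    int frames_out;              /* rendered output count (stub) */
} PipFilter;

static void pip_render(PipFilter *pip, Frame *main_f, Frame *inset_f)
{
    (void)main_f; (void)inset_f; /* real code would blend inset into main
                                  * and push the result downstream */
    pip->frames_out++;
}

/* The "trick" is the third parameter: the index of the input the
 * upstream filter is pushing on.  An output frame is produced only
 * when one frame from each input is available; otherwise the frame
 * is stored -- and held indefinitely if the other input goes silent. */
static void push_buffer(PipFilter *pip, Frame *f, int input)
{
    int other = 1 - input;
    if (pip->pending[other]) {
        pip_render(pip,
                   input == 0 ? f : pip->pending[other],
                   input == 1 ? f : pip->pending[other]);
        pip->pending[other] = NULL;
    } else {
        pip->pending[input] = f; /* wait for the other input */
    }
}
```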

> I think "pull" filter can work in a similar way.
Sure, there is no difference between them at this point.

> > And finally I had an idea for a universal "push-pull-process" model. Let all
> > processing be done in the 'process()' function, but instead of simple input
> > and output buffers it has two functions - 'get_input()' and 'set_output()'.
> > Then, if 'get_input()' can take a frame from an input buffer or ask the
> > preceding filter for a frame, and 'set_output()' behaves similarly, we can
> > switch between the "push", "pull" and "process" models even at runtime.
> Since I am interested in supporting both push and pull, I think this is
> interesting... I have to think a little bit more about this approach.
> Anyway, in my idea the "push" or "pull" behaviour is a global static
> property of the filter chain... I did not think about dynamically
> switching between push and pull.
I did not state this, but I assume that the frame passing method is global for
the filter graph. I meant only that a single filter could be used in a "push"
or "pull" graph without modification, with external magic only.
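That "external magic" could look something like this (all names are
hypothetical): the filter body only ever calls get_input()/set_output()
through function pointers, and the flowgraph decides whether those read a
queue (process/push) or recursively ask the upstream filter (pull) - the
filter itself stays model-agnostic.

```c
#include <assert.h>
#include <stddef.h>

typedef struct Frame { int pts; } Frame;

typedef struct FilterCtx FilterCtx;
struct FilterCtx {
    Frame *(*get_input)(FilterCtx *ctx);          /* installed by flowgraph */
    void   (*set_output)(FilterCtx *ctx, Frame *f);
    void   *opaque;              /* queue, upstream filter, ... */
};

/* The filter itself never knows which model it runs under:
 * a trivial pass-through that drains its input. */
static void passthrough_process(FilterCtx *ctx)
{
    Frame *f;
    while ((f = ctx->get_input(ctx)))
        ctx->set_output(ctx, f);
}

/* One possible wiring for the "process" model: get/set backed by
 * single-slot buffers (opaque points at Frame *slots[2]). */
static Frame *slot_get(FilterCtx *ctx)
{
    Frame **slots = ctx->opaque;
    Frame *f = slots[0];
    slots[0] = NULL;             /* input slot consumed */
    return f;
}

static void slot_set(FilterCtx *ctx, Frame *f)
{
    Frame **slots = ctx->opaque;
    slots[1] = f;                /* output slot filled */
}
```

A pull-mode wiring would only swap in a different get_input() that calls
the upstream filter's process() on demand; passthrough_process() itself
would not change.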

> > Why this runtime switching is needed? Because each model is native for
> > different cases. "Push" is natural when processing frames as they come,
> > e.g. when converting from one format to other with maximum speed. "Pull"
> > is well-suited when frames are requested by rendering side (as in Avisynth).
> If I understand correctly, this is addressed by my proposal about buffer
> allocations (see my previous email). Or maybe I am misunderstanding what
> you are saying?
Hmm, I see no problems with buffer allocation at all. In sipX I solved this
problem once and for all with reference-counted frames. A frame is allocated
at the place where it should be, and freed when it becomes unneeded. Now I do
not worry about frame deallocation - it is done automatically. Reference
counting is easy to implement, and it reduces the possibility of memory leaks.
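The scheme above can be sketched in a few lines (this is an illustration in
the spirit of the sipX approach, not its actual code): the frame frees itself
when the last holder releases it.

```c
#include <assert.h>
#include <stdlib.h>

typedef struct RcFrame {
    int refcount;
    int pts;
} RcFrame;

static RcFrame *frame_alloc(int pts)
{
    RcFrame *f = malloc(sizeof(*f));
    if (f) {
        f->refcount = 1;         /* the creator holds one reference */
        f->pts = pts;
    }
    return f;
}

/* Another filter keeps the frame (e.g. to refer to it as a "past"
 * frame later) simply by taking a reference. */
static RcFrame *frame_ref(RcFrame *f)
{
    f->refcount++;
    return f;
}

/* Returns 1 if this release actually freed the frame. */
static int frame_unref(RcFrame *f)
{
    if (--f->refcount == 0) {
        free(f);
        return 1;
    }
    return 0;
}
```

A real multithreaded pipeline would need atomic increment/decrement here;
this single-threaded sketch only shows the ownership rule.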

Alexander Chemeris.
