[FFmpeg-devel] lavfi: push / poll / flush / drain implementation

Nicolas George nicolas.george at normalesup.org
Mon Mar 19 13:17:23 CET 2012


On decadi 30 Ventôse, year CCXX, Stefano Sabatini wrote:
> I'm not against this if it can be done and we won't lose features by
> doing so. Another concern is if we should do it by retaining backward
> API/ABI compatibility.

I agree. I believe what I am proposing is not, technically, an ABI break:
the documentation was always a bit fuzzy about what the functions did
exactly, and especially about what counts as an "error".

> Note that lavdev/lavfi doesn't use this mechanism at all (it only
> relies on request_frame(), called through the buffersink
> interface).

That is true, and I believe it goes in the right direction. But to do so, it
relies on the PTS of the frames, and that can cause frames to accumulate:
imagine, for instance, that you want to encode the same sequence twice, once
in slow motion.

> I'm uneasy about that "reasonable limits". In general this condition
> seems very difficult to specify with precision. Also the order of arrival
> may affect which internal buffers are full/empty, which may in turn
> affect the behavior of the filter.

The problem of the buffers being empty is exactly what I would like to
avoid: if a buffer is empty, the filter should wait for a frame on the
corresponding input.

Buffers being full is another matter, and that is what I meant by
"reasonable limits": if the user messed up the filter graph so that it
cannot, logically, run without exhausting memory, I think it is better for
the filters to either fail or flood the tty with warnings about dropped
frames than to eat up all available memory and eventually crash.
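
As a rough illustration of the "reasonable limits" idea, here is a minimal
sketch; the InputQueue type, the MAX_QUEUED value and the enqueue_frame()
helper are all made up for this example, not actual lavfi API:

    #include <inttypes.h>
    #include "libavutil/log.h"
    #include "avfilter.h"

    #define MAX_QUEUED 64 /* the "reasonable limit"; value is arbitrary */

    typedef struct InputQueue {
        AVFilterBufferRef *frames[MAX_QUEUED];
        int nb_frames;
    } InputQueue;

    static int enqueue_frame(void *log_ctx, InputQueue *q,
                             AVFilterBufferRef *buf)
    {
        if (q->nb_frames >= MAX_QUEUED) {
            /* The "flood the tty" option: warn and drop the frame;
               returning an error here would be the "fail" option. */
            av_log(log_ctx, AV_LOG_WARNING,
                   "Input queue full, dropping frame pts=%"PRId64"\n",
                   buf->pts);
            avfilter_unref_buffer(buf);
            return 0;
        }
        q->frames[q->nb_frames++] = buf;
        return 0;
    }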

> For example the overlay filter may re-use an old cached buffer when no
> new one is available yet, so in this case the behavior depends on
> the *sequence of events* (inputs and commands) operating on the
> filter.

That is currently true, but I believe it is very wrong: apart from the fact
that vf_overlay can currently cause stack overflows when used after the
split filter, it may produce different results depending on implementation
details.

IMHO, the overlay filter should _always_ wait for properly synchronized
frames on both its inputs before outputting anything. (And in that case,
"properly synchronized" means that it has to wait for an additional frame on
one of the inputs if the timestamps do not match exactly.)
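
To make "properly synchronized" concrete, here is a sketch of the test I
have in mind; the OverlayContext fields are hypothetical, not the real
vf_overlay ones:

    typedef struct OverlayContext {
        AVFilterBufferRef *main_frame;     /* queued main frame */
        AVFilterBufferRef *over_frames[2]; /* queued overlay frames */
        int nb_over;                       /* how many are queued */
        int overlay_eof;                   /* overlay input ended */
    } OverlayContext;

    /* Output is possible only when we have a main frame, plus either a
       pair of overlay frames bracketing its PTS (one at or before it,
       the next strictly after it) or EOF on the overlay input.
       Otherwise we must wait for one more frame on the overlay input. */
    static int can_output(const OverlayContext *over)
    {
        if (!over->main_frame)
            return 0;
        if (over->nb_over >= 2 &&
            over->over_frames[0]->pts <= over->main_frame->pts &&
            over->over_frames[1]->pts >  over->main_frame->pts)
            return 1;
        return over->overlay_eof && over->nb_over >= 1;
    }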

> Pushing a frame: this is not always possible. Suppose that you build a
> player based on a movie source connected to an output device, in this
> case you only have control on the sink, so you can only request frames.

Pushing the flush frames would then be the duty of the sources in the
graph.

> Returning EOF: this was already proposed, check here if you missed the
> thread:
> http://thread.gmane.org/gmane.comp.video.ffmpeg.devel/136130

I had indeed overlooked it.

But as far as I can see, the problems that made this difficult are exactly
what I am trying to resolve by stating the various guidelines.

> This is possibly more complicated. Consider the case of overlay. When a
> frame is requested, it makes the effort of returning a frame.
> 
> For keeping synchronization with the main input, it needs to cache two
> overlay buffers, which may be available or not depending on the exact
> past history of events and on the timestamps of the cached frames. So
> in this case you only know if you need another frame when you're
> already pushing frames.

I have thought a lot specifically about vf_overlay, and I am pretty sure it
can work the way I describe. It would go like this (a rough sketch follows
the list):

- request_frame repeatedly requests frames on the input that needs filling
  the most. It stops when avfilter_request_frame returns an error or once a
  frame has been produced.

- start_frame adds the frame to the proper queue, then tries to produce
  frames with what it has.
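
Here is the sketch; OverlayContext, pick_neediest_input(), queue_frame()
and try_output() are hypothetical helpers, and the exact callback
signatures of the current API are glossed over:

    /* Hypothetical helpers, declared for the sketch: */
    static AVFilterLink *pick_neediest_input(AVFilterContext *ctx);
    static void queue_frame(OverlayContext *over, AVFilterLink *inlink,
                            AVFilterBufferRef *buf);
    static int try_output(AVFilterContext *ctx);

    static int request_frame(AVFilterLink *outlink)
    {
        AVFilterContext *ctx  = outlink->src;
        OverlayContext  *over = ctx->priv;
        int ret;

        over->frame_out = 0;
        while (!over->frame_out) {
            /* Request on whichever input is the most starved. */
            ret = avfilter_request_frame(pick_neediest_input(ctx));
            if (ret < 0)            /* includes AVERROR_EOF */
                return ret;
        }
        return 0;
    }

    static void start_frame(AVFilterLink *inlink, AVFilterBufferRef *buf)
    {
        AVFilterContext *ctx  = inlink->dst;
        OverlayContext  *over = ctx->priv;

        queue_frame(over, inlink, buf);  /* add to this input's queue */
        /* Produce whatever the queued frames allow; note there is no
           recursive request_frame here. */
        over->frame_out = try_output(ctx);
    }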

My main point is that start_frame does not need to call request_frame by
itself, because something else will do it.

On the whole, I believe it would make the code much simpler and also more
reliable (no risk of infinite recursion like we have now).

> Alternatively you may not push frames in this case, and delay it to
> the following request_frame() call.

This is something that, I believe, must be eliminated, because it creates
places where frames can accumulate.

> The problem here is to define how a filter should behave when a
> request_frame() is called, in particular if it should try hard at
> sending a frame or should simply notify the filterchain that it can't
> issue a frame at the current time (EAGAIN may do), and if this
> mechanism may have bad implications.

I believe it must try hard, but not harder than the filters that are
connected to its input.

Apart from possible real-time filters, EAGAIN would in fact come only from
buffersrc. If the filter graph has only movie and synthetic sources, EAGAIN
would not occur, and request_frame would always cause something to be
pushed.
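
For a simple one-input filter, "trying hard, but not harder than its
inputs" amounts to plain forwarding, as in this minimal sketch:

    /* Forward the request upstream and propagate whatever comes back,
       including EAGAIN from buffersrc and AVERROR_EOF at end of stream. */
    static int request_frame(AVFilterLink *outlink)
    {
        return avfilter_request_frame(outlink->src->inputs[0]);
    }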

> > * To flush frames at EOF, get request_frame to return AVERROR_EOF.
> Isn't this already implemented? (at least movie works like this)

In movie yes, in lavd/lavfi possibly, but not in ffmpeg+buffersrc. I posted
a patch to that effect; that was what led to this thread.
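
For reference, the behaviour the patch aims at could look roughly like
this; the fifo and eof fields and the push_one_frame() helper are
assumptions for the sketch, not the actual buffersrc code:

    static int request_frame(AVFilterLink *outlink)
    {
        BufferSourceContext *c = outlink->src->priv;

        if (av_fifo_size(c->fifo))
            return push_one_frame(outlink, c); /* hypothetical helper */
        if (c->eof)
            return AVERROR_EOF;   /* lets downstream filters flush */
        return AVERROR(EAGAIN);   /* no frame available at this time */
    }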

Regards,

-- 
 Nicolas George

