[FFmpeg-devel] [PATCH 3/5] ffmpeg: flush and drain video filters.

Nicolas George nicolas.george at normalesup.org
Wed Mar 14 23:30:08 CET 2012

On quintidi, 25 Ventôse, an CCXX, Michael Niedermayer wrote:
> It's a bit difficult to agree to axioms
> without fully understanding the consequences these axioms have.
> But taking this in isolation it sounds reasonable


> There's a small problem with the first axiom though:
> As reference:
>     a filter should
>     output its frame as soon as it can, as a reaction to one of its inputs, and
>     not wait for a request on its output.
> A frame duplication filter should not output all frames at once, as
> this could overload the CPU and interrupt smooth playback.
> This is especially problematic if the following filters are CPU
> intensive. The problem could easily happen with variable fps input
> from some frame-skipping encoder that decided to skip 2 seconds
> or more of black frames. While the filter duplicates 60 frames and
> the next denoises them, the audio queue might become empty in a
> single-threaded application ...
> yadif in 2field->2frame mode would be another example. yadif currently
> does not return the 2nd frame before it has to.

This could indeed be a problem, but I do not think that inserting FIFOs
would provide enough control to solve it in the general case.

Also, remember that start_frame and friends are recursive: when a frame
duplication filter outputs all frames "at once", the first frame actually
goes all the way it can go (possibly the output) before the second frame is
started. For simple cases, getting the output(s) to take care of the
scheduling/prioritizing should be enough.

Last of all, I am afraid that, for any solution, we could invent a
pathological situation that would make it look bad.

> this is not so good, a overlay filter might have one of its inputs
> reach EOF and the other still should be passed through.

Actually, that is not how overlay currently behaves. And I believe both
options have pros and cons.

> Or from a different POV a news video might have a commentator who
> comments various news footage shown in a picture behind her. These
> videos will not always run / be shown during the whole news but the
> other input (here the commentators video & audio) still continue and
> should pass through the filter that combines both.

That is true. But on the other hand, how can the overlay filter know whether
it must output the frame immediately because there is no overlay image, or
wait for an overlay image that is merely lagging somewhat? I do not
think it is the role of the overlay filter to make that decision: rather, a
correct solution would probably be to get another filter to insert transparent
frames into the overlay stream, depending on the exact situation (live
broadcast or offline encoding, etc.).

Also, that conundrum actually exists for a poll_frame-based solution too,
unless I am mistaken, especially with overlay's current implementation.

I will try to start a new thread tomorrow to summarize my ideas and make
things clearer.


  Nicolas George
