[FFmpeg-devel] [PATCH 6/6] lavfi: make AVFilterLink opaque in two major bumps.
george at nsup.org
Wed Dec 21 11:27:13 EET 2016
On decadi, 30 Frimaire, year CCXXV, Michael Niedermayer wrote:
> it shouldn't really be complex, not in concept, maybe in (efficient) implementation.
I think it is.
> For example, as a concept, one could imagine that each filter runs as its
> own thread and waits on its inputs' availability and output space.
> If it gets input and has output space, it is woken up, does its work,
> produces output, and then wakes its surroundings up.
> No difference between a linear chain and a complex graph here.
Sorry, but it does not work, at least not just like that:
If you make the pipes between filters asynchronous and unlimited, then
you could have a movie source with a fast codec flooding its output while
overlay is waiting on something slower on its other input. OOM.
If you make the pipes synchronous or limited, then you have problems
when there are cycles in the undirected graph, i.e. if there are
split+merge situations (a situation supported from the beginning and
present in many of the examples): merge starves on one input while split
is blocked trying to feed its other input. Deadlock.
Maybe it could be made to work with some kind of "elastic" pipes: the
fuller they are, the lower the priority. But my point is already proven:
it is complicated.
Note that what I ended up doing is based on exactly that. But it tracks
not only where frames are possible but also where they are needed.
> I am not sure it's useful, but:
> Another view, from a different direction, would be to see the filter
> graph as a network of pipes with flowing water and interconnected
> pumps. This too is a simple concept, and intuition would suggest that
> it would not easily end up with water being stuck nor too much of it
> accumulating. It does not match real filters 1:1, though, since they
> work on discrete frames; I wonder if this view is useful.
> It would allow answering global questions about a graph, like which
> inputs are useless when some output is welded shut, or vice versa.
Basically, you are suggesting to apply graph theory to the filter graph.
That would be very smart. Alas to do that, it is necessary to actually
know the graph. We do not: we do not know how inputs and outputs are
connected within filters. For example, the connection is very different
for overlay and concat, but from the outside they are indistinguishable.
And select is even worse, of course.
> I think the original lavfi design didn't really have any issue with
> graphs with multiple inputs or outputs. A user app could decide where
> to input and output, but FFmpeg didn't support this at the time IIRC,
> so the people working on the original lavfi had nothing to implement.
> The problems came when this support was added much later.
Not only that: before I added it (i.e. not in the original design),
lavfi did not give the application enough information to decide what
input to feed. It is still missing in the fork's implementation.
> > - Add a callback AVFilter.activate() to replace filter_frame() on all
> > inputs and request_frame() on all outputs. Most non-trivial filters
> > are written that way in the first place.
> OK, that's mostly cosmetic, I'd say.
I think the difference between a mediocre API and a good one partly lies in such details.
But this change is not cosmetic at all. Right now, if you want to notify
a filter that EOF arrived on one input, you have to request a frame on
one of its outputs and hope that it will in turn cause a read on that
input. But you could end up pumping on another input instead.
> > - Change buffersink to implement that callback and peek directly in the
> > FIFO.
> ok, "cosmetic"
Except for the EOF thingie, which is the biggest glitch at this time.
> > - Rewrite framesync (the utility system for filters with several video
> > inputs that need synchronized frames) to implement activate and use the
> > FIFO directly.
> cosmetic :)
> > - Allow to give buffersrc a timestamp on EOF, make sure the timestamp is
> > forwarded by most filters and allow to retrieve it from buffersink.
> > (If somebody has a suggestion of a good data structure for that...)
Actually, the question in parentheses was about the priority queue, they
got separated when I reorganized my paragraphs. Sorry.
> This possibly is a big subject for discussion on its own, but maybe
> not, I don't know.
Indeed, there are pros and cons. I found that an actual timestamp was
slightly better, mainly because we often do not know the duration of the last frame.
> > - Allow to set a callback on buffersinks and buffersrcs to be notified
> > when a frame arrives or is needed. It is much more convenient than
> > walking the buffers to check one by one.
> Agreed that walking is bad.
> I cannot argue ATM on what is better, as I don't feel that I have a
> clear enough view of this and the surroundings.
You could have said it: cosmetic :)
> > - Allow to merge several filter graphs into one. This may be useful when
> > applications have several graphs to run simultaneously, so that they
> > do not need to decide which one to activate. Another option would be
> > to have the functions in the next step work on possibly several
> > graphs.
> This may be orthogonal, but I think a filter graph should be a filter
> (maybe not literally at the struct level, but it should be possible to
> have an AVFilter that is backed by an arbitrary user-specified filter
> graph).
That would be nice, but I was referring to joining independent graphs,
just to spare the application the task of scheduling between the graphs.
> > - Add a function to run the graph until "something" happens; "something"
> > meaning a stop instruction called by the callbacks.
> I don't understand.
Just the obvious use: feed input, "run" the graph, reap outputs, repeat.
What we have now, but not bound to the "oldest sink" thing I introduced
long ago (that made sense with the recursive design).
And with threading, we want the graph to continue running while we reap the output.