[FFmpeg-devel] [PATCH 6/6] lavfi: make AVFilterLink opaque in two major bumps.

Michael Niedermayer michael at niedermayer.cc
Wed Dec 21 15:39:16 EET 2016


On Wed, Dec 21, 2016 at 10:27:13AM +0100, Nicolas George wrote:
> On décadi 30 Frimaire, year CCXXV, Michael Niedermayer wrote:
[...]
> 
> > I am not sure it is useful, but
> > another view from a different direction would be to see the filter
> > graph as a network of pipes with flowing water and interconnected
> > pumps. This too is a simple concept, and intuition would suggest
> > that water would not easily end up stuck, nor accumulate too much.
> > It does not match real filters 1:1, though, since they work on
> > discrete frames; I wonder if this view is useful.
> > It would allow answering global questions about a graph, like which
> > inputs are useless when some output is welded shut, or vice versa.
> 
> Basically, you are suggesting to apply graph theory to the filter graph.
> That would be very smart. Alas, to do that, it is necessary to actually
> know the graph. We do not: we do not know how inputs and outputs are
> connected within filters. For example, the connection is very different
> for overlay and concat, but from the outside they are indistinguishable.
> And select is even worse, of course.

The framework could monitor filters to determine their apparent
behavior. This would of course not give an exact prediction of future
behavior.
I can't say how useful this would be to use, of course ...
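
To make the idea concrete, here is a minimal sketch of what such
monitoring could look like: the framework counts frames crossing each
link and treats the in/out ratios as the filter's apparent behavior.
The struct and helper below are hypothetical, not existing lavfi API:

#include <stdint.h>

/* Hypothetical per-link statistics; nothing like this exists in lavfi. */
typedef struct LinkStats {
    uint64_t frames_in;   /* frames the filter consumed on this input  */
    uint64_t frames_out;  /* frames the filter produced on this output */
} LinkStats;

/* The framework would call this around each filtering step; over time,
 * frames_out / frames_in approximates the apparent behavior (about 1:1
 * for overlay-like filters, irregular for select-like ones). */
static void observe_step(LinkStats *in, LinkStats *out,
                         unsigned consumed, unsigned produced)
{
    in->frames_in   += consumed;
    out->frames_out += produced;
}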


> 
> > I think the original lavfi design didn't really have any issue with
> > graphs with multiple inputs or outputs. A user app could decide where
> > to input and output, but FFmpeg didn't support this at the time IIRC,
> > so the people working on the original lavfi had nothing to implement.
> > The problems came when this support was added much later.
> 
> Not only that: before I added it (i.e. not in the original design),
> lavfi did not give the application enough information to decide what
> input to feed. It is still missing in the fork's implementation.

Well, in the original design a filter graph could be used in a
pull-based application, in which data is primarily requested from its
outputs and the requests recursively move toward its inputs, triggering
data reads through callbacks from some source filter. (Applications
could implement their own source filter, as there was a public API.)

Or, in a push-based application, each source would have a FIFO; if it
is empty, the application needs to push data into the FIFO. Data is
again returned by requesting it from the sink(s).

Which sink to pull data from could be determined by first pulling from
the ones that had data when polled; beyond that it would be up to the
application to decide. Your lowest-timestamp choice would have been one
possibility; keeping track of apparent in-out relations would be
another. (Either way, this was the application's choice, not lavfi's.)
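
As a rough illustration of the push-based pattern, here is a minimal
sketch using the public buffersrc/buffersink API; setup and most error
handling are omitted, and get_next_input_frame() and
consume_output_frame() are hypothetical stand-ins for application code:

#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>
#include <libavutil/error.h>
#include <libavutil/frame.h>

/* Hypothetical application-side helpers. */
AVFrame *get_next_input_frame(void);
void     consume_output_frame(AVFrame *frame);

static int pump(AVFilterContext *src, AVFilterContext *sink)
{
    AVFrame *frame = av_frame_alloc();
    int ret;

    if (!frame)
        return AVERROR(ENOMEM);
    for (;;) {
        /* Pull from the sink; the request propagates upstream. */
        ret = av_buffersink_get_frame(sink, frame);
        if (ret == AVERROR(EAGAIN)) {
            /* The source FIFO is empty: push more data into it.
             * Passing NULL would signal EOF on the source. */
            AVFrame *in = get_next_input_frame();
            ret = av_buffersrc_add_frame(src, in);
            av_frame_free(&in);     /* add_frame took the references */
            if (ret < 0)
                break;
            continue;
        }
        if (ret < 0)
            break;                  /* AVERROR_EOF or a real error */
        consume_output_frame(frame);
        av_frame_unref(frame);
    }
    av_frame_free(&frame);
    return ret;
}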

So I stand by my opinion that the original lavfi design didn't really
have an issue with graphs with multiple inputs or outputs.

No question, it wasn't perfect, and considering it wasn't used at all
at the time, that shouldn't be surprising.

But it really doesn't matter now; we moved forward from there and need
to move further forward.


> 
> > > - Add a callback AVFilter.activate() to replace filter_frame() on all
> > >   inputs and request_frame() on all outputs. Most non-trivial filters
> > >   are written that way in the first place.
> > ok, that's mostly cosmetic I'd say.
> 
> I think the difference between a mediocre API and a good one is partly
> cosmetic.
> 
> But this change is not cosmetic at all. Right now, if you want to notify
> a filter that EOF arrived on one input, you have to request a frame on
> one of its outputs and hope that it will in turn cause a read on that
> input. But you could end up pumping on another input instead.
> 
> > > - Change buffersink to implement that callback and peek directly in the
> > >   FIFO.
> > ok, "cosmetic"
> 
> Except for the EOF thingie, which is the biggest glitch at this time
> AFAIK.
> 
> > > - Rewrite framesync (the utility system for filters with several video
> > >   inputs that need synchronized frames) to implement activate and use the
> > >   FIFO directly.
> > cosmetic :)
> 
> Ditto.

There are differences in corner cases, yes; I didn't mean to imply that
it is purely, 100% cosmetic. Rather, it is basically a cosmetic change
in how more or less the same code is triggered, and maybe some of this
could be done by some GSoC student or other volunteer. That is, at
least part of this seems like time-consuming but not highly complex
work.
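
For what it's worth, a single-entry-point callback for a simple
one-input, one-output filter could look roughly like the sketch below.
The ff_inlink_... and ff_outlink_... helpers are hypothetical names for
the proposed framework support, not existing API:

#include <stdint.h>
#include <libavfilter/avfilter.h>

/* Hypothetical framework helpers for the proposal (not existing API): */
int  ff_inlink_consume_frame(AVFilterLink *link, AVFrame **rframe);
int  ff_inlink_acknowledge_status(AVFilterLink *link, int *rstatus,
                                  int64_t *rpts);
void ff_inlink_request_frame(AVFilterLink *link);
int  ff_outlink_frame_wanted(AVFilterLink *link);
void ff_outlink_set_status(AVFilterLink *link, int status, int64_t pts);

static int activate(AVFilterContext *ctx)
{
    AVFilterLink *inlink  = ctx->inputs[0];
    AVFilterLink *outlink = ctx->outputs[0];
    AVFrame *frame;
    int64_t pts;
    int ret, status;

    /* One callback replaces filter_frame() on the input and
     * request_frame() on the output: first consume a queued frame ... */
    ret = ff_inlink_consume_frame(inlink, &frame);
    if (ret < 0)
        return ret;
    if (ret > 0)
        return ff_filter_frame(outlink, frame);

    /* ... then forward EOF explicitly, which addresses the notification
     * problem described above ... */
    if (ff_inlink_acknowledge_status(inlink, &status, &pts)) {
        ff_outlink_set_status(outlink, status, pts);
        return 0;
    }

    /* ... and finally say on which input we want data next. */
    if (ff_outlink_frame_wanted(outlink))
        ff_inlink_request_frame(inlink);
    return 0;
}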


> 
> > > - Allow to give buffersrc a timestamp on EOF, make sure the timestamp is
> > >   forwarded by most filters and allow to retrieve it from buffersink.
> > > 
> > >   (If somebody has a suggestion of a good data structure for that...)
> 
> Actually, the question in parentheses was about the priority queue; they
> got separated when I reorganized my paragraphs. Sorry.
> 
> > AVFrame.duration
> > This is possibly a big subject for discussion on its own, but maybe
> > not; I don't know.
> 
> Indeed, there are pros and cons. I found that an actual timestamp was
> slightly better, mainly because we often do not know the duration
> immediately.

I don't think not knowing the duration is a problem.
You possibly need replicated frames elsewhere already, like for
subtitles: it is not much different from duplicating the last frame,
with the remaining-to-EOF duration added to the last frame's duration
of 1 or 0. But I didn't think deeply about this now, so I might be
missing details.
The issue is also not specific to subtitles: audio tracks with "holes"
in them exist too, and so do video slideshows. At least in some use
cases, limiting the distance between frames is needed, for example to
ensure random access, as with keyframes. (The issue can to some extent
be pushed into the container format, I guess, but for truly streamed
formats, if you don't repeat the video frame and the subtitles which
are currently displayed, they just won't get displayed if you start
viewing around that point.)
So it seems to me there are a lot of issues that could all be dealt
with by some support for replicating frames in long stretches with no
frames, and later dropping them if they aren't needed; the last frame
carrying the EOF duration could then be just another such case.
But again, I didn't think deeply about this.
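
A minimal sketch of that replication idea, assuming a hypothetical
max_gap limit and an emit() callback that takes ownership of each
duplicate it receives:

#include <libavutil/error.h>
#include <libavutil/frame.h>

/* Fill a long gap by re-emitting the last frame; downstream could
 * later drop the duplicates if they turn out not to be needed. */
static int fill_gap(const AVFrame *last, int64_t next_pts,
                    int64_t max_gap, int (*emit)(AVFrame *))
{
    int64_t pts = last->pts + last->pkt_duration;

    while (next_pts - pts > max_gap) {
        AVFrame *copy = av_frame_clone(last);
        if (!copy)
            return AVERROR(ENOMEM);
        copy->pts          = pts;
        copy->pkt_duration = max_gap;   /* bound the frame distance */
        pts += max_gap;
        int ret = emit(copy);
        if (ret < 0)
            return ret;
    }
    return 0;   /* the remaining gap is <= max_gap */
}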


> 
> > > - Allow to set a callback on buffersinks and buffersrcs to be notified
> > >   when a frame arrives or is needed. It is much more convenient than
> > >   walking the buffers to check one by one.
> > Agreed that walking is bad.
> > I cannot ATM argue about what is better, as I don't feel that I have
> > a clear enough view of this and the surroundings.
> 
> You could have said it: cosmetic :)

I should be more verbose: I said cosmetic, but I meant more than just
what cosmetic means literally :)
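
To illustrate the difference from walking the buffers, a purely
hypothetical shape for such a notification API might be (none of this
exists in lavfi; the registration function is invented):

#include <libavfilter/avfilter.h>

/* The application registers a callback and reacts, instead of polling
 * every buffersink in the graph one by one. */
typedef void (*AVSinkFrameCallback)(AVFilterContext *sink, void *opaque);

/* hypothetical registration:
 *   av_buffersink_set_frame_callback(sink, on_frame, app);
 */

static void on_frame(AVFilterContext *sink, void *opaque)
{
    /* A frame just arrived on this sink; fetch it with the existing
     * av_buffersink_get_frame() and hand it to the application. */
    (void)sink;
    (void)opaque;
}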


[...]


-- 
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

It is what and why we do it that matters, not just one of them.