[Ffmpeg-devel] New/extended filter API discussion

Michael Niedermayer <michaelni at gmx.at>
Sat Jan 6 11:50:01 CET 2007


Hi

On Fri, Jan 05, 2007 at 05:23:19PM -0800, Aaron Williams wrote:
> Hi,
> 
> Alexander Chemeris wrote:
> > Hello,
> >
> > On 1/5/07, Michael Niedermayer <michaelni at gmx.at> wrote:
> >> to repeat again some goals of a video filter API
> >> * well documented (see mplayer for how not to do it)
> >> * writing a filter should not require complete knowledge of all
> >>   (undocumented) internals of the video filter system
> >> * direct rendering (using a buffer provided by the next filter)
> >> * in-place rendering (for example, adding some subtitles shouldn't need
> >>   the whole frame to be read and written)
> >> * slice-based rendering (improves cache locality, but there are issues
> >>   with out-of-order decoding ...)
> >> * multiple inputs
> >> * multiple outputs (could always trivially be handled by several filters
> >>   with just a single output each)
> >> * timestamps per frame
> >> * also the number of frames consumed by a filter does not have to match
> >>   the number output ((inverse) telecine, ...)
> >>
> >> also I suggest that whoever designs the filter system look at mplayer's
> >> video filters, as they support a large number of the things above
> > Take a look at the Avisynth (and maybe VirtualDub) filter API. It runs
> > under Windows only, but has a very interesting filter API with automatic
> > buffer management, based on a pull model as opposed to a push model. It
> > uses C++ heavily, but may inspire you with some design ideas.

Why not provide some links to the relevant source code, or documentation for
the respective filter APIs?

I do know the filter system VirtualDub used a few years ago (I don't know
whether they still use it), and it was a ridiculous joke: IIRC, two fixed
buffers and a single linear 1-in-1-out chain, where filters 0,2,4,... used
buffer A as input and B as output, while filters 1,3,5,... used buffer B as
input and A as output.
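
For illustration, a minimal C sketch of that ping-pong scheme (filter_func
and the buffer handling are made up for illustration, not VirtualDub's
actual API):

#include <stddef.h>

/* each filter reads from one fixed buffer and writes to the other */
typedef void (*filter_func)(const unsigned char *src,
                            unsigned char *dst, size_t size);

static void run_chain(filter_func *filters, int n_filters,
                      unsigned char *buf_a, unsigned char *buf_b,
                      size_t size)
{
    for (int i = 0; i < n_filters; i++) {
        if (i % 2 == 0)
            filters[i](buf_a, buf_b, size); /* even filters: A -> B */
        else
            filters[i](buf_b, buf_a, size); /* odd filters:  B -> A */
    }
}

Note how this rules out in-place rendering and any buffer negotiation
between neighbouring filters, which is exactly why it is inadequate.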


> >
> > I have some thoughts about video filter APIs, because I'm thinking about
> > a very similar subsystem for the sipX media processing library. When I
> > thought about the benefits and drawbacks of the "push", "pull" and
> > "process" models of operation, I came to the view that the "process"
> > approach is the simplest of the three while being almost as powerful as
> > "push" and "pull". By the "process" model I mean the following: each
> > filter has a 'process()' function which simply takes input frames,
> > processes them, and pushes them into an output buffer. Most of the work
> > in this approach is done by the flowgraph class itself - it knows how the
> > filters are connected, and can take output frames from the preceding
> > filter and pass them to the subsequent filter.
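
For illustration, a minimal C sketch of such a "process" model; the names
(Filter, run_graph, ...) are made up, not an existing API:

typedef struct Frame Frame;

typedef struct Filter {
    /* consume frames from in[], append results to out[]; the number of
     * frames consumed does not have to match the number produced */
    int (*process)(struct Filter *self,
                   Frame **in,  int n_in,
                   Frame **out, int *n_out);
    void *priv;
} Filter;

/* the flowgraph drives a linear chain by handing each filter's output
 * to the next filter; returns the final number of frames, or -1 */
static int run_graph(Filter **chain, int n_filters,
                     Frame **frames, int n_frames)
{
    Frame *out[64]; /* fixed-size scratch list, enough for a sketch */
    int n_out;

    for (int i = 0; i < n_filters; i++) {
        if (chain[i]->process(chain[i], frames, n_frames, out, &n_out) < 0)
            return -1;
        for (int j = 0; j < n_out; j++)
            frames[j] = out[j];        /* this output is the next input */
        n_frames = n_out;
    }
    return n_frames;
}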
> >
> > One drawback of the "push" and "pull" models is that they cannot handle
> > multiple inputs (for "push") or outputs (for "pull"). Let's consider the
> > "push" model: if the filter graph has two inputs and one of them pushes
> > a frame, the behaviour of the second input is ill-defined -- it could be
> > ignored, or pulled, or something else, but I do not see a good approach
> > here.

If your filter has 2 inputs and you receive a push from input #1, there's
nothing ill-defined: you are simply then in the state of having one input
frame and waiting for the second push; when that comes in, you can output
your frame as a push to the next filter.
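
In code, such a 2-input filter only needs to keep one pending frame per
input pad (a sketch; MergeFilter and its callbacks are made-up names):

typedef struct Frame Frame;

typedef struct MergeFilter {
    Frame *pending[2];                    /* one slot per input pad */
    Frame *(*merge)(Frame *a, Frame *b);  /* combine the two inputs */
    void  (*push_next)(Frame *frame);     /* push into the next filter */
} MergeFilter;

static void push_input(MergeFilter *f, int pad, Frame *frame)
{
    f->pending[pad] = frame;
    if (f->pending[0] && f->pending[1]) {     /* both inputs arrived */
        f->push_next(f->merge(f->pending[0], f->pending[1]));
        f->pending[0] = f->pending[1] = NULL; /* wait for the next pair */
    }
}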


> > The same goes for "pull" and multiple outputs.

Multiple outputs are always equivalent to several filters with a single
output each (for example, a 1-in-2-out splitter behaves like two 1-in-1-out
filters sharing the same input), so this objection is wrong independently of
the filter architecture.

[...]
> I like the proposals you are making.  I have a couple of additional
> suggestions, though it might be too early for this.

yes


> It would be good if there were a way to tell the filters to 'hurry up' or
> cut back on filtering, since the source could be a live capture.

Passing parameters to filters is the least of our problems; that's trivial
to do pretty much independently of how the filter system is designed.
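
E.g. something as simple as a string-based setter per filter would already
cover the 'hurry up' case (a sketch, not an actual API):

#include <stdlib.h>
#include <string.h>

typedef struct Filter {
    int quality; /* 0 = fastest / hurry up, 10 = best */
} Filter;

static int filter_set_param(Filter *f, const char *name, const char *value)
{
    if (!strcmp(name, "quality")) {
        f->quality = atoi(value);
        return 0;
    }
    return -1; /* unknown parameter */
}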


> 
> Another thought I had was that the filters, both audio and video, will
> likely need access not only to the current frame but also to the previous
> and next frames.

> 
> Also, in the case of audio, different filters might want different
> window sizes.  For example, for normalization I found the recommended
> RMS window size is 50ms regardless of the sample rate.

In the case of 1-in-1-out audio, each filter would likely read N samples,
output M samples, and step forward in the input by S samples. (Yes, it's
very likely that a filter will need some input samples for several output
samples; as one example, look at a resample filter.)

iteration 1:
input: SSSSSS
output:O O
iteration 2:
input:  SSSSSSSS
output:    O
iteration 3:
input:    SSSSSSSSS
output:      O O

That could be implemented by providing X buffers as input, or a single input
buffer which is nicely contiguous; the latter is of course nicer from a
filter's point of view, as sketched below.
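
As a sketch, the driving loop over such a contiguous input buffer could look
like this (AudioFilter and its fields are made-up names; N, M and S depend
on the filter, e.g. on the resampling ratio):

typedef struct AudioFilter {
    int n_in;  /* samples read per iteration (N) */
    int n_out; /* samples written per iteration (M) */
    int step;  /* input samples stepped over per iteration (S) */
    void (*run)(const float *in, float *out); /* reads n_in, writes n_out */
} AudioFilter;

/* returns the number of output samples produced */
static int filter_buffer(AudioFilter *f, const float *in, int in_len,
                         float *out)
{
    int pos = 0, produced = 0;

    while (pos + f->n_in <= in_len) { /* enough input for one window? */
        f->run(in + pos, out + produced);
        pos      += f->step;          /* step forward by S samples */
        produced += f->n_out;
    }
    return produced; /* leftover input stays buffered for the next call */
}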


> 
> One final thought is that filters may also want the ability to output data
> to another stream (e.g. a file) for logging or multi-pass processing.
> E.g. my current normalization code is quite crude, but I need to feed back
> information to the next pass for the actual volume adjustment.

Again, this is trivial: no matter which filter system we use, calling
fopen() inside a filter can be done ... (it can be argued that that's ugly,
but that's another issue).
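
E.g. a normalize filter could just log per-frame statistics in pass 1 and
read them back in pass 2 (a sketch; the file format is made up):

#include <stdio.h>

/* pass 1: append one line per frame to a plain-text log */
static void log_frame_rms(FILE *logfile, int frame_nr, double rms)
{
    fprintf(logfile, "%d %f\n", frame_nr, rms);
}

/* usage: FILE *lf = fopen("normalize.log", "w"); ... fclose(lf); */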

[...]
-- 
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

No snowflake in an avalanche ever feels responsible. -- Voltaire