[FFmpeg-devel] How to help with libavfilter

Stefano Sabatini stefano.sabatini-lala
Sun Dec 19 21:02:16 CET 2010


On date Sunday 2010-12-19 17:45:13 +0100, tajz encoded:
> Stefano Sabatini wrote:
> >On date Saturday 2010-12-18 18:37:33 +0100, tajz encoded:
> >>Hi,
> >>
> >>I am new to FFmpeg. I successfully built it from source (latest trunk).
> >>I read the documentation, some posts related to multiple inputs and
> >>mixing, and a bit of the source code.
> >>
> >>I see that one can pass multiple "-i file" options to ffmpeg, but it
> >>seems there is only one main source, "[in]", in a video filtergraph ("-vf").
> >>
> >>I would like to mix two videos, using a filtergraph like:
> >>"[in1] scale=100x50 [vid1]; [vid1] [in2] overlay [out]".
> >>
> >>But it seems that only the first '-i' file is attached to the [in]
> >>stream in the filtergraph syntax.
> >>In graphparser.c I found "[in]", and it seems there is no other built-in
> >>input.
> >>
> >>"overlay"-ing a "color" sourced stream other the first "-i" works. I
> >>just want to overlay another a second "-i" to the first.
> >>
> >>Am I right?
> >>Is there a plan to support this feature (having each '-i' input
> >>magically mapped to an [in1], [in2], etc. stream)?
> >
> >Yes, and help is welcome; feel free to create a feature request in
> >roundup.
> 
> OK.
> I would be glad to help with that.
> 
> I found it is not so easy to get into the internals of FFmpeg.
> Currently, I am working on a video filter of my own, which may be a
> bit outside FFmpeg's core target features.
> 
> When it works (I have a dream), I will focus on multiple inputs and
> outputs in the filtergraph.
> 
> This "filter" needs some graphics primitives (lines, rectangles,
> text). Are there any (operating on AVFilterBuffer)? I can't find any;
> other filters draw directly into the buffers.
> 
> 
> 
> >Check also:
> >http://roundup.ffmpeg.org/issue2040
> 
> It looks like what I am hoping for:
> ...
> -vf "[in1] ... [out2] ... [in2] ... [out1] ..." \
> -i filei1 -i filei2 fileo1 fileo2

[Changed the thread name to something more easily found in the
archive.]

My main focus now is fixing the various regressions, in particular:
* vflip crash with direct rendering (patch pending)
* aspect ratio regression (patch by Baptiste pending, waiting for
  review from Michael)
* memcpy-less ffmpeg (depends on the vflip patch)
* reconfiguration of the filtergraph when the input w/h/aspect
  changes.
* split and fifo auto-insertion
* compose filters (vertical and horizontal compose, this should allow
  generic mosaic configurations; a rough sketch follows this list)
* movie source cleanup and porting from the soc repo
* movie sink
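
Just to give an idea of what the compose filters amount to (this is
only a sketch with hypothetical names, not how it should end up in
libavfilter: a real filter has to negotiate formats and handle all
planes and chroma subsampling), vertical composition on a single
8-bit plane is basically:

#include <stdint.h>
#include <string.h>

/* Sketch only: stack frame "top" above frame "bot" into "dst".
 * Assumes both inputs have the same width and that dst is
 * w x (h_top + h_bot). */
static void vcompose_plane(uint8_t *dst, int dst_linesize,
                           const uint8_t *top, int top_linesize, int h_top,
                           const uint8_t *bot, int bot_linesize, int h_bot,
                           int w)
{
    int y;

    for (y = 0; y < h_top; y++)
        memcpy(dst + y * dst_linesize, top + y * top_linesize, w);
    for (y = 0; y < h_bot; y++)
        memcpy(dst + (h_top + y) * dst_linesize, bot + y * bot_linesize, w);
}

Horizontal composition is the same idea with a column offset instead
of a row offset, and a mosaic is then just repeated composition.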

Other areas which need work:
* pad filter parametrization (as done in crop)
* libswscale API cleanup (especially regarding colorspace conversion)
* scale filter parametrization
* finalization of the many filters posted which still require some
  work (posterize, select, drawtext, rotate, lut, fish, 2xsai, eq)
* porting of more filters (from MPlayer, AVIsynth)
* implementation of more wrappers (e.g. for GraphicsMagick, libgimp,
  CImg, more filters from libopencv, etc.)
* generic filters: a generic equation filter (like libmpcodecs'
  vf_geq.c; a rough sketch of the idea follows this list),
  convolution/correlation filters, and image transforms like the one
  proposed here: http://thread.gmane.org/gmane.comp.video.ffmpeg.devel/108045
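
To make the generic equation filter item above more concrete, here is
a rough illustration of the per-pixel model (hypothetical code, with
the expression hard-coded to a 3-tap horizontal blur; the real filter
would instead parse a user-supplied expression, e.g. with the
libavutil eval API):

#include <stdint.h>

/* Sketch only: sample the source plane, clamping at the borders. */
static inline uint8_t pix(const uint8_t *src, int linesize,
                          int x, int y, int w, int h)
{
    if (x < 0) x = 0; else if (x >= w) x = w - 1;
    if (y < 0) y = 0; else if (y >= h) y = h - 1;
    return src[y * linesize + x];
}

/* Every output sample is computed from an expression of the
 * coordinates and of the input samples; here the "expression" is a
 * hard-coded horizontal average. */
static void geq_like(uint8_t *dst, int dst_linesize,
                     const uint8_t *src, int src_linesize, int w, int h)
{
    int x, y;

    for (y = 0; y < h; y++)
        for (x = 0; x < w; x++)
            dst[y * dst_linesize + x] =
                (pix(src, src_linesize, x - 1, y, w, h) +
                 pix(src, src_linesize, x,     y, w, h) +
                 pix(src, src_linesize, x + 1, y, w, h)) / 3;
}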

If you want to implement some generic routines that is also welcome
(e.g. for drawing lines/curves/regions), but my gut feeling is that it
would be even better to implement a wrapper around a generic animation
generation engine (e.g. libembryo/libimg).
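
To make the current situation concrete: there is no shared drawing
API yet, each filter draws directly into the buffer planes, roughly
like the hypothetical helper below (single 8-bit plane, no clipping,
chroma and pixel formats ignored). Generic routines would factor
exactly this kind of code out of the individual filters.

#include <stdint.h>
#include <string.h>

/* Sketch only: fill a w x h rectangle at (x, y) of a single plane
 * with a constant value. */
static void fill_rect(uint8_t *plane, int linesize,
                      int x, int y, int w, int h, uint8_t val)
{
    int i;

    for (i = 0; i < h; i++)
        memset(plane + (y + i) * linesize + x, val, w);
}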

As regards audio inclusion:
* AVConvert API cleanup and move to libavcore (unreviewed patch
  floating)
* AVConvert code optimization
* AVResample API cleanup and move to libavcore
* resample filter and ffplay/ffmpeg integration
* sox filter wrapper
* ladspa filter wrapper
* libaf filters porting

Work in any of these areas is very welcome; ask if you have any
questions.
-- 
FFmpeg = Formidable & Funny Mortal Prodigious Experimenting Gadget
