[Ffmpeg-devel] Embedded preview vhook/voutfb.so /dev/fb
Fri Mar 30 17:28:18 CEST 2007
On Fri, Mar 30, 2007 at 02:07:12PM +0800, Bobby Bingham wrote:
> >Now, on to picture sequence filters....
> >These would be similar to picture filters, but also have an amount of
> >necessary 'context' along the time axis, i.e. a number of past and
> >future frames that need to be available in order for the filter to
> >operate. Again, what to do when the in/out correspondence isn't
> >one-to-one is a little tricky. Perhaps mimic the way scaling ends up
> >working along the spatial axes...
> Is there any reason the picture filters can't just be a picture
> sequence filter with temporal context = 0?
They could, but the idea was to offer a simpler filter-writing API for
pure picture filters, where the filter doesn't have to know anything
about the notions of "frames" or "sequences".
Also, it's possible that a filter operating on sequences of frames
needs to see them in order, even if it doesn't explicitly need context
to look back at old frames. For example, it might internally remember
the brightness of the previous frame, or track a phase (as in inverse
telecine or phase shifting), etc.
Or if it's a temporal blur, it might remember the sum/average of the
past few frames in an internally-kept buffer (or a permanent output
buffer), rather than having to re-sum them from context each time.
This is the time-dimension analogue of what Michael was talking about
with boxblur and slices. :)