[Ffmpeg-devel] Embedded preview vhook/voutfb.so /dev/fb

Rich Felker dalias
Wed Mar 28 18:16:02 CEST 2007

On Wed, Mar 28, 2007 at 01:28:30PM +0200, Michael Niedermayer wrote:
> > If the picture filter needs context of surrounding pixels (not just
> > the source rectangle), it would provide the caller an amount of needed
> > context, and the caller (the filtering engine) would ensure that this
> > much context was always available. The caller would be responsible (is
> > this a good idea?) for generating fake context at the boundaries of
> > the image in this case.
> i do not know if this is a good idea, or if this is just "we know how it's
> done currently sucks and we don't know if a radically different design
> sucks less..." in other words, existing filters tend to support dealing
> with picture boundaries somehow already, and in general they do so with
> no performance loss. providing extra context -- and for many filters that
> might be MANY pixels of context -- might have some negative performance
> effect. also, what do you put in these extra context pixels? many filters
> will not work correctly if that's random or black pixels, and i don't see
> how you want to do boundary mirroring efficiently; especially in
> combination with direct rendering this is likely to become fairly complex

indeed, the alternative is just to either require filters to handle
the boundary case (telling them the data is at a boundary), or make
both cases possible and leave it to the filter author to decide
whether they can handle it.

i think you're probably right that it's best for the filter to just be
informed about boundaries and handle them itself. that's why i put the
parenthetical question in my post -- to get suggestions from the guru.

> > So far I've been thinking of a one-to-one correspondence between input
> > and output pixels, i.e. nothing like scaling. How to make this idea
> > work well with scaling is a bit of a research question.
> the swscaler works with normal horizontal slices ordered top to bottom,
> and it does so efficiently with zero memcpy in the current libmpcodecs
> design; extra context would not help it

maybe you misunderstand what i mean by context. let's pretend for now
that no size change is happening, but we're still using filters with
multiple taps. each tap corresponds to an input line that's needed,
so you need to be able to read lines before or after the actual line
you're rendering now.
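the relation between taps and context can be made concrete. a minimal
sketch in C, assuming a symmetric vertical filter (the name ctx_lines
is hypothetical, not any real swscaler or libmpcodecs symbol):

```c
#include <assert.h>

/* an N-tap vertical filter centered on the output line needs
 * (N-1)/2 input lines of context above it and below it.  this is
 * the number a filter would declare to the calling engine. */
static int ctx_lines(int taps)
{
    return (taps - 1) / 2;
}
```

so a 3-tap filter declares 1 line of context on each side, a 5-tap
filter declares 2, and a pure per-pixel filter declares 0.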

i know swscaler already has some means of dealing with this for doing
slices, but what i described is much more general. if you needed one
previous line and one following line of context, the filter would just
specify this, and the caller would ensure either that it's available
or that you're at the image boundary. this allows filter authors to
just code the algorithm rather than coding the same corner cases over
and over again (and making it necessary for everyone who wants to
write a fast filter to be an expert in slices).
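the api i have in mind could look something like this -- a sketch
only, with all names invented for illustration, not the actual
libmpcodecs or ffmpeg filter interface. the filter declares its
context needs once; the caller either supplies real context lines or
sets boundary flags, and the filter handles the edge itself (here by
clamping to the edge line):

```c
#include <assert.h>

/* hypothetical filter descriptor: the filter declares how many lines
 * of vertical context it needs; the caller guarantees they exist or
 * sets the boundary flags instead. */
#define SLICE_AT_TOP    1   /* no lines available above the slice */
#define SLICE_AT_BOTTOM 2   /* no lines available below the slice */

typedef struct {
    int ctx_above;   /* context lines needed above the slice */
    int ctx_below;   /* context lines needed below the slice */
    void (*filter_slice)(const unsigned char *src, unsigned char *dst,
                         int stride, int width, int lines, int flags);
} vf_desc;

/* example filter: 3-tap vertical blur, ctx_above = ctx_below = 1.
 * at a boundary it clamps to the edge line; otherwise it reads the
 * real context lines the caller placed around the slice. */
static void vblur_slice(const unsigned char *src, unsigned char *dst,
                        int stride, int width, int lines, int flags)
{
    for (int y = 0; y < lines; y++) {
        const unsigned char *cur = src + y*stride;
        const unsigned char *up  = (y == 0 && (flags & SLICE_AT_TOP))
                                   ? cur : src + (y-1)*stride;
        const unsigned char *dn  = (y == lines-1 && (flags & SLICE_AT_BOTTOM))
                                   ? cur : src + (y+1)*stride;
        for (int x = 0; x < width; x++)
            dst[y*stride + x] = (up[x] + 2*cur[x] + dn[x] + 2) / 4;
    }
}
```

the filter body contains only the algorithm; the slicing machinery,
context bookkeeping, and buffer management all live in the caller.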

> even if it's rewritten i don't see
> how extra context would do any good; the swscaler scales horizontally
> first into a temporary buffer, then vertically from that, so if it
> had vertical context that would do no good.
> one or two of the special converters in the swscaler might benefit --
> maybe, or maybe not -- but i think the advantages are tiny if they exist at all

with this design, swscaler could be factored into a separate
horizontal scaler and vertical scaler with no performance loss, as
long as the calling filter engine set up the right temporary buffer
between them. it would also be able to operate on any regions of the
image in any order, not just top-to-bottom or bottom-to-top full-width
slices. i think that's a pretty compelling argument in favor of an api
like this, even if swscaler wouldn't take advantage of it now (or
ever) -- just the fact that you _can_ do stuff like this.
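to illustrate the factoring (with trivial nearest-neighbor passes --
these function names are made up for the sketch, not swscaler's real
internals): the horizontal pass knows nothing about lines, the
vertical pass knows nothing about horizontal resampling, and the
engine just wires a temporary buffer between them.

```c
#include <assert.h>

/* horizontal pass: one source line in, one scaled line out */
static void hscale_line(const unsigned char *src, int sw,
                        unsigned char *dst, int dw)
{
    for (int x = 0; x < dw; x++)
        dst[x] = src[x * sw / dw];
}

/* vertical pass: selects source lines from the temporary buffer,
 * with no knowledge of how they were produced */
static void vscale(const unsigned char *tmp, int stride, int sh,
                   unsigned char *dst, int dh)
{
    for (int y = 0; y < dh; y++)
        for (int x = 0; x < stride; x++)
            dst[y*stride + x] = tmp[(y * sh / dh)*stride + x];
}
```

a real bilinear or polyphase scaler would need the per-filter context
declarations described above, but the separation of the two passes is
the same.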

> and last, i think this local filter stuff is independent of the actual
> filter system; it can easily be added on top of any filter system, and
> it's the filter system we need first. the localized wrapper can be designed
> and added after the actual filter system is in place and working

imo they're not independent. the api for writing filters is of the utmost
importance because it's what filter authors see, and writing filters
is very unattractive if it sucks. as long as the api is designed
right, you can redesign the calling filter system as many times as you
like in hopes of optimizing it more and more. on the other hand,
adding wrappers to create a new api will (usually) just make things
slower and clunkier.
