[Ffmpeg-devel] Embedded preview vhook/voutfb.so /dev/fb
Wed Mar 28 06:11:25 CEST 2007
On Wed, Mar 28, 2007 at 08:51:48AM +0900, Bobby Bingham wrote:
> Rich Felker wrote:
> >I won't block efforts to support libmpcodecs. But if there's a SoC
> >project I wish it could lead in a new direction.. Maybe that's just
> >wishful thinking. I will not be around to aid in SoC mentoring but I
> >could write up and leave my notes for others to examine.
> I've been hoping to work on a filter system for SoC, so I'd be very
> interested to see any such notes, especially any part that points out
> specific problems with the current libmpcodecs design and suggestions
> for improvements.
OK, the super-quick explanation of the design is that there would be 2
classes of filters, each subdivided into subclasses (class is just an
English word here, no C++ connotation..):
1. picture filters. these act on individual pictures and have no
knowledge or care that the pictures are a sequence of frames in a
movie; in fact, they could just as easily be used as still-picture
filters in a photo viewer or in GIMP...
2. picture sequence filters. these are aware of processing frame
sequences.. things like pts, order, neighboring frames, etc. no need
for one-to-one correspondence between input and output frames even.
The division allows the calling filter system to be very clever if it
wants to. Picture filters don't need to process frames in order, so
they could operate in decode order during out-of-order decoding.
A picture filter would just define a function (or functions) to
operate upon rectangular pixel arrays, in a set of supported image
formats (packed/planar/yuv/rgb/etc.). Optionally it could provide
separate functions for aligned and unaligned locations, and the filter
layer would automatically call the unaligned functions for boundary
pixels while using the optimized aligned functions (if available) for
most of the picture. These functions could also be called on slices
of the picture as they become available.
If the picture filter needs context of surrounding pixels (not just
the source rectangle), it would provide the caller an amount of needed
context, and the caller (the filtering engine) would ensure that this
much context was always available. The caller would be responsible (is
this a good idea?) for generating fake context at the boundaries of
the image in this case.
So far I've been thinking of a one-to-one correspondence between input
and output pixels, i.e. nothing like scaling. How to make this idea
work well with scaling is a bit of a research question.
Now, on to picture sequence filters....
These would be similar to picture filters, but also have an amount of
necessary 'context' along the time axis, i.e. a number of past and
future frames that need to be available in order for the filter to
operate. Again, what to do when the in/out correspondence isn't
one-to-one is a little tricky. Perhaps mimic the way scaling ends up
working along the spatial axes...
The idea is not very well-developed here; more thought is needed.
Finally, some rationale:
The main goal of this system is that filter implementation should be
trivial, so that people can write good filters (and make them perform
optimally, using DR (direct rendering), slices, etc.) without having
to be experts at frame
shuffling. All of the performance logic and frame handling logic can
be taken care of by the filter layer engine, and the filters can just
do their function and nothing else.
A huge side benefit is that the filters themselves are not tightly
linked to the implementation of the filter layer. Picture filters
especially can easily be taken out of the framework and used with a
much simpler caller, as part of GIMP or a photo-browser app or a video
game or emulator or any other program that could benefit from fancy,
high-performance image processing.
It's also extensible. If things change regarding memory performance
and what sort of rendering pipeline is optimal, the filters don't have
to be rewritten to follow. Instead, the calling filter engine just
needs to be changed.
Errm, I guess that wasn't super-quick. Hopefully it gets the point
across though. I thought all this out a couple years ago, but sadly
I've lost touch with development and never got the chance to
experiment with implementing it. I'd love to see something built on
these ideas though.
P.S. Note that, upon Michael's special request, my notes have been
delivered in C form. :)))))))))))