[FFmpeg-user] How to create trapezoid videos with ffmpeg?
nickrobbins at yahoo.com
Sat Oct 18 14:55:29 CEST 2014
> On Saturday, October 18, 2014 8:36 AM, Moritz Barsnick <barsnick at gmx.net> wrote:
> You basically got it right. (I happened to follow the thread on
> ffmpeg-devel, thanks for forwarding it.)
> a) An inverse calculation, like in your example given. But that's just
> totally over the top, no-one can be expected to be capable of such a
> calculation. I tried to figure it out from the code, but couldn't.
I happen to be a mathematician, and I've taught courses on projective geometry and perspective. If you know the right question to ask mathematically, it's straightforward, but the answer is a mess.
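To make the "right question" concrete: the filter's corner parameters determine a plane projective map (a homography), and the "inverse calculation" is just the homography built from the same four correspondences taken in the opposite direction. This is not ffmpeg's code, only a minimal exact-arithmetic sketch of that idea; the coordinate conventions and the example trapezoid are made up for illustration.

```python
# Sketch (not ffmpeg code): solve for the 8 coefficients of the projective map
#   (x, y) -> ((a*x + b*y + c) / (g*x + h*y + 1),
#              (d*x + e*y + f) / (g*x + h*y + 1))
# from four corner correspondences, then evaluate it and its inverse.
from fractions import Fraction

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting over exact rationals."""
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]

def homography(src, dst):
    """Coefficients [a..h] sending each src corner (x, y) to its dst corner (u, v)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    return solve(A, b)

def apply(h, x, y):
    a, b_, c, d, e, f, g, hh = h
    w = g * x + hh * y + 1
    return ((a * x + b_ * y + c) / w, (d * x + e * y + f) / w)

# Unit square -> trapezoid; the inverse map is the homography built from
# the swapped correspondences (four points, no three collinear, so it is
# uniquely determined and really is the inverse).
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(0, 0), (1, 0), (Fraction(3, 4), 1), (Fraction(1, 4), 1)]
fwd = homography(src, dst)
inv = homography(dst, src)
print(apply(fwd, 1, 1))  # the trapezoid corner that (1, 1) maps to
print(apply(inv, *apply(fwd, Fraction(1, 2), Fraction(1, 2))))  # round-trips
```

So "inverting" the parameters never requires inverting the formulas by hand: feed the corner lists to the same solver in the opposite order.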
> b) What I meant - and you got it totally right on ffmpeg-devel - is to
> re-use the code from the perspective filter, because it contains all
> the transformations, considerations, and colorspace cruft. Just the
> wrong parameters. So the questions regarding "how to re-use a filter's
> algorithms/mechanisms without duplicating its code" are spot-on.
I'm leaning toward making it an option to the perspective filter, so you can say how you want the parameters interpreted: as the locations in the source of the corners of the new video, or as the locations in the new video of the corners of the source.
I don't know anything about the colorspace stuff, but that could presumably be put in. If I understand correctly (a big if), the array pv in the context stores the inverse locations of the new image. If a location falls outside the original image, the pixel could just be set to [0,0,0,0] or whatever, but I don't know about color formats or anything.
If you want to handle that, I'll tackle the inverse transformation part.
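The out-of-range handling described above can be sketched independently of ffmpeg's actual per-plane, sub-pixel resampling. This is only a nearest-neighbor toy in Python, with a made-up RGBA representation, to show the control flow: map each destination pixel back through the inverse transform and emit a fully transparent pixel where the mapped point falls outside the source.

```python
# Toy sketch of the fill idea (not the perspective filter's real code, which
# interpolates per plane): walk the destination image, map each pixel back
# through the inverse transform, and write transparent RGBA for points that
# land outside the source.

def warp_with_alpha(src, inv_map, out_w, out_h):
    """src: 2-D list of (r, g, b, a) tuples; inv_map: (x, y) -> (sx, sy)."""
    src_h, src_w = len(src), len(src[0])
    out = []
    for y in range(out_h):
        row = []
        for x in range(out_w):
            sx, sy = inv_map(x, y)
            sx, sy = int(round(sx)), int(round(sy))  # nearest neighbor
            if 0 <= sx < src_w and 0 <= sy < src_h:
                row.append(src[sy][sx])
            else:
                row.append((0, 0, 0, 0))  # transparent "empty" area
        out.append(row)
    return out

# 2x2 opaque white source, shifted right by one pixel into a 3x2 output:
white = [[(255, 255, 255, 255)] * 2 for _ in range(2)]
shift = lambda x, y: (x - 1, y)  # inverse map of a +1 pixel translation
img = warp_with_alpha(white, shift, 3, 2)
print(img[0])  # first column transparent, the rest opaque white
```

With an alpha channel filled in like this, the warped frame could then be overlaid on another stream, which is what the original poster was after.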
> That said, I pointed out that the perspective filter is doing peculiar
> things with the color edges when using such "negative" parameters. Take
> a look at my testsrc example's output. Possibly the perspective filter
> was only written with "inside" reference points in mind. The opposite
> filter would also need to be able to fill the remaining "empty" space
> with some kind of transparency/alpha channel, so that overlaying the
> warped frame over another stream is possible. That's basically what the
> original poster was looking for, and what makes sense to me from a user
> perspective.
> I'm happy to try coding, but don't know nearly enough about the libav*
> code to cope. ;-)