[FFmpeg-devel] [PATCH] Add alphaextract, alphamerge filters

Steven Robertson steven at strobe.cc
Sun Jul 15 00:24:49 CEST 2012


On Wed, Jul 11, 2012 at 1:21 PM, Nicolas George
<nicolas.george at normalesup.org> wrote:
>> +For example, to reconstruct full frames from a normal YUV-encoded video
>> +and a separate video created with @var{alphaextract}, you might use:
>> +@example
>> +movie=in_alpha.mkv [alpha]; [in][alpha] alphamerge [out]
>> +@end example
>> +
>> +Since this filter is designed for reconstruction, it operates on frame
>> +sequences without considering timestamps, and terminates when either
>
> Does this work? I know some encoders will drop frames if they are identical
> to the previous one. If the encoder for the image makes different decisions
> from the encoder for the alpha, the sequences of frames will not be in sync.
> Using timestamps would be safer.

It works in all cases that I've tested. I agree that timestamps would guard
against that problem, but I also agree that they would be more complicated ;)
I'd prefer to get this in in its simpler form and wait for that API to become
available.

>> +static void draw_slice(AVFilterLink *link, int y0, int h, int slice_dir)
>> +{
>> +    AlphaExtractContext *extract = link->dst->priv;
>> +    AVFilterBufferRef *cur_buf = link->cur_buf;
>> +    AVFilterBufferRef *out_buf = link->dst->outputs[0]->out_buf;
>> +
>> +    if (extract->is_packed_rgb) {
>> +        int x, y, pin, pout;
>> +        for (y = y0; y < (y0 + h); y++) {
>> +            for (x = 0; x < out_buf->video->w; x++) {
>> +                pin = y * cur_buf->linesize[0] + x * 4 + extract->rgba_map[A];
>> +                pout = y * out_buf->linesize[0] + x;
>> +                out_buf->data[0][pout] = cur_buf->data[0][pin];
>> +            }
>> +        }
>
> Unless I am mistaken, that is the place where you could have changed pin and
> pout to pointers and did not.

Fixed.
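
For reference, the pointer-based loop now looks roughly like this (a sketch using the names from the quoted hunk; the attached patch is authoritative and may differ in detail):

    static void draw_slice(AVFilterLink *link, int y0, int h, int slice_dir)
    {
        AlphaExtractContext *extract = link->dst->priv;
        AVFilterBufferRef *cur_buf = link->cur_buf;
        AVFilterBufferRef *out_buf = link->dst->outputs[0]->out_buf;

        if (extract->is_packed_rgb) {
            int x, y;
            /* advance pointers instead of recomputing offsets per pixel */
            uint8_t *pin  = cur_buf->data[0] + y0 * cur_buf->linesize[0]
                          + extract->rgba_map[A];
            uint8_t *pout = out_buf->data[0] + y0 * out_buf->linesize[0];
            for (y = y0; y < y0 + h; y++) {
                uint8_t *pa = pin, *po = pout;
                for (x = 0; x < out_buf->video->w; x++) {
                    *po++ = *pa;
                    pa += 4;                  /* packed RGBA: next pixel */
                }
                pin  += cur_buf->linesize[0];
                pout += out_buf->linesize[0];
            }
        }
    }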

>> +static void draw_frame(AVFilterContext *ctx,
>> +                       AVFilterBufferRef *main_buf,
>> +                       AVFilterBufferRef *alpha_buf)
>> +{
>> +    AlphaMergeContext *merge = ctx->priv;
>> +    int h = main_buf->video->h;
>> +
>> +    if (merge->is_packed_rgb) {
>> +        int x, y;
>> +        uint8_t *pin, *pout;
>> +        for (y = 0; y < h && y < alpha_buf->video->h; y++) {
>> +            for (x = 0; x < main_buf->video->w && x < alpha_buf->video->w; x++) {
>
> IMHO, you could just let config_output fail if the inputs have different
> sizes.

Done.
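
Concretely, config_output now rejects mismatched input sizes, along these lines (a sketch; the error message wording is illustrative):

    static int config_output(AVFilterLink *outlink)
    {
        AVFilterContext *ctx = outlink->src;
        AVFilterLink *mainlink  = ctx->inputs[0];
        AVFilterLink *alphalink = ctx->inputs[1];

        if (mainlink->w != alphalink->w || mainlink->h != alphalink->h) {
            av_log(ctx, AV_LOG_ERROR,
                   "Input frame sizes do not match (%dx%d vs %dx%d).\n",
                   mainlink->w, mainlink->h, alphalink->w, alphalink->h);
            return AVERROR(EINVAL);
        }

        outlink->w = mainlink->w;
        outlink->h = mainlink->h;
        return 0;
    }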

>> +                pin = alpha_buf->data[0] + y * alpha_buf->linesize[0] + x;
>> +                pout = main_buf->data[0] + y * main_buf->linesize[0]
>> +                     + x * 4 + merge->rgba_map[A];
>> +                *pout = *pin;
>
> I believe you could be more efficient and more readable with something like
> that:
>
>     pin = alpha_buf->data[0];
>     pout = main_buf->data[0] + merge->rgba_map[A];
>     for (y) {
>         for (x) {
>             *pout = *pin;
>             pin += 1;
>             pout += 4;
>         }
>         pin += alpha_buf->linesize[0] - w;
>         pout += main_buf->linesize[0] - w * 4;
>     }

Done.
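
The packed-RGB branch of draw_frame now follows that pointer walk; roughly (a sketch, with a single w/h thanks to the size check above):

    int x, y;
    int w = main_buf->video->w, h = main_buf->video->h;
    uint8_t *pin  = alpha_buf->data[0];
    uint8_t *pout = main_buf->data[0] + merge->rgba_map[A];

    for (y = 0; y < h; y++) {
        for (x = 0; x < w; x++) {
            *pout = *pin;                     /* copy alpha into the A byte */
            pin  += 1;
            pout += 4;                        /* 4 bytes per packed pixel */
        }
        pin  += alpha_buf->linesize[0] - w;   /* skip row padding */
        pout += main_buf->linesize[0] - w * 4;
    }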

>> +static int request_frame(AVFilterLink *outlink)
>> +{
>> +    AVFilterContext *ctx = outlink->src;
>> +    int ret;
>> +    ret = ff_request_frame(ctx->inputs[1]);
>> +    if (ret < 0)
>> +        return ret;
>> +    ret = ff_request_frame(ctx->inputs[0]);
>
> I am afraid this is wrong: you can end up here when you already have a frame
> in inputs[1] and you just need one on inputs[0]. Requesting another frame on
> inputs[1] may fail with EAGAIN or EOF, and you'll end up forwarding that
> error instead of requesting a frame on inputs[0].
>
> The code should look somewhat like that:
>
>     merge->frame_requested = 1;
>     while (merge->frame_requested) {
>         in = ff_bufqueue_peek(&merge->queue_main, 0) ? 0 : 1;
>         ret = ff_request_frame(ctx->inputs[in]);
>         if (ret < 0)
>             return ret;
>     }

I'm not sure I get the intent here. I came up with this, which I believe will
work for the cases I can think of. If it doesn't, can you help me understand
why?

    if (!ff_bufqueue_peek(&merge->queue_alpha, 0)) {
        ret = ff_request_frame(ctx->inputs[1]);
        if (ret < 0)
            return ret;
    }
    if (!ff_bufqueue_peek(&merge->queue_main, 0)) {
        ret = ff_request_frame(ctx->inputs[0]);
        if (ret < 0)
            return ret;
    }
    return 0;

>> +    .outputs   = (const AVFilterPad[]) {
>> +      { .name               = "default",
>> +        .type               = AVMEDIA_TYPE_VIDEO,
>> +          .config_props     = config_output,
>
> Strange indentation.

Fixed.
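
i.e. the pad initializer is now aligned along these lines (sketch; the rest of the array is elided as in the quoted hunk):

    .outputs   = (const AVFilterPad[]) {
        { .name         = "default",
          .type         = AVMEDIA_TYPE_VIDEO,
          .config_props = config_output, },
        ...
    },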

Thanks,
Steve
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0001-Add-alphaextract-alphamerge-filters.patch
Type: application/octet-stream
Size: 17828 bytes
Desc: not available
URL: <http://ffmpeg.org/pipermail/ffmpeg-devel/attachments/20120714/acdecd37/attachment.obj>

