[FFmpeg-devel] [PATCH 5/5] src_movie: implement multiple outputs.

Stefano Sabatini stefasab at gmail.com
Fri Jul 20 15:52:26 CEST 2012


On date Thursday 2012-07-19 14:47:46 +0200, Nicolas George encoded:
> The audio and video code paths were too different,
> most of the decoding has been rewritten.
> 
> Signed-off-by: Nicolas George <nicolas.george at normalesup.org>
> ---
>  doc/filters.texi        |   51 ++--
>  libavfilter/src_movie.c |  689 +++++++++++++++++++++++++++--------------------
>  2 files changed, 420 insertions(+), 320 deletions(-)
> 
> 
> Merging the two code paths allows the feature to be implemented at a total
> cost of ~100 lines of code. I do not think that splitting the patch into
> "merge the code paths" and "multiple outputs" would make much sense, the
> second patch would end up overwriting most of the first one anyway. And it
> would actually be a lot of work for no actual result.
> 
> The reason to disable looping with multiple outputs is the problem with
> streams of slightly different duration; see what I wrote in the
> documentation part of the concat filter in a recent patch.
> 
> I also believe that the original code was bogus at the looping-flushing
> interaction. Actually, on one of my test files, the current version
> segfaults when flushing; I did not try to find out why.
> 
> Regards,
> 
> -- 
>   Nicolas George
> 
> 
> diff --git a/doc/filters.texi b/doc/filters.texi
> index 4a6c092..92a57b5 100644
> --- a/doc/filters.texi
> +++ b/doc/filters.texi
> @@ -960,35 +960,8 @@ aevalsrc="0.1*sin(2*PI*(360-2.5/2)*t) : 0.1*sin(2*PI*(360+2.5/2)*t)"
>  
>  @section amovie
>  
> -Read an audio stream from a movie container.
> -
> -It accepts the syntax: @var{movie_name}[:@var{options}] where
> -@var{movie_name} is the name of the resource to read (not necessarily
> -a file but also a device or a stream accessed through some protocol),
> -and @var{options} is an optional sequence of @var{key}=@var{value}
> -pairs, separated by ":".
> -
> -The description of the accepted options follows.
> -
> -@table @option
> -
> -@item format_name, f
> -Specify the format assumed for the movie to read, and can be either
> -the name of a container or an input device. If not specified the
> -format is guessed from @var{movie_name} or by probing.
> -
> -@item seek_point, sp
> -Specify the seek point in seconds, the frames will be output
> -starting from this seek point, the parameter is evaluated with
> -@code{av_strtod} so the numerical value may be suffixed by an IS
> -postfix. Default value is "0".
> -
> -@item stream_index, si
> -Specify the index of the audio stream to read. If the value is -1,
> -the best suited audio stream will be automatically selected. Default
> -value is "-1".
> -
> -@end table
> +This is the same as @ref{src_movie} source, except it selects an audio
> +stream by default.
>  
>  @section anullsrc
>  
> @@ -3614,9 +3587,10 @@ to the pad with identifier "in".
>  "color=c=red@@0.2:s=qcif:r=10 [color]; [in][color] overlay [out]"
>  @end example
>  
> +@anchor{src_movie}
>  @section movie
>  
> -Read a video stream from a movie container.
> +Read audio and/or video stream(s) from a movie container.
>  
>  It accepts the syntax: @var{movie_name}[:@var{options}] where
>  @var{movie_name} is the name of the resource to read (not necessarily
> @@ -3639,13 +3613,22 @@ starting from this seek point, the parameter is evaluated with
>  @code{av_strtod} so the numerical value may be suffixed by an IS
>  postfix. Default value is "0".
>  
> +@item streams, s

> +Specifies the streams to read. Several streams streams can be specified,

duplicated "streams"

> +separated by "+". The source will then have as many outputs, in the same

Using "+" as separator may conflict with "+" characters present in the
specifier itself (e.g. when it is part of the ID), so some
escaping/de-escaping may be needed (add a TODO just in case).

> +order. The syntax is explained in the @ref{Stream specifiers} chapter. Two
> +special names, "dv" and "da" specify respectively the default (best suited)
> +video and audio stream. Default is "dv", or "da" if the filter is called as
> +"amovie".
> +
>  @item stream_index, si
>  Specifies the index of the video stream to read. If the value is -1,
>  the best suited video stream will be automatically selected. Default
> -value is "-1".
> +value is "-1". Deprecated. If the filter is called "amovie", it will select
> +audio instead of video.
>  
>  @item loop
> -Specifies how many times to read the video stream in sequence.
> +Specifies how many times to read the stream in sequence.
>  If the value is less than 1, the stream will be read again and again.
>  Default value is "1".
>  
> @@ -3674,6 +3657,10 @@ movie=in.avi:seek_point=3.2, scale=180:-1, setpts=PTS-STARTPTS [movie];
>  movie=/dev/video0:f=video4linux2, scale=180:-1, setpts=PTS-STARTPTS [movie];
>  [in] setpts=PTS-STARTPTS, [movie] overlay=16:16 [out]
>  
> +# read the first video stream and the audio stream with id 0x81 from
> +# dvd.vob; the video is connected to the pad named "video" and the audio is
> +# connected to the pad named "audio":
> +movie=dvd.vob:s=v:0+#0x81 [video] [audio]
>  @end example
>  
>  @section mptestsrc
> diff --git a/libavfilter/src_movie.c b/libavfilter/src_movie.c
> index b1d8fd3..982c64a 100644
> --- a/libavfilter/src_movie.c
> +++ b/libavfilter/src_movie.c
> @@ -25,15 +25,16 @@
>   *
>   * @todo use direct rendering (no allocation of a new frame)
>   * @todo support a PTS correction mechanism
> - * @todo support more than one output stream
>   */
>  
>  /* #define DEBUG */
>  
>  #include <float.h>
>  #include "libavutil/avstring.h"
> +#include "libavutil/avassert.h"
>  #include "libavutil/opt.h"
>  #include "libavutil/imgutils.h"
> +#include "libavutil/timestamp.h"
>  #include "libavformat/avformat.h"
>  #include "audio.h"
>  #include "avcodec.h"
> @@ -42,11 +43,10 @@
>  #include "internal.h"
>  #include "video.h"
>  
> -typedef enum {
> -    STATE_DECODING,
> -    STATE_FLUSHING,
> -    STATE_DONE,
> -} MovieState;
> +typedef struct {
> +    AVStream *st;
> +    int done;
> +} MovieStream;
>  
>  typedef struct {
>      /* common A/V fields */
> @@ -55,22 +55,18 @@ typedef struct {
>      double seek_point_d;
>      char *format_name;
>      char *file_name;
> -    int stream_index;

> +    char *streams;

streams_str?

> +    int stream_index; /**< for compatibility */
>      int loop_count;
>  
>      AVFormatContext *format_ctx;
> -    AVCodecContext *codec_ctx;
> -    MovieState state;
> +    int eof;
> +    AVPacket pkt, pkt0;
>      AVFrame *frame;   ///< video frame to store the decoded images in
>  
> -    /* video-only fields */
> -    int w, h;
> -    AVFilterBufferRef *picref;
> -
> -    /* audio-only fields */
> -    int bps;            ///< bytes per sample
> -    AVPacket pkt, pkt0;
> -    AVFilterBufferRef *samplesref;
> +    int max_stream_index;

> +    MovieStream *st;

please call this "streams" (so that the reader knows it's a set), "st"
is usually used for a single stream.

> +    int *out_index;
>  } MovieContext;
>  
>  #define OFFSET(x) offsetof(MovieContext, x)
> @@ -78,8 +74,10 @@ typedef struct {
>  static const AVOption movie_options[]= {
>  {"format_name",  "set format name",         OFFSET(format_name),  AV_OPT_TYPE_STRING, {.str =  0},  CHAR_MIN, CHAR_MAX },
>  {"f",            "set format name",         OFFSET(format_name),  AV_OPT_TYPE_STRING, {.str =  0},  CHAR_MIN, CHAR_MAX },
> -{"stream_index", "set stream index",        OFFSET(stream_index), AV_OPT_TYPE_INT,    {.dbl = -1},  -1,       INT_MAX  },
> +{"streams",      "set streams",             OFFSET(streams),      AV_OPT_TYPE_STRING, {.str =  0},  CHAR_MAX, CHAR_MAX },
> +{"s",            "set streams",             OFFSET(streams),      AV_OPT_TYPE_STRING, {.str =  0},  CHAR_MAX, CHAR_MAX },
>  {"si",           "set stream index",        OFFSET(stream_index), AV_OPT_TYPE_INT,    {.dbl = -1},  -1,       INT_MAX  },
> +{"stream_index", "set stream index",        OFFSET(stream_index), AV_OPT_TYPE_INT,    {.dbl = -1},  -1,       INT_MAX  },
>  {"seek_point",   "set seekpoint (seconds)", OFFSET(seek_point_d), AV_OPT_TYPE_DOUBLE, {.dbl =  0},  0,        (INT64_MAX-1) / 1000000 },
>  {"sp",           "set seekpoint (seconds)", OFFSET(seek_point_d), AV_OPT_TYPE_DOUBLE, {.dbl =  0},  0,        (INT64_MAX-1) / 1000000 },
>  {"loop",         "set loop count",          OFFSET(loop_count),   AV_OPT_TYPE_INT,    {.dbl =  1},  0,        INT_MAX  },
> @@ -88,14 +86,91 @@ static const AVOption movie_options[]= {
>  
>  AVFILTER_DEFINE_CLASS(movie);
>  
> -static av_cold int movie_common_init(AVFilterContext *ctx, const char *args,
> -                                     enum AVMediaType type)
> +static int movie_config_output_props(AVFilterLink *outlink);
> +static int movie_request_frame(AVFilterLink *outlink);
> +
> +static AVStream *find_stream(void *log, AVFormatContext *avf, const char *spec)
> +{
> +    int i, ret, already = 0, stream_id = -1;
> +    char type_char, dummy;
> +    AVStream *found = NULL;
> +    enum AVMediaType type;
> +
> +    ret = sscanf(spec, "d%[av]%d%c", &type_char, &stream_id, &dummy);

this syntax is not documented for ret > 1 (e.g. "da2"); I suppose that
is on purpose. It would be useful to support the syntax:
b<type><stream_index>

or something that maps av_find_best_stream() facilities?

> +    if (ret >= 1 && ret <= 2) {

ret == 1 || ret == 2?

> +        type = type_char == 'v' ? AVMEDIA_TYPE_VIDEO : AVMEDIA_TYPE_AUDIO;
> +        ret = av_find_best_stream(avf, type, stream_id, -1, NULL, 0);
> +        if (ret < 0) {
> +            av_log(log, AV_LOG_ERROR, "No %s stream with index '%d' found\n",
> +                   av_get_media_type_string(type), stream_id);
> +            return NULL;
> +        }
> +        return avf->streams[ret];
> +    }
> +    for (i = 0; i < avf->nb_streams; i++) {
> +        ret = avformat_match_stream_specifier(avf, avf->streams[i], spec);
> +        if (ret < 0) {
> +            av_log(log, AV_LOG_ERROR,
> +                   "Invalid stream specifier \"%s\"\n", spec);
> +            return NULL;
> +        }
> +        if (!ret)
> +            continue;
> +        if (avf->streams[i]->discard != AVDISCARD_ALL) {
> +            already++;
> +            continue;
> +        }
> +        if (found) {
> +            av_log(log, AV_LOG_WARNING,
> +                   "Ambiguous stream specifier \"%s\", using #%d\n", spec, i);
> +            break;
> +        }
> +        found = avf->streams[i];
> +    }
> +    if (!found) {
> +        av_log(log, AV_LOG_WARNING, "Stream specifier \"%s\" %s\n", spec,
> +               already ? "matched only already used streams" :
> +                         "did not match any stream");
> +        return NULL;
> +    }
> +    if (found->codec->codec_type != AVMEDIA_TYPE_VIDEO &&
> +        found->codec->codec_type != AVMEDIA_TYPE_AUDIO) {
> +        av_log(log, AV_LOG_ERROR, "Stream specifier \"%s\" matched a %s stream,"
> +               "currently unsupported by libavfilter.\n", spec,
> +               av_get_media_type_string(found->codec->codec_type));
> +        return NULL;
> +    }
> +    return found;
> +}
> +
> +static int open_stream(void *log, MovieStream *st)
> +{
> +    AVCodec *codec;
> +    int ret;
> +
> +    codec = avcodec_find_decoder(st->st->codec->codec_id);
> +    if (!codec) {
> +        av_log(log, AV_LOG_ERROR, "Failed to find any codec\n");
> +        return AVERROR(EINVAL);
> +    }
> +
> +    if ((ret = avcodec_open2(st->st->codec, codec, NULL)) < 0) {
> +        av_log(log, AV_LOG_ERROR, "Failed to open codec\n");
> +        return ret;
> +    }
> +
> +    return 0;
> +}
> +
> +static av_cold int movie_init(AVFilterContext *ctx, const char *args)
>  {
>      MovieContext *movie = ctx->priv;
>      AVInputFormat *iformat = NULL;
> -    AVCodec *codec;
>      int64_t timestamp;
> -    int ret;
> +    int nb_streams, ret, i;
> +    char default_streams[16], *streams, *spec, *cursor;
> +    char name[16];
> +    AVStream *st;
>  
>      movie->class = &movie_class;
>      av_opt_set_defaults(movie);
> @@ -114,6 +189,22 @@ static av_cold int movie_common_init(AVFilterContext *ctx, const char *args,
>  
>      movie->seek_point = movie->seek_point_d * 1000000 + 0.5;
>  
> +    streams = movie->streams;
> +    if (!streams) {
> +        snprintf(default_streams, sizeof(default_streams), "d%c%d",
> +                 !strcmp(ctx->filter->name, "amovie") ? 'a' : 'v',
> +                 movie->stream_index);
> +        streams = default_streams;
> +    }
> +    for (cursor = streams, nb_streams = 1; *cursor; cursor++)
> +        if (*cursor == '+')
> +            nb_streams++;
> +

> +    if (movie->loop_count != 1 && nb_streams != 1) {
> +        av_log(ctx, AV_LOG_ERROR, "Can not loop with several streams.\n");
> +        return AVERROR_PATCHWELCOME;
> +    }

Uhm, why not? (but I suppose it will be far more complex, and thus
should be addressed in another patch).

> +
>      av_register_all();
>  
>      // Try to find the movie format (container)
> @@ -148,347 +239,369 @@ static av_cold int movie_common_init(AVFilterContext *ctx, const char *args,
>          }
>      }
>  
> -    /* select the media stream */
> -    if ((ret = av_find_best_stream(movie->format_ctx, type,
> -                                   movie->stream_index, -1, NULL, 0)) < 0) {
> -        av_log(ctx, AV_LOG_ERROR, "No %s stream with index '%d' found\n",
> -               av_get_media_type_string(type), movie->stream_index);
> -        return ret;
> +    for (i = 0; i < movie->format_ctx->nb_streams; i++)
> +        movie->format_ctx->streams[i]->discard = AVDISCARD_ALL;
> +
> +    movie->st = av_calloc(nb_streams, sizeof(*movie->st));
> +    if (!movie->st)
> +        return AVERROR(ENOMEM);
> +
> +    for (i = 0; i < nb_streams; i++) {
> +        spec = av_strtok(streams, "+", &cursor);
> +        if (!spec)
> +            return AVERROR_BUG;
> +        streams = NULL; /* for next strtok */
> +        st = find_stream(ctx, movie->format_ctx, spec);
> +        if (!st)
> +            return AVERROR(EINVAL);
> +        st->discard = AVDISCARD_DEFAULT;
> +        movie->st[i].st = st;
> +        movie->max_stream_index = FFMAX(movie->max_stream_index, st->index);
>      }
> -    movie->stream_index = ret;
> -    movie->codec_ctx = movie->format_ctx->streams[movie->stream_index]->codec;
> -
> -    /*
> -     * So now we've got a pointer to the so-called codec context for our video
> -     * stream, but we still have to find the actual codec and open it.
> -     */
> -    codec = avcodec_find_decoder(movie->codec_ctx->codec_id);
> -    if (!codec) {
> -        av_log(ctx, AV_LOG_ERROR, "Failed to find any codec\n");
> -        return AVERROR(EINVAL);
> +    if (av_strtok(NULL, "+", &cursor))
> +        return AVERROR_BUG;
> +

> +    movie->out_index = av_calloc(movie->max_stream_index + 1,
> +                                 sizeof(*movie->out_index));
> +    if (!movie->out_index)
> +        return AVERROR(ENOMEM);
> +    for (i = 0; i <= movie->max_stream_index; i++)
> +        movie->out_index[i] = -1;
> +    for (i = 0; i < nb_streams; i++)
> +        movie->out_index[movie->st[i].st->index] = i;
> +
> +    for (i = 0; i < nb_streams; i++) {
> +        AVFilterPad pad = { 0 };
> +        snprintf(name, sizeof(name), "out%d", i);
> +        pad.type          = movie->st[i].st->codec->codec_type;
> +        pad.name          = av_strdup(name);
> +        pad.config_props  = movie_config_output_props;
> +        pad.request_frame = movie_request_frame;
> +        ff_insert_outpad(ctx, i, &pad);
> +        ret = open_stream(ctx, &movie->st[i]);
> +        if (ret < 0)
> +            return ret;
>      }
>  
> -    if ((ret = avcodec_open2(movie->codec_ctx, codec, NULL)) < 0) {
> -        av_log(ctx, AV_LOG_ERROR, "Failed to open codec\n");
> -        return ret;
> +    if (!(movie->frame = avcodec_alloc_frame()) ) {
> +        av_log(log, AV_LOG_ERROR, "Failed to alloc frame\n");
> +        return AVERROR(ENOMEM);
>      }
>  
>      av_log(ctx, AV_LOG_VERBOSE, "seek_point:%"PRIi64" format_name:%s file_name:%s stream_index:%d\n",
>             movie->seek_point, movie->format_name, movie->file_name,
>             movie->stream_index);
>  
> -    if (!(movie->frame = avcodec_alloc_frame()) ) {
> -        av_log(ctx, AV_LOG_ERROR, "Failed to alloc frame\n");
> -        return AVERROR(ENOMEM);
> -    }
> -
>      return 0;
>  }
>  
> -static av_cold void movie_common_uninit(AVFilterContext *ctx)
> +static av_cold void movie_uninit(AVFilterContext *ctx)
>  {
>      MovieContext *movie = ctx->priv;
> +    int i;
>  
> -    av_free(movie->file_name);
> -    av_free(movie->format_name);
> -    if (movie->codec_ctx)
> -        avcodec_close(movie->codec_ctx);
> +    for (i = 0; i < ctx->nb_outputs; i++) {
> +        av_freep(&ctx->output_pads[i].name);
> +        if (movie->st[i].st)
> +            avcodec_close(movie->st[i].st->codec);
> +    }

> +    av_freep(&movie->file_name);
> +    av_freep(&movie->format_name);
> +    av_freep(&movie->streams);

av_opt_free(movie);

> +    av_freep(&movie->st);
> +    av_freep(&movie->out_index);
> +    av_freep(&movie->frame);
>      if (movie->format_ctx)
>          avformat_close_input(&movie->format_ctx);
> -
> -    avfilter_unref_buffer(movie->picref);
> -    av_freep(&movie->frame);
> -
> -    avfilter_unref_buffer(movie->samplesref);
> -}
> -
> -#if CONFIG_MOVIE_FILTER
> -
> -static av_cold int movie_init(AVFilterContext *ctx, const char *args)
> -{
> -    MovieContext *movie = ctx->priv;
> -    int ret;
> -
> -    if ((ret = movie_common_init(ctx, args, AVMEDIA_TYPE_VIDEO)) < 0)
> -        return ret;
> -
> -    movie->w = movie->codec_ctx->width;
> -    movie->h = movie->codec_ctx->height;
> -
> -    return 0;
>  }
>  
>  static int movie_query_formats(AVFilterContext *ctx)
>  {
>      MovieContext *movie = ctx->priv;
> -    enum PixelFormat pix_fmts[] = { movie->codec_ctx->pix_fmt, PIX_FMT_NONE };
> +    int list[] = { 0, -1 };
> +    int64_t list64[] = { 0, -1 };
> +    int i;
> +
> +    for (i = 0; i < ctx->nb_outputs; i++) {
> +        MovieStream *st = &movie->st[i];
> +        AVCodecContext *c = st->st->codec;

> +        AVFilterLink *link = ctx->outputs[i];

Nit: outlink?

> +
> +        switch (c->codec_type) {
> +        case AVMEDIA_TYPE_VIDEO:
> +            list[0] = c->pix_fmt;
> +            ff_formats_ref(ff_make_format_list(list), &link->in_formats);
> +            break;
> +        case AVMEDIA_TYPE_AUDIO:
> +            list[0] = c->sample_fmt;
> +            ff_formats_ref(ff_make_format_list(list), &link->in_formats);
> +            list[0] = c->sample_rate;
> +            ff_formats_ref(ff_make_format_list(list), &link->in_samplerates);
> +            list64[0] = c->channel_layout ? c->channel_layout :
> +                        av_get_default_channel_layout(c->channels);
> +            ff_channel_layouts_ref(avfilter_make_format64_list(list64),
> +                                   &link->in_channel_layouts);
> +            break;
> +        }
> +    }
>  
> -    ff_set_common_formats(ctx, ff_make_format_list(pix_fmts));
>      return 0;
>  }
>  
>  static int movie_config_output_props(AVFilterLink *outlink)
>  {
> -    MovieContext *movie = outlink->src->priv;
> -
> -    outlink->w = movie->w;
> -    outlink->h = movie->h;
> -    outlink->time_base = movie->format_ctx->streams[movie->stream_index]->time_base;
> +    AVFilterContext *ctx = outlink->src;
> +    MovieContext *movie  = ctx->priv;
> +    unsigned out_id = FF_OUTLINK_IDX(outlink);
> +    MovieStream *st = &movie->st[out_id];
> +    AVCodecContext *c = st->st->codec;
> +
> +    outlink->time_base = st->st->time_base;
> +
> +    switch (c->codec_type) {
> +    case AVMEDIA_TYPE_VIDEO:
> +        outlink->w          = c->width;
> +        outlink->h          = c->height;
> +        outlink->frame_rate = st->st->r_frame_rate;
> +        break;

> +    case AVMEDIA_TYPE_AUDIO:
> +        break;

pointless?

> +    }
>      return 0;
>  }
>  
> -static int movie_get_frame(AVFilterLink *outlink)
> +static AVFilterBufferRef *frame_to_buf(enum AVMediaType type, AVFrame *frame,
> +                                       AVFilterLink *outlink)
>  {
> -    MovieContext *movie = outlink->src->priv;
> -    AVPacket pkt;
> -    int ret = 0, frame_decoded;
> -    AVStream *st = movie->format_ctx->streams[movie->stream_index];
> -
> -    if (movie->state == STATE_DONE)
> -        return 0;
> -
> -    while (1) {
> -        if (movie->state == STATE_DECODING) {
> -            ret = av_read_frame(movie->format_ctx, &pkt);
> -            if (ret == AVERROR_EOF) {
> -                int64_t timestamp;
> -                if (movie->loop_count != 1) {
> -                    timestamp = movie->seek_point;
> -                    if (movie->format_ctx->start_time != AV_NOPTS_VALUE)
> -                        timestamp += movie->format_ctx->start_time;
> -                    if (av_seek_frame(movie->format_ctx, -1, timestamp, AVSEEK_FLAG_BACKWARD) < 0) {
> -                        movie->state = STATE_FLUSHING;
> -                    } else if (movie->loop_count>1)
> -                        movie->loop_count--;
> -                    continue;
> -                } else {
> -                    movie->state = STATE_FLUSHING;
> -                }
> -            } else if (ret < 0)
> -                break;
> -        }
> -
> -        // Is this a packet from the video stream?
> -        if (pkt.stream_index == movie->stream_index || movie->state == STATE_FLUSHING) {
> -            avcodec_decode_video2(movie->codec_ctx, movie->frame, &frame_decoded, &pkt);
> -
> -            if (frame_decoded) {
> -                /* FIXME: avoid the memcpy */
> -                movie->picref = ff_get_video_buffer(outlink, AV_PERM_WRITE | AV_PERM_PRESERVE |
> -                                                    AV_PERM_REUSE2, outlink->w, outlink->h);
> -                av_image_copy(movie->picref->data, movie->picref->linesize,
> -                              (void*)movie->frame->data,  movie->frame->linesize,
> -                              movie->picref->format, outlink->w, outlink->h);
> -                avfilter_copy_frame_props(movie->picref, movie->frame);
> -
> -                /* FIXME: use a PTS correction mechanism as that in
> -                 * ffplay.c when some API will be available for that */
> -                /* use pkt_dts if pkt_pts is not available */
> -                movie->picref->pts = movie->frame->pkt_pts == AV_NOPTS_VALUE ?
> -                    movie->frame->pkt_dts : movie->frame->pkt_pts;
> -
> -                if (!movie->frame->sample_aspect_ratio.num)
> -                    movie->picref->video->sample_aspect_ratio = st->sample_aspect_ratio;
> -                av_dlog(outlink->src,
> -                        "movie_get_frame(): file:'%s' pts:%"PRId64" time:%lf pos:%"PRId64" aspect:%d/%d\n",
> -                        movie->file_name, movie->picref->pts,
> -                        (double)movie->picref->pts * av_q2d(st->time_base),
> -                        movie->picref->pos,
> -                        movie->picref->video->sample_aspect_ratio.num,
> -                        movie->picref->video->sample_aspect_ratio.den);
> -                // We got it. Free the packet since we are returning
> -                av_free_packet(&pkt);
> -
> -                return 0;
> -            } else if (movie->state == STATE_FLUSHING) {
> -                movie->state = STATE_DONE;
> -                av_free_packet(&pkt);
> -                return AVERROR_EOF;
> -            }
> -        }
> -        // Free the packet that was allocated by av_read_frame
> -        av_free_packet(&pkt);
> +    AVFilterBufferRef *buf = NULL, *copy;
> +

> +    switch (type) {
> +    case AVMEDIA_TYPE_VIDEO:
> +        buf = avfilter_get_video_buffer_ref_from_frame(frame,
> +                                                       AV_PERM_WRITE |
> +                                                       AV_PERM_PRESERVE |
> +                                                       AV_PERM_REUSE2);
> +        break;
> +    case AVMEDIA_TYPE_AUDIO:
> +        buf = avfilter_get_audio_buffer_ref_from_frame(frame,
> +                                                       AV_PERM_WRITE |
> +                                                       AV_PERM_PRESERVE |
> +                                                       AV_PERM_REUSE2);
> +        break;

Any reason we don't have a unified avfilter_get_buffer_ref_from_frame()?

>      }
> -
> -    return ret;
> +    if (!buf)
> +        return NULL;
> +    buf->pts = av_frame_get_best_effort_timestamp(frame);
> +    copy = ff_copy_buffer_ref(outlink, buf);
> +    if (!copy)
> +        return NULL;
> +    buf->buf->data[0] = NULL; /* it belongs to the frame */
> +    avfilter_unref_buffer(buf);
> +    return copy;
>  }
>  
> -static int movie_request_frame(AVFilterLink *outlink)
> +static char *describe_bufref_to_str(char *dst, size_t dst_size,
> +                                    AVFilterBufferRef *buf,
> +                                    AVFilterLink *link)
>  {
> -    AVFilterBufferRef *outpicref;
> -    MovieContext *movie = outlink->src->priv;
> -    int ret;
> -
> -    if (movie->state == STATE_DONE)
> -        return AVERROR_EOF;
> -    if ((ret = movie_get_frame(outlink)) < 0)
> -        return ret;
> -
> -    outpicref = avfilter_ref_buffer(movie->picref, ~0);
> -    ff_start_frame(outlink, outpicref);
> -    ff_draw_slice(outlink, 0, outlink->h, 1);
> -    ff_end_frame(outlink);
> -    avfilter_unref_buffer(movie->picref);
> -    movie->picref = NULL;
> -
> -    return 0;

> +    switch (buf->type) {
> +    case AVMEDIA_TYPE_VIDEO:
> +        snprintf(dst, dst_size,
> +                 "video pts:%s time:%s pos:%"PRId64" size:%dx%d aspect:%d/%d",
> +                 av_ts2str(buf->pts), av_ts2timestr(buf->pts, &link->time_base),
> +                 buf->pos, buf->video->w, buf->video->h,
> +                 buf->video->sample_aspect_ratio.num,
> +                 buf->video->sample_aspect_ratio.den);
> +                 break;
> +    case AVMEDIA_TYPE_AUDIO:
> +        snprintf(dst, dst_size,
> +                 "audio pts:%s time:%s pos:%"PRId64" samples:%d",
> +                 av_ts2str(buf->pts), av_ts2timestr(buf->pts, &link->time_base),
> +                 buf->pos, buf->audio->nb_samples);
> +                 break;
> +    default:
> +        snprintf(dst, dst_size, "%s BUG", av_get_media_type_string(buf->type));
> +        break;
> +    }
> +    return dst;

Note: this could be turned into a shared function and used by
*showinfo, or maybe dropped altogether (since it basically duplicates
the *showinfo functionality, which predates it).

>  }
>  
> -AVFilter avfilter_vsrc_movie = {
> -    .name          = "movie",
> -    .description   = NULL_IF_CONFIG_SMALL("Read from a movie source."),
> -    .priv_size     = sizeof(MovieContext),
> -    .init          = movie_init,
> -    .uninit        = movie_common_uninit,
> -    .query_formats = movie_query_formats,
> -
> -    .inputs    = (const AVFilterPad[]) {{ .name = NULL }},
> -    .outputs   = (const AVFilterPad[]) {{ .name      = "default",
> -                                    .type            = AVMEDIA_TYPE_VIDEO,
> -                                    .request_frame   = movie_request_frame,
> -                                    .config_props    = movie_config_output_props, },
> -                                  { .name = NULL}},
> -};
> -
> -#endif  /* CONFIG_MOVIE_FILTER */
> -
> -#if CONFIG_AMOVIE_FILTER
> +#define describe_bufref(buf, link) \
> +    describe_bufref_to_str((char[1024]){0}, 1024, buf, link)
>  
> -static av_cold int amovie_init(AVFilterContext *ctx, const char *args)
> +static int rewind_file(AVFilterContext *ctx)
>  {
>      MovieContext *movie = ctx->priv;
> -    int ret;
> +    int64_t timestamp = movie->seek_point;
> +    int ret, i;
>  
> -    if ((ret = movie_common_init(ctx, args, AVMEDIA_TYPE_AUDIO)) < 0)
> +    if (movie->format_ctx->start_time != AV_NOPTS_VALUE)
> +        timestamp += movie->format_ctx->start_time;
> +    ret = av_seek_frame(movie->format_ctx, -1, timestamp, AVSEEK_FLAG_BACKWARD);
> +    if (ret < 0) {
> +        av_log(ctx, AV_LOG_ERROR, "Unable to loop: %s\n", av_err2str(ret));
> +        movie->loop_count = 1; /* do not try again */
>          return ret;
> +    }
>  
> -    movie->bps = av_get_bytes_per_sample(movie->codec_ctx->sample_fmt);
> +    for (i = 0; i < ctx->nb_outputs; i++) {
> +        avcodec_flush_buffers(movie->st[i].st->codec);
> +        movie->st[i].done = 0;
> +    }
> +    movie->eof = 0;
>      return 0;
>  }
>  
> -static int amovie_query_formats(AVFilterContext *ctx)
> +/**
> + * Try to push a frame the requested output.
> + *
> + * @return  1 if a frame was pushed on the requested output,
> + *          0 if another atempt is possible,

attempt

[...]

Could you send the whole file? That would be more readable.
-- 
FFmpeg = Forgiving & Fostering Magnificent Perfectionist Excellent God

