Opened 10 years ago

Closed 9 years ago

Last modified 8 years ago

#3116 closed defect (needs_more_info)

Resource leaks / handle leaks (Win7, avcodec_decode_video2, threads)

Reported by: q1q2q3q4ln
Owned by:
Priority: important
Component: undetermined
Version: git-master
Keywords: handle leaks
Cc: rogerdpack@gmail.com
Blocked By:
Blocking:
Reproduced by developer: no
Analyzed by developer: no

Description

Summary of the bug: resource leaks / handle leaks
How to reproduce:

1 The OS is Windows 7 (64-bit).
2 Use the API avcodec_decode_video2 to decode H.264 data.
3 Observe the process in the Windows Task Manager: the handle count increases by 2 with every playback (resource/handle leak). WinDbg shows the leaked handles were created by CreateEvent.
4 100% reproducible.
5 The FFmpeg builds used are ffmpeg-20131102-git-1fb3b49-Win32-dev and ffmpeg-20131102-git-1fb3b49-Win32-shared.
6 With the following modification the leak disappears, but multithreaded decoding can no longer be used:

/* workaround: disable threaded decoding before opening the codec */
pCodecCtxVideo->thread_count = 1;
if (avcodec_open2(pCodecCtxVideo, pCodecVideo, NULL) < 0) {
    ...
}

7 Perhaps avcodec_close does not free some per-thread resources (the leaked event handles).

Attachments (1)

2013_08_29-00_54_28_184__60039.zip (1.8 MB) - added by q1q2q3q4ln 10 years ago.
MP4 file, H264 1080x720


Change History (15)

comment:1 by Carl Eugen Hoyos, 10 years ago

Is this a regression (was it not reproducible with older versions of FFmpeg)?
Please provide a minimal test case.

comment:2 by q1q2q3q4ln, 10 years ago

The leak is also reproducible with ffmpeg-20130706.

by q1q2q3q4ln, 10 years ago

MP4 file, H264 1080x720

comment:3 by Carl Eugen Hoyos, 10 years ago

How can I reproduce the resource leaks using the file you uploaded?

comment:4 by q1q2q3q4ln, 10 years ago

You can write a program using the API (avcodec_decode_video2) that plays the file N times, calling the teardown APIs after every playback.
Then watch your program's handle count in the Windows Resource Monitor: you will find it increases each time a playback ends.

comment:5 by q1q2q3q4ln, 10 years ago

e.g. ffplay

comment:6 by djani.m, 10 years ago

Hello,

I can confirm the same behavior with ffmpeg-20131119-git-0dd8e96 (Zeranoe build); the handle leaks are visible in WinDbg, Task Manager, Process Explorer, etc.

I can also confirm that after applying the suggested workaround the leak is no longer present; in my case the workaround is sufficient.
I don't use H.264; I noticed the leak with QuickTime RLE, QuickTime PNG and AVI Xvid files.

comment:7 by Carl Eugen Hoyos, 10 years ago

How can I reproduce the handle leaks without writing a program myself? That is, please provide code that allows reproducing the problem, and please explain why it is not reproducible with ffplay and ffmpeg.

comment:8 by djani.m, 10 years ago

Hello,

Thanks for the quick reply. My code is part of a bigger project; I will prepare a standalone test case and attach it during the day.

comment:9 by djani.m, 10 years ago

Hi,

Here is the promised example; due to its file size I wasn't able to attach it. You can download it at https://drive.google.com/file/d/0B4U1Z_YH0sqqVmdhamhqMHRKRG8/edit?usp=sharing

I can't answer why the leak is not noticeable in ffplay and ffmpeg; if I could, it would be no problem for me to fix it from my code :)

One more piece of information: my colleague and I have tested this on 4 machines, 2 Win7 x64 and 2 XP. Interestingly, on 1 of the XP machines we cannot observe the leak, while on the other 3 it is noticeable.

The proposed workaround solves the problem on all machines.

If you need more info or testing, I will gladly provide you all help that I can.

Best Regards

comment:10 by Michael Niedermayer, 10 years ago

I think for this to get fixed, you will have to run it through a memory debugger on windows to find out what code allocates the thing that leaks
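For handle leaks specifically, one way to do this is WinDbg's handle-tracing extension. The command names below are real WinDbg commands; the workflow itself is only a suggested sketch:

```
!htrace -enable       $$ start recording stack traces for handle open/close
g                     $$ run one play cycle, then break back in
!htrace -snapshot     $$ take a baseline of currently open handles
g                     $$ run one more play cycle
!htrace -diff         $$ list handles opened since the snapshot, with call stacks
```

The `-diff` output shows the allocating call stack for each leaked handle, which is exactly the information requested here.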

comment:11 by Michael Niedermayer, 9 years ago

Resolution: needs_more_info
Status: new → closed

No reply for 4 months -> ticket closed.
Please reopen once you can provide the output of a memory debugger that identifies the file name and line number where the leaking entity is allocated.

comment:12 by relwin, 9 years ago

/* 
    Modified example code demuxing_decoding.c
    to expose 2 handle leaks when avcodec_decode_video2() executes within a thread, Bug #3116.
    Also demonstrates workaround fix, dec_ctx->thread_count = 1; 
    WindowsXP/7 + SDL2.0 threads, MinGW32, GCC 4.4.3
    ffmpeg-20150919-git-8c9853a-win32-dev
    
    #define THD_TEST -- use threaded code
    #define THD_TEST_BUGFIX -- adds workaround
    
 * Copyright (c) 2012 Stefano Sabatini
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 * THE SOFTWARE.
 */

/**
 * @file
 * Demuxing and decoding example.
 *
 * Show how to use the libavformat and libavcodec API to demux and
 * decode audio and video data.
 * @example demuxing_decoding.c
 */

#include <libavutil/imgutils.h>
#include <libavutil/samplefmt.h>
#include <libavutil/timestamp.h>
#include <libavformat/avformat.h>

static AVFormatContext *fmt_ctx = NULL;
static AVCodecContext *video_dec_ctx = NULL, *audio_dec_ctx;
static int width, height;
static enum AVPixelFormat pix_fmt;
static AVStream *video_stream = NULL, *audio_stream = NULL;
static const char *src_filename = NULL;
static const char *video_dst_filename = NULL;
static const char *audio_dst_filename = NULL;
static FILE *video_dst_file = NULL;
static FILE *audio_dst_file = NULL;

static uint8_t *video_dst_data[4] = {NULL};
static int      video_dst_linesize[4];
static int video_dst_bufsize;

static int video_stream_idx = -1, audio_stream_idx = -1;
static AVFrame *frame = NULL;
static AVPacket pkt;
static int video_frame_count = 0;
static int audio_frame_count = 0;

/* Enable or disable frame reference counting. You are not supposed to support
 * both paths in your application but pick the one most appropriate to your
 * needs. Look for the use of refcount in this example to see what are the
 * differences of API usage between them. */
static int refcount = 0;

/*=================== Thread Test =======================*/
#include <SDL2/SDL.h>
#include <SDL2/SDL_thread.h>
#ifdef __MINGW32__
#undef main /* Prevents SDL from overriding main() */
#endif
#define THD_TEST
#define THD_TEST_BUGFIX

typedef int (SDLCALL *ThreadProc)(void *);

static SDL_Thread* thread_start(ThreadProc fn, void* userdata, const char* name)
{
    SDL_threadID threadID;
    SDL_assert(fn != NULL);

    SDL_Thread* thread = SDL_CreateThread(fn, name, userdata);
    if (thread == NULL)
    {
        fprintf(stderr, "SDL: Failed to run '%s' thread - %s\n", name, SDL_GetError());
    }
    else
    {
        threadID = SDL_GetThreadID(thread);
        fprintf(stderr,"Thread %14s ID=%08x, TID=%08x\n",name, (uint32_t) threadID, (uint32_t)thread);
    }
    return thread;
}

static int thread_wait(SDL_Thread* tid, const char* name)
{
    int status;

    SDL_threadID threadID = SDL_GetThreadID(tid);
    fprintf(stderr,"Thread wait %14s ID=%08x, TID=%08x\n",name, (uint32_t) threadID, (uint32_t)tid);
    SDL_WaitThread(tid, &status);
    if(status)
    {
        fprintf(stderr,"Thread start '%s' finished with status %d\n", name, status);
    }
    return status;
}
/*=========================================*/

static int decode_packet(int *got_frame, int cached)
{
    int ret = 0;
    int decoded = pkt.size;

    *got_frame = 0;

    if (pkt.stream_index == video_stream_idx) {
        /* decode video frame */
        ret = avcodec_decode_video2(video_dec_ctx, frame, got_frame, &pkt);
        if (ret < 0) {
            fprintf(stderr, "Error decoding video frame (%s)\n", av_err2str(ret));
            return ret;
        }

        if (*got_frame) {

            if (frame->width != width || frame->height != height ||
                frame->format != pix_fmt) {
                /* To handle this change, one could call av_image_alloc again and
                 * decode the following frames into another rawvideo file. */
                fprintf(stderr, "Error: Width, height and pixel format have to be "
                        "constant in a rawvideo file, but the width, height or "
                        "pixel format of the input video changed:\n"
                        "old: width = %d, height = %d, format = %s\n"
                        "new: width = %d, height = %d, format = %s\n",
                        width, height, av_get_pix_fmt_name(pix_fmt),
                        frame->width, frame->height,
                        av_get_pix_fmt_name(frame->format));
                return -1;
            }

            printf("video_frame%s n:%d coded_n:%d pts:%s\n",
                   cached ? "(cached)" : "",
                   video_frame_count++, frame->coded_picture_number,
                   av_ts2timestr(frame->pts, &video_dec_ctx->time_base));

            /* copy decoded frame to destination buffer:
             * this is required since rawvideo expects non aligned data */
#ifndef THD_TEST
        //disabled when testing
            av_image_copy(video_dst_data, video_dst_linesize,
                          (const uint8_t **)(frame->data), frame->linesize,
                          pix_fmt, width, height);

            // write to rawvideo file
            fwrite(video_dst_data[0], 1, video_dst_bufsize, video_dst_file);
#endif // THD_TEST
        }
    } else if (pkt.stream_index == audio_stream_idx) {
        /* decode audio frame */
        ret = avcodec_decode_audio4(audio_dec_ctx, frame, got_frame, &pkt);
        if (ret < 0) {
            fprintf(stderr, "Error decoding audio frame (%s)\n", av_err2str(ret));
            return ret;
        }
        /* Some audio decoders decode only part of the packet, and have to be
         * called again with the remainder of the packet data.
         * Sample: fate-suite/lossless-audio/luckynight-partial.shn
         * Also, some decoders might over-read the packet. */
        decoded = FFMIN(ret, pkt.size);

        if (*got_frame) {
            size_t unpadded_linesize = frame->nb_samples * av_get_bytes_per_sample(frame->format);
            printf("audio_frame%s n:%d nb_samples:%d pts:%s\n",
                   cached ? "(cached)" : "",
                   audio_frame_count++, frame->nb_samples,
                   av_ts2timestr(frame->pts, &audio_dec_ctx->time_base));

            /* Write the raw audio data samples of the first plane. This works
             * fine for packed formats (e.g. AV_SAMPLE_FMT_S16). However,
             * most audio decoders output planar audio, which uses a separate
             * plane of audio samples for each channel (e.g. AV_SAMPLE_FMT_S16P).
             * In other words, this code will write only the first audio channel
             * in these cases.
             * You should use libswresample or libavfilter to convert the frame
             * to packed data. */
#ifndef THD_TEST
            //disabled when testing
            fwrite(frame->extended_data[0], 1, unpadded_linesize, audio_dst_file);
#endif // THD_TEST
        }
    }

    /* If we use frame reference counting, we own the data and need
     * to de-reference it when we don't use it anymore */
    if (*got_frame && refcount)
        av_frame_unref(frame);

    return decoded;
}



//sample task to decode
static int demux_thread_test(void* arg)
{
    int ret = 0, got_frame;
//return 0;                     //ignore decoding to test

    /* read frames from the file */
    while (av_read_frame(fmt_ctx, &pkt) >= 0) {
        AVPacket orig_pkt = pkt;
        do {
            ret = decode_packet(&got_frame, 0);
            if (ret < 0)
                break;
            pkt.data += ret;
            pkt.size -= ret;
        } while (pkt.size > 0);
        av_free_packet(&orig_pkt);
    }

    /* flush cached frames */
    pkt.data = NULL;
    pkt.size = 0;
    do {
        decode_packet(&got_frame, 1);
    } while (got_frame);

    return 0;
}



static int open_codec_context(int *stream_idx,
                              AVFormatContext *fmt_ctx, enum AVMediaType type)
{
    int ret, stream_index;
    AVStream *st;
    AVCodecContext *dec_ctx = NULL;
    AVCodec *dec = NULL;
    AVDictionary *opts = NULL;

    ret = av_find_best_stream(fmt_ctx, type, -1, -1, NULL, 0);
    if (ret < 0) {
        fprintf(stderr, "Could not find %s stream in input file '%s'\n",
                av_get_media_type_string(type), src_filename);
        return ret;
    } else {
        stream_index = ret;
        st = fmt_ctx->streams[stream_index];

        /* find decoder for the stream */
        dec_ctx = st->codec;
        dec = avcodec_find_decoder(dec_ctx->codec_id);
        if (!dec) {
            fprintf(stderr, "Failed to find %s codec\n",
                    av_get_media_type_string(type));
            return AVERROR(EINVAL);
        }

        /* Init the decoders, with or without reference counting */
        av_dict_set(&opts, "refcounted_frames", refcount ? "1" : "0", 0);

#ifdef THD_TEST_BUGFIX
        dec_ctx->thread_count = 1;  //this fixes thread bug
#endif // THD_TEST_BUGFIX

        if ((ret = avcodec_open2(dec_ctx, dec, &opts)) < 0) {
            fprintf(stderr, "Failed to open %s codec\n",
                    av_get_media_type_string(type));
            return ret;
        }
        *stream_idx = stream_index;
    }

    return 0;
}





static int get_format_from_sample_fmt(const char **fmt,
                                      enum AVSampleFormat sample_fmt)
{
    int i;
    struct sample_fmt_entry {
        enum AVSampleFormat sample_fmt; const char *fmt_be, *fmt_le;
    } sample_fmt_entries[] = {
        { AV_SAMPLE_FMT_U8,  "u8",    "u8"    },
        { AV_SAMPLE_FMT_S16, "s16be", "s16le" },
        { AV_SAMPLE_FMT_S32, "s32be", "s32le" },
        { AV_SAMPLE_FMT_FLT, "f32be", "f32le" },
        { AV_SAMPLE_FMT_DBL, "f64be", "f64le" },
    };
    *fmt = NULL;

    for (i = 0; i < FF_ARRAY_ELEMS(sample_fmt_entries); i++) {
        struct sample_fmt_entry *entry = &sample_fmt_entries[i];
        if (sample_fmt == entry->sample_fmt) {
            *fmt = AV_NE(entry->fmt_be, entry->fmt_le);
            return 0;
        }
    }

    fprintf(stderr,
            "sample format %s is not supported as output format\n",
            av_get_sample_fmt_name(sample_fmt));
    return -1;
}



int main (int argc, char **argv)
{
    int ret = 0, got_frame;

    if (argc != 4 && argc != 5) {
        fprintf(stderr, "usage: %s [-refcount] input_file video_output_file audio_output_file\n"
                "API example program to show how to read frames from an input file.\n"
                "This program reads frames from a file, decodes them, and writes decoded\n"
                "video frames to a rawvideo file named video_output_file, and decoded\n"
                "audio frames to a rawaudio file named audio_output_file.\n\n"
                "If the -refcount option is specified, the program use the\n"
                "reference counting frame system which allows keeping a copy of\n"
                "the data for longer than one decode call.\n"
                "\n", argv[0]);
        exit(1);
    }
    if (argc == 5 && !strcmp(argv[1], "-refcount")) {
        refcount = 1;
        argv++;
    }
    src_filename = argv[1];
    video_dst_filename = argv[2];
    audio_dst_filename = argv[3];

    /* register all formats and codecs */
    av_register_all();

    /* open input file, and allocate format context */
    if (avformat_open_input(&fmt_ctx, src_filename, NULL, NULL) < 0) {
        fprintf(stderr, "Could not open source file %s\n", src_filename);
        exit(1);
    }

    /* retrieve stream information */
    if (avformat_find_stream_info(fmt_ctx, NULL) < 0) {
        fprintf(stderr, "Could not find stream information\n");
        exit(1);
    }

    if (open_codec_context(&video_stream_idx, fmt_ctx, AVMEDIA_TYPE_VIDEO) >= 0) {
        video_stream = fmt_ctx->streams[video_stream_idx];
        video_dec_ctx = video_stream->codec;

        video_dst_file = fopen(video_dst_filename, "wb");
        if (!video_dst_file) {
            fprintf(stderr, "Could not open destination file %s\n", video_dst_filename);
            ret = 1;
            goto end;
        }

        /* allocate image where the decoded image will be put */
        width = video_dec_ctx->width;
        height = video_dec_ctx->height;
        pix_fmt = video_dec_ctx->pix_fmt;
        ret = av_image_alloc(video_dst_data, video_dst_linesize,
                             width, height, pix_fmt, 1);
        if (ret < 0) {
            fprintf(stderr, "Could not allocate raw video buffer\n");
            goto end;
        }
        video_dst_bufsize = ret;
    }

    if (open_codec_context(&audio_stream_idx, fmt_ctx, AVMEDIA_TYPE_AUDIO) >= 0) {
        audio_stream = fmt_ctx->streams[audio_stream_idx];
        audio_dec_ctx = audio_stream->codec;
        audio_dst_file = fopen(audio_dst_filename, "wb");
        if (!audio_dst_file) {
            fprintf(stderr, "Could not open destination file %s\n", audio_dst_filename);
            ret = 1;
            goto end;
        }
    }

    /* dump input information to stderr */
    av_dump_format(fmt_ctx, 0, src_filename, 0);

    if (!audio_stream && !video_stream) {
        fprintf(stderr, "Could not find audio or video stream in the input, aborting\n");
        ret = 1;
        goto end;
    }

    frame = av_frame_alloc();
    if (!frame) {
        fprintf(stderr, "Could not allocate frame\n");
        ret = AVERROR(ENOMEM);
        goto end;
    }

    /* initialize packet, set data to NULL, let the demuxer fill it */
    av_init_packet(&pkt);
    pkt.data = NULL;
    pkt.size = 0;

    if (video_stream)
        printf("Demuxing video from file '%s' into '%s'\n", src_filename, video_dst_filename);
    if (audio_stream)
        printf("Demuxing audio from file '%s' into '%s'\n", src_filename, audio_dst_filename);

#ifndef THD_TEST
    /* read frames from the file */
    while (av_read_frame(fmt_ctx, &pkt) >= 0) {
        AVPacket orig_pkt = pkt;
        do {
            ret = decode_packet(&got_frame, 0);
            if (ret < 0)
                break;
            pkt.data += ret;
            pkt.size -= ret;
        } while (pkt.size > 0);
        av_free_packet(&orig_pkt);
    }

    /* flush cached frames */
    pkt.data = NULL;
    pkt.size = 0;
    do {
        decode_packet(&got_frame, 1);
    } while (got_frame);
#endif
#ifdef THD_TEST
    // SDL setup, only threads used
    if( SDL_Init(0))
    {
        fprintf(stderr, "Could not initialize SDL - %s\n", SDL_GetError());
        exit(1);
    }

    SDL_Thread* demux_tid;
    demux_tid = thread_start( demux_thread_test, NULL, "decodetest");
    thread_wait(demux_tid, "decodetest");
#endif // THD_TEST

    printf("Demuxing succeeded.\n");

    if (video_stream) {
        printf("Play the output video file with the command:\n"
               "ffplay -f rawvideo -pix_fmt %s -video_size %dx%d %s\n",
               av_get_pix_fmt_name(pix_fmt), width, height,
               video_dst_filename);
    }

    if (audio_stream) {
        enum AVSampleFormat sfmt = audio_dec_ctx->sample_fmt;
        int n_channels = audio_dec_ctx->channels;
        const char *fmt;

        if (av_sample_fmt_is_planar(sfmt)) {
            const char *packed = av_get_sample_fmt_name(sfmt);
            printf("Warning: the sample format the decoder produced is planar "
                   "(%s). This example will output the first channel only.\n",
                   packed ? packed : "?");
            sfmt = av_get_packed_sample_fmt(sfmt);
            n_channels = 1;
        }

        if ((ret = get_format_from_sample_fmt(&fmt, sfmt)) < 0)
            goto end;

        printf("Play the output audio file with the command:\n"
               "ffplay -f %s -ac %d -ar %d %s\n",
               fmt, n_channels, audio_dec_ctx->sample_rate,
               audio_dst_filename);
    }

end:
    avcodec_close(video_dec_ctx);
    avcodec_close(audio_dec_ctx);
    avformat_close_input(&fmt_ctx);
    if (video_dst_file)
        fclose(video_dst_file);
    if (audio_dst_file)
        fclose(audio_dst_file);
    av_frame_free(&frame);
    av_free(video_dst_data[0]);

    return ret < 0;
}

comment:13 by Hendrik, 9 years ago

If you build your FFmpeg with pthreads, that's "normal". You can avoid it by using native w32threads in the build.
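For reference, whether pthreads or native w32threads are used is decided at configure time. A sketch of forcing a MinGW build onto w32threads (exact flags depend on the FFmpeg version and toolchain, so treat this as an assumption to verify against `./configure --help`):

```
# Disable pthreads detection; on Windows targets configure should then
# fall back to the native w32threads implementation.
./configure --disable-pthreads ...other options...
```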

comment:14 by Roger Pack, 8 years ago

Cc: rogerdpack@gmail.com added

Why do you say it's normal? Is there some documentation of this expected behavior? Is this a bug in win32-pthreads?
