[FFmpeg-devel] [PATCH 2/6] Frame-based multithreading framework using pthreads

Justin Ruggles justin.ruggles
Sat Feb 5 19:10:04 CET 2011


On 02/05/2011 12:28 AM, Alexander Strange wrote:

> +/**
> + * Context used by codec threads and stored in their AVCodecContext thread_opaque.
> + */
> +typedef struct PerThreadContext {
> +    struct FrameThreadContext *parent;
> +
> +    pthread_t      thread;
> +    pthread_cond_t input_cond;      ///< Used to wait for a new frame from the main thread.
> +    pthread_cond_t progress_cond;   ///< Used by child threads to wait for progress to change.
> +    pthread_cond_t output_cond;     ///< Used by the main thread to wait for frames to finish.
> +
> +    pthread_mutex_t mutex;          ///< Mutex used to protect the contents of the PerThreadContext.
> +    pthread_mutex_t progress_mutex; ///< Mutex used to protect frame progress values and progress_cond.
> +
> +    AVCodecContext *avctx;          ///< Context used to decode frames passed to this thread.
> +
> +    AVPacket       avpkt;           ///< Input frame (for decoding) or output (for encoding).
> +    int            allocated_buf_size; ///< Size allocated for avpkt.data
> +
> +    AVFrame picture;                ///< Output picture (for decoding) or input (for encoding).
> +    int     got_picture;            ///< The output of got_picture_ptr from the last avcodec_decode_video() call.
> +    int     result;                 ///< The result of the last codec decode/encode() call.


This and many other places use the term "picture" for the AVFrame.
I understand the current code is focused on video decoding, but it
would be nice if the framework (especially the public parts) were more
neutral and used the term "frame" instead of "picture", so it will
make sense for audio at some point as well.
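
For example (just a sketch of the naming I have in mind, not an actual
replacement patch), the fields quoted above could read:

    AVFrame frame;                  ///< Output frame (for decoding) or input (for encoding).
    int     got_frame;              ///< Whether the last decode call produced a frame.
    int     result;                 ///< The result of the last codec decode/encode() call.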

Thanks,
Justin
