[FFmpeg-devel] Creating an AVFrame from a QVideoFrame
stefasab at gmail.com
Sun Jun 8 12:45:29 CEST 2014
On date Saturday 2014-06-07 07:52:12 -0600, Mike Nelson encoded:
> I'm struggling with converting a QVideoFrame (Qt) into an AVFrame in order
> to encode video coming from a webcam.
> Documentation for QVideoFrame:
> I understand the basics of image formats, stride and such, but I'm missing
> something. This is what I've got so far, adapted from a libav example (
> static void fill_yuv_image(const QVideoFrame &frame, AVFrame *pict,
> int frame_index, int width, int height)
> pict->pts = frame.startTime() * 1000000.0; // time_base is 1/1000000
> pict->width = frame.width();
> pict->height = frame.height();
> pict->format = STREAM_PIX_FMT; //todo: get this from the frame
> pict->data = (uint8_t*)frame.bits();
> pict->linesize = frame.bytesPerLine();
Here you're setting only the Y plane in the AVFrame. You also need to set
the U and V planes (and their corresponding linesizes). Also, I don't know
what the image layout (AKA pixel format) of the QVideoFrame is.
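To illustrate the point above, here is a minimal sketch of how the three
plane pointers and linesizes could be derived from a single contiguous
buffer such as the one returned by QVideoFrame::bits(). This assumes the
frame is planar YUV 4:2:0 (AV_PIX_FMT_YUV420P) with tightly packed planes
and a chroma stride of half the luma stride; fill_planes is a hypothetical
helper, not part of the FFmpeg or Qt API, and a real conversion should
check the actual QVideoFrame pixel format first.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper: given a contiguous YUV420P buffer of the given
 * height and luma stride (bytesPerLine), compute the three plane pointers
 * and linesizes that AVFrame.data[] / AVFrame.linesize[] expect.
 * In YUV420P the chroma planes are half the width and height of luma. */
static void fill_planes(uint8_t *bits, int height, int luma_stride,
                        uint8_t *data[3], int linesize[3])
{
    int chroma_stride = luma_stride / 2;

    data[0] = bits;                                           /* Y plane */
    data[1] = bits + (size_t)luma_stride * height;            /* U plane */
    data[2] = data[1] + (size_t)chroma_stride * (height / 2); /* V plane */

    linesize[0] = luma_stride;
    linesize[1] = chroma_stride;
    linesize[2] = chroma_stride;
}
```

Note that AVFrame.data and AVFrame.linesize are arrays, so the quoted code
(`pict->data = ...`) would not even compile; each element has to be assigned
individually, or the planes can be derived with av_image_fill_arrays().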
Note: this question really belongs on libav-user; ffmpeg-devel is for
FFmpeg development. Please post any follow-up on libav-user.
FFmpeg = Fostering & Fancy MultiPurpose Erroneous Genius