[Libav-user] Raw H264 frames in mpegts container

Ferenc Deak frehone at yahoo.com
Wed May 11 14:30:13 CEST 2011


Hi all,


I would really appreciate some help with the following issue:

I have a gadget with a camera that produces H264-compressed video
frames, which are sent to my application. These frames are not in a
container; they are just raw data.


I want to use ffmpeg and libav functions to create a video file, which can be used later.

If I decode the frames and then re-encode them, everything works fine and I
get a valid video file. (The decode/encode steps are the usual libav
calls, nothing fancy; I took them from the almighty internet and
they are rock solid.) However, I waste a lot of time decoding and
encoding, so I would like to skip those steps and put the frames
directly into the output stream. That is where the problems start.

Here is the code I came up with for the encoding path:

/* the frame must actually be allocated, not left dangling */
AVFrame* picture = avcodec_alloc_frame();

avpicture_fill((AVPicture*) picture, (uint8_t*)frameData, 
                 codecContext->pix_fmt, codecContext->width,
                 codecContext->height);
/* videoOutBuf is a real array here, so sizeof() gives its capacity */
int outSize = avcodec_encode_video(codecContext, videoOutBuf, 
                 sizeof(videoOutBuf), picture);
if (outSize > 0) 
{
    AVPacket packet;
    av_init_packet(&packet);
    packet.pts = av_rescale_q(codecContext->coded_frame->pts,
                  codecContext->time_base, videoStream->time_base);
    if (codecContext->coded_frame->key_frame) 
    {
        packet.flags |= PKT_FLAG_KEY;
    }
    packet.stream_index = videoStream->index;
    packet.data =  videoOutBuf;
    packet.size =  outSize;

    av_interleaved_write_frame(context, &packet);
    put_flush_packet(context->pb);
}


where frameData is the frame data that came from the camera, decoded in a previous step, and videoOutBuf is a plain uint8_t buffer for holding the encoded data.

I have modified the application so that it does not decode the frames, but simply passes the data through:

    AVPacket packet;
    av_init_packet(&packet);

    packet.stream_index = videoStream->index;
    packet.data = (uint8_t*)frameData;
    packet.size = currentFrameSize;

    av_interleaved_write_frame(context, &packet);
    put_flush_packet(context->pb);


where

frameData is the raw H264 frame,
and currentFrameSize is the size of the raw H264 frame, i.e. the number of bytes I get from the gadget for each frame.

And suddenly the application no longer works correctly: the
produced video is unplayable. This is to be expected, since I was not
setting a correct PTS on the packet. What I did next was the following
(you can see from this approach how desperate I am :) )

    packet.pts = timestamps[timestamp_counter ++];


where timestamps is a list of PTS values produced
by the working code above and written to a file (yes, you read that
correctly: I logged all the PTS values for a 10-minute session and wanted to
reuse them).

The application still does not work.

Now, here I am without any clue what to do, so here is the question:

I would like to create an "mpegts" video stream using libav
functions, insert the already-encoded video frames into that stream, and
write a video file with it. How do I do that?


Thanks,
f.

            