[Libav-user] imgutils.h decode dst buffer going from avpicture_fill to av_image_fill_arrays

Charles linux2 at orion15.org
Wed Aug 17 21:16:08 EEST 2016


I am trying to get single-frame decoding of H.264 video working in an OpenGL
application.

I am having trouble seeing how to get from avpicture_fill to
av_image_fill_arrays:

avpicture_fill( (AVPicture *) m_avframeRGB, buffer, AV_PIX_FMT_RGB24,
                m_avcodec_ctx->width, m_avcodec_ctx->height );

int av_image_fill_arrays( uint8_t *dst_data[4], int dst_linesize[4],
                          const uint8_t *src, enum AVPixelFormat pix_fmt,
                          int width, int height, int align );

The code follows examples at:
../ffmpeg/decoding_encoding.c ../ffmpeg/scaling_video.c
https://github.com/mpenkov/ffmpeg-tutorial/blob/master/tutorial01.c
https://github.com/filippobrizzi/raw_rgb_straming/blob/master/client/x264decoder.cpp

Unable to find examples using av_image_fill_arrays.
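
From reading imgutils.h, my best guess at the direct translation is the
following (if I read the headers right, align = 1 should reproduce what
avpicture_fill did):

    av_image_fill_arrays( m_avframeRGB->data, m_avframeRGB->linesize,
                          buffer, AV_PIX_FMT_RGB24,
                          m_avcodec_ctx->width, m_avcodec_ctx->height, 1 );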

What I would like to do is pass in a pointer to a texture array and
have sws_scale load the RGB data into that location.
The sample code I am finding appears to do multiple copies of the buffers,
with comments like this one:
// Mandatory function to copy the image from an AVFrame to a generic buffer.
av_image_copy_to_buffer( (uint8_t *) m_avframeRGB, m_size,
                         (const uint8_t * const *) rgb_buffer, &m_height,
                         AV_PIX_FMT_RGB24, m_width, m_height, magic_align );
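
For reference, the declaration in imgutils.h is below; the sample above looks
suspicious to me, since it seems to pass the AVFrame where the destination
byte buffer goes and &m_height where the source linesize array goes:

    int av_image_copy_to_buffer( uint8_t *dst, int dst_size,
                                 const uint8_t * const src_data[4],
                                 const int src_linesize[4],
                                 enum AVPixelFormat pix_fmt,
                                 int width, int height, int align );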

init()
{
    avcodec_register_all();
    // ... set up m_avcodec_context (find the H.264 codec, set width/height, open it) ...
    m_sws_ctx = sws_getContext( m_width, m_height, AV_PIX_FMT_YUV420P,
                                m_width, m_height, AV_PIX_FMT_RGB24,
                                SWS_FAST_BILINEAR, NULL, NULL, NULL );
    /// I think these two calls are just setting up a common struct format
    /// but not actually allocating buffers to hold the pixel data
    m_avframe    = get_alloc_picture( AV_PIX_FMT_YUV420P, m_width, m_height, true );
    m_avframeRGB = get_alloc_picture( AV_PIX_FMT_RGB24, m_width, m_height, true );
}
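
If I read frame.h correctly, av_frame_alloc() only sets up the struct, and a
call like av_frame_get_buffer() is what actually allocates the plane buffers,
something like:

    AVFrame *frame = av_frame_alloc();
    frame->format = AV_PIX_FMT_YUV420P;
    frame->width  = m_width;
    frame->height = m_height;
    av_frame_get_buffer( frame, 32 );   // allocates frame->data[] planes, 32-byte aligned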
getFrame( char * rgb_buffer, size_t size )
{
    av_return = avcodec_send_packet( m_avcodec_context, &av_packet );
    // while ( !frame_done ) gobble packets (avcodec_receive_frame into m_avframe)
    /// use the input buffer as the output of the YUV-to-RGB24 conversion
    sws_scale( m_sws_ctx, m_avframe->data, m_avframe->linesize, 0, m_height,
               rgb_buffer, m_avframeRGB->linesize );
}
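
I think what I actually want is to skip m_avframeRGB's own buffer and let
av_image_fill_arrays point the destination planes at rgb_buffer, so sws_scale
writes straight into the texture memory. A sketch of what I mean
(dst_data/dst_linesize are just names I made up; needs libavutil/imgutils.h):

    uint8_t *dst_data[4];
    int      dst_linesize[4];
    // compute plane pointers/strides into the caller's buffer; copies no pixels
    av_image_fill_arrays( dst_data, dst_linesize, (uint8_t *) rgb_buffer,
                          AV_PIX_FMT_RGB24, m_width, m_height, 1 );
    // convert YUV420P -> RGB24 directly into rgb_buffer
    sws_scale( m_sws_ctx, (const uint8_t * const *) m_avframe->data,
               m_avframe->linesize, 0, m_height, dst_data, dst_linesize );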

Am I even on the right track here?

An even better way might be to get the YUV plane pointers and run a fragment
shader across them. Is there a way to keep the buffer in video RAM?
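
I don't think swscale can write into GPU memory directly, but one approach I
have seen is to upload the three YUV420P planes as single-channel textures and
do the YUV-to-RGB matrix in a fragment shader. Roughly, with tex_y/tex_u/tex_v
being texture names I made up, and GL_RED assuming a GL 3+ core profile:

    // linesize can be wider than the visible width, so set the unpack row length
    glPixelStorei( GL_UNPACK_ROW_LENGTH, m_avframe->linesize[0] );
    glBindTexture( GL_TEXTURE_2D, tex_y );
    glTexImage2D( GL_TEXTURE_2D, 0, GL_RED, m_width, m_height, 0,
                  GL_RED, GL_UNSIGNED_BYTE, m_avframe->data[0] );
    // U and V planes are half width and half height for YUV420P
    glPixelStorei( GL_UNPACK_ROW_LENGTH, m_avframe->linesize[1] );
    glBindTexture( GL_TEXTURE_2D, tex_u );
    glTexImage2D( GL_TEXTURE_2D, 0, GL_RED, m_width / 2, m_height / 2, 0,
                  GL_RED, GL_UNSIGNED_BYTE, m_avframe->data[1] );
    // V plane: same as U but with data[2] / linesize[2] and tex_v
    glPixelStorei( GL_UNPACK_ROW_LENGTH, 0 );   // restore the default
    // a fragment shader then samples Y/U/V and applies the BT.601 matrix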

Any pointers would be appreciated.

Thanks
cco



