[Libav-user] Applied pad on the decoded frame

Bruce Wheaton bruce at spearmorgan.com
Tue Oct 15 00:47:05 CEST 2013


On Oct 14, 2013, at 4:46 AM, Dolevo Jay <cmst at live.com> wrote:

> I have encoder and decoder applications in separate projects. I use x264 to encode the incoming frames and libav to decode them. If the frame has a specific resolution like 1366 x 768, the decoded frame contains an extra black border at the right side. I have debugged it and realized that av_pic.linesize[0] is 50 more than the linesize during encoding.
> Here is the code:
> 
>            lengthDec = avcodec_decode_video2(c1, av_pic, &pic, &pkt);
>            if (pic)
>            {
>                avpicture_fill((AVPicture *)rgbFrame, RGBimg, PIX_FMT_RGB32, w, h);
>                sws_scale(ctx, av_pic->data, av_pic->linesize, 0, h, rgbFrame->data, rgbFrame->linesize);
>            }
> 
> So, in this code, I decode the packet and convert the decoded data to RGB.
> Why does avcodec_decode_video2 return a padded linesize?
> Could anyone tell me how I can eliminate the black border?

The encoded data is padded to make encoding and decoding more efficient, and the decoded frame keeps that padding for the same reason: the decoder can write straight into its padded buffer instead of copying every line into a tightly packed one.
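As a quick illustration (a sketch using the variables from your snippet; c1 is the decoder's AVCodecContext, av_pic the decoded frame, and you'd need <stdio.h> for printf):

    /* The visible dimensions live in the codec context; linesize[0] is
     * the padded stride of the luma plane in bytes (one byte per pixel
     * for 8-bit YUV), so it can be larger than the visible width. */
    printf("visible: %dx%d, linesize[0]: %d\n",
           c1->width, c1->height, av_pic->linesize[0]);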

That's completely standard; expect to see it with almost all codecs and frame sizes.

You need to use the viewable width from the codec context, not the linesize, when setting up the scaler.
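Something like this should work (an untested sketch reusing the names from your snippet, and assuming the decoder output format is c1->pix_fmt):

    /* Create the scaler with the visible dimensions from the codec
     * context, never with linesize. sws_scale() still receives the
     * padded input stride through av_pic->linesize, so it steps to
     * the next row correctly but only converts the visible pixels. */
    struct SwsContext *ctx = sws_getContext(
        c1->width, c1->height, c1->pix_fmt,   /* source, visible size */
        c1->width, c1->height, PIX_FMT_RGB32, /* destination */
        SWS_BILINEAR, NULL, NULL, NULL);

    avpicture_fill((AVPicture *)rgbFrame, RGBimg, PIX_FMT_RGB32,
                   c1->width, c1->height);
    sws_scale(ctx, av_pic->data, av_pic->linesize, 0, c1->height,
              rgbFrame->data, rgbFrame->linesize);

The RGB frame then has no black border because its width matches the visible picture, not the padded stride.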

Bruce

