[Libav-user] Direct rendering or avoiding copying locally a decoded AVFrame before rendering
Julian Herrera (TVGenius)
julian.herrera at tvgenius.net
Mon Oct 7 15:18:35 CEST 2013
I have developed an ffmpeg-based video player for iOS, intended to play MPEG-2 (SD) and H.264 (HD) streams from a DVB-S source. Rendering of the decoded video frames is performed by an OpenGL component that is also responsible for converting the YUV frames to RGB.
I used the code in ffplay.c as a reference for the steps needed to decode and display the video frames, but I was wondering whether I can improve the process by eliminating the need to copy each decoded frame locally before enqueuing it for rendering.
Basically, I would like to implement the fix proposed at line 1612 of ffplay.c (ffmpeg v2.0) to bypass the local copy of the decoded frame. The idea is to allocate a brand-new AVFrame before each call to avcodec_decode_video2() and send that frame to the renderer, instead of passing a single reusable AVFrame to the decoder and copying the resulting frame locally (this single copying step consumes around 10% of processor time on an iPad 4 while decoding H.264 1080i streams).
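For reference, the per-decode allocation scheme described above would look roughly like the sketch below. enqueue_for_rendering() is a hypothetical hand-off to the OpenGL component, and decode_and_enqueue() is an illustrative name; this is only a sketch of the approach, not my exact code:

```c
#include <libavcodec/avcodec.h>
#include <libavutil/frame.h>

/* Hypothetical hand-off to the OpenGL renderer queue. */
extern void enqueue_for_rendering(AVFrame *frame);

static void decode_and_enqueue(AVCodecContext *codecCtx, AVPacket *packet)
{
    /* Allocate a fresh AVFrame per call instead of reusing one. */
    AVFrame *frame = av_frame_alloc();
    int got_frame = 0;

    if (avcodec_decode_video2(codecCtx, frame, &got_frame, packet) < 0 ||
        !got_frame) {
        av_frame_free(&frame);   /* nothing decoded; release the shell */
        return;
    }
    /* Hand the frame straight to the renderer, with no local copy.
       Note the data buffers may still belong to the decoder, which is
       presumably why the frames come out wrong (see below). */
    enqueue_for_rendering(frame);
}
```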
I tried this approach without success. The resulting frames were rendered out of order, so the video played as if the images were jumping back and forth in time. If I call av_frame_clone(videoFrame) before sending the frame to the renderer, the video plays correctly, but the same high processor usage occurs, which is exactly what I am trying to avoid.
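The workaround that currently plays correctly amounts to the following, under the same hypothetical enqueue_for_rendering() hand-off as above:

```c
#include <libavutil/frame.h>

extern void enqueue_for_rendering(AVFrame *frame);

static void enqueue_cloned(AVFrame *videoFrame)
{
    /* av_frame_clone() deep-copies the pixel data when the source frame
       is not reference-counted, which appears to be where the ~10% CPU
       cost goes. */
    AVFrame *copy = av_frame_clone(videoFrame);
    if (copy)
        enqueue_for_rendering(copy);
}
```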
Could you shed some light on how to solve this issue, if it is possible at all?