[Libav-user] Hardware decoding via libva

Oleg mybrokenbeat at gmail.com
Sat Dec 22 14:22:01 CET 2012


You're misunderstanding some concepts. There are two operations that can be accelerated by the GPU: decoding and YUV->RGB conversion. The first can be achieved with VA-API, as you mentioned. The second can be done with an OpenGL shader (I prefer OpenGL because it's cross-platform; another option is DirectX on Windows) that converts YUV->RGB and draws the converted frame immediately.


  
On 22.12.2012, at 14:15, faeem wrote:

> On 22/12/2012 02:53, Carl Eugen Hoyos wrote:
>> faeem <faeem.ali at ...> writes:
>> 
>> I know of two examples, the va-api code in vlc and the code in a patch for MPlayer, see for example http://thread.gmane.org/gmane.comp.video.mplayer.devel/61734/focus=61744 [...] 
> Thanks. I'll be looking into those examples ASAP.
> 
>>>   // FIXME use direct rendering
>>> 
>>> I need to know how to fix that FIXME.
>> This is not related to va-api at all.
> I selected that //FIXME because it mentioned "direct rendering", which I took to mean "handled by the GPU". It seems I was mistaken there.
> 
> My conceptual understanding of VA-API so far, within the ffmpeg framework, is that libavcodec will read an encoded video frame, then use VA-API and the GPU to decode that frame instead of decoding it in software.
> 
> The end result will probably be a frame in YUV. If I'm using OpenGL, I'll still need to run the YUV-to-RGB conversion on each frame, which will remain CPU-intensive. I would still benefit from hardware frame decoding, though, and that alone should make a significant difference.
> 
> Is this correct?
> 
> Faeem
> 
> _______________________________________________
> Libav-user mailing list
> Libav-user at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user
