[Libav-user] HW Accelerator in Windows: DXVA2 with new API

Hector Alonso hector.alonso.aparicio at gmail.com
Tue Aug 19 11:25:40 CEST 2014


Hi,
I'm developing an H.264 decoder on Windows and I want to use the hardware
acceleration capabilities: DXVA2.
I've already built FFmpeg from git (latest version: N-65404-gd34ec64;
libavcodec 55.73.101 ...) using MinGW-w64 for 32 bits this way:


echo 'export PATH=.:/local/bin:/bin:/mingw64/bin' > .profile
source .profile


git config --global core.autocrlf false

git clone git://git.videolan.org/x264.git x264
cd x264
./configure --host=x86_64-w64-mingw32 --enable-static --enable-shared && make && make install
cd ..

git clone git://github.com/mstorsjo/fdk-aac.git fdk-aac
cd fdk-aac
./autogen.sh
./configure --host=x86_64-w64-mingw32 --enable-static --enable-shared && make && make install
cd ..

git clone git://source.ffmpeg.org/ffmpeg.git ffmpeg
cd ffmpeg
./configure --enable-gpl --enable-nonfree --enable-libx264 --enable-libfdk_aac \
    --enable-memalign-hack --enable-runtime-cpudetect --enable-dxva2 \
    --enable-decoder=h264_dxva2 --disable-hwaccels --enable-hwaccel=h264_dxva2 \
    --enable-static --enable-shared \
    --extra-cflags=-I/local/include --extra-ldflags='-L/local/lib -static' && make && make install

I've adapted the newest version of ffmpeg_dxva2.c to C++ (with a header
file including the InputStream, HWAccelID and dxva2_init declarations); it
now builds with Visual Studio 2012 and everything runs correctly with my
current (not HW-accelerated) decoder.
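In case it helps, the trimmed-down header I wrote looks roughly like this
(only the fields that ffmpeg_dxva2.c seems to touch; the names are copied
from ffmpeg.h / ffmpeg_dxva2.c, the file name is just for illustration, and
I may have missed or misdeclared something):

/* dxva2_shim.h -- minimal declarations borrowed from ffmpeg.h so that my
 * C++ port of ffmpeg_dxva2.c builds outside the ffmpeg command-line tool. */
extern "C" {
#include <libavcodec/avcodec.h>
}

enum HWAccelID {
    HWACCEL_NONE = 0,
    HWACCEL_AUTO,
    HWACCEL_VDPAU,
    HWACCEL_DXVA2,
    HWACCEL_VDA,
};

typedef struct InputStream {
    AVCodecContext *dec_ctx;

    /* what the caller asked for */
    enum HWAccelID hwaccel_id;
    char *hwaccel_device;          /* adapter number as a string, or NULL */

    /* filled in by dxva2_init() */
    enum HWAccelID active_hwaccel_id;
    void *hwaccel_ctx;
    void (*hwaccel_uninit)(AVCodecContext *s);
    int  (*hwaccel_get_buffer)(AVCodecContext *s, AVFrame *frame, int flags);
    int  (*hwaccel_retrieve_data)(AVCodecContext *s, AVFrame *frame);
    enum AVPixelFormat hwaccel_pix_fmt;
} InputStream;

int dxva2_init(AVCodecContext *s);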

I found an old post on this list (
https://lists.ffmpeg.org/pipermail/ffmpeg-user/2012-May/006600.html)
describing the steps to get it running, but the API has changed since then.
I've also studied the VLC approach, but it also targets the older API
(get_buffer instead of get_buffer2, and so on).
In the newest ffmpeg.c they do use the hwaccels, including DXVA2, but in
quite a complicated way... could you please give an example or some advice
on how to do it?
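For reference, this is how I currently read the wiring in ffmpeg.c, heavily
simplified and quite possibly wrong: the get_format callback picks
AV_PIX_FMT_DXVA2_VLD and calls dxva2_init() there (by that point the decoder
already knows the coded size), and get_buffer2 is routed to the
hwaccel_get_buffer pointer that dxva2_init() stores in the InputStream. The
my_get_format / my_get_buffer names are mine:

static enum AVPixelFormat my_get_format(AVCodecContext *avctx,
                                        const enum AVPixelFormat *pix_fmts)
{
    InputStream *ist = (InputStream *)avctx->opaque;

    for (const enum AVPixelFormat *p = pix_fmts; *p != AV_PIX_FMT_NONE; p++) {
        if (*p == AV_PIX_FMT_DXVA2_VLD && dxva2_init(avctx) == 0) {
            /* dxva2_init() fills ist->hwaccel_ctx, hwaccel_get_buffer,
             * hwaccel_retrieve_data and hwaccel_uninit */
            ist->active_hwaccel_id = HWACCEL_DXVA2;
            ist->hwaccel_pix_fmt   = AV_PIX_FMT_DXVA2_VLD;
            return *p;
        }
    }
    /* no usable hwaccel: let libavcodec pick a software format */
    return avcodec_default_get_format(avctx, pix_fmts);
}

static int my_get_buffer(AVCodecContext *avctx, AVFrame *frame, int flags)
{
    InputStream *ist = (InputStream *)avctx->opaque;

    if (ist->hwaccel_get_buffer && frame->format == ist->hwaccel_pix_fmt)
        return ist->hwaccel_get_buffer(avctx, frame, flags);

    return avcodec_default_get_buffer2(avctx, frame, flags);
}

In any case, these are the steps I think I need: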

1.- Create an InputStream instance, populate it and assign it to
AVCodecContext->opaque (a rough sketch of the whole sequence follows the
list below). Which attributes are needed? It seems to need the width and
height of the input before opening it!

2.- Call dxva2_init() with the decoder AVCodecContext (allocated with
avcodec_alloc_context3 from the CODEC_ID_H264 codec) before
calling avcodec_open2?

3.- Call dxva2_retrieve_data() with the decoder AVCodecContext and an
AV_PIX_FMT_NV12 frame when avcodec_decode_video2 returns a complete frame
(got_output)?

4.- Copy (convert) the resulting frame to a YUV420P OpenGL texture, apply a
pixel-format conversion shader and show it! <- This step is working fine (I
did it for a separate decoder using the Intel Media SDK last week).

(Future step: use the DX surface directly via OpenGL extensions.)
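To make the question concrete, this is the sequence I am trying, roughly
(error handling stripped; it assumes dxva2_init() is meant to be called from
get_format as ffmpeg.c does it, rather than before avcodec_open2 as I
wondered in step 2; my_get_format / my_get_buffer are the callbacks sketched
above, and pkt is an AVPacket holding one H.264 access unit):

AVCodec        *dec   = avcodec_find_decoder(AV_CODEC_ID_H264);
AVCodecContext *avctx = avcodec_alloc_context3(dec);
InputStream    *ist   = (InputStream *)av_mallocz(sizeof(*ist));

ist->hwaccel_id = HWACCEL_DXVA2;            /* request DXVA2 */
ist->dec_ctx    = avctx;

avctx->opaque            = ist;
avctx->get_format        = my_get_format;   /* calls dxva2_init() once the size is known */
avctx->get_buffer2       = my_get_buffer;   /* hands out DXVA2 surfaces */
avctx->refcounted_frames = 1;               /* ffmpeg.c sets this too */

avcodec_open2(avctx, dec, NULL);

AVFrame *frame = av_frame_alloc();
int got_output = 0;
if (avcodec_decode_video2(avctx, frame, &got_output, &pkt) >= 0 && got_output) {
    if (frame->format == AV_PIX_FMT_DXVA2_VLD)
        ist->hwaccel_retrieve_data(avctx, frame); /* copies the surface back; frame is now NV12 */
    /* step 4: NV12 -> YUV420P conversion + OpenGL upload goes here */
}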

Now I'm stuck on the first and second steps: when avcodec_decode_video2 is
called it crashes. I've stepped through the DXVA2 initialization in the
debugger and the DX surfaces are being created correctly. What am I missing?

Thanks!