[FFmpeg-devel] Howto utilize GPU acceleration outside expected places?
Lynne
dev at lynne.ee
Thu Oct 10 12:27:26 EEST 2024
On 10/10/2024 10:50, martin schitter wrote:
>
>
> On 10.10.24 08:06, Lynne via ffmpeg-devel wrote:
>> You can copy libavutil/vulkan* into whatever project you want, and
>> change 4 #include lines to make it compile.
>> This lets you use, in your own project, the same API to construct and
>> execute shaders as you would within lavc/lavfi/lavu. You will need to
>> make libavutil a dependency as hwcontext_vulkan.c cannot be exported
>> and must remain within libavutil.
>
> Thanks for this description and the link. It's very interesting but not
> exactly the kind of utilization I'm looking for.
>
> I don't want to use the FFmpeg Vulkan utils in another application, as
> demonstrated in your example. I just want to use them within FFmpeg
> itself, but slightly differently from all the places I have seen so far.
>
> To get to the point:
>
> I would like to accelerate the trivial 10- and 12-bit decoding of
> bit-packed scanlines in my DNxUncompressed implementation using compute
> shaders whenever the required hardware support is available and was
> enabled at startup, and fall back to CPU processing otherwise.
>
> This is a different usage scenario from utilizing the hardware encoding
> and decoding facilities provided by Vulkan, which so far seem to be the
> only kind of Vulkan-related GPU utilization in libavcodec.
>
> And because it should be used automatically when available and otherwise
> fall back to CPU processing, it also differs significantly from the
> Vulkan and libplacebo filters, which are strictly separated from their
> CPU counterparts and don't provide any fallback.
>
> I simply couldn't find any place in FFmpeg's source code that comes
> close to this vague idea. Perhaps it's simply impossible for one reason
> or another that I don't see -- who knows?
>
> I would be happy if you could point me to existing relevant code in
> FFmpeg to learn how to develop this kind of behavior as efficiently as
> possible.
Petro Mozil wrote a Vulkan VC-2 decoder that'll soon get merged:
https://github.com/pmozil/FFmpeg
It's an example of how to accelerate via compute shaders. He simply
introduced vc2_vulkan as a hwaccel. That makes sense even for partially
accelerated codecs, since the hwaccel infrastructure handles everything
for you, **including a fallback to CPU decoding**, and users
automatically get accelerated support.
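To make the fallback point concrete, the user-facing side looks roughly
like the sketch below (an illustrative sketch only, not code from
Petro's branch; the function names are made up). The caller just offers
a Vulkan device and a get_format callback; if device creation fails, or
the decoder doesn't advertise AV_PIX_FMT_VULKAN, everything quietly
stays on the CPU path:

    #include <libavcodec/avcodec.h>
    #include <libavutil/hwcontext.h>

    /* Prefer the Vulkan hwaccel format if the decoder offers it,
     * otherwise take the first (software) format in the list. */
    static enum AVPixelFormat pick_format(AVCodecContext *avctx,
                                          const enum AVPixelFormat *fmts)
    {
        for (int i = 0; fmts[i] != AV_PIX_FMT_NONE; i++)
            if (fmts[i] == AV_PIX_FMT_VULKAN)
                return fmts[i];
        return fmts[0];
    }

    static void try_enable_vulkan(AVCodecContext *avctx)
    {
        AVBufferRef *dev = NULL;
        /* If no Vulkan device is available, this fails and the decoder
         * simply keeps using its CPU code path. */
        if (av_hwdevice_ctx_create(&dev, AV_HWDEVICE_TYPE_VULKAN,
                                   NULL, NULL, 0) >= 0)
            avctx->hw_device_ctx = dev;
        avctx->get_format = pick_format;
    }

(Both need to be set before avcodec_open2(); doc/examples/hw_decode.c
shows the same pattern.)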
Generally, the flow should be exactly the same as with vulkan_decode.c
for video - but instead of writing decoding commands into the command
buffer, you run a shader.
(This is just an example: vulkan_decode.c is specifically for Vulkan
video decoding, so you shouldn't reuse that bit of code, just as Petro
didn't.)
If you need help, IRC, as usual.
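As for the kernel itself, it really is tiny. A scalar unpack along the
lines of the loop below (illustrative only - I haven't checked
DNxUncompressed's actual bit order, so treat the packing as an
assumption) is what each shader invocation would replicate for its own
group of four samples:

    #include <stdint.h>

    /* Unpack groups of four 10-bit samples (assumed big-endian packed
     * into 5 bytes) into 16-bit values. One loop iteration corresponds
     * to one compute-shader invocation. */
    static void unpack_10bit(const uint8_t *src, uint16_t *dst, int groups)
    {
        for (int i = 0; i < groups; i++) {
            uint64_t v = ((uint64_t)src[0] << 32) | ((uint64_t)src[1] << 24) |
                         ((uint64_t)src[2] << 16) | ((uint64_t)src[3] <<  8) |
                          (uint64_t)src[4];
            dst[0] = (v >> 30) & 0x3FF;
            dst[1] = (v >> 20) & 0x3FF;
            dst[2] = (v >> 10) & 0x3FF;
            dst[3] =  v        & 0x3FF;
            src += 5;
            dst += 4;
        }
    }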