[FFmpeg-user] Overlay performance
dheitmueller at kernellabs.com
Mon Aug 26 18:38:37 EEST 2019
On Mon, Aug 26, 2019 at 11:28 AM Darrin Smith <darrinps at gmail.com> wrote:
> Are there any "tricks" to improve the performance of merging an image
> (overlay) with a video?
> I have a video snippet (cropped from a larger video) that I then add an
> overlay to so additional data is added in the video. The png I use is the
> same size as the video source. I typically notice a 3X time to merge the
> png and the video as compared to the length of the video clip. So, if the
> clip is 5 seconds long, it normally takes FFMpeg 15 seconds to merge them
> together to create a final video. I'm using a Pixel 3XL. Not THE fastest
> out there, but certainly in the upper tier.
The overlay filter isn't really designed to run on slow ARM CPUs (i.e.,
the blending isn't really optimized for them). On an embedded target you
would usually do this sort of compositing further down the pipeline, in
OpenGL.
Does the PNG really need to be the same size as the video, or did you
just do that because it was convenient? If the latter, see if you can
blend only the region you care about (potentially using multiple overlay
filters if there are a couple of regions). In general, anything that
isn't hardware accelerated and touches every pixel of every video frame
is going to run very poorly on an ARM target.
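A rough sketch of the "blend only the region you care about" approach
(filenames, overlay sizes, and coordinates here are hypothetical, not
from the original post):

```shell
# Instead of a full-frame PNG, use a small PNG and place it at x:y,
# so the overlay filter only touches that region's pixels.
ffmpeg -i clip.mp4 -i badge.png \
  -filter_complex "[0:v][1:v]overlay=16:16" \
  -c:a copy out.mp4

# Two small regions via chained overlay filters; W/H are the main
# video's dimensions, w/h the overlay's, so the second image lands
# in the bottom-right corner.
ffmpeg -i clip.mp4 -i badge.png -i timestamp.png \
  -filter_complex "[0:v][1:v]overlay=16:16[tmp];[tmp][2:v]overlay=W-w-16:H-h-16" \
  -c:a copy out2.mp4
```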
Also, it might be worth dumping out the pipeline and making sure you're
not getting some unexpected YUV->RGB->YUV colorspace conversion in the
filter chain.
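One hypothetical way to check for that (filenames are placeholders):
verbose logging makes ffmpeg report any auto-inserted scale/format
conversions, and pinning the overlay input to a YUV pixel format with
alpha can avoid an RGB round trip.

```shell
# -v verbose logs filters ffmpeg inserts automatically between graph
# nodes; look for unexpected 'scale'/'format' steps converting to RGB.
# Forcing the PNG to yuva420p keeps the blend in YUV while preserving
# its alpha channel.
ffmpeg -v verbose -i clip.mp4 -i badge.png \
  -filter_complex "[1:v]format=yuva420p[ovl];[0:v][ovl]overlay=16:16" \
  -c:a copy out.mp4
```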
Devin J. Heitmueller - Kernel Labs