[FFmpeg-user] How does FFmpeg + NVENC scale with multiple instances?

PSPunch shima at pspunch.com
Fri Mar 25 08:10:50 CET 2016

On 2016/03/25 15:03, Sreenath BH wrote:
> 1. You have to assign a gpu core by using the "-gpu <number>" flag to
> ffmpeg. So we need to figure out which gpu is free and use that.
> 2. I could be wrong, but once a gpu core is assigned, it will use only
> that core and will not switch to other cores. If you have only one gpu
> core, this is not an issue.
> 3. When you run two ffmpeg instances on the same gpu core, the time
> taken to transcode almost doubles.

Sorry for the lack of clarity.
I actually have a working setup on my GTX 660M.

However, from what I have read, on lower-grade GPUs the driver limits
the number of simultaneous encoding sessions to 2, regardless of how
many cores you have (I assume for marketing purposes).

That is exactly what I see on mine: two realtime HD transcodes without
breaking a sweat on a mobile GPU. Amazing... but the 3rd instance of
FFmpeg fails at startup.

If you've got 4 instances running with a build against the current
SDK, I assume you have a higher-end card that would run even more.
Would you mind sharing which GPU you use?
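
On a multi-GPU box, I take your point 1 to mean each instance gets
pinned to a device explicitly, roughly like this (device indices and
filenames are placeholders; -gpu is the flag you mention):

   # Pin one transcode to each physical GPU instead of letting the
   # driver choose.
   ffmpeg -i cam0.ts -c:v nvenc -gpu 0 -b:v 4M out0.mp4 &
   ffmpeg -i cam1.ts -c:v nvenc -gpu 1 -b:v 4M out1.mp4 &
   wait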

> The nvidia encoder, when used for transcoding, creates files with
> high bitrates (and large file sizes) compared with the libx264 s/w
> codec. I have not found a way to control the bitrate without loss
> of picture quality.

I was able to gain some control using the -maxrate option.
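
In case it helps, this is roughly the shape of such a command. The
numbers are just what worked for my material, and I am assuming the
encoder honors the generic -maxrate/-bufsize rate-control options:

   # -maxrate caps the peak bitrate; -bufsize sets the VBV buffer the
   # rate control uses to enforce that cap.
   ffmpeg -i input.mp4 -c:v nvenc -b:v 4M -maxrate 6M -bufsize 8M out.mp4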

I see what you mean about quality and agree with you.
For the particular application I have in mind, I find the bitrate vs.
quality trade-off acceptable if it lets us transcode tens of live
streams in one box... hopefully an ordinary desktop PC.

David Shimamoto
