[FFmpeg-user] Strange CPU usage in MPEG2 encoding

SF sylvain at lahiette.com
Mon Sep 1 12:39:00 CEST 2014


On Wednesday 13 August 2014 14:17:45, you wrote:
> On Tuesday 12 August 2014, 16:34:18 Carl Eugen Hoyos wrote:
> > 
> > SF <sylvain <at> lahiette.com> writes:
> > 
> > > * If I add the "maxrate" and "bufsize" options, then 
> > > it varies between 5 ms and 80 ms (yes!) for the 
> > > same video sequence. 
> > 
> > > It seems to be related to movement in the scene, 
> > > but I do not see the relationship between the 
> > > maxrate/bufsize parameters and the movement.
> > 
> > I don't know much about motion compensation and rate 
> > control but I find the relationship unavoidable.
> 
> Yes, I understand that there is a relationship, but I do 
> not understand variations of this magnitude.
> 
> > 
> > Is the performance issue you see reproducible with 
> > "ffmpeg" (the application)?
> 
> Here is the equivalent ffmpeg command: 
> ./ffmpeg -loglevel debug -f v4l2 -re -i /dev/video0 -vcodec mpeg2video -pix_fmt yuv420p
>  -s 704x576 -r 25 -b:v 2500k -maxrate 3000k -bufsize 2700k  -threads 1 
> -f mpegts -muxrate 2550k udp://192.168.144.56:55000?pkt_size=1316
> 
> ffmpeg consumes approx. 9% of the CPU, which seems OK to me. 
> 
> If I provide the same parameters by calling libavcodec/libavformat directly, 
> it uses 30% of the CPU. After investigation, encoding an intra picture takes more than
> 40 ms (!!!) and a P picture approx. 18 ms. I compared the values in AVCodecContext
> between what ffmpeg sets and what I set in my application, and I see no differences.
> 
> Any idea what could cause this CPU over-consumption when encoding I frames?
> 

I am coming back to this topic because I am still stuck on it. I still get reasonable CPU usage when 
using ffmpeg from the command line (cf. the email above), but very high CPU consumption when calling the libavcodec API 
with the same options (as far as I can see).

After dumping various values from inside ffmpeg (the AVCodecContext) and comparing them with the values from my 
application, I see no clear differences. 
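
For reference, here is a minimal sketch of how I understand the command-line options
to map onto AVCodecContext fields when using the API directly (simplified, error
handling omitted; this is illustrative, not my exact application code):

#include <libavcodec/avcodec.h>

/* Encoder setup sketch for the options used on the command line above. */
AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MPEG2VIDEO);
AVCodecContext *ctx = avcodec_alloc_context3(codec);

ctx->width          = 704;                  /* -s 704x576 */
ctx->height         = 576;
ctx->pix_fmt        = AV_PIX_FMT_YUV420P;   /* -pix_fmt yuv420p */
ctx->time_base      = (AVRational){1, 25};  /* -r 25 */
ctx->bit_rate       = 2500 * 1000;          /* -b:v 2500k */
ctx->rc_max_rate    = 3000 * 1000;          /* -maxrate 3000k */
ctx->rc_buffer_size = 2700 * 1000;          /* -bufsize 2700k */
ctx->thread_count   = 1;                    /* -threads 1 */

if (avcodec_open2(ctx, codec, NULL) < 0) {
    /* handle error */
}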

While searching, I found that if I set the bufsize option to a very high value when calling the libavcodec API, I get 
reasonable CPU consumption. But then the output TS is not really compliant, and I get issues with 
decoders.
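
Concretely, the workaround looks like this (the exact value is arbitrary, it is only
meant to illustrate "very high"; as said, the resulting TS is then not compliant):

/* Workaround: keep maxrate, but make the VBV buffer huge. */
ctx->rc_max_rate    = 3000 * 1000;          /* unchanged */
ctx->rc_buffer_size = 50 * 1000 * 1000;     /* instead of 2700 * 1000 */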

Does someone here have an idea about what is going on? Why can the bufsize + maxrate options create so much CPU load, 
even with still pictures? And why does this not occur with the same ffmpeg command line?

Thanks for your ideas :)

Sylvain.


