[FFmpeg-user] Audio delay with captured audio and video

timothyb89 timothyb89 at gmail.com
Mon Sep 3 08:54:51 CEST 2012


I apologize if this has been asked/answered before, but searching the
lists didn't turn up anything useful that I hadn't already tried. I'm
trying to use ffmpeg in a screencasting setup where I take two pulse
inputs (microphone and system audio) and video from x11grab, and then
output to an RTMP server. I use the -filter_complex option with amix
to mix the two audio streams, and the asyncts filter to prevent audio
desync.

The asyncts filter appears to be working properly: I previously had
issues with steadily increasing desync (roughly 0.5 seconds of added
delay per minute), and adding the filter resolved that in limited
testing. However, the audio is still consistently 1 second behind the
video.

My base command was this:
$ ffmpeg -f x11grab -s 1920x1080 -r 25 -i :0.0+1280,0 \
    -f pulse -i monitor_device -f pulse -i mic_device \
    -vcodec libx264 -preset:v faster -x264opts crf=30 -s 1920x1080 \
    -acodec libmp3lame -ab 128k -ar 44100 -threads 0 \
    -f flv -pix_fmt yuyv422 \
    -filter_complex amix=inputs=2:duration=first:dropout_transition=3 \
    -af asyncts=compensate=1 \
    rtmp://live.justin.tv/app/[cut] -loglevel verbose
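
In case it matters, would it be more correct to do the mixing and the
async compensation in a single -filter_complex chain with explicit
-map options, rather than combining -filter_complex with -af? Something
like the following is what I had in mind (the [mixed] label is just a
name I picked for the intermediate stream, and I haven't confirmed it
behaves any differently):
$ ffmpeg -f x11grab -s 1920x1080 -r 25 -i :0.0+1280,0 \
    -f pulse -i monitor_device -f pulse -i mic_device \
    -filter_complex "[1:a][2:a]amix=inputs=2:duration=first:dropout_transition=3,asyncts=compensate=1[mixed]" \
    -map 0:v -map "[mixed]" \
    -vcodec libx264 -preset:v faster -x264opts crf=30 -s 1920x1080 \
    -acodec libmp3lame -ab 128k -ar 44100 -threads 0 \
    -f flv -pix_fmt yuyv422 rtmp://live.justin.tv/app/[cut] -loglevel verbose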

I tried using -itsoffset to delay the video by 1 second:
$ ffmpeg -itsoffset 1.0 -f x11grab -s 1920x1080 -r 25 -i :0.0+1280,0 [...]

... however, any value for -itsoffset only seemed to make the audio
fall further behind, as if the offset were being applied to the audio
streams rather than the video stream. Normally I'd just adjust the
delay manually after the fact, but unfortunately that's not possible
while streaming live.
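
Another idea I've been considering (though I haven't verified that I'm
reading the setpts documentation correctly) is shifting the video
timestamps with the setpts filter instead of -itsoffset; as I
understand it, PTS+1/TB should push every video frame 1 second later:
$ ffmpeg -f x11grab -s 1920x1080 -r 25 -i :0.0+1280,0 \
    -f pulse -i monitor_device -f pulse -i mic_device \
    -vf setpts=PTS+1/TB [...]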

So, is there some way to either:
a) fix/prevent the initial audio delay between the mixed pulse devices
and the video; or
b) account for it with some sort of offset (perhaps something like the
sketch just below)?
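
For (b), one thing I haven't tried yet is putting a negative -itsoffset
in front of each audio input instead of a positive one in front of the
video, on the assumption that -itsoffset only shifts the input that
follows it; I have no idea whether that behaves any better for live
pulse captures:
$ ffmpeg -f x11grab -s 1920x1080 -r 25 -i :0.0+1280,0 \
    -itsoffset -1.0 -f pulse -i monitor_device \
    -itsoffset -1.0 -f pulse -i mic_device [...]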

Thanks in advance for any help!

