[FFmpeg-user] Unable to sync audio and video

Livio Tenze ltenze at gmail.com
Wed Feb 17 10:14:56 EET 2021


On Tue, Feb 16, 2021 at 11:09 AM Nicolas George <george at nsup.org> wrote:

> Livio Tenze (12021-02-16):
> > At the moment I have only one thread (the main one, as in doc/examples).
> > The process I implemented is the following:
> > 0) Initialize the output PTS counters (one PTS for video and another for audio)
> > 1) check PTS difference between PTS video and audio
> > 2) if ptsvideo>ptsaudio then decode one video frame, and encode it in the
> > output stream
> >     otherwise decode one audio frame, and encode it in the output stream.
> > 3) go to point 1
>
> That should work, provided you checked that your timestamps relate to
> the same origin. If some timestamps relate to the system boot and some
> to 1970-01-01, you will get a desync.
>

The timestamps I am currently using come from the PTS values of the AVPacket
packets: I use the PTS of the first packet of each stream as the reference.
Is that the right approach for syncing?
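
To be concrete, something along these lines is what I have in mind (only a
rough sketch, the helper names are made up; the av_compare_ts() pattern is
the one used in doc/examples/muxing.c):

#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>

/* Rebase a packet so that the stream's first packet gets pts 0; first_pts
 * starts out as AV_NOPTS_VALUE and remembers the stream's origin. */
static void rebase_packet(AVPacket *pkt, int64_t *first_pts)
{
    if (*first_pts == AV_NOPTS_VALUE)
        *first_pts = pkt->pts;
    if (pkt->pts != AV_NOPTS_VALUE)
        pkt->pts -= *first_pts;
    if (pkt->dts != AV_NOPTS_VALUE)
        pkt->dts -= *first_pts;
}

/* Return nonzero when the video stream is not ahead of the audio stream and
 * should be decoded/encoded next; next_pts_v and next_pts_a are the
 * timestamps of the next frame of each stream, in their own time bases,
 * which av_compare_ts() takes into account. */
static int video_is_next(int64_t next_pts_v, AVRational tb_v,
                         int64_t next_pts_a, AVRational tb_a)
{
    return av_compare_ts(next_pts_v, tb_v, next_pts_a, tb_a) <= 0;
}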


> Plus, if the capture did not start at the same time, you will get extra
> frames at the beginning of a stream, and it is possible that some
> players will not catch up or catch up slowly. It would probably be more
> reliable to discard frames captured before the first frame of the other
> stream.
>

I have not found information about this: does av_read_frame() always return
the most recently acquired packet, or can it return an older buffered packet?
The question is relevant for real-time acquisition.
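
If I understand the suggestion correctly, the dropping could look roughly
like this (only a sketch; start_v_us and start_a_us are assumed to hold the
first capture timestamp of each stream converted to microseconds):

#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>

/* Nonzero means this packet was captured before the later of the two
 * streams' start times and should be discarded, so that both streams
 * effectively begin at the same instant. */
static int before_common_start(const AVPacket *pkt, AVRational tb,
                               int64_t start_v_us, int64_t start_a_us)
{
    int64_t common_start_us = FFMAX(start_v_us, start_a_us);
    int64_t pkt_us = av_rescale_q(pkt->pts, tb, AV_TIME_BASE_Q);
    return pkt_us < common_start_us;
}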

>
>
> > Do you think that this behaviour is due to missing multithreading? I found
> > the
>
> Yes. Without parallelism for the capture, the codec initialization could
> take so much time as to cause a buffer overrun in one of the capture
> drivers.
>
Ok, thank you for this suggestion. Do you suggest using one thread for every
source and one thread for encoding? Is that a good approach in your opinion?
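
What I have in mind is roughly the following (untested sketch, the queue and
thread names are mine): one thread per capture source that only calls
av_read_frame() and pushes the packets into a locked queue, while the main
thread pops from both queues, compares the timestamps and does the
decoding/encoding.

#include <pthread.h>
#include <libavformat/avformat.h>

typedef struct PktNode {
    AVPacket *pkt;
    struct PktNode *next;
} PktNode;

typedef struct PacketQueue {
    PktNode *head, *tail;
    pthread_mutex_t lock;
    pthread_cond_t  cond;
} PacketQueue;

typedef struct CaptureCtx {
    AVFormatContext *fmt;     /* already-opened capture input */
    PacketQueue     *queue;   /* consumed by the main thread */
} CaptureCtx;

static void queue_push(PacketQueue *q, AVPacket *pkt)
{
    PktNode *n = av_mallocz(sizeof(*n));
    n->pkt = pkt;
    pthread_mutex_lock(&q->lock);
    if (q->tail)
        q->tail->next = n;
    else
        q->head = n;
    q->tail = n;
    pthread_cond_signal(&q->cond);
    pthread_mutex_unlock(&q->lock);
}

/* Capture thread: drain the device as fast as it delivers packets, so the
 * driver's internal buffer cannot overrun while the encoder initializes. */
static void *capture_thread(void *arg)
{
    CaptureCtx *c = arg;
    for (;;) {
        AVPacket *pkt = av_packet_alloc();
        if (av_read_frame(c->fmt, pkt) < 0) {
            av_packet_free(&pkt);
            break;
        }
        queue_push(c->queue, pkt);
    }
    return NULL;
}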

> Regards,

Thanks a lot!
Livius


