[FFmpeg-devel] drop entire frame when RTP packets are lost
michaelni at gmx.at
Wed Jul 4 02:19:29 CEST 2012
On Tue, Jul 03, 2012 at 04:01:49PM -0400, Martin Carroll wrote:
> > the problem is caused by the OS UDP buffer overflowing. This is because
> > we disabled our ring buffer; without the ring buffer the code depends on
> > the OS having large enough buffers, which it plainly doesn't ...
> Yes, I had already spotted that, and to "fix" it I did a side experiment
> in which I hard-coded a very large receive buffer (in the setsockopt()
> call in udp.c). Even with a very large buffer, I still eventually start
> losing packets. I did not bother to mention that side experiment, because
> I was under the impression that the *existing* code allegedly worked.
I tried the same before writing the mail, and my results were the same.
Are you sure you updated net.core.rmem_max and net.core.rmem_default?
These limit the buffer size on Linux.
> Given your statements re how to fix it, I conclude that ffplay, as
> written, does not support playing RTP streams longer than, say, a minute
> or so. Please correct me if I'm wrong...
I am not aware of such a limitation.
The way stream probing works, libavformat causes an irregularity in
the calling of the RTP code, which then causes the OS buffers to
overflow, as the UDP ring buffer is not usable with RTP ATM.
This has nothing to do with ffplay: ffplay reads data as it needs it,
while ffmpeg reads all the data it can get as quickly as it can. You can
achieve the same with ffplay by increasing MIN_FRAMES, but this has other
drawbacks.
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
The educated differ from the uneducated as much as the living from the
dead. -- Aristotle