[FFmpeg-devel] [PATCH] Yield on AVERROR(EAGAIN).

Luca Abeni lucabe72
Sat Mar 6 10:39:55 CET 2010


On Fri, 2010-03-05 at 12:35 +0100, Michael Niedermayer wrote:
[...]
> >>> I might be wrong (maybe I remember the wrong problem :), but I seem to
> >>> remember that it consumed 100% of the CPU if the network returned EAGAIN
> >>> because of AVFMT_FLAG_NONBLOCK.
> >> question is, even if so, why would this hold adding nonblock support to
> >> tcp/udp up?
> >> Its not as if av_find_stream_info() didnt consume 100% now in that case
> >
> > Ok; let me run some tests during the weekend and I'll provide some
> > more reliable information. But (as far as I remember) with blocking
> > network input (as it is right now) av_find_stream_info() does not consume
> > 100% of the CPU (because the process is almost always blocked).
> > With nonblocking network input, it consumes 100% of the CPU.
> 
> sounds like you need a usleep() somewhere
Yes, my impression is that a usleep() would be needed in
av_find_stream_info(). But I did not want to insert a sleeping function
into the "generic" libavformat code, so I was looking for an alternative
solution.
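
For reference, this is only a minimal sketch of the kind of yield
Michael suggests (not my patch, and not the actual libavformat code),
done at the caller level around av_read_frame(); the helper name and
the 10 ms interval are purely illustrative:

/* Sketch: yield instead of busy-looping when AVFMT_FLAG_NONBLOCK
 * makes the demuxer return AVERROR(EAGAIN). */
#include <unistd.h>                 /* usleep() */
#include <libavformat/avformat.h>

static int read_packet_yielding(AVFormatContext *s, AVPacket *pkt)
{
    for (;;) {
        int ret = av_read_frame(s, pkt);
        if (ret != AVERROR(EAGAIN))
            return ret;             /* a packet, or a real error */
        usleep(10000);              /* nothing to read yet: sleep 10 ms
                                     * instead of spinning at 100% CPU */
    }
}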

Anyway, I ran some new tests receiving an RTP stream, and these are the
results:
1) For RTP input, modifying rtp_read() in rtpproto.c is useless, since
that function is called from rtsp.c, which already implements the
"select()" thing (see the sketch after this list). So I'd say the
select and timeout code in rtpproto.c is redundant (in fact, it seems
to me that it is never exercised). Of course, udp.c is a different story.
2) If I modify the code to honour the NONBLOCK flag, trying to receive a
non-existent RTP stream consumes 100% of the CPU.
3) Even if the stream exists, 100% of the CPU is consumed during the
first few seconds (in av_find_stream_info(), and probably also while
processing the packets buffered during av_find_stream_info()).
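
To be clear about what I mean by the "select()" thing in point 1, here
is a rough plain-POSIX illustration of the select()-with-timeout
pattern that rtsp.c already applies to its RTP sockets; fd and
timeout_ms are placeholders, this is not the actual FFmpeg code:

#include <sys/select.h>
#include <sys/time.h>

static int wait_for_data(int fd, int timeout_ms)
{
    fd_set rfds;
    struct timeval tv;

    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    tv.tv_sec  = timeout_ms / 1000;
    tv.tv_usec = (timeout_ms % 1000) * 1000;

    /* Returns 1 when the socket is readable, 0 on timeout, < 0 on
     * error, so the caller blocks here instead of busy-looping on
     * recv()/EAGAIN. */
    return select(fd + 1, &rfds, NULL, NULL, &tv);
}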


			Luca
