[FFmpeg-devel] Behaviour of url_read_complete

Reimar Döffinger Reimar.Doeffinger
Sat Jan 23 18:08:01 CET 2010


On Sat, Jan 23, 2010 at 08:07:58AM -0800, Art Clarke wrote:
> On Fri, Jan 22, 2010 at 2:25 PM, Reimar Döffinger
> <Reimar.Doeffinger at gmx.de> wrote:
> 
> > On Fri, Jan 22, 2010 at 07:33:23PM +0100, Reimar Döffinger wrote:
> > > Hello,
> > > I am a bit unsure about the purpose of url_read_complete.
> > > However, I would find it more convenient if it behaved as in the
> > > patch below.
> > > What are your opinions?
> > > The users of it in FFmpeg that I looked at would still work with that change.
> >
> > Here is a proper patch.
> >
> 
> This negatively impacts our Octopus media server
> (http://www.xuggle.com/octopus) when reading streaming media.  Our demuxers
> share threads with other objects doing work, and when a network read
> returned EAGAIN we used that opportunity to cooperatively yield the thread
> to other work.  Now the thread is effectively blocked in a hot spin loop on
> the socket waiting for data, which means we have to move those other jobs
> onto other threads (which changes our scaling model, as the thread-context
> switches are not desired).  When an RTMP, RTP or HTTP server we're reading
> from is temporarily slow, it now slows down our entire media server very
> noticeably.  Boo hoo :(

Please explain when, where and why this issue appears.
With the previous code, on encountering an EAGAIN all data received so far
would simply have been dropped silently.
So my understanding is that you can only be seeing performance issues if
your code never worked right before.
I am not against reverting this if you can explain what is going on,
but I am very reluctant to do it if the change merely uncovers bugs and
places where url_read_complete should never have been used.
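
For reference, a minimal sketch (not the actual avio code) of the two
behaviours under discussion, assuming the URLContext/url_read() API of that
time and the AVERROR(EAGAIN) convention for "no data available yet":

/* Hypothetical sketch only; url_read() stands in for the underlying
 * protocol read of a possibly non-blocking source. */
#include <errno.h>
#include "libavutil/error.h"
#include "libavformat/avio.h"

/* Old behaviour, roughly: any non-positive return aborts the loop, so an
 * EAGAIN makes the caller lose whatever was already read into buf. */
static int read_complete_old(URLContext *h, unsigned char *buf, int size)
{
    int len = 0;
    while (len < size) {
        int ret = url_read(h, buf + len, size - len);
        if (ret <= 0)
            return ret; /* partial data in buf is silently dropped */
        len += ret;
    }
    return len;
}

/* Patched behaviour, roughly: EAGAIN is retried until the buffer is full,
 * which preserves the partial data but busy-waits on a slow source. */
static int read_complete_new(URLContext *h, unsigned char *buf, int size)
{
    int len = 0;
    while (len < size) {
        int ret = url_read(h, buf + len, size - len);
        if (ret == AVERROR(EAGAIN))
            continue; /* retry instead of returning early */
        if (ret <= 0)
            return ret;
        len += ret;
    }
    return len;
}

The old variant returns the error and loses the bytes already read into the
buffer; the patched variant keeps them, but on a non-blocking source it spins
until data arrives, which is the blocking behaviour described above.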


