[FFmpeg-devel] [PATCH] oggpagesize option added
Wed Jan 26 22:11:47 CET 2011
I think this is a pretty poor solution for the stated problem. It
would basically be copying the braindead behaviour we're stuck with in
libogg because of some poorly thought out parts of our ABI.
The fundamental issue here is that more tightly packed pages mean
greater efficiency, but they also mean higher delay. If you don't care
about delay, pack away. But if you care, then you have issues.
The reason this solution is not good is that a straight size limit
does _not_ bound the delay very tightly. If the packets being emitted
by the codec are very small then you can still get up to 255 of them
in a single page, which is the same worst-case delay you had with
maximum-size pages! This means that your delay-bounded streaming app
will mostly work, but if the bitrate drops down it will stall.
The _correct_ behaviour is to decide how much delay you are willing to
tolerate and flush at least that often, regardless of how much data is
in the pages. GStreamer does this. ffmpeg2theora also has a cap on
the number of packets it will attempt to place per page.
Libogg itself doesn't handle this well because the required data isn't
passed into it. It does its own dumb flushing based on "4 packets or
4k", though callers can use an alternative entry which doesn't impose
this limit if they're going to be smart and initiate the flushing on
their own. (And even the original, automatic-flushing interface can
be manually flushed.) I don't see any reason for ffmpeg to be this dumb.
I'm forwarding an off-list thread I had with Andres that discussed this
some. I apologize for not doing so before now; I'd lost track of the
thread after asking for his permission to take it back on thread.
2011/1/20 Andres Gonzalez <acandido at hi-iberia.es>:
> On 20/01/11 16:50, Gregory Maxwell wrote:
>> 2011/1/20 Andres Gonzalez<acandido at hi-iberia.es>:
>>> The main purpose is streaming OGG files encoded in real-time with FFmpeg
>>> to an Icecast server.
>>> By default, FFmpeg writes pages as big as possible (that is, about 64 KB).
>>> This is OK for most uses (less overhead), but if you want to stream this
>>> file to an Icecast server, Icecast cannot manage big pages. Clients
>>> receive an HTTP response, and nothing else. So the easiest way to solve this is
>>> sending small pages from the source client.
>>> ffmpeg2theora makes small pages (~ 4 KB) and works well with Icecast, but
>>> does not have filters (AFAIK), which are very important for me.
>>> GStreamer's shout2send element also makes pages of ~4 KB, but that's a
>>> different toolchain.
>> What you want for this is a _delay_ based packetization, which
>> gstreamer can do. E.g., don't limit pages by size, limit them by the
>> amount of seconds of data allowed inside them. This is much better
>> for this purpose and will also avoid creating many continued pages.
> Interesting idea. If I understand you, I agree, in the sense that
> just making smaller pages could end in long sequences of small pages of the
> same logical stream, which may cause delay or starvation of some of the
> streams in the player, on the listener client.
> But I would like to keep an eye on the fact that the key problem is that
> Icecast does not manage big buffers of data well. That said, it's true that
> if I know (or set explicitly) the bitrate, I could set a delay limit such that
> I get packets that don't exceed a size limit.
> By the way, is that "delay limit" functionality available on ffmpeg already?
> Here's a little more information about this topic, if you want to have a
> look.
>> Mind if I reflect this back to the list?
> No problem, feel free.
> Andrés González
> T: 91 458 51 19 (ext:165)
> HI-IBERIA INGENIERIA Y PROYECTOS
> C/ Bolivia 5 - Madrid - 28016