[FFmpeg-devel] FLAC parser inefficiency

Justin Ruggles justin.ruggles
Thu Dec 9 18:29:07 CET 2010


On 12/08/2010 11:22 PM, Uoti Urpala wrote:

> The FLAC parser buffers input unnecessarily even when there's already
> enough data to return the next packet. This leads to growing internal
> buffer size and bad performance when data is fed in large enough chunks.
> In flac_parse(), the only return statement that can return a read amount
> less than the whole buf_size given as available input is the "return
> get_best_header(fpc, poutbuf, poutbuf_size);" one, and that cannot
> happen on two consecutive calls as the statement is under "if
> (fpc->best_header_valid)" but get_best_header() sets that to false. Thus
> if you consider a hypothetical case where a program provides, say,
> buf_size=10M bytes of available input to each parse call, then it's
> obvious that things will not work: the code will return at most
> one packet of output per call but will read at least 10 MB of input
> per two calls.

OK, I will take a look at this tomorrow when I have more time.

> BTW how is client code supposed to avoid trying to decode the junk
> packets produced (the "/* Output a junk frame. */" part)? Normally when
> parsing before decoding you'd want to throw away that data instead.
> Check for avctx->frame_size being 0 after the parse call? But is that
> supposed to behave the same way with other parsers?

All parsers are supposed to output all data they receive.  Their only
job is to split the data into frames, not to filter out anything they
think is not a frame.  For example, a damaged frame may not pass the
parser's validity test, but if the parser can find the next valid frame
boundary, the frame splitting may still be correct and the decoder might
be able to handle the damaged frame.  It's not up to the parser to make
that decision.
