[FFmpeg-devel] [PATCH] avformat/options_table: Set the default maximum number of streams to 100
nfxjfg at googlemail.com
Fri Dec 9 11:04:23 EET 2016
On Fri, 9 Dec 2016 02:44:11 +0100
Michael Niedermayer <michael at niedermayer.cc> wrote:
> On Thu, Dec 08, 2016 at 09:47:53PM +0100, Nicolas George wrote:
> > L'octidi 18 frimaire, an CCXXV, Michael Niedermayer a écrit :
> > > A. Is a heap limit for av_*alloc*() acceptable ?
> > > B. Are case based limits acceptable ?
> > No. This is the task of the operating system.
> > > Also, even if C is chosen, a small set of limits on the main parameters
> > > is still needed to allow efficient fuzzing; all issues reported
> > > by oss-fuzz recently are "hangs" due to slow decoding.
> > Then set a limit at the operating system level.
> You are misunderstanding the problem, I think.
> The goal of a fuzzer is to find bugs: crashes, undefined behavior,
> OOM, hangs.
> If the code under test can allocate arbitrary amounts of memory and
> take arbitrary amounts of time in a significant number of non-bug
> cases then the fuzzer cannot reliably find the corresponding bugs.
> Moving around the threshold of where to declare something an OOM or a
> hang will not solve this.
> Blocking high-resolution, high-channel-count, and high-stream-count
> cases, OTOH, should reduce the rate of false positives.
> Also, secondarily, resources spent waiting to separate hangs from
> slow decoding, and real OOM from cases that simply need a lot of
> memory, are resources that could be used for other things, like
> fuzzing more separate cases.
> But either way, I am the wrong person to discuss changes to oss-fuzz
> with, if you do have ideas that would improve it ...
I'm not sure why we need to accommodate the very special needs of
fuzzers now, instead of fuzzers finding ways to avoid these situations.
Fuzzers could, for example, inject their own malloc implementation into
the process that limits allocations, or something similar.