[FFmpeg-devel] [PATCH] Fix DV uninitialized reads
Fri Sep 25 20:51:22 CEST 2009
On 09/22/2009 11:59 AM, Reimar Döffinger wrote:
> On Tue, Sep 22, 2009 at 11:40:14AM -0700, Baptiste Coudurier wrote:
>> On 09/22/2009 01:20 AM, Reimar Döffinger wrote:
>>> I don't think the specs say anything about it (except for the header
>>> DIFs), but since the current code pads the header and audio DIFs by 0xff
>>> it is consistent... (though flush_put_bits will pad with 0, so it is not
>>> that consistent).
>> Well, I'd prefer being strictly correct according to the specs.
> Then you'll have to find someone who has $450 too much for the DV spec.
> I'd like to add that this is rather ridiculous considering it currently
> contains random data, and I always have to jump through extra hoops
> to make things absolutely 100% perfect when I try to fix a bug,
> with the end result that things stay buggy for ages.
I understand, and I agree that fixing a bug is better than leaving it
this way for ages; however, once we are fixing it, we can make sure
that it's fixed in the best possible way.
It's just a matter of the padding value (0 or 1).
>>>> Maybe s->sys->block_sizes[j] makes more sense?
>>> Not in my opinion. Like this it is clear that it simply pads the
>>> remainder of the put_bit context. If you use s->sys->block_sizes[j]
>>> the code is only understandable to someone who knows that this put_bit
>>> context is somehow related to the block sizes.
>> In my opinion it is unclear why we are padding the put_bit context
>> and for how much. Using s->sys->block_sizes makes it clear that it's
>> a block and its size is fixed and where the size is setup.
> Well, you were asking a question and I answered it. Using block_size is
> only understandable for someone who knows the DV format and what sys->block_size
> is. Just filling up the buffer obviously reserved for the bitstream
> seems rather logical to me in either case.
> Of course it would be better with a special function like fill_put_bits.
IMHO, to modify the DV code you had better know it.
>>>>> + if (pos < size)
>>>>> +     memset(pbs[j].buf + pos, 0xff, size - pos);
>>>> Is it worth checking for pos < size?
>>> For speed? No. But if for some reason the encoder ever writes beyond
>>> the buffer end, without the check it would try to overwrite all memory with 0xff.
>> IMHO if this happens it's an error and the encoder should return -1 to
>> indicate it.
> Given that the return value of that function is never checked, that does not seem
> any better to me (though not really worse either; really it should be an
> assert, albeit one that is also enabled for non-debug builds...).
I agree with the assert. There is also another error above that should
return -1, so I guess the return code should indeed be checked.
Key fingerprint 8D77134D20CC9220201FC5DB0AC9325C5C1ABAAA
FFmpeg maintainer http://www.ffmpeg.org