[FFmpeg-devel] [PATCH 2/3] avformat/mov: Fix parsing of saio/saiz atoms in encrypted content.

Carl Eugen Hoyos ceffmpeg at gmail.com
Sat Jan 6 01:41:38 EET 2018


2018-01-05 23:58 GMT+01:00 Jacob Trimble <modmaker-at-google.com at ffmpeg.org>:
> On Fri, Jan 5, 2018 at 2:01 PM, Carl Eugen Hoyos <ceffmpeg at gmail.com> wrote:
>> 2018-01-05 22:29 GMT+01:00 Jacob Trimble <modmaker-at-google.com at ffmpeg.org>:
>>> On Fri, Jan 5, 2018 at 12:41 PM, Carl Eugen Hoyos <ceffmpeg at gmail.com> wrote:
>>>> 2018-01-05 20:49 GMT+01:00 Jacob Trimble <modmaker-at-google.com at ffmpeg.org>:
>>>>
>>>>> +    entry_count = avio_rb32(pb);
>>>>> +    encryption_index->auxiliary_offsets = av_malloc_array(sizeof(size_t), entry_count);
>>>>
>>>> (sizeof(variable) instead of sizeof(type), please.)
>>>>
>>>> But since this could be used for a DoS attack, please change this
>>>> to something similar to 1112ba01.
>>>> If it is easy to avoid, very short files should not allocate
>>>> gigabytes.
>>>
>>> Switched to calculating the size based on the number of remaining
>>> bytes and returning an error if it doesn't match what is read.
>>
>> Sorry if I miss something:
>>
>>> +    entry_count = (atom.size - 8 - (has_saio_type ? 8 : 0)) / (version == 0 ? 4 : 8);
>>> +    if (avio_rb32(pb) != entry_count) {
>>> +        av_log(c->fc, AV_LOG_ERROR, "incorrect entry_count in saio\n");
>>> +        return AVERROR_INVALIDDATA;
>>> +    }
>>> +    encryption_index->auxiliary_offsets =
>>> +        av_malloc_array(sizeof(*encryption_index->auxiliary_offsets), entry_count);
>>
>> Does this avoid a 1G allocation for a file of a few bytes?
>>
>> Didn't you simply increase, from one to two, the number of bytes that
>> need to be changed in a valid mov file to make it behave maliciously?
>
> From what I can tell, the mov_read_default method (which is what reads
> child atoms) will verify that "atom.size" fits inside the parent atom.
> This means that for "atom.size" to be excessively large for the file
> size, the input would need to be non-seekable (since I think the
> top-level atom uses the file size) and all the atoms would need to
> have invalid sizes.

(I did not check this but I am not convinced the sample I fixed last
week is so sophisticated.)

> But technically I guess it is possible.

Thank you.

> But this is basically malloc'ing some number of bytes and then reading
> that many bytes.  The only alternative I can think of (in the face of
> non-seekable inputs) is to try-read in chunks until we hit EOF or the
> end of the expected size.  This seems like a lot of extra work that is

> not mirrored elsewhere.

On the contrary.

But you are right, I forgot to write that you have to add an "if (!eof)"
check to the reading loops; see the function that the above commit changed.
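Something along these lines (an untested sketch, not the code from that
commit: pb, version, entry_count and encryption_index->auxiliary_offsets
are taken from your patch, while the element type and the *_count field
name are my assumptions) would grow the array only as entries are
actually read, so a forged entry_count in a tiny file cannot force a
huge up-front allocation:

    /* untested sketch: read entry by entry and stop at EOF instead of
     * trusting entry_count for one big allocation */
    unsigned int alloc_size = 0;
    uint64_t *offsets = NULL, *tmp;
    uint32_t i;

    for (i = 0; i < entry_count && !avio_feof(pb); i++) {
        tmp = av_fast_realloc(offsets, &alloc_size, (i + 1) * sizeof(*offsets));
        if (!tmp) {
            av_free(offsets);
            return AVERROR(ENOMEM);
        }
        offsets = tmp;
        offsets[i] = version == 0 ? avio_rb32(pb) : avio_rb64(pb);
    }
    if (i < entry_count) {
        av_free(offsets);
        av_log(c->fc, AV_LOG_ERROR, "truncated saio atom\n");
        return AVERROR_INVALIDDATA;
    }
    encryption_index->auxiliary_offsets       = offsets;
    encryption_index->auxiliary_offsets_count = i; /* field name assumed */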

> There are several cases where this isn't explicitly checked.

Yes, and they will be fixed, please don't add another one.

> Also, how does this attack work?  If the number is way too big, we'll
> get ENOMEM and error out.

Which not only causes a DoS but also makes checking for other (if you
like: more dangerous) issues more difficult, which is bad. We already
know that there are cases where the issue is hard to avoid; I believe
this is not such a case.

It is possible to create (valid) samples that allocate a huge amount
of memory, but very small files should not immediately allocate
several gigabytes.
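
For seekable input, a plausibility check before the allocation would
already be enough; an untested sketch (entry_count, version and pb as
in your patch, the log message is mine):

    /* untested sketch: a file of a few bytes can never justify a
     * multi-gigabyte offsets array */
    int64_t file_size  = avio_size(pb);
    int     entry_size = version == 0 ? 4 : 8;

    if (file_size > 0 &&
        entry_count > (uint64_t)(file_size - avio_tell(pb)) / entry_size) {
        av_log(c->fc, AV_LOG_ERROR, "saio entry_count %u larger than input\n",
               entry_count);
        return AVERROR_INVALIDDATA;
    }

For non-seekable input, where avio_size() fails, the eof-guarded
reading loop above covers the rest.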

Carl Eugen

