[FFmpeg-devel] [PATCH] Speex parser

Baptiste Coudurier baptiste.coudurier
Sat Sep 12 04:01:15 CEST 2009


Hi Justin,

On 09/04/2009 03:55 PM, Justin Ruggles wrote:
> Baptiste Coudurier wrote:
>
>> On 08/30/2009 06:26 PM, Justin Ruggles wrote:
>>> Baptiste Coudurier wrote:
>>>
>>>> On 8/30/2009 5:50 PM, Justin Ruggles wrote:
>>>>> Justin Ruggles wrote:
>>>>>
>>>>>> Justin Ruggles wrote:
>>>>>>
>>>>>>> Michael Niedermayer wrote:
>>>>>>>
>>>>>>>> On Sun, Aug 30, 2009 at 10:40:45AM -0400, Justin Ruggles wrote:
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> Here is a Speex parser.  It does not split any frames, but only analyzes
>>>>>>>>> them in order to set the proper AVCodecContext parameters.
>>>>>>>> The decoder can do this; av_find_stream_info() should already
>>>>>>>> create a decoder to fill these in when they are missing.
>>>>>>> Why should it have to rely on the decoder, especially since we do
>>>>>>> not have a native decoder?  That would mean one MUST compile in an
>>>>>>> external library just for stream copy to work properly.
>>>>>> If there is no problem with packet duration being 0 or wrong, then I
>>>>>> think stream copy could work without the parser or decoder.  I tried flv
>>>>>> to ogg and it seemed to work since timestamps were just copied from one
>>>>>> container to the other.  Packet duration was still wrong though, and I
>>>>>> don't know if that causes other problems.
>>>>> Ok, I think I figured out the solution to this part at least.  Speex
>>>>> needs to be added to the list of codecs in has_codec_parameters()
>>>>> that require frame_size to be non-zero.  Then the libspeex decoder
>>>>> should not set frame_size at init when it has no extradata, since
>>>>> the value could be wrong.
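
For reference, the has_codec_parameters() change described above would
presumably look something like the sketch below (libavformat/utils.c).
It is only an outline; the exact codec list already in that condition
should be checked against the real function before applying anything:

static int has_codec_parameters(AVCodecContext *enc)
{
    int val;
    switch (enc->codec_type) {
    case CODEC_TYPE_AUDIO:
        val = enc->sample_rate && enc->channels &&
              enc->sample_fmt != SAMPLE_FMT_NONE;
        /* codecs whose frame_size must be known before the stream
         * parameters can be trusted */
        if (!enc->frame_size &&
            (enc->codec_id == CODEC_ID_VORBIS ||
             enc->codec_id == CODEC_ID_AAC    ||
             enc->codec_id == CODEC_ID_SPEEX))   /* <-- the addition */
            return 0;
        break;
    case CODEC_TYPE_VIDEO:
        val = enc->width && enc->pix_fmt != PIX_FMT_NONE;
        break;
    default:
        val = 1;
        break;
    }
    return enc->codec_id != CODEC_ID_NONE && val != 0;
}
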
>>>> Or, finally, set sample_fmt to SAMPLE_FMT_NONE at init, which would
>>>> trigger decoding of the first frame.
>>> That seems like a hack.  We always know the sample format at decoder
>>> init, but we don't necessarily know the frame size.
>>
>> This depends on the codec.  Codecs supporting different bit depths
>> certainly don't know the bit depth at init.  Does it look like a hack?
>> I really don't think so.
>
>
> I thought you were just talking about Speex.  In general, yes I agree
> with you that the decoder might not know it at init.
>
> If you are suggesting that having SAMPLE_FMT_NONE as default would be a
> preferable situation, then I agree with you.  But it seems wrong to me
> to force av_find_stream_info() to always defer to the decoder to
> complete all stream parameters.  In my opinion, the sample format output
> by the decoder has nothing to do with the stream parameters as far as
> the demuxer is concerned.

Well IMHO it is more complicated than that, but I agree with you in 
principle.  I.e. sample_fmt and bits_per_raw_sample are related in the 
same way that colorspace and pix_fmt are related.

This applies to pix_fmt as well.

If no pix_fmt was computed, it basically means the frames/stream could 
not be decoded; the same applies to sample_fmt, I think.

That's why I'm sceptical about setting pix_fmt and sample_fmt at codec init.
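
To illustrate what I have in mind, here is a rough sketch of a decoder
that defers sample_fmt (and frame_size) until a frame has actually been
decoded, so av_find_stream_info() keeps probing until real parameters
are known.  This is not the actual libspeex wrapper, just an outline
with made-up function names:

static av_cold int spx_decode_init(AVCodecContext *avctx)
{
    /* set up the external decoder state here ... */

    /* deliberately leave the output format undetermined: nothing has
     * been decoded yet, so we do not really know it */
    avctx->sample_fmt = SAMPLE_FMT_NONE;
    /* likewise, do not guess frame_size when there is no extradata */
    return 0;
}

static int spx_decode_frame(AVCodecContext *avctx, void *data,
                            int *data_size, AVPacket *avpkt)
{
    /* ... decode avpkt->data into 16-bit samples written to data,
     * setting *data_size to the number of bytes produced ... */

    /* only once a frame has really been decoded do we commit to the
     * output format */
    if (avctx->sample_fmt == SAMPLE_FMT_NONE)
        avctx->sample_fmt = SAMPLE_FMT_S16;

    return avpkt->size;
}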

> A grep of libavformat shows 4 uses of sample_fmt, not including the one
> in has_codec_parameters().  3 look like incorrect uses.  One is just
> questionable.  What do you think about fixing these and removing the
> sample_fmt requirement from has_codec_parameters() and possibly from
> libavformat completely?  Would it then be safe to make SAMPLE_FMT_NONE
> the default instead of SAMPLE_FMT_S16?

IMHO it is already safe to change the default to SAMPLE_FMT_NONE.
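
If we go that way, the change itself should be trivial; wherever the
context defaults are set up (avcodec_get_context_defaults2() as far as
I remember, but double-check), it would just be:

    /* default: format unknown until a decoder or demuxer fills it in */
    s->sample_fmt = SAMPLE_FMT_NONE;    /* was SAMPLE_FMT_S16 */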

-- 
Baptiste COUDURIER
Key fingerprint                 8D77134D20CC9220201FC5DB0AC9325C5C1ABAAA
FFmpeg maintainer                                  http://www.ffmpeg.org


