[FFmpeg-devel] [PATCH] Set correct frame_size for Speex decoding
Sun Aug 16 17:27:33 CEST 2009
On Sun, Aug 16, 2009 at 05:07:18PM +0200, Michael Niedermayer wrote:
> On Sat, Aug 15, 2009 at 07:55:46PM -0400, Justin Ruggles wrote:
> > Michael Niedermayer wrote:
> > >>> what exactly is the argument you have that speex should not be handled like
> > >>> every other codec?!
> > >>> split it in a parser, the muxer muxes ONLY a single speex packet per
> > >>> container packet. Any extension from that is low priority and "patch welcome"
> > >>> IMHO ...
> > >> The downside for Speex is the container overhead since individual frames
> > >> are very small.
> > >
> > > this is true for many (most?) speech codecs
> > >
> > > also if we write our own speex encoder, it will only return one frame at a
> > > time.
> > Why would it have to?
> Because the API requires it; both encoders and decoders have to implement
> the API. A video encoder also cannot return 5 frames in one packet.
> APIs can be changed, but you aren't arguing for an API change; you are
> arguing for ignoring the API and just doing something else.
> > If the user sets frame_size before initialization
> > to a number of samples that is a multiple of a single frame, it could
> > return multiple speex frames at once, properly joined together and
> > padded at the end. With libspeex this is very easy to do because the
> > functionality is built into the API.
> > I understand the desire to keep what are called frames as separate
> > entities, but in the case of Speex I see it as more advantageous to allow
> > the user more control when encoding. If frames are always split up,
> > this limits the user's options for both stream copy *and* encoding to
> > just 1 frame per packet.
> > If you're dead-set against this idea, then I will finish the parser that
> > splits packets in order to continue with my other Speex patches, but I
> > don't like how limiting it would be for the user.
> I am against Speex handling things differently from other speech codecs
> based on arguments that apply to those other speech codecs as well.
> I am also against passing data between the muxer and codec layers in a
> way that violates the API.
FFmpeg separates the muxer and codec layers; writing a demuxer & decoder
that depend on things beyond the API (several codec frames per container
frame) is going to break things. We had similarly "great" code before
(passing structs in AVPacket.data because it was convenient) that also
didn't turn out to be that great and required a complete rewrite ...
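For concreteness, the multi-frame packing quoted above boils down to simple arithmetic on the frame size. The sketch below is hypothetical (it is neither libspeex nor FFmpeg code), and FRAME_SAMPLES assumes Speex's narrowband mode with 160 samples per frame:

```c
#include <assert.h>

/* Hypothetical illustration of the quoted proposal: if the user sets
 * frame_size to cover nb_samples, the encoder emits whole Speex frames
 * and zero-pads the last one. Assumes narrowband's 160 samples/frame. */
#define FRAME_SAMPLES 160

/* Whole frames needed to cover nb_samples (rounding up). */
static int frames_needed(int nb_samples)
{
    return (nb_samples + FRAME_SAMPLES - 1) / FRAME_SAMPLES;
}

/* Zero samples appended so the final frame is full. */
static int padding_samples(int nb_samples)
{
    int rem = nb_samples % FRAME_SAMPLES;
    return rem ? FRAME_SAMPLES - rem : 0;
}
```

With libspeex itself the joining is done by calling the encode function repeatedly into one SpeexBits buffer and writing it out once, so no such helper is needed there.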
I've already said NUT doesn't allow multiple frames per packet, but it's
not just NUT: AVI doesn't allow multiple frames per packet for VBR
either, and in any case AVI needs its headers set up properly, not with
a fake frame size and such. And FLV, as we already know, has an issue
with >8 frames per packet. All that is just what we know will break if
you implement your hack; what else will break is something we would only
learn after some time.
IMHO, the pipeline should be demuxer -> parser -> split frames [unless
splitting is not possible]. If a muxer can store multiple frames, it can
merge several depending on its abilities and user-provided parameters;
that merging could also be done as a bitstream filter.
But just skipping the splitting and merging, and hoping that every
container will allow anything that any other container allowed, is just
not going to work. Even more so as we already know of many combinations
that would break.
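The split/merge pipeline sketched above can be shown in miniature. Speex CBR frames have a fixed encoded size per mode, so a parser-style split and a bitstream-filter-style merge reduce to slicing and concatenation. These are illustrative helpers only, not the libavcodec parser or bitstream-filter API, and FRAME_BYTES is an assumed per-frame size:

```c
#include <assert.h>
#include <string.h>

/* Illustrative only -- not the libavcodec API. With a fixed encoded
 * frame size, a parser can split a multi-frame container packet by
 * simple slicing; a merging bitstream filter is the inverse. */
#define FRAME_BYTES 20  /* assumed encoded size of one CBR frame */

/* Split: copy frame `idx` out of a packet holding several frames.
 * Returns the frame size, or -1 if idx is out of range. */
static int split_frame(const unsigned char *pkt, int pkt_size,
                       int idx, unsigned char *out)
{
    int nb = pkt_size / FRAME_BYTES;
    if (idx < 0 || idx >= nb)
        return -1;
    memcpy(out, pkt + idx * FRAME_BYTES, FRAME_BYTES);
    return FRAME_BYTES;
}

/* Merge: concatenate `nb` single-frame packets into one packet.
 * Returns the merged packet size. */
static int merge_frames(const unsigned char frames[][FRAME_BYTES],
                        int nb, unsigned char *pkt)
{
    for (int i = 0; i < nb; i++)
        memcpy(pkt + i * FRAME_BYTES, frames[i], FRAME_BYTES);
    return nb * FRAME_BYTES;
}
```

A real parser would also have to propagate timestamps and handle the mode/terminator bits; this only shows why fixed-size CBR frames make the split cheap.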
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
There will always be a question for which you do not know the correct answer.