[Ffmpeg-devel] Increasing Audio Buffer Size
Tue May 9 20:56:41 CEST 2006
On Tue, May 09, 2006 at 01:05:20PM -0400, Cyril Zorin wrote:
> > here's my thought on how it could be done (comments welcome...)
> >-int avcodec_decode_audio(AVCodecContext *avctx, int16_t *samples,
> >+int avcodec_decode_audio(AVCodecContext *avctx, AVFrame *avframe,
> >optionally the above can be done in a way which doesnt break compatibility
> >be adding a new function and keeping the old ...
> > the audio decoder's decode():
> >   calls avctx.release_buffer(avctx, avframe) if a previous buffer isn't
> >   needed anymore
> >   calls avctx.get_buffer(avctx, avframe)
> > audio sample i of channel c is stored in
> > avframe.data[c][ i*avframe.linesize[c] ] cast to the format (always
> > int16_t currently)
> Would it be correct to say that currently (samples) is an array of
> interleaved channel data?
> If many audio decoders already output interleaved
> channel audio data, then they'd have to be modified to support the proposed
Nonsense; nothing needs to be modified. The new system supports interleaved as
well as non-interleaved output. The latter makes some sense, as it might be
closer to the internal format and easier to filter / encode.
> avframe.data[channel][sample_index] storage.
avframe.data[c][ i*avframe.linesize[c] ], not avframe.data[c][i]
> In that case, who interleaves
> the audio data later on?
If the user needs a format different from what a decoder outputs ... well,
of course the user will need to convert it. lavc might provide some code
to help, but it's really just a 3-line for loop ...
> I think it'd be better to take the analogous approach that video decoding
I do; you just don't seem to understand it.
> takes, insofar that at a certain point an "Audio Frame" is just a free-area
> of crap that the decoder can fill in, without organizing it by "channel" or
For video it's organized by color components and lines, so your free-form
crap is not analogous.
> Also, if it were up to me, I'd leave the AVFrame struct well alone and make
> an "AFrame" or otherwise something for "audio frame". I wouldn't want to
> clutter AVFrame any more.
That's certainly a possibility; whether it's better I don't know. What I do know
is that it would be more work and possibly more complicated code ...
We would need AVFrame, AFrame and VFrame; the latter two would be "subclasses"
of AVFrame (i.e. their first fields match the ones from AVFrame).
In the past you could go to a library and read, borrow or copy any book
Today you'd get arrested for merely telling someone where the library is