[FFmpeg-devel] [PATCH] Implement av_samples_fill_arrays().
Tue Feb 1 18:07:43 CET 2011
On 02/01/2011 10:59 AM, Stefano Sabatini wrote:
> On date Friday 2011-01-28 21:13:30 +0100, Michael Niedermayer encoded:
>> On Fri, Jan 28, 2011 at 12:55:09AM +0100, Stefano Sabatini wrote:
>>>> what I am unsure about is linesize.
>>>> currently it's all set equal, which is mightily useless:
>>>> linesize[0] is as good as linesize[chan]
>>>> we could do something more useful with linesize[1+]
>>>> like setting it so that
>>>> linesize[1] = data[1] - data[0]
>>>> the idea is that things could be addressed like
>>>> data[0] + time*linesize[0] + chan*linesize[1]
>>>> the advantage of this over
>>>> data[chan] + time*linesize[0]
>>>> is that you need fewer variables (no data[0], data[1], data[2], ...), which can
>>>> help register-starved architectures
>>>> also, if possible a single public function like:
>>>> int av_samples_fill_arrays(uint8_t *data[8], int linesizes[8],
>>>> uint8_t *buf, int buf_size,
>>>> enum AVSampleFormat sample_fmt, int planar,
>>>> int nb_channels);
>>>> that allows data & linesize to be NULL
>>>> would mean a simpler public API than having 3 functions
>>> Done, indeed it's much nicer.
>>> As for the linesize stuff, I'm wondering if it would be better to have
>>> something more similar to the imgutils stuff, that is to have linesize
>>> simply give the plane size (for planar) and the buffer size for
>>> packed (possibly aligned).
>>> This would be useful for implementing:
>>> does it make sense?
>> you have 8 linesize values and you set them all equal. you could store different
>> things in there instead of choosing which of 2 useful values to duplicate 7 times
> Updated again, implementing the idea from Michael, other patches
> attached for reference.
> Note that we never defined the semantics of linesizes in the
> AVFilterBuffer for audio samples; the most natural approach is to
> assume the same semantics used by av_samples_*.
> At this point some change is required (in aconvert) because
> av_audio_convert() assumes different semantics for in/out_stride:
> * Convert between audio sample formats
> * @param[in] out array of output buffers for each channel. set to NULL to ignore processing of the given channel.
> * @param[in] out_stride distance between consecutive output samples (measured in bytes)
> * @param[in] in array of input buffers for each channel
> * @param[in] in_stride distance between consecutive input samples (measured in bytes)
> * @param len length of audio frame size (measured in samples)
> int av_audio_convert(AVAudioConvert *ctx,
> void * const out[6], const int out_stride[6],
> const void * const in[6], const int in_stride[6], int len);
> * create a wrapper in aconvert for converting samplesref->linesize to
> in/out_stride semantics.
> * create an av_audio_convert2() supporting the new semantics
> Some months ago an avcodec_decode_audioX() was discussed which
> returned a packet rather than buffer+size as the current API does; I
> suppose that may be relevant for this discussion as well.
Yes, I was working on a patch for avcodec_decode_audioX() to return data
in an AVFrame similar to video decoders, but that did not take into
account linesize at all since we don't support planar audio in lavc. I
suppose it could be easily revised to set pointers based on an
interleaved layout using whatever common scheme is decided on.