[FFmpeg-devel] Channel layouts for aac encoding/decoding and vorbis decoding
Tue Aug 11 14:49:25 CEST 2009
On tridi, 23 Thermidor, year CCXVII, Andreas Öman wrote:
> Yes, but if you move the interleaving out of the codecs you have the
> freedom of doing the interleaving (with an optional additional
> swizzling step of the planar pointers) or not doing any interleaving at
> all. Of course it gets more complex, but flexibility often comes with
> increased complexity.
It is not only a matter of flexibility vs. complexity, it is also a matter
of performance.
I finally did some benchmarks: I decoded 128 MB of 6-channel Vorbis, first
ignoring the output, then reordering it (Vorbis layout to ALSA layout, a
cycle on channels 1-2-3-4), and then de-interleaving it. I did 10 runs and
got a standard deviation of less than 0.1%.
The reordering costs 2%. The de-interleaving costs 4.8%.
The de-interleaving is more expensive probably because it is not done in
place, and therefore puts more pressure on the cache.
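To make the difference concrete, here is a minimal, self-contained sketch of the two operations (the channel map and buffer layout are illustrative, not the actual benchmark code): the reorder can work in place on the interleaved buffer, while the de-interleave must write into separate planes.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define CHANNELS 6

/* Reorder channels in place inside an interleaved buffer.
 * map[i] gives, for output channel i, the input channel to take. */
static void reorder_interleaved(int16_t *buf, int nb_frames,
                                const int map[CHANNELS])
{
    for (int f = 0; f < nb_frames; f++) {
        int16_t tmp[CHANNELS];
        memcpy(tmp, buf + f * CHANNELS, sizeof(tmp));
        for (int c = 0; c < CHANNELS; c++)
            buf[f * CHANNELS + c] = tmp[map[c]];
    }
}

/* De-interleave into per-channel planes: needs a second set of
 * buffers, hence more cache pressure than the in-place reorder. */
static void deinterleave(const int16_t *in, int16_t *planes[CHANNELS],
                         int nb_frames)
{
    for (int f = 0; f < nb_frames; f++)
        for (int c = 0; c < CHANNELS; c++)
            planes[c][f] = in[f * CHANNELS + c];
}
```

The reorder touches each frame once within a single buffer; the de-interleave streams through two distinct memory regions, which is consistent with the 2% vs. 4.8% figures above.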
A 2% cost is small, and probably totally acceptable. But it is not
negligible either.
> Well, it certainly simplified some code in my application (though it's
> still riddled with a special case for the AAC decoder). I just cannot
> see how it was better before, when all codecs had different orders.
The current API provides neither the information on the channel layout nor
auxiliary functions to reorder the channels.
If both are available, the only visible added complexity for client code
would be something like this after each call to avcodec_decode_audio3:
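As a hypothetical sketch (the helper name av_channel_reorder and the order tables are my invention, not part of the API), the auxiliary function could take two channel-order lists and permute the interleaved samples accordingly:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical helper: reorder interleaved samples from src_order to
 * dst_order.  Each order lists channel identifiers front-to-back; the
 * identifiers only need to match between the two tables. */
static void av_channel_reorder(int16_t *samples, int nb_frames, int nb_ch,
                               const int *src_order, const int *dst_order)
{
    int map[32]; /* map[i]: index in src_order of dst_order[i] */
    for (int i = 0; i < nb_ch; i++)
        for (int j = 0; j < nb_ch; j++)
            if (src_order[j] == dst_order[i])
                map[i] = j;

    for (int f = 0; f < nb_frames; f++) {
        int16_t tmp[32];
        memcpy(tmp, samples + f * nb_ch, nb_ch * sizeof(*tmp));
        for (int c = 0; c < nb_ch; c++)
            samples[f * nb_ch + c] = tmp[map[c]];
    }
}
```

Client code would then decode with avcodec_decode_audio3 as today, and apply this one call with the decoder's layout as source and the application's preferred layout as destination.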
Or, even better: the channel reordering is done automatically by the
library according to avc->request_channels_layout, avcodec_get_context_defaults
sets it to a conventional value, and there is a CH_LAYOUT_ANY special value.
Note that the current (request_)channel_layout field is not suitable for
that purpose: it says what channels are present, but not in what order.
(And by the way, I have always seen 5.1 used for front+rear layouts, while
avcodec.h uses 5POINT1 for front+side and 5POINT1_BACK for front+rear. Is it
intentional?)
> Indeed. Though if you, in the "planar audio struct", also add
> offset between each sample (similar to stride for video frames) you can
> use it to point into any channel order of interleaved samples as well.
> It will make the conversion functions more complicated but one can
> write optimized cases for the "0 extra bytes of stride" case, etc.
I still do not see the benefit of supporting non-interleaved layouts at such
a low level: the APIs are not suitable for them at all, since they all expect
a single buffer with its size.
In this case, a pair of well-optimized conversion functions would probably be
enough.
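For completeness, here is what the stride idea quoted above could look like (the struct and field names are invented for the sketch): one pointer plus one stride per channel describes planar storage with stride 1, or points straight into an interleaved buffer with stride equal to the channel count.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical "planar audio" descriptor: one base pointer and one
 * stride (in samples) per channel. */
struct audio_view {
    int16_t  *data[8];
    ptrdiff_t stride[8];
    int       nb_ch;
};

/* Describe an interleaved buffer without copying anything: channel c
 * starts at offset c and advances by nb_ch samples per frame. */
static void view_interleaved(struct audio_view *v, int16_t *buf, int nb_ch)
{
    v->nb_ch = nb_ch;
    for (int c = 0; c < nb_ch; c++) {
        v->data[c]   = buf + c;
        v->stride[c] = nb_ch;
    }
}

static int16_t view_sample(const struct audio_view *v, int ch, int frame)
{
    return v->data[ch][frame * v->stride[ch]];
}
```

The cost Andreas mentions is visible here: every sample access goes through a per-channel stride multiply, which is why optimized fast paths for the common strides would be needed.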