[FFmpeg-user] Preserving perceived loudness when downmixing audio from 5.1 AC3 to stereo AAC

Andy Furniss adf.lists at gmail.com
Wed Aug 7 18:16:55 CEST 2013


Nicolas George wrote:

> The issue has been analyzed on the devel mailing list: the old downmixing
> was done with samples coded in floating point, where clipping does not
> happen (but can happen later if the samples are converted to integers);
> because the number of conversions has been optimized, it is now done with
> samples coded as integers.

Ok, thanks for the info.
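
For anyone else who hits the clipping, one possible workaround is to do the
downmix explicitly with pan, using the "<" form so the gains are renormalized
and the summed channels can't clip. This is only a sketch (file names and the
AAC bitrate are made up, and a 5.1(side) source would use SL/SR instead of
BL/BR):

   ffmpeg -i input.m2ts -vn \
     -af "pan=stereo|FL<FL+0.707*FC+0.707*BL+0.5*LFE|FR<FR+0.707*FC+0.707*BR+0.5*LFE" \
     -c:a aac -b:a 192k stereo.m4a

Keeping the chain in float (e.g. an aformat=sample_fmts=fltp before the
downmix) might also avoid the integer clipping, but I haven't checked whether
the auto-inserted resample then stays in float.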

>> FWIW I also consider the new behavior wrong in that the description
>> of aformat says -
>>
>> "Set output format constraints for the input audio. The framework
>> will negotiate the most appropriate format to minimize conversions"
>
> What is "wrong" in that?

Nothing is wrong with the statement itself, and I also accept that "format" 
may mean more than the number of channels.

What I thought was wrong was the behavior with my thd example, which 
clearly doesn't

"negotiate the most appropriate format to minimize conversions"

> Not all codecs support channel layout selection like that.

Yeah, but if the codec does, then maybe the code could try to do its best 
for the user that requested stereo by using it. The user may not know 
the inner workings of every codec, but the code can.
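
Just to illustrate what codec-level channel layout selection looks like from
the command line (a sketch, not something I've verified on this sample; the
exact option depends on the FFmpeg version - newer builds expose a private
"downmix" option on the AC-3/E-AC-3 decoders, older ones used
request_channel_layout):

   # ask the AC-3 decoder itself to produce stereo, before any filtering runs
   ffmpeg -downmix stereo -i input.ac3 -c:a aac out.m4a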

Of course dca should be exempt until it's fixed, but that should be for 
another thread/further analysis :-)
