[Ffmpeg-devel] channel ordering and downmixing
Fri Mar 30 02:54:10 CEST 2007
Paul Curtis wrote:
> Justin Ruggles wrote:
>>#2) define a standard FFmpeg channel order and change decoders to always
>> output in that order. this would be simpler, but has the problem
>> stated above with containers vs. codecs. the demuxer could choose
>> to tell the decoder the channel order or else leave it up to the
>> decoder. i can't think of a clean way to use this on the encoding
>> side though.
> I, too, have been looking at this problem. Your second solution looks
> like the best way to handle it. I had the idea of having a default
> channel mapping to start with, then when the container was opened, if it
> had a channel mapping, it would set it. If the codec had a different
> mapping, it would then override the container's mapping. That way, a
> container with PCM would set the mapping, or in the AC3 case, the codec
> would override it. This is on decode.
Yes, this was what I had in mind for decoding.
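The precedence described above (codec mapping overrides container mapping, which overrides a default) could be sketched like this; the names here are illustrative only, not existing lavc API:

```c
/* Hypothetical sketch of the decode-side precedence: a codec-defined
 * channel order beats a container-defined one, which beats the default.
 * None of these identifiers exist in FFmpeg; they are for illustration. */
enum { ORDER_DEFAULT = 0, ORDER_CONTAINER, ORDER_CODEC };

static int pick_channel_order(int container_has_map, int codec_has_map)
{
    if (codec_has_map)
        return ORDER_CODEC;       /* e.g. AC-3 defines its own order */
    if (container_has_map)
        return ORDER_CONTAINER;   /* e.g. a container carrying raw PCM */
    return ORDER_DEFAULT;         /* neither says anything */
}
```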
> On encode, the reverse process would occur ... the container would set
> its "preferred" channel mapping, and the codec could override it. You
> could also have an option (a la 'mplayer') to force the mapping, if needed.
That makes sense I guess. One issue would be that the codec init
(avcodec_open) seems to be called before the muxer init
(av_write_header). So the preferred channel order for the container
would need to be stored in the AVOutputFormat, right?
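Since avcodec_open runs first, the muxer's preference would have to live in static data the encoder can query up front. A rough sketch, with every name invented for illustration (the real AVOutputFormat has no such field):

```c
#include <stddef.h>

/* Hypothetical: keep the muxer's preferred channel order in a static
 * table on the (sketched) output-format struct, so the codec init can
 * read it before av_write_header() is ever called. */
struct AVOutputFormatSketch {
    const char *name;
    const int  *preferred_channel_order; /* NULL = no preference */
};

static const int flv_stereo_order[2] = { 0, 1 }; /* L, R */
static const struct AVOutputFormatSketch flv_muxer =
    { "flv", flv_stereo_order };

/* What the encoder init might call to learn the muxer's preference. */
static const int *query_preferred_order(const struct AVOutputFormatSketch *f)
{
    return f->preferred_channel_order;
}
```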
> On downmixes, it would be a bit more complex. For example, an MPEG-TS
> with 5.1 audio converting to an FLV with two channels. That mapping
> would be a bit problematic: where would the mixing occur? My thoughts
> were to have the 5.1 presented by the decoder, and then downmixed in
> the codec similar to the way it is handled now.
I was thinking of having the decoder do the reordering and downmixing
(with common code in lavc to do this of course). You can sometimes have
the channel layout switch mid-stream, and it seems more logical to have
the decoder adapt to keep constant output rather than have the encoder
adapt to changing input.
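For the 5.1-to-stereo case, the common-code downmix could be as simple as the ITU-style matrix below (center and surrounds attenuated by roughly 3 dB, LFE discarded). The coefficients and the assumed input order (L, R, C, LFE, Ls, Rs) are illustrative choices, not anything lavc currently mandates:

```c
#include <math.h>

/* Illustrative 5.1 -> stereo downmix of one interleaved sample frame.
 * Assumed input order: L, R, C, LFE, Ls, Rs; LFE is dropped, which is
 * the usual choice but still a design decision. */
static void downmix_51_to_stereo(const float in[6], float out[2])
{
    const float a = 0.7071f;               /* 1/sqrt(2), about -3 dB */
    out[0] = in[0] + a * in[2] + a * in[4]; /* Lo = L + a*C + a*Ls */
    out[1] = in[1] + a * in[2] + a * in[5]; /* Ro = R + a*C + a*Rs */
}
```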
Of course, the ideal would be a separate audio filter API for mixing,
reordering, samplerate conversion, bandwidth filtering, etc... That's
the kind of thing for someone with more time on their hands though. ;)
In the meantime, another alternative to all of this might be to add a
separate API for audio channel layout which could do reordering and
downmixing. It could be similar to the current resampling API. That
way if someone does decide to implement a more comprehensive audio
filter layer, it would be easier since it would be external to all
muxing and coding. In this case, a default channel order could be
omitted in favor of direct conversion.
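Such an API could mirror the init/process/close shape of the current resampling API. A minimal runnable sketch, reordering only (mixing coefficients would plug into the same loop); every name here is invented and does not exist in FFmpeg:

```c
#include <stdlib.h>

/* Hypothetical channel-layout conversion API, modeled on the existing
 * audio_resample_init()/audio_resample()/audio_resample_close() pattern.
 * All identifiers are invented for illustration. */
typedef struct {
    int nb_in, nb_out;
    const int *map; /* map[i] = input channel feeding output channel i */
} AudioMixContext;

static AudioMixContext *audio_mix_init(int nb_in, int nb_out,
                                       const int *map)
{
    AudioMixContext *c = malloc(sizeof(*c));
    if (c) {
        c->nb_in  = nb_in;
        c->nb_out = nb_out;
        c->map    = map;
    }
    return c;
}

/* Pure reordering of interleaved samples; a downmix would instead sum
 * weighted inputs per output channel inside the inner loop. */
static void audio_mix(AudioMixContext *c, short *out, const short *in,
                      int nb_samples)
{
    for (int n = 0; n < nb_samples; n++)
        for (int i = 0; i < c->nb_out; i++)
            out[n * c->nb_out + i] = in[n * c->nb_in + c->map[i]];
}

static void audio_mix_close(AudioMixContext *c)
{
    free(c);
}
```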