[FFmpeg-devel] [PATCH] iff/8svx: move decoding/deinterleaving in demuxer

Stefano Sabatini stefano.sabatini-lala at poste.it
Tue May 31 10:04:11 CEST 2011

On date Sunday 2011-05-29 20:48:40 +0200, Reimar Döffinger encoded:
> On 29 May 2011, at 15:49, Stefano Sabatini <stefano.sabatini-lala at poste.it> wrote:
> > On date Sunday 2011-05-29 12:20:56 +0200, Reimar Döffinger encoded:
> >> On Sun, May 29, 2011 at 11:25:20AM +0200, Stefano Sabatini wrote:
> >>> On date Sunday 2011-05-29 10:54:29 +0200, Reimar Döffinger encoded:
> >>>> 
> >>>> 
> >>>> On 28 May 2011, at 13:42, Stefano Sabatini <stefano.sabatini-lala at poste.it> wrote:
> >>>> 
> >>>>> This is required to make it possible to return audio data in
> >>>>> multiple packets rather than a single huge packet with all the
> >>>>> chunk data, which is problematic for applications.
> >>>>> 
> >>>>> In particular ffplay cannot pause in the middle of a packet.
> >>>> 
> >>> 
> >>>> Have you tested with other applications? ffplay just has a stupid
> >>>> implementation, IMO that is not at all a good reason to move stuff
> >>>> that does not belong there into libavformat.
> > 
> > How would you suggest to implement it in ffplay?
> > 
> > Currently ffmpeg has a read_thread which fetches packets from demuxers
> > and sends them to the A/V/S queues, where they're processed by separate
> > decoding threads.
> > 
> > Pausing affects the read_thread, while the other threads keep going,
> > so if there is a huge packet the audio decoding thread keeps on
> > decoding it until it's finished. In this scenario pausing can't work.
> I thought it obvious that pausing should stop audio and video output; to my knowledge that's what basically every player does.
> > 
> >> Also, would you call e.g. the trellis code from libavcodec from libavformat?
> >> And the option for setting trellis, that would then be yet another
> >> format-specific option?
> > 
> > Do we need to implement trellis in the iff muxer? I'm not saying that
> > moving decoding in the demuxer is always a good idea, for this
> > particular format it's the simplest option (less code, less bugs).

> Well, IMO it's also the typical dead-end option: whenever you want to
> do lossless remuxing, or build an application that can do fast seeking
> in a file with compressed audio, etc., I'd guess you'd start by
> reverting this.

Lossless remuxing and fast seeking are still possible with this patch;
the main inconvenience is that you need to cache and decode the data
in the muxer, and since decoding is pretty cheap in this case I don't
consider that a big inconvenience.

> Now I doubt anyone will ever do that so I'm not going to drag this
> out endlessly, but to me it doesn't seem like a real improvement.
> Making the demuxer interleave the (possibly compressed) channel data
> would be something different, it would solve the issue of huge
> packets if the file is huge and "fix" the pause issue, but is more
> complex and won't really allow seeking with ffplay's implementation
> of it.

I considered this approach and I really couldn't see much benefit in
it. Indeed, the peculiarities of the 8SVX compression methods are:

1) you need the first byte to decode the whole file; decoding is
   meant to be sequential, so you always need to decode *all* the
   previous data

2) the left and right channels are interleaved at the granularity of
   the whole file (all the left channel data is stored before the
   right channel data), so you need to read all the left channel data
   before you can read any of the right channel data.

Now, this format was clearly never designed for seeking/streaming
scenarios. Surely you could employ some trick to enable the decoder to
decode incoming packets independently of the previously decoded data
(e.g. globally decode -> re-encode, then put in each "packet" the
reference uncompressed byte), but that way you're creating an
extension of this format and losing its only advantage, which is
simplicity (at the expense of efficiency).
