[FFmpeg-devel] ASS in AVI
Tue Aug 11 23:25:34 CEST 2009
On Tue, Aug 11, 2009 at 10:00:41PM +0200, Michael Niedermayer wrote:
> On Tue, Aug 11, 2009 at 09:08:03PM +0200, Aurelien Jacobs wrote:
> > On Tue, Aug 11, 2009 at 08:37:53PM +0200, Michael Niedermayer wrote:
> > > On Tue, Aug 11, 2009 at 02:30:45PM +0200, Aurelien Jacobs wrote:
> > > > Hi,
> > > >
> > > > This patchset add support for ASS subtitles in AVI demuxer.
> > > > The format has been documented and implemented in various tools for
> > > > several years.
> > > >
> > > > The first patch exports functions of the ASS demuxer so that they can be
> > > > used from the AVI demuxer.
> > > > The second patch adds seeking support in the ASS demuxer.
> > > > The third patch finally adds support for demuxing ASS subtitles from AVI
> > > > files using functions from the ASS demuxer.
> > >
> > > uhm eh hmm wtf ...
> > > what exactly is this patch set doing and why?
> > > ok one can store a whole subtitle file in one chunk, this isn't really sane
> > > but hey the windoz kiddies aren't sane.
> > Yes, this kind of non-interleaved storage is ugly... But I fear we will
> > have to deal with it :-(
> > > still I don't see why that would require this mess
> > >
> > > 1. avi demuxer returns 1 chunk
> > This chunk is generally at the very end of the file, so the demuxer has to
> > seek to this chunk and read it during read_header.
> if it's at the very end, the file qualifies as non-interleaved AVI and
> the code handling that should run.
Indeed, the file is detected as non-interleaved AVI by the current code.
I could make use of this to slightly simplify my patch.
Basically it would allow calling read_gab2_sub() directly from
avi_read_packet() instead of from the small piece of code doing the
seeking called from avi_read_header().
The problem with this is that the codec id will only be determined
inside avi_read_packet(). Is it allowed to modify the codec_id of a
stream after the call to avi_read_header()?
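To illustrate what I mean by determining the codec id lazily, here is a minimal standalone sketch. The struct and function names are made up for illustration only; the real libavformat types and the real probing logic are of course richer:

```c
#include <string.h>

/* Hypothetical, simplified stand-ins for the stream/codec-id types.
 * This only illustrates deferring the codec_id decision until the
 * subtitle chunk is actually read in the packet path. */
enum SubCodecID { SUB_CODEC_NONE, SUB_CODEC_ASS, SUB_CODEC_SRT };

struct SubStream {
    enum SubCodecID codec_id;   /* still unknown after read_header */
};

/* Probe the chunk payload once it has been read. */
enum SubCodecID probe_sub_chunk(const char *data)
{
    if (strstr(data, "[Script Info]"))  /* ASS files start with this section */
        return SUB_CODEC_ASS;
    return SUB_CODEC_SRT;
}

/* Called from the packet-reading path, i.e. after read_header returned. */
void read_sub_packet(struct SubStream *st, const char *chunk)
{
    if (st->codec_id == SUB_CODEC_NONE)
        st->codec_id = probe_sub_chunk(chunk);
    /* ...then split the chunk into events and emit packets... */
}
```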
> > > 2. ass parser splits chunk
> > Yes.
> > Here I added a step:
> > 2.5. avi demuxer get the next packet from ass parser and output it
> > interleaved with other streams at correct time.
> that's not the demuxer's job; the common code can interleave the output of
> each stream's parser. If that's not done already, it's a bug
When you write "stream's parser", do you mean demuxer?
Here there is only one demuxer involved, so the common code can't do any
interleaving except by doing potentially insane buffering.
Do you mean that the common code should actually see several demuxers
in this situation? How would that work?
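For reference, interleaving two pts-sorted packet queues is simple in itself; the hard part is only who buffers what. A toy standalone sketch (types made up for illustration, not the real AVPacket):

```c
#include <stddef.h>
#include <stdint.h>

/* Toy packet with a presentation timestamp. */
typedef struct { int64_t pts; int stream_index; } Pkt;

/* Merge two pts-sorted packet arrays into one interleaved stream, the
 * way generic code could interleave one stream's parser output with the
 * other streams.  Returns the number of packets written to out. */
size_t interleave(const Pkt *a, size_t na, const Pkt *b, size_t nb, Pkt *out)
{
    size_t i = 0, j = 0, k = 0;
    while (i < na && j < nb)
        out[k++] = a[i].pts <= b[j].pts ? a[i++] : b[j++];
    while (i < na) out[k++] = a[i++];   /* drain the remainder */
    while (j < nb) out[k++] = b[j++];
    return k;
}
```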
> > > 3. the decoder makes many AVSubtitles out of it
> > Yes.
> > > 4. the renderer renders, out of the hundreds of AVSubtitles, those which
> > > are needed at the current frame
> > So the renderer needs to buffer all the AVSubtitles from all the subtitle
> > streams?
> > And it needs to be able to select which stream to render (and
> > change it at runtime)?
> > Or do we need one renderer per subtitle stream, with the possibility to
> > enable/disable each renderer at runtime?
> I don't see how one vs. many renderers would be relevant here
IMO, a renderer should be quite simple and shouldn't even need to know
from which stream the data comes. So if subtitles are switched to
another language during playback, the renderer will just receive a reset,
and the next packets it receives will come from another stream.
If things work like this, buffering of all the packets of every stream
can't be done at the renderer level.
> and each subtitle packet could contain subtitles that are displayed,
> moved, have effects changed and so on; the renderer needs to keep track
> of what to do when already.
IMO, the renderer should receive packets when they are about to be displayed.
It should then display them for as long as they are supposed to affect the
display, and then throw them away...
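In code, that "display, then throw away" policy could look like this toy standalone sketch (made-up types, just to show the eviction rule):

```c
#include <stddef.h>
#include <stdint.h>

/* A subtitle event with its display window, in some common time base. */
typedef struct { int64_t start, duration; } SubEvent;

/* Drop events whose display window has ended; compacts the array in
 * place and returns the new count.  Events that have not started yet
 * are kept, since they still affect future display. */
size_t prune_expired(SubEvent *ev, size_t n, int64_t now)
{
    size_t k = 0;
    for (size_t i = 0; i < n; i++)
        if (now < ev[i].start + ev[i].duration)
            ev[k++] = ev[i];
    return k;
}
```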
> > And the renderer needs to handle seeking?
> the renderer is reset on seeks; the demuxer must return the chunk again
> if the seek target falls within its start-duration (that's the natural thing
> to do, of course one could do it differently ...)
Fine. That's what I did.
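Concretely, the demuxer returns the chunk again whenever the seek target satisfies this simple predicate (standalone sketch, half-open window by assumption):

```c
#include <stdint.h>

/* Whether a seek target lands inside a chunk's display window
 * [start, start + duration), in which case the demuxer should
 * return the chunk again after the seek. */
int chunk_covers(int64_t start, int64_t duration, int64_t target)
{
    return target >= start && target < start + duration;
}
```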
> > All of this doesn't sound really natural.
> the design of ass in avi is not natural; storing a whole movie in one chunk
> is silly, but it's like storing a 3D wavelet block of the whole movie in one
> chunk ...
> > And how would you handle stream copy from AVI to NUT for example ?
> after the parser you have split chunks, I see no problem here.
> Am I missing one?
Hum... I'm not sure what you mean?
Here we have just one demuxer and one muxer. The question is: when are the
subtitle packets supposed to be output by the demuxer?
And if the demuxer outputs all the subtitle packets at the very beginning, who
will do the buffering of all the packets during the whole demuxing?
> > Basically, my code does more or less the same thing as the non-interleaved
> > avi code.
> the non-interleaved avi code does not call functions from other demuxers, nor
> include their headers
> I wouldn't mind having ass splitting in the demuxer if it could be done naturally
What do you mean? Should the ass splitting be implemented as a parser instead
of a demuxer? And assdec.c would just set need_parsing and do nothing else?
Is it OK for a parser to receive a packet that needs to be split across
several hours?
And what about seeking? Should the demuxer always send the one full
subtitle chunk after seeking? And would the parser then have to discard
all the ass events which are in the past?
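For concreteness, here is a rough standalone sketch of how such a parser could extract an event's start time and discard past events after a seek. This is simplified for illustration; real code would also have to handle comments, embedded fields, malformed lines, etc.:

```c
#include <stdio.h>
#include <stdint.h>

/* Parse the start time of an ASS "Dialogue:" line into centiseconds.
 * ASS timestamps look like H:MM:SS.CC.  Returns -1 if the line is not
 * a dialogue event. */
int64_t ass_event_start(const char *line)
{
    int h, m, s, cs;
    if (sscanf(line, "Dialogue: %*d,%d:%d:%d.%d,", &h, &m, &s, &cs) != 4)
        return -1;
    return ((int64_t)h * 3600 + m * 60 + s) * 100 + cs;
}

/* After a seek, a parser splitting the full chunk could simply skip
 * events that start before the seek target. */
int ass_event_is_past(const char *line, int64_t seek_target_cs)
{
    int64_t t = ass_event_start(line);
    return t >= 0 && t < seek_target_cs;
}
```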
I'm just trying to understand what you have in mind. I know that my
patch is not the most elegant thing there is, but I wasn't able to find
a nicer solution. In fact, I thought about something that was discussed
once but never implemented: nested demuxers. It may be a good solution in
this situation, but it looks like a huge task...
Oh, and BTW, if you want to have a look at this, here is a sample: