[FFmpeg-devel] ASS in AVI

Michael Niedermayer michaelni
Wed Aug 12 14:50:57 CEST 2009


On Tue, Aug 11, 2009 at 11:25:34PM +0200, Aurelien Jacobs wrote:
> On Tue, Aug 11, 2009 at 10:00:41PM +0200, Michael Niedermayer wrote:
> > On Tue, Aug 11, 2009 at 09:08:03PM +0200, Aurelien Jacobs wrote:
> > > On Tue, Aug 11, 2009 at 08:37:53PM +0200, Michael Niedermayer wrote:
> > > > On Tue, Aug 11, 2009 at 02:30:45PM +0200, Aurelien Jacobs wrote:
> > > > > Hi,
> > > > > 
> > > > > This patchset adds support for ASS subtitles in the AVI demuxer.
> > > > > The format is documented [1] and has been implemented in various tools
> > > > > for several years.
> > > > > 
> > > > > The first patch exports functions of the ASS demuxer so that they can be
> > > > > used from the AVI demuxer.
> > > > > The second patch adds seeking support to the ASS demuxer.
> > > > > The third patch finally adds support for demuxing ASS subtitles from AVI
> > > > > files using functions from the ASS demuxer.
> > > > 
> > > > uhm eh hmm wtf ...
> > > > what exactly is this patch set doing and why?
> > > > ok one can store a whole subtitle file in one chunk, this isn't really sane
> > > > but hey the windoz kiddies aren't sane.
> > > 
> > > Yes, this kind of non-interleaved storage is ugly... But I fear we will
> > > have to deal with it :-(
> > > 
> > > > still I don't see why that would require this mess
> > > > 
> > > > 1. avi demuxer returns 1 chunk
> > > 
> > > This chunk is generally at the very end of the file, so the demuxer has to
> > > seek to this chunk and read it during read_header.
> > 
> > if it's at the very end, the file qualifies as non-interleaved avi and
> > the code handling that should run.
> 
> Indeed, the file is detected as non-interleaved avi by the current code.
> I could make use of this to slightly simplify my patch.
> Basically it would allow calling read_gab2_sub() directly from
> avi_read_packet() instead of from the small piece of seeking code
> called from avi_read_header().
> The problem with this is that the codec id will only be determined
> inside avi_read_packet(). Is it allowed to modify the codec_id of a
> stream after the call to avi_read_header()?

changing from CODEC_ID_NONE -> X is ok
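as a rough sketch (read_gab2_sub() is the helper name from the patch under
discussion; the body and signature here are only illustrative, not the actual
code):

    #include <libavformat/avformat.h>

    /* called from avi_read_packet() once the GAB2 chunk has been read;
     * promoting the codec id from CODEC_ID_NONE at this point is allowed */
    static int read_gab2_sub(AVStream *st, AVPacket *pkt)
    {
        /* the chunk begins with "GAB2" followed by a 0 byte and tagged
         * sub-blocks holding the embedded script */
        if (pkt->size >= 7 && !memcmp(pkt->data, "GAB2", 5)) {
            if (st->codec->codec_id == CODEC_ID_NONE)
                st->codec->codec_id = CODEC_ID_SSA;  /* NONE -> SSA is fine */
            /* ... locate the script inside the chunk and hand it on ... */
            return 1;
        }
        return 0;
    }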


> 
> > > > 2. ass parser splits chunk
> > > 
> > > Yes.
> > > 
> > > Here I added a step:
> > > 
> > > 2.5. the avi demuxer gets the next packet from the ass parser and outputs it
> > > interleaved with the other streams at the correct time.
> > 
> > that's not the demuxer's job, the common code can interleave the output of
> > each stream's parser. If that's not done already it's a bug
> 
> When you write "stream's parser", do you mean demuxer?

I mean AVParser
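i.e. the demuxer would only tag the stream and let lavf run the parser from
av_read_frame(), which then returns the split packets already interleaved with
the other streams; a rough sketch (the helper name is made up):

    #include <libavformat/avformat.h>

    /* sketch only: mark the GAB2 subtitle stream so that av_read_frame()
     * feeds each raw chunk through the registered AVParser */
    static void mark_stream_for_parsing(AVStream *st)
    {
        st->need_parsing = AVSTREAM_PARSE_FULL;
    }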


> Here there is only one demuxer involved, thus the common code can't do any
> interleaving except by doing potentially insane buffering.
> Do you mean that the common code should actually see several demuxers
> in this situation? How would that work?
> 

> > > > 3. the ass decoder makes many AVSubtitles out of it
> > > 
> > > Yes.
> > > 
> > > > 4. the renderer renders, out of the hundreds of AVSubtitles, the ones
> > > >    which are needed at the current frame
> > > 
> > > So the renderer needs to buffer all the AVSubtitles from all the subtitle
> > > streams?
> > > And it needs to be able to select which stream to render (and
> > > change it at runtime)?
> > 
> > > Or do we need one renderer per subtitle stream, with the possibility to
> > > enable/disable each renderer at runtime?
> > 
> > I don't see how one vs. many renderers would be related here
> 
> IMO, a renderer should be quite simple and shouldn't even need to know
> which stream the data comes from. So if subtitles are switched to
> another language during playback, the renderer will just receive a reset,
> and then the next packets it receives will come from another stream.
> If things work like this, buffering of all the packets of every stream
> can't be done at the renderer level.

as said, I don't see how the renderer design of one stream vs. all streams
would matter in the current discussion


> 
> > and each subtitle packet could contain subtitles that are displayed,
> > moved, have effects changed and so on; the renderer already needs to keep
> > track of what to do when.
> 
> IMO, the renderer should receive packets when they are about to be displayed.
> It should thus display them for as long as they are supposed to affect
> the display, and then throw them away...

as above, I don't see how that would be related to the discussion at hand.
there would be individual packets with correct start times after the
decoder at least, and how and when they are passed to the renderer is a
matter of the renderer API (which we don't have yet)


[...]

> > > And how would you handle stream copy from AVI to NUT for example?
> > 
> > after the parser you have split chunks, I see no problem here
> > am I missing one?
> 
> Hum... I'm not sure what you mean?
> Here we have just one demuxer and one muxer.

hmm, there should be a parser in between ...
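roughly, the stream copy main loop looks like this (copy_packets() is a
hypothetical helper; opening the input/output contexts, stream setup and
timestamp rescaling are left out of the sketch):

    #include <libavformat/avformat.h>

    static int copy_packets(AVFormatContext *ic, AVFormatContext *oc)
    {
        AVPacket pkt;

        while (av_read_frame(ic, &pkt) >= 0) {
            /* with the GAB2 design, the subtitle stream yields one huge
             * packet here; a parser between av_read_frame() and the muxer
             * would split it into per-event packets before writing */
            if (av_interleaved_write_frame(oc, &pkt) < 0)
                return -1;
        }
        return 0;
    }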


> The question is, when are the
> subtitle packets supposed to be output by the demuxer?

at their "dts" ideally, but they may be output earlier when its too messy
or would lead to performance issues, avi after all will output pkts for other
stream types as well not always at their dts but as they are interleaved ...


> And if the demuxer outputs all subtitle packets at the very beginning, who
> will do the buffering of all the packets during the whole demuxing?

a system without buffering does not work, you can have subtitles overlapping
in time, the demuxers return audio and video (and obviously subs) not exactly
when the subsequent decoders and output devices would need them, buffers will
be needed somewhere and these should be able to absorb the ass-in-avi mess as
the windoz lads have designed it


> 
> > > Basically, my code does more or less the same thing as the non-interleaved
> > > avi code.
> > 
> > the non-interleaved avi code does not call functions from other demuxers, nor
> > include their headers
> 
> OK.
> 
> > I wouldn't mind having ass split in the demuxer if it could be done naturally
> 
> What do you mean? Should the ass splitting be implemented as a parser instead
> of a demuxer? And assdec.c would just set need_parsing and do nothing else?

I am not sure ... maybe, maybe not


> Is it OK for a parser to receive a packet that needs to be split across
> several hours?

if it works and the alternative is too complex, I don't see a problem
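for illustration, a very rough sketch of such a splitting parser (names and
details are made up, and deriving pts/duration from the events' Start/End
fields is left out):

    #include <string.h>
    #include <libavcodec/avcodec.h>

    /* each call consumes one line of the chunk and emits it as a packet
     * if it is a Dialogue event; other lines (script headers, styles)
     * are consumed without producing output */
    static int ass_parse(AVCodecParserContext *s, AVCodecContext *avctx,
                         const uint8_t **poutbuf, int *poutbuf_size,
                         const uint8_t *buf, int buf_size)
    {
        const uint8_t *nl  = memchr(buf, '\n', buf_size);
        int            len = nl ? nl - buf + 1 : buf_size;

        if (len > 9 && !memcmp(buf, "Dialogue:", 9)) {
            *poutbuf      = buf;
            *poutbuf_size = len;
        } else {
            *poutbuf      = NULL;
            *poutbuf_size = 0;
        }
        return len;                    /* bytes of the input consumed */
    }

    AVCodecParser ass_parser = {
        .codec_ids    = { CODEC_ID_SSA },
        .parser_parse = ass_parse,
    };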


> And what about seeking? Should the demuxer always send the one full
> subtitle chunk after seeking?

that seems the natural thing to do with this (idiotic) design


> And the parser would then have to discard
> all the ass events which are in the past?

something would need to discard them, I am not sure if the parser would be
the correct place to do it.
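whatever ends up doing it, the check itself is simple; a sketch, assuming the
seek target is known in milliseconds (these function names are made up):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* extract the End time of an ASS event, e.g.
     * "Dialogue: 0,0:00:01.00,0:00:05.00,Default,,0,0,0,,Hello" */
    static int64_t ass_end_time_ms(const char *line)
    {
        int h, m, s, cs;
        const char *p = strchr(line, ',');          /* skip Layer/Marked */
        if (!p || !(p = strchr(p + 1, ',')))        /* skip Start */
            return -1;
        if (sscanf(p + 1, "%d:%d:%d.%d", &h, &m, &s, &cs) != 4)
            return -1;
        return ((int64_t)h * 3600 + m * 60 + s) * 1000 + cs * 10;
    }

    /* an event is droppable after a seek if it ends before the target;
     * events still overlapping the target are kept */
    static int past_event(const char *line, int64_t seek_target_ms)
    {
        int64_t end = ass_end_time_ms(line);
        return end >= 0 && end < seek_target_ms;
    }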


> 
> I'm just trying to understand what you have in mind. I know that my
> patch is not the most elegant thing there is. But I wasn't able to find
> a nicer solution. In fact, I thought about something that was discussed
> once but never implemented: nested demuxers. It may be a good solution in
> this situation, but it looks like a huge task...

nested demuxers could help but they still would end up reading the whole
chunk on seeking, I suspect


[...]

-- 
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Breaking DRM is a little like attempting to break through a door even
though the window is wide open and the only thing in the house is a bunch
of things you don't want and which you would get tomorrow for free anyway