[FFmpeg-devel] [PATCH][RFC] avcodec/avutil: Add timeline side data

Michael Niedermayer michael at niedermayer.cc
Tue Apr 3 01:45:17 EEST 2018


Hi

On Mon, Apr 02, 2018 at 04:46:18PM +0100, Derek Buitenhuis wrote:
> On 4/1/2018 10:38 PM, Michael Niedermayer wrote:
> >> I agree that the APIs are annoyingly different, but that they should remain
> >> separate APIs. The suggestion of aligning them (same names, args, etc.) seems the
> >> most reasonable to me - consistency.
> >>
> >> I don't really think they can share some common structs, but they can certainly
> >> be aligned.
> >>
> > 
> >> However, some of the stuff you listed doesn't make sense to me:
> >>
> >> * Codec changes are not part of timeline data; they're entirely separate things
> >>   in every format I'm aware of. In MPEG-TS it just changes, in MOV/MP4 it is done
> >>   via multiple 'stsd' atoms/boxes, which don't really carry any time related
> >>   data - there is another atom/box that maps samples to a particular 'stsd',
> >>   and that's it.
> > 
> > Yes, but in an abstract way each timespan has one codec.
> > It's the same kind of "data structure" even if it's not stored this way in
> > formats.
> 
> I disagree with this. Codec changes shouldn't be represented as timespans.

I think we misunderstand each other here on the terminology.

What I mean is a list of information for ranges of time, that is, a set of
(start, end, info) triplets. That is what I used the term "timespans" for here.
We can use any other term if timespans have a different meaning for you.

For example, if the codec is not always the same for a stream, then that is
pretty much this kind of information by definition.
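
To make this concrete, here is a rough sketch of the shape I have in mind.
All names here are invented purely for illustration; nothing like this exists
in the tree or in the patch:

    #include <stdint.h>

    /* Purely illustrative: a list of (start, end, info) triplets, where
     * "info" is whatever applies to that range of time (codec parameters,
     * metadata, ...). */
    typedef struct TimespanEntry {
        int64_t start;   /* start of the range, in some agreed time base */
        int64_t end;     /* end of the range, in the same time base */
        void   *info;    /* the per-range payload */
    } TimespanEntry;

    typedef struct TimespanList {
        TimespanEntry *entries;
        int            nb_entries;
    } TimespanList;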


> They may happen in-band, with a new PMT, new stsd, etc. Short of indexing
> and possibly decoding the entire file, you won't have a timespan at all
> to output. I also think it's just a fundamentally wrong way to go about
> supporting codec or format changes.
> 
> >> * What is AVTimelineMetadataList meant to be?
> > 
> > what we have in AV_PKT_DATA_STRINGS_METADATA
> 
> That is very vague. Could you elaborate on this? A lot of different and not
> necessarily time related data can be stored in this generic type. I know
> e.g. WebM DASH cue points are, but that's the only one I think is even vaguely
> related. And cue points are not timespans, but a singular timestamp.
> 
> >> Lastly, what is the intent here? Is it to align all of these before timeline
> >> support, or to deprecate and align after? Right now the API is aligned with
> >> other side data APIs. I need to know which scenario this is before I go
> >> ahead and implement this.
> > 
> > Part of the intent is to have an easier-to-understand API where things
> > follow a similar design.
> 
> I don't think anybody will disagree with this. But I need a clear path forward,
> and not a vague 'we should be consistent', which could mean many things.
>
> > Another part is "compatibility". Ideally a format like mpegts that just
> > changes codecs should fill whatever data structure we use for codec changes
> > in the future. And simply passing these structures over to a mov/mp4 muxer
> > would make it write stsd accordingly.
> 
> This can be accomplished with the current packet side data, I think. The code
> just hasn't been written to do so yet. Possibly extending or replacing the
> 'new extradata' type is the best solution IMO. It's not timeline/timespan data,
> and certainly can't be provided up-front, but only at packet level.
> 
> > It would be suboptimal, for example, if these two used different ways to
> > export this codec change data, at least unless we also provide code to
> > convert.
> 
> Can't disagree with that, but I think this problem is entirely orthogonal to
> timeline data. There is nothing that, IMO, should be shared.
> 
> > I didn't check just now what exactly is stored in it currently, but with
> > AV_PKT_DATA_STRINGS_METADATA and the chapter array (which also has
> > metadata in it) we possibly already have something that goes a bit in that
> > direction.
> 
> Ideally we can deprecate the chapter array in favour of an actual struct, which aligns
> with how we currently do side data (encryption, this rfc, etc.)? I don't quite think
> there is much overlap between timeline edits and chapters, and conflating the two,
> in my opinion, ends up making it more annoying for users.
> 
> > It's absolutely not my intent to suggest that other APIs would need to
> > be changed first. It is much more to bring the currently inconsistent APIs
> > to people's attention, so that this is given a thought when a new API is
> > added. Deprecating APIs and replacing them also has a cost for our users ...
> 
> OK. What is my path forward here then? Make a good API and then work on fixing
> the other older worse APIs like chapters? Use the old chapter-style API for timelines
> for the sake of consistency?
> 
> I get the worry and problem of consistent and large APIs, but I cannot continue the
> RFC, or even write an implementation until I know what exactly I need to do to move
> it forward; an actionable path to take. You have very valid concerns, but I don't
> know what I'm actually supposed to take action on, and do.

ok

Let's look at why every API we have is different.
It's because people add new APIs that are fundamentally different from existing APIs.
I think it's because each has been implemented by a different developer who believed his
choice was better than all prior ones.

I would be surprised if people in 10 years see the choices made today
fundamentally differently from how we see the choices people made 10 years ago.

So in this sense my question is: what advantage does the opaque side-data-based
system have over a system simply using a C struct, the way AVChapter works?
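
For reference, AVChapter is roughly the following plain struct (copied from
memory; see libavformat/avformat.h for the authoritative definition):

    typedef struct AVChapter {
        int id;                 /* unique ID to identify the chapter */
        AVRational time_base;   /* time base in which start/end are specified */
        int64_t start, end;     /* chapter start/end time in time_base units */
        AVDictionary *metadata;
    } AVChapter;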

About actual action and moving forward:
If you want a side-data-based system, there is very little to do. Just take a look
at the API and the existing APIs, and if you see nothing that can be made more similar
and aligned, then that's fine, nothing to change ATM.
In the future we may want to update existing APIs to use side data, unless by then
something else is more trendy.
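
For a user, the side data route would then look roughly like existing
stream-level side data access does today. The sketch below uses the display
matrix type only as a stand-in for whatever timeline side data type the patch
ends up defining:

    #include <libavformat/avformat.h>

    /* Sketch: fetch a stream-level side data blob and cast it to the struct
     * the API defines for that type; a timeline type would be read the same
     * way. */
    static void read_stream_side_data(const AVStream *st)
    {
        int size = 0;
        uint8_t *sd = av_stream_get_side_data(st, AV_PKT_DATA_DISPLAYMATRIX,
                                               &size);
        if (sd && size >= 9 * (int)sizeof(int32_t)) {
            const int32_t *matrix = (const int32_t *)sd;
            (void)matrix; /* interpret the 3x3 display matrix here */
        }
    }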

If OTOH you prefer a C-struct-based one, I think this patch can be simplified a lot,
as a lot of abstraction would not be needed; we also would not need to update
AVChapter to be consistent, and would not require users to update their code
to a new AVChapter API.
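
In that case the exported data could look roughly like this (again invented
names, just to show the AVChapter-like shape such an API could take):

    #include <libavutil/dict.h>
    #include <libavutil/rational.h>

    /* Hypothetical, AVChapter-style timeline entry -- not an existing or
     * proposed struct, just an illustration of the simpler shape. */
    typedef struct AVTimelineEntry {
        AVRational time_base;   /* time base of start/end */
        int64_t start, end;     /* media time covered by this entry */
        AVDictionary *metadata; /* free-form per-entry metadata */
    } AVTimelineEntry;

    /* On AVStream this could then live analogous to AVFormatContext.chapters:
     *     AVTimelineEntry **timeline;
     *     int nb_timeline_entries;
     */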

Thanks

[...]
-- 
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

I do not agree with what you have to say, but I'll defend to the death your
right to say it. -- Voltaire