[FFmpeg-devel] [PATCH] Playlist API

Ronald S. Bultje rsbultje
Fri Aug 7 17:52:54 CEST 2009


Hi Geza,

On Thu, Aug 6, 2009 at 7:10 PM, Geza Kovacs <gkovacs at mit.edu> wrote:
> On 08/06/2009 03:43 PM, Ronald S. Bultje wrote:
>> On Thu, Aug 6, 2009 at 6:36 PM, Geza Kovacs <gkovacs at mit.edu> wrote:
>>> streams[0] is primary video stream
>>> streams[1] is primary audio stream
>>> streams[2] is primary subtitle stream
>>> streams[3] is alternative video stream
>>> streams[4] is alternative audio stream
>>> streams[5] is alternative subtitle stream
>>
>> Well, that makes no sense. It's too limiting in all aspects.
>
> How so? libavcodec defines only 7 codec types; if streams[] were
> dynamically allocated rather than arbitrarily limited to MAX_STREAMS,
> the only issue I see with this is a small amount of memory being wasted
> by dummy streams. Ideally some 2-dimensional level of organization would
> be more elegant (something along the lines of separate arrays for
> video_streams[], audio_streams[], and all other stream types), but given
> that such 2-dimensional organization would break basically all existing
> code, I don't think there's a much better way to organize a
> one-dimensional streams array.

Well, you conveniently cut the rest, but memory use is really not the
only reason here. I think you're reading too much into an "open" or
"defined" AVFormatContext, which is basically the same as forcing it
into a certain corner. A subtitle file with 500 languages should still
fit in an AVFormatContext (and it does not, at this moment). Many
streams might not be audio, video or subtitle (which was only added
recently). You should see it much more as an opaque structure where
you can't really tell what's in it until you've actually started
looking in it. Consider metadata: you don't know whether a file ("mp3")
will have a title/author because it might not have an ID3 tag. An m3u
playlist of 20 mp3s might have 16 titles and 14 authors, but which
belongs to which? How do you represent that in an AVFormatContext? What
about time_base? etc.
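
To make the metadata problem concrete (just a sketch - entry_title[]
is made up, av_metadata_set is the current metadata API): an
AVFormatContext carries a single flat metadata list, so storing
per-entry titles in it loses the association entirely:

int i;
for (i = 0; i < 20; i++)
    /* each call replaces the previous "title" - which title belonged
     * to which mp3 is gone */
    av_metadata_set(&ic->metadata, "title", entry_title[i]);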

Don't get me wrong, your playlist-handling code is invaluable and
will be useful in some way. But it should *not* be the only (or
primary) interface through which we access playlists. I'd rather see a
new interface, developed by you or whoever, that we use for this.
Here are some constraints for this new, to-be-designed AVPlayList
interface:

- should roughly read as an AVFormatContext[] (a rough sketch follows
below this list)
- you might want to think of adding an "offset" and "length" which are
*not* the same as the media length - for editing purposes (or at least
design it in such a way that this could be done later)
- the AVPlayList could be a DVD disc (containing per-title or so
AVFormatContexts), a game resource file, an m3u file, an iTunes
playlist, a video editor's internal structure or something along those
lines
- XSPF has titles in the playlist; you might therefore want to add an
additional metadata[] field to the AVPlayList. This is not the same as
AVFormatContext.metadata
- you want to be able to represent an AVPlayList as an AVFormatContext
under certain conditions, but you want to add the AVFormatContext
constraints here: 1 set of metadata covering the whole media; 1 set of
streams covering the whole media (this might imply that the demuxer
should call decoder functions so that the raw output has the same
codec across the whole stream; in the case of video + audio, followed
by audio-only, it might also include gaps for certain streams).
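
Very roughly, and purely as a sketch (every name below is made up, not
existing API), I imagine something like:

typedef struct AVPlayListEntry {
    AVFormatContext *ctx;     /* the underlying media */
    int64_t offset, length;   /* editing window, *not* the media length */
    AVMetadata *metadata;     /* per-entry title etc., e.g. from XSPF */
} AVPlayListEntry;

typedef struct AVPlayList {
    AVPlayListEntry *entries; /* roughly reads as an AVFormatContext[] */
    int nb_entries;
    AVMetadata *metadata;     /* playlist-level metadata */
} AVPlayList;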

With these conditions, you can load an m3u file as an AVFormatContext,
transcode it using ffmpeg to whatever format, and play it using ancient
tools such as ffplay/ffmpeg - and it would all just work (well, except
-acodec copy, but that won't work anyway).
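
To illustrate, under that compatibility path, opening a playlist would
look no different from opening a single file through the current entry
points (a sketch of the proposed behavior, not of existing code):

AVFormatContext *ic;
if (av_open_input_file(&ic, "list.m3u", NULL, 0, NULL) == 0) {
    av_find_stream_info(ic);
    /* demux / transcode as usual; packets span all playlist entries */
    av_close_input_file(ic);
}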

But you could also write players that are much more feature-rich, such
as ffplay-with-multiple-song-support (ffplayer), or editors. And it
wouldn't break the API.

I think your current code implements an internal AVPlayList (which you
call PlayList in your patches, IIRC), and externally represents it as
an AVFormatContext. I'd like to make the AVPlayList the external
representation and add the code to represent it as an AVFormatContext
*in addition* to that primary interface. So don't worry, your code and
time are not wasted and are invaluable. I know that writing ffplayer is
not part of your SoC and you don't have to write a fully featured
ffplayer. But writing a demo tool for AVPlayList isn't too hard (a
sketch of such a demo loop follows below) and we'd love to keep you
around afterwards anyway. Converting ffplay -> ffplayer or writing a
new ffplayer would be a completely self-contained project already and
won't be part of your SoC end-of-term evaluation, I think.
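
Roughly, the core of such a demo tool could look like this (again just
a sketch; the AVPlayList fields and play_entry() are made up):

int i;
for (i = 0; i < pl->nb_entries; i++) {
    AVPlayListEntry *e = &pl->entries[i];
    /* per-entry metadata, offset and length are available here, which
     * a single flat AVFormatContext could not express */
    play_entry(e->ctx, e->offset, e->length);
}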


HTH (and of course all this is just IMHO),
Ronald


