[FFmpeg-devel] SOC Qualification tasks
Sat Mar 15 02:50:43 CET 2008
On Sat, Mar 15, 2008 at 02:45:33AM +0200, Uoti Urpala wrote:
> On Sat, 2008-03-15 at 00:00 +0100, Michael Niedermayer wrote:
> > On Fri, Mar 14, 2008 at 10:44:01PM +0200, Uoti Urpala wrote:
> > > - Doesn't work with LOW_DELAY.
> > low_delay == no delay, which is just the opposite of frame-based multithreading.
> I don't see it as the opposite or incompatible. Doing reordering in the
> decoder is IMO orthogonal to whether you want multiple threads to work
> on different frames.
Yes, it seems I misunderstood what you meant.
But then I don't see what problem returning frames in another order would have
with respect to the API.
> > > if FFmpeg had better functionality to track frame timestamps and
> > > reordering across decoding then such changes would not break things so
> > > easily.
> > I am not sure what you are talking about, but if you can point at some
> > concrete problems and solutions with patches, these would be welcome.
> The most basic issue is keeping track of timestamps (or any other
> information) related to particular frames across decoding. I think the
> get_buffer() thing that ffplay.c uses to reorder timestamps is an ugly
> hack, and the need to use such hacks shows that the library API is
Yes, I did suggest a while ago that av_decode_video/audio should take an AVPacket
as input instead of buf/buf_size. That would make passing information such
as timestamps quite easy. Applications not using lavf would just need
something like:
    av_init_packet(&pkt); // initialize fields to defaults
Sadly no volunteers implemented it ...
> Another thing is that if you use LOW_DELAY then (AFAIK) there's no way
> to access the reordering information. So if you decode a bunch of frames
> you don't necessarily even know whether the next visible frame would be
> one of the frames you decoded or further in the decode order.
What you call LOW_DELAY really is a hack; I wouldn't be surprised if it has
various random unexpected side effects. This is probably caused by there being
no(?) applications using it.
Anyway, shouldn't the has_b_frames value be enough to answer the question
of whether one of the decoded frames is output?
> > > On a more general level I don't like the idea of managing concurrently
> > > running threads behind the caller's back as the ONLY supported mode of
> > > operation.
> > Is this just a feeling or do you know of some concrete use cases where it
> > would be a limitation?
> Say you want to keep a constant number of actively running threads (for
> example equal to the number of cores, or one less to leave a free core
> for other activity), and also have other tasks besides decoding (like
> video filtering). I think this is a realistic enough use case. It seems
> difficult to achieve with the proposed API. It basically requires
> threads to occasionally "move" from the "FFmpeg side" to "user side"
> when they complete their current tasks.
Maybe it's possible with callbacks provided by the user app to start/stop
threads; we will need callbacks anyway because the code shouldn't rely on a
specific thread implementation.
Anyway, I think Loren would be in a better position to comment on whether this
use case is relevant or how it could best be dealt with ...
Michael
GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
The educated differ from the uneducated as much as the living from the
dead. -- Aristotle