[FFmpeg-devel] SOC Qualification tasks
Sat Mar 15 00:00:18 CET 2008
On Fri, Mar 14, 2008 at 10:44:01PM +0200, Uoti Urpala wrote:
> On Fri, 2008-03-14 at 20:20 +0100, Andreas Öman wrote:
> > Uoti Urpala wrote:
> > > On Fri, 2008-03-14 at 18:56 +0100, Andreas Öman wrote:
> > >> * Asynchronous decoding (I.e. decoding frames even when the caller
> > >> is not currently in 'avcodec_decode_video()').
> > >> Of course this must be fully hidden from the caller POV.
> > >
> > > You can't hide it from the caller. Normally the caller gives the frame
> > > data and expects full results before the call returns. With that API
> > > there is nothing to decode when the caller is not currently in
> > > avcodec_decode_video(). For asynchronous decoding you need a different
> > > API that allows the caller to queue frames in advance.
> > I was under the impression that '*got_picture_ptr' would be zero in the
> > case when the decoder can not (yet) deliver a complete picture.
> > So it will just be zero the first few calls to avcodec_decode_video().
> > But perhaps I'm missing something.
No, I don't think you're missing anything that wouldn't need to be dealt
with anyway, even with a different API.
> - Return value will not be correct in case of errors.
It isn't really meaningful currently anyway, as many decoders conceal errors
without any error code; feel free to call it a bug.
If we want to return some error code like frame ok / errors concealed / fatal,
that could be done as a field in AVFrame. Though this is actually orthogonal
to threads, as this information is not available currently IIRC.
> - Doesn't work with LOW_DELAY.
low_delay == no delay, which is just the opposite of frame-based multithreading.
> - Without LOW_DELAY will make DTS (even more) meaningless.
I cannot make sense of the "even more". Except that, yes, DTS will need some
work; this is inevitable no matter what the API, DTS just won't be magically
correct.
> - A naive implementation would call get/release buffer callbacks from
> another thread and/or outside avcodec_decode_video() call. It's probably
> possible to avoid this in most cases but at least it requires extra
Yes, one can write code with bugs; if you found a solution to this I am all
ears.
> This would break timing based on DTS (in the cases where it worked)
Yes but you already said that above.
> It would break the hack ffmpeg.c uses to reorder pts in decoder
> unless the code was changed so that all buffer management is done in the
> calling thread before returning.
There is no code that currently calls it from different threads, thus
no need to change it. And calling get_buffer() from different threads is
likely not going to work with many players, so even without what you call
a hack, it wouldn't be a good idea.
> MPlayer's basic method of reordering
> pts in a decode delay sized buffer would keep working, but it would
> break the comparison with avctx->has_b_frames used to detect cases where
> a pts will really never have a corresponding visible frame. The timing
> problems are not purely the fault of the thread-related API changes
MPlayer's basic method of reordering pts does AFAIK not work.
IIRC (please correct me if I am wrong) it needs to have timestamps for
each frame. These are generally not available and not required by the specs.
Neither MPEG-2 nor H.264 in MPEG-PS/TS needs to have timestamps on every
frame; they only need them once every 0.5 seconds or so, and the frames
in between can have varying duration and arbitrary reordering. Only the
decoder, or some other code decoding headers and SEIs, can recover the
correct timestamps.
> if FFmpeg had better functionality to track frame timestamps and
> reordering across decoding then such changes would not break things so
I am not sure what you are talking about, but if you can point at some
concrete problems and solutions with patches, these would be welcome.
Also I'd like to point you to AVStream.pts_buffer, which I believe is
equivalent to what you call "MPlayer's basic method of reordering pts".
In libavformat it is one of several methods to calculate missing timestamps.
If you think MPlayer's method can do something which ours cannot, then
I am very interested to hear what that is.
> On a more general level I don't like the idea of managing concurrently
> running threads behind the caller's back as the ONLY supported mode of
Is this just a feeling, or do you know of some concrete use cases where it
would be a limitation?
When comparing a separate API to a CODEC_CAP_FRAME_THREADS in AVCodec and
CODEC_FLAG_FRAME_THREADS in AVCodecContext, the latter seems simpler from
a user-app point of view.
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
The educated differ from the uneducated as much as the living from the
dead. -- Aristotle