[FFmpeg-devel] [PATCH] AVCHD/H.264 parser: determination of frame type, question about timestamps

Ivan Schreter schreter
Sun Feb 1 10:51:20 CET 2009

Michael Niedermayer wrote:
> On Sun, Feb 01, 2009 at 01:17:24AM +0100, Ivan Schreter wrote:
>> Michael Niedermayer wrote:
>>> On Mon, Jan 26, 2009 at 08:42:17AM +0100, Ivan Schreter wrote:
>>> [...]
>>>> We have a stream with pictures containing (T1/B1/T2==T1), (B2/T3/B3==B2) 
>>>> fields. That's two H.264 pictures, but 3 frames. Each av_read_frame() 
should return a packet containing exactly one frame. But we have just 
2 packets, which need to be returned in 3 calls to av_read_frame(), 
>>>> according to API. Further, the DTS must be set correctly as well for the 
>>>> three AVPackets in order to get the timing correct. How do you want to 
>>>> handle this?
>>> i dont see where you get 3 calls of av_read_frame(),
>>> there are 2 or 4 access units not 3 unless one is coded as 2 fields
>>> and 1 is a frame
>> No, we don't have 3 calls. First of all, I meant two pictures with 
>> three fields each (the T1/B1/T2 and B2/T3/B3 structures above). This 
>> generates 3 frames in the display. 
> no, it generates 2 frames, each with a duration of 1.5
Maybe I didn't express myself correctly. There are two decoded frames (4 
decoded fields in total), but there are three _displayed_ frames (4 
fields + 2 fields, which are repeated).

Look at H.264 standard, D.2.2, table D-1. For these two picture 
structures, three timestamps per picture are generated for the three 
fields (NumClockTS=3), so for the two pictures in total 6 timestamps. 
Each frame has two timestamps for appropriate top/bottom fields. In our 
case, T1/B1 could have timestamps 1/1 for progressive or 1/2 for 
interlaced. T2/B2 could have timestamps 2/2 for progressive or 3/4 for 
interlaced. T3/B3 could have timestamps 3/3 for progressive or 5/6 for 
interlaced.

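The counting above can be sketched in a few lines. This is my own illustration (the dictionary and helper name are hypothetical, not FFmpeg code): Table D-1 of the H.264 spec gives NumClockTS per pic_struct, and for field-paired pic_struct values every displayed frame consumes two field timestamps.

```python
# NumClockTS per pic_struct, per H.264 spec Table D-1 (D.2.2).
NUM_CLOCK_TS = {
    0: 1,  # frame
    1: 1,  # top field
    2: 1,  # bottom field
    3: 2,  # top field, bottom field
    4: 2,  # bottom field, top field
    5: 3,  # top, bottom, top repeated
    6: 3,  # bottom, top, bottom repeated
    7: 2,  # frame doubling
    8: 3,  # frame tripling
}

def displayed_frames(pic_structs):
    """Displayed frames for a run of field-paired pictures
    (pic_struct 3..6): each display frame pairs two field
    timestamps, so divide the total timestamp count by two."""
    total_ts = sum(NUM_CLOCK_TS[ps] for ps in pic_structs)
    return total_ts // 2

# The two pictures discussed: TBT (pic_struct 5) then BTB (pic_struct 6)
# give 3 + 3 = 6 field timestamps, i.e. 3 displayed frames
# from only 2 coded pictures.
```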
In any case, there are _three_ display frames, each displayed with 
duration 1, not two frames displayed with duration 1.5. I just wanted to 
point out that there is no way to express this yet. The current code, 
which sets the frame duration to 1.5, is a good workaround for now, but 
the application doesn't know the order of frames to display, which will 
most probably cause interlacing artefacts for interlaced video 
(progressive will be OK, even better than constructing the in-between 
frame from the two fields of the first and third frames).

IMHO, this is by no means the most pressing topic, though.

> [...]
>> I don't believe someone would produce such streams. Anyway, the standard 
>> _requires_ DTS/PTS coding for all frames having DTS != PTS, so even in 
>> this case, I- and P-slices would have to have timestamps. The timestamps 
>> of B-slices in between can be computed.
> this sounds like utter nonsense
> where is the standard supposed to require this?

I was referring to sections 2.7.4 and 2.7.5 in H.222.0. But yes, you are 
right, I didn't remember it quite correctly. The standard requires 
coding both DTS and PTS if DTS != PTS _and_ PTS is set at all. So if 
PTS is not set, DTS doesn't have to be set. My mistake.
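My reading of that rule, as a sketch (the function is hypothetical, not parser code): the PES header's PTS_DTS_flags field takes '10' for PTS only (DTS implied equal to PTS), '11' for both (required when they differ and PTS is coded), '00' for neither, and '01' is forbidden.

```python
def pts_dts_flags(pts, dts):
    """PES PTS_DTS_flags per my reading of H.222.0 2.7.4/2.7.5."""
    if pts is None:
        return 0b00  # no PTS coded -> DTS may not be coded either
    if dts is None or dts == pts:
        return 0b10  # PTS only; the decoder treats DTS as equal to PTS
    return 0b11      # DTS differs from a coded PTS -> both must appear
```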
