[FFmpeg-user] bwdif filter question

Mark Filipak (ffmpeg) markfilipak at bog.us
Mon Sep 21 20:11:14 EEST 2020

On 09/21/2020 11:24 AM, Edward Park wrote:
> Morning,
Hi Ted!
>> Regarding 'progressive_frame', ffmpeg has 'interlaced_frame' in lieu of 'progressive_frame'. I think that 'interlaced_frame' = !'progressive_frame' but I'm not sure. Confirming it as a fact is a side project that I work on only occasionally. H.262 defines "interlace" as solely the condition of PAL & NTSC scan-fields (i.e. field period == (1/2)(1/FPS)), but I don't want to pursue that further because I don't want to be perceived as a troll. :-)
> I'm not entirely aware of what is being discussed, but progressive_frame = !interlaced_frame kind of set me back a bit. I do remember the discrepancy you noted in some telecined material, so I'll just quickly paraphrase what we looked into before; hopefully it'll be relevant.
> The AVFrame interlaced_frame flag isn't completely unrelated to mpeg progressive_frame, but it's not a simple inverse either; it's very context-dependent. With mpeg video, it seems a frame is an interlaced_frame if it is not a progressive_frame ...

Not so, Ted. The following two definitions are from the glossary I'm preparing (and which cites H.262).

'progressive_frame' [noun]: 1, A metadata bit differentiating a picture or halfpicture
   frame ('1') from a scan frame ('0'). 2, H.262 §6.3.10: "If progressive_frame is set
   to 0 it indicates that the two fields of the frame are interlaced fields in which an
   interval of time of the field period exists between (corresponding spatial samples)
   of the two fields. ... If progressive_frame is set to 1 it indicates that the two
   fields (of the frame) are actually from the same time instant as one another."

interlace [noun]: 1, H.262 §3.74: "The property of conventional television frames [1]
   where alternating lines of the frame represent different instances in time."
   [1] H.262 clearly limits interlace to scan-fields and excludes concurrent fields
       (and also the non-concurrent fields that can result from hard telecine).
   2, Informal: The condition in which the samples of odd and even rows (or lines)
   represent different instants in time.
   [verb], informal: To weave or reweave fields.

  -- A note about my glossary: "picture frame", "halfpicture frame", and "scan frame" are precisely 
and unambiguously defined by (and differentiated from one another by) their physical structures 
(including any metadata that may demarcate them), not by their association with other features and not 
by the context in which they appear. I endeavor to make all definitions strong in like manner.

>... and it shouldn't result where mpeg progressive_sequence is set.
> Basically, the best you can generalize from that is that the frame stores interlaced video. (Yes, interlaced_frame means the frame has interlaced material.) Doesn't help at all... But I don't think it can be helped, since AVFrame accommodates many more types of video frame data than just the generations of MPEG coding.

Since you capitalize "AVFrames", I assume that you cite a standard of some sort. I'd very much like 
to see it. Do you have a link?

> I think it was often said (not as much anymore) that "FFmpeg doesn't output fields" and I think at least part of the reason is this. At the visually essential level, there is the "picture" described as a single instance of a sequence of frames/fields/lines or what have you depending on the format and technology; the image that you actually see.

H.262 refers to "frame pictures" and "field pictures" without clearly delineating them. I am calling 
them "pictures" and "halfpictures".

> But that's a visual projection of the decoded and rendered video, or if you're encoding, it's what you want to see when you decode and render your encoding. I think the term itself has a very abstract(?) nuance. The picture seen at a certain presentation timestamp either has been decoded, or can be encoded as frame pictures or field pictures.

You see? You are using the H.262 nomenclature. That's fine, and I'm considering using it also even 
though it appears to be excessively wordy. Basically, I prefer "pictures" for interlaced content and 
"halfpictures" for deinterlaced content unwoven from a picture.

> Both are stored in "frames", a red herring in the terminology imo ...

Actually, it is frames that exist. Fields don't exist as discrete, unitary structures in macroblocks 
in streams.

>... The AVFrame that ffmpeg deals with isn't necessarily a "frame" as in a rectangular picture frame with width and height, but closer to how the data is temporally "framed," e.g. in packets with header data, where one AVFrame has one video frame (picture). Image data could be scanned by macroblock, unless you are playing actual videotape.

You're singing a sweet song, Ted. Frames actually do exist in streams and are denoted by metadata. The 
data inside slices inside macroblocks I am calling framesets. I firmly believe that every structure 
should have a unique name.

> So when interlace scanned fields are stored in frames, it's more than that both fields and frames are generalized into a single structure for both types of pictures called "frames" –  AVFrames, as the prefix might suggest, also are audio frames. And though it's not a very good analogy to field-based video, multiple channels of sound can be interleaved.

Interleave is not necessarily interlace. For example, a TFF YCbCr420 frameset has 7 levels of 
interleave: YCbCr sample-quads, odd & even Y blocks (#s 1,2,3,4), odd & even Y halfmacroblocks, TFF 
Y macroblock, TFF Cb420 block (#5), TFF Cr420 block (#6), and macroblock, but only 3 interlacings: 
TFF Y macroblock, TFF Cb420 block, and TFF Cr420 block.

> I apologize that was a horrible job at quickly paraphrasing but if there was any conflation of the packet-like frames and picture-like frames or interlaced scanning video lines and macro block scanning I think the info might be able to shift your footing and give you another perspective, even if it's not 100% accurate.
> Regards,
> Ted Park
