[FFmpeg-user] ffmpeg architecture question

Mark Filipak markfilipak.windows+ffmpeg at gmail.com
Sun Apr 19 10:01:27 EEST 2020



On 04/19/2020 02:08 AM, pdr0 wrote:
> Mark Filipak wrote
>> My experience is that regarding "decombing" frames 2 7 12 17 ...,
>> 'pp=linblenddeint' (whatever it is) does a better job than 'yadif'.
>>
>> "lb/linblenddeint
>> "Linear blend deinterlacing filter that deinterlaces the given block by
>> filtering all lines with a
>> (1 2 1) filter."
>>
>> I don't know what a "(1 2 1) filter" is -- I don't know to what "1 2 1"
>> refers. pdr0 recommended it
>> and I found that it works better than any of the other deinterlace
>> filters. Without pdr0's help, I
>> would never have tried it.

> [1,2,1] refers to a vertical convolution kernel in image processing. It
> refers to a "block" of pixels 1 wide and 3 tall. The numbers are the
> weights; the center "2" refers to the current pixel. Pixel values are
> multiplied by the weights and summed, and that sum is the output value.
> Some implementations also apply a normalization step; you'd have to look
> at the actual code to check. But the net effect is that each line is
> blended with its neighbors above and below.

Thank you, pdr0. That makes perfect sense.
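
For anyone else following along, here's my own sketch in Python of what I take that to mean. It's 
just an illustration of the idea, not ffmpeg's actual pp=lb code:

import numpy as np

def linblend_121(frame):
    """(1 2 1) vertical blend: each line is summed with the lines above
    and below, weighted 1-2-1, and normalized by 4. My reading of pdr0's
    description, not a copy of ffmpeg's pp=lb implementation."""
    f = frame.astype(np.uint16)       # widen so the weighted sum can't overflow
    above = np.roll(f, 1, axis=0)     # line above (edges wrap here; real code would clamp)
    below = np.roll(f, -1, axis=0)    # line below
    return ((above + 2 * f + below) // 4).astype(np.uint8)

# Tiny demo: a "combed" column of luma values, fields from two different times.
col = np.array([[10], [200], [10], [200]], dtype=np.uint8)
print(linblend_121(col).ravel())      # the comb teeth get smeared together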

> In general, it's frowned upon, because you get "ghosting" or a double image.

Yes, that's what I see. But I don't care. Why don't I care? First, ghosting is plainly evident for 
flat planes such as building sides during panning shots, but not for patterned surfaces such as 
landscape shots or people's faces. Second, it greatly reduces the (apparent, but not real) judder 
that would otherwise show up for the original, "combed" frame. When you also consider that the 
ghost is there for only 1/60th second, it becomes nearly invisible. Of course, that's not true for 
23-telecine. In 23-telecine (i.e., 30fps), there are 2 combed frames, they abut each other, and 
each of them is 1/30th second. That means that the ghosting is 1/15th second -- clearly awful.
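
To put numbers on that (my own back-of-envelope arithmetic, in Python):

# How long is a blended/combed frame actually on screen?
fps_55 = 60              # my 55-telecine output rate
fps_23 = 30              # ordinary 2:3 telecine output rate
ghost_55 = 1 / fps_55    # one blended frame per 5-frame cycle
ghost_23 = 2 / fps_23    # two abutting combed frames per cycle
print(f"55-telecine ghost: {ghost_55 * 1000:.1f} ms")   # ~16.7 ms, i.e. 1/60th second
print(f"23-telecine combs: {ghost_23 * 1000:.1f} ms")   # ~66.7 ms, i.e. 1/15th second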

> 1 frame now is a mix of 2
> different times, instead of distinct frames. But you need this property in
> this specific case to retain the pattern, in terms of reducing the judder.
> Other types of typical single rate deinterlacing (such as yadif) will force
> you to choose the top or bottom field, and you will get a duplicate of the
> frame before or after, ruining your pattern. Double rate deinterlacing will
> introduce 2 frames in that spot, also ruining your pattern. Both work
> against your reason for doing this - anti-judder.

BINGO! You are a pro, pdr0. You have really taken the time to understand what I'm doing. Thank you!

By the way, my first successful 55-telecine movie transcode has finished. (The one I did earlier 
today has PTS errors.) Hold on while I watch it ("Patton" Blu-ray).

Oh, dear. There were no errors, yet the video freezes at about 3:20. If I back up 5 seconds and 
resume, the video plays again, through where it froze, but the audio is gone (silence). If I 
continue letting it play, the audio eventually resumes, but at the wrong place (i.e., audio from a 
scene several minutes further along in the stream). If I continue letting it play, the video freezes 
again. The total running time (which should be a constant) is not constant. It stays just ahead of 
the actual running time, by about 2x seconds, until the video freezes, at which time the total 
running time also freezes at the value it had prior to the freeze.

I think that this 55-telecine is stressing ffmpeg in ways it's not been stressed before and is 
exposing flaws that I fear the ffmpeg principals will not accept because they think what I'm doing 
is a load of crap.

>> To me, deinterlace just means weaving the odd & even lines. To me, a frame
>> that is already woven
>> doesn't need deinterlacing. I know that the deinterlace filters do
>> additional processing, but none
>> of them go into sufficient detail for me to know, in advance, what they
>> do.
> 
> Weave means both intact fields are combined into a frame. Basically do
> nothing. That's how video is commonly stored.

Yes, I know. I thought I invented the word "weave" -- I wanted to avoid the word "interlace" -- for 
lines that have been combined in a frame: "Woven lines" instead of "interlaced lines". I would have 
used "interleave", but someone fairly significant in the video world is already using "interleave" 
to replace "interlace" when referring to fields: "Interleaved fields" instead of "interlaced fields".

> If it's progressive video, yes, it doesn't need deinterlacing. By
> definition, progressive means both fields are from the same time and belong
> to the same frame.

There ya go. You have read the MPEG spec.

> Deinterlace means separating the fields and resizing them to full-dimension
> frames by whatever algorithm, +/- additional processing.
> 
> Single rate deinterlace means half the fields are discarded (29.97i =>
> 29.97p). Either the odd or the even fields are kept.
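
In code terms, that's roughly the following (again, just my sketch of the idea, not ffmpeg's 
implementation):

import numpy as np

def split_fields(frame):
    """Separate an interlaced frame into its two fields."""
    return frame[0::2], frame[1::2]   # even (top) lines, odd (bottom) lines

def single_rate_keep_top(frame):
    """Single rate deinterlace: keep one field, discard the other, and
    resize back to full height. Crude line-doubling stands in here for
    'whatever algorithm'."""
    top, _bottom = split_fields(frame)    # the bottom field is thrown away
    return np.repeat(top, 2, axis=0)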

Oh, now I understand you. You see, I don't want to do that. What I want is to not throw away either 
the odd or the even field. I want to blend them. If you throw away the combed frame's odd field, 
then 55-telecine (A A A+B B B) becomes the same as 46-telecine (A A B B B). If you throw away the 
combed frame's even field, then 55-telecine becomes the same as 64-telecine (A A A B B). I've done 
those, both by direct configuration and by employing yadif, and I don't like the judder. 
46-telecine and 64-telecine both have judder. I hate judder. I will accept 1/60th second of blended 
frame (nearly impossible to notice) in lieu of judder. I've taken strongly planar objects with 
extremely strong edges, in panning shots that produce a lot of telecine judder when p24 is fed to 
my 60Hz TV, and by 55-telecine made the judder a non-issue, with the 1/60th-second blended frame 
noticeable only if you watch the edges intently (and even then the fleeting edge fuzzing is not at 
all objectionable).
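
To make the patterns concrete, here's a little sketch of one two-film-frame cycle in each scheme 
("A+B" denotes the blended frame; the per-pattern comments are my own reading):

# One two-film-frame cycle at 60 fps output; each entry is one 1/60th-second frame.
patterns = {
    "55-telecine": ["A", "A", "A+B", "B", "B"],  # blend splits its time A/B -> no judder
    "46-telecine": ["A", "A", "B", "B", "B"],    # one field of A+B dropped -> 2/3 split, judder
    "64-telecine": ["A", "A", "A", "B", "B"],    # the other field dropped -> 3/2 split, judder
}
for name, cycle in patterns.items():
    print(f"{name}: {' '.join(cycle)}")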

