[FFmpeg-devel] attribute_deprecated int avpicture_deinterlace ??

Don Moir donmoir at comcast.net
Fri Nov 29 14:30:06 CET 2013


----- Original Message ----- 
From: "wm4" <nfxjfg at googlemail.com>
To: <ffmpeg-devel at ffmpeg.org>
Sent: Saturday, November 30, 2013 1:06 PM
Subject: Re: [FFmpeg-devel] attribute_deprecated int avpicture_deinterlace ??


> On Sat, 30 Nov 2013 18:54:52 +0100
> Reimar Döffinger <Reimar.Doeffinger at gmx.de> wrote:
>
>>
>>
>> On 30.11.2013, at 15:37, Hendrik Leppkes <h.leppkes at gmail.com> wrote:
>>
>> > On Sat, Nov 30, 2013 at 2:17 PM, Stefano Sabatini <stefasab at gmail.com> wrote:
>> >> On date Friday 2013-11-29 08:30:51 -0500, Don Moir encoded:
>> >>>
>> >>> ----- Original Message ----- From: "Stefano Sabatini"
>> >>> <stefasab at gmail.com>
>> >>> To: "FFmpeg development discussions and patches" <ffmpeg-devel at ffmpeg.org>
>> >>> Sent: Friday, November 29, 2013 8:09 PM
>> >>> Subject: Re: [FFmpeg-devel] attribute_deprecated int avpicture_deinterlace ??
>> >>>
>> >>>
>> >>>> On date Friday 2013-11-29 09:06:57 -0500, Don Moir encoded:
>> >>>>> deinterlacing is directly related to decoding in that you want a
>> >>>>> properly decoded image and not some effect.
>> >>>>>
>> >>>>> Looks like we are now pointed to libavfilter and yadif. I have no
>> >>>>> use for libavfilter, so I should link it just so I can deinterlace?
>> >>>>> libavfilter is just excess baggage from my viewpoint.
>> >>>>>
>> >>>>> Hate to bring this up late but seems silly or am I the only one that
>> >>>>> thinks that? Hope I am misunderstanding something.
>> >>>>
>> >>>> Possibly: we could extract the yadif code and move it somehow to the
>> >>>> library (libavfilter public low-level API or something, so you don't
>> >>>> need to build a filtergraph to apply it). It might be non trivial.
>> >>>
>> >>> Would be good if avpicture_deinterlace was improved possibly using
>> >>> yadif and left where it is. Other than that, I would probably roll
>> >>> my own rather than use avfilter if avpicture_deinterlace goes away.
>> >>
>> >> What's exactly your problem with libavfilter (please no trolling)? The
>> >> main problem seems that you are not willing to configure a filtergraph
>> >> for that, so the alternative I proposed is a low level deinterlacing
>> >> API, based on yadif which could be used without filters.

The main problem with libavfilter, for me, is that it is either not enough to meet my needs or slow compared to hardware. The other 
aspect is that it seems like a place to throw effects in and see if they stick. It just goes against the grain for me.

I do need to do some of the things it pretends to do, but I need to do them efficiently and correctly to meet my needs, without the 
additional overhead.

Take alpha fade, for example. I don't need software trying to do this; I just pass it on to the hardware, which is oh so simple and 
effective. I don't need it on a frame-by-frame basis as in libavfilter; the image just needs to fade out or in whenever I need it. 
It could be a single frame (a simple image) or multiple frames. Either way, my alpha fades in or out at the time I need it, in 
however many steps I choose, and it is just a redraw to the hardware. There are other examples I could post, but I will leave it at 
that.

>> > A simple filtergraph just for deinterlacing is so trivial that I
>> > wouldn't let this argument count for anything.

>> I don't know. If it's more than 5 lines of code (and I suspect it's a lot more) it might be useful to have a function that just 
>> passes things through one single filter...
>> Though it would probably still need a create and destroy function in addition, so I don't know if such a simplified API is worth 
>> it...

> Yeah, the current libavfilter API is pretty bad. Look at all the boiler
> plate even ffplay.c/ffmpeg.c have to use, even though they don't have
> to be compatible with anything else.
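
To put the "boilerplate" point in concrete terms, here is roughly what a caller has to write just to push decoded frames through a 
single yadif filter, following the doc/examples/filtering_video.c pattern. This is only a sketch: the helper name init_yadif_graph 
is mine and the error handling is trimmed.

#include <stdio.h>
#include <libavfilter/avfilter.h>
#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>
#include <libavutil/error.h>
#include <libavutil/mem.h>

/* Sketch: build buffer -> yadif -> buffersink for one video stream.
 * Assumes the filters have been registered at startup; error handling
 * is trimmed to keep the point visible. */
static int init_yadif_graph(AVFilterGraph **graph_out,
                            AVFilterContext **src_out, AVFilterContext **sink_out,
                            int w, int h, enum AVPixelFormat pix_fmt,
                            AVRational time_base, AVRational sar)
{
    AVFilterGraph   *graph   = avfilter_graph_alloc();
    AVFilterInOut   *inputs  = avfilter_inout_alloc();
    AVFilterInOut   *outputs = avfilter_inout_alloc();
    AVFilterContext *src = NULL, *sink = NULL;
    char args[256];
    int ret;

    if (!graph || !inputs || !outputs) {
        ret = AVERROR(ENOMEM);
        goto end;
    }

    /* The buffer source must be told everything about the incoming frames. */
    snprintf(args, sizeof(args),
             "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
             w, h, pix_fmt, time_base.num, time_base.den, sar.num, sar.den);

    ret = avfilter_graph_create_filter(&src, avfilter_get_by_name("buffer"),
                                       "in", args, NULL, graph);
    if (ret < 0)
        goto end;
    ret = avfilter_graph_create_filter(&sink, avfilter_get_by_name("buffersink"),
                                       "out", NULL, NULL, graph);
    if (ret < 0)
        goto end;

    /* Describe the endpoints for the graph parser, then parse "yadif". */
    outputs->name       = av_strdup("in");
    outputs->filter_ctx = src;
    outputs->pad_idx    = 0;
    outputs->next       = NULL;
    inputs->name        = av_strdup("out");
    inputs->filter_ctx  = sink;
    inputs->pad_idx     = 0;
    inputs->next        = NULL;

    ret = avfilter_graph_parse_ptr(graph, "yadif", &inputs, &outputs, NULL);
    if (ret < 0)
        goto end;
    ret = avfilter_graph_config(graph, NULL);

end:
    avfilter_inout_free(&inputs);
    avfilter_inout_free(&outputs);
    if (ret < 0) {
        avfilter_graph_free(&graph);
    } else {
        *graph_out = graph;
        *src_out   = src;
        *sink_out  = sink;
    }
    return ret;
}

And that is only the setup; every decoded frame then still has to go through av_buffersrc_add_frame() and 
av_buffersink_get_frame(), and the filtered frames have to be unreferenced.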

The way it is now with avpicture_deinterlace is the way I think it should be, though it could be improved. There is no need for me 
to decide ahead of time whether I should have some filter do this for me: you decode a frame and you are notified if it needs to be 
deinterlaced. Perfect! I watch the CPU usage and watch the video to make sure it's on time. Among other things, I can choose to 
stop deinterlacing for a period of time if things are looking slow. I can also deinterlace right into a hardware surface by setting 
the appropriate data pointers in an AVPicture struct, with no additional memory usage; I don't need to allocate a picture, for 
example.
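
For comparison, the pattern described above is roughly the following. Again only a sketch: maybe_deinterlace is just an 
illustrative name, and the AVPicture could equally be filled with avpicture_fill() pointing at a locked hardware surface instead of 
aliasing the decoded frame.

#include <string.h>
#include <libavcodec/avcodec.h>

/* Sketch of decode-time deinterlacing with the (now deprecated) helper.
 * 'frame' is assumed to have just been decoded with 'dec_ctx'. */
static void maybe_deinterlace(AVCodecContext *dec_ctx, AVFrame *frame,
                              int deinterlace_enabled)
{
    AVPicture pic;

    if (!deinterlace_enabled || !frame->interlaced_frame)
        return;

    /* An AVPicture is only data[] + linesize[], so it can point at the
     * decoded frame itself, or at an external buffer (e.g. a locked
     * hardware surface) set up with avpicture_fill(); no allocation. */
    memcpy(pic.data,     frame->data,     sizeof(pic.data));
    memcpy(pic.linesize, frame->linesize, sizeof(pic.linesize));

    /* Passing the same AVPicture as src and dst deinterlaces in place;
     * a negative return means the pixel format is not supported. */
    if (avpicture_deinterlace(&pic, &pic, dec_ctx->pix_fmt,
                              dec_ctx->width, dec_ctx->height) < 0) {
        /* unsupported format: just display the frame as decoded */
    }
}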

I guess some of you think deinterlacing is an effect of sorts, but I look at it as a fundamental part of decoding. Yes, it's a 
post-process, but not like tint, alpha, or whatever; to me the frame is not complete until it has been deinterlaced. It's more of a 
pre-post process. I don't think the designer/encoder means for you to display it without deinterlacing.

FFmpeg has three fundamental libraries: avutil, avformat, and avcodec. The other libraries are useful, and you can pick and choose 
among them to meet your needs. I strongly believe deinterlacing is a fundamental need and belongs in one of the fundamental 
libraries. avcodec is where it's at now, and I have no complaints about that.



