[FFmpeg-devel] [PATCH 1/2] (was Re: deduplicated [PATCH] Cinepak: speed up decoding several-fold...) formats.

Karl Blomster kalle.blomster at gmail.com
Mon Mar 6 18:42:31 EET 2017


On 2017-03-06 15:12, u-9iep at aetey.se wrote:
> Karl,
>
> I wish to thank you for the effort to help the "opposite" parties
> better understand each other.
>
> As you say, the patch indeed makes a move which looks in line with
> ffmpeg's former traditions but which at least formally collides with
> the currently perceived "best practices".
>
> On Mon, Mar 06, 2017 at 12:47:26PM +0100, Karl Blomster wrote:
>> think the original patch made this seem particularly unpalatable since it
>> added a whole new "quick and dirty" colorspace conversion to YUV with
>> hardcoded constants (there are several different YUV variants with slightly
>> different constants), with an intended-as-helpful link to the Wikipedia page
> I am pretty sure the definition of yuv420p in ffmpeg and the constants
> agree with each other. Yes, I am aware of the mess around the variety of
> YUVs; that's why I referred to the source of the numbers. (IIRC they
> actually come from Microsoft, but I felt Wikipedia looked more neutral.)
I'm honestly not sure what you mean here. The most common flavor of YUV (or more
strictly speaking YCbCr), which is the one you seem to have taken your numeric
approximations of the conversion coefficients from - informally "Rec. 601",
after its ITU recommendation number - has its roots in analog color television
and was, I believe, standardized back in the early 1980s, long before digital
video was practically usable in consumer applications. ffmpeg naturally supports
several other well-established variants as well, such as ITU Rec. 709 (first
standardized in 1990, I believe, originally for likewise-analog HDTV) and ITU
Rec. 2020 (a recent invention intended for 4K and other such things). In
practice, someone decoding from cinepak's storage format (which is similar to
ycgco in spirit but not the same as conventional ycgco, as far as I can tell) to
yuv is probably unlikely to want anything other than Rec. 601 coefficients, but
you can't know that. Standards conversion of digital video can be very complex
if you want to do it right, and as CPUs have gotten faster the emphasis in
digital video has generally shifted toward correctness and away from just
getting something to work, which is why standards conversion these days tends
to be left to specialized libraries. I understand that in your case you simply
want it done fast and don't worry too much about correctness, and while that's
fine and a legitimate use case (if unusual, in this day and age), it may not be
what other users want, and more importantly it violates the principle of least
astonishment.

Regardless, it is now gone so there's no use in dwelling on it further - just 
trying to clarify what's going on, here.
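
To make the ambiguity concrete, here is a minimal sketch (mine, for
illustration - not code from the patch) of the kind of integer approximation in
question, using the full-range BT.601 coefficients:

#include <stdint.h>

static inline uint8_t clamp_u8(int v)
{
    return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v;
}

/* Full-range BT.601 RGB -> YCbCr in 8.8 fixed point.  The coefficients are
 * round(c * 256); 32768 is the chroma offset of 128, pre-shifted by 8. */
static void rgb_to_ycbcr_bt601_full(uint8_t r, uint8_t g, uint8_t b,
                                    uint8_t *y, uint8_t *cb, uint8_t *cr)
{
    *y  = clamp_u8((        77 * r + 150 * g +  29 * b + 128) >> 8);
    *cb = clamp_u8((32768 -  43 * r -  85 * g + 128 * b + 128) >> 8);
    *cr = clamp_u8((32768 + 128 * r - 107 * g -  21 * b + 128) >> 8);
}

Swap in the limited-range 601 variant, or the Rec. 709 or Rec. 2020 luma
weights, and every one of those constants changes - which is exactly why
hardcoding one set inside a decoder makes people nervous.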

> Do you believe that libraries are not supposed to react to environment
> variables? The most ubiquitous system one (libc) does by design.
>
> Ffmpeg itself contains over twenty getenv() calls in the libav* libraries
> and, via ffplay, also depends on SDL, which for very natural reasons
> (similar to my motivation with Cinepak) relies on environment variables.
> Nor is there any apparent reasonable way to "get rid" of those getenv()s.
I would say that, as usual when determining what is "right" in software
architecture, the answer is "it depends". As far as I am aware, most of the
things libav* touches environment variables for are related to interactions
with things external to ffmpeg, such as the network (HTTP_PROXY), the terminal,
the locations of certain types of plugins, etc. Modifying internal behavior
based on environment variables is ugly in my opinion, and I can't think of any
other case in ffmpeg where this is done. In general, if you want to interact
with library functionality, do it through the API. In this case there's even a
mechanism in place explicitly for this purpose (negotiating the output pixel
format via the get_format callback), although I'll note that the ordinary use
case for that is quite different from what you want to do here.
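
Purely as an illustration of the mechanism (not a claim about what the cinepak
decoder currently offers through it): the decoder calls get_format with an
AV_PIX_FMT_NONE-terminated list of the formats it can produce, and the callback
returns the caller's choice. The preference for RGB24 below is an arbitrary
example of mine:

#include <libavcodec/avcodec.h>

/* Pick a format from the list the decoder offers; the ordinary use case
 * for this callback is hwaccel negotiation, as noted above. */
static enum AVPixelFormat pick_format(AVCodecContext *avctx,
                                      const enum AVPixelFormat *fmts)
{
    (void)avctx;  /* unused in this sketch */
    for (const enum AVPixelFormat *p = fmts; *p != AV_PIX_FMT_NONE; p++)
        if (*p == AV_PIX_FMT_RGB24)   /* arbitrary preference for the example */
            return *p;
    return fmts[0];                   /* otherwise take the decoder's default */
}

/* ...after avcodec_alloc_context3() and before avcodec_open2():
 *     avctx->get_format = pick_format;                           */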

As far as SDL goes, I'll just note that the documentation states that "[u]sing 
these variables isn't recommended and the names and presence of these variables 
aren't guaranteed from one release to the next." They seem intended for 
debugging only.

>> ffmpeg
>> is a generalist library that tries to support a broad range of use cases on
>> a wide range of platforms, but that also by necessity means it lends itself
>> less well to extremely specific use cases like this one, where you're
>> prepared to buy decoding performance at any cost.
> The cost in the code is in fact not that "big": about 68 LOC per output
> pixel format, in the latest revision. Compared to the several-fold (!)
> speed loss when going through swscale, it looks to me like exceptionally
> low-hanging fruit.
And here's one of the biggest issues in this debate, and where I think you and
wm4 et al. keep talking past each other. The cost in lines of code is relatively
unimportant; the cost in "number of different behaviors" is what matters. The
Unix way is to do one thing and do it well. Doing standards conversion (or, as
in the most recent patch, bit-depth scaling) directly in the decoder is fast,
sure! Nobody disputes that. The problem is that when you have as many decoders
as ffmpeg does, you *really* want them to behave in the same way as far as
reasonably possible, and to do one logical thing in one place. In practice this
isn't a huge deal for cinepak in particular, since it already does this bit of
ugliness, but surely you can see why there is a reluctance to add more of it?
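
For comparison, the conventional split - the decoder emits its native format
and conversion happens in one well-defined place - looks roughly like this from
the caller's side. This is only a sketch of the swscale route whose speed you
are comparing against; the output buffer described by out_data/out_linesize is
assumed to have been allocated by the caller:

#include <libavutil/frame.h>
#include <libswscale/swscale.h>

/* Convert whatever the decoder produced to yuv420p in one place,
 * instead of teaching every decoder about every output format. */
static int convert_to_yuv420p(const AVFrame *frame,
                              uint8_t *const out_data[4],
                              const int out_linesize[4])
{
    struct SwsContext *sws = sws_getContext(frame->width, frame->height,
                                            (enum AVPixelFormat)frame->format,
                                            frame->width, frame->height,
                                            AV_PIX_FMT_YUV420P,
                                            SWS_BILINEAR, NULL, NULL, NULL);
    if (!sws)
        return -1;
    sws_scale(sws, (const uint8_t *const *)frame->data, frame->linesize,
              0, frame->height, out_data, out_linesize);
    sws_freeContext(sws);
    return 0;
}

(Setting the context up per frame like this is of course wasteful; a real
caller would cache it, or use sws_getCachedContext(). Whether that path can be
made fast enough for your hardware is, I gather, the actual disagreement.)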

Then there's also the fact that, to be blunt to the point of rudeness, nobody 
*cares* about cinepak. I'm sure that *you* do, and as you've repeatedly pointed
out we do not have your point of view, but from the "mainstream" perspective, or
whatever you want to call it, this is definitely a fringe codec with very few
users today. Hence the reluctance to accept an "ugly" patch, I believe. A
cursory search of the mailing list archives makes it seem like nobody's really 
contributed meaningful functionality to the decoder other than you yourself 
(although your tendency to change email addresses makes it a bit hard to tell 
for certain) and almost no users have ever asked about it.

One also has to wonder *how* slow a system needs to be, exactly, to require this
kind of change. Again, I don't have all the details on your use case, but just
out of curiosity, how many systems were you intending to deploy this to, to
within an order of magnitude? Forgive me for presuming things, but users who
have not until now had video playback on their early-1990s-vintage systems (dumb
terminals?) for performance reasons must surely have gotten used to the
situation by now. Or implemented their own decoder. Or...?

Best regards
Karl Blomster

