[FFmpeg-trac] #7037(avcodec:open): ffmpeg destroys HDR information when encoding
FFmpeg
trac at avcodec.org
Mon Dec 16 12:00:52 EET 2019
#7037: ffmpeg destroys HDR information when encoding
-------------------------------------+-----------------------------------
Reporter: mario66 | Owner: cehoyos
Type: enhancement | Status: open
Priority: normal | Component: avcodec
Version: git-master | Resolution:
Keywords: libx265 hdr | Blocked By:
Blocking: | Reproduced by developer: 0
Analyzed by developer: 0 |
-------------------------------------+-----------------------------------
Comment (by mario66):
gdgsdg123, '''you''' do not really understand what HDR means in practice.
Let me use this opportunity to explain it in more detail, so that you guys
see why this is an absolutely critical "feature" that needs to be
"supported".
In SDR (8 bit), you basically have 256 values per color channel, so you
are quite limited in how many different nuances you can display. So when
you record a movie, you need to make sure that from one pixel value to the
next, the difference is small enough that a viewer cannot see any banding.
That puts a limit on how bright or dark anything can be; everything above
or below that limit has to be clipped. TV users at home then set the
maximum brightness on their TV so that it looks good to them.
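To make the banding argument concrete, here is a quick sketch (my own
illustration, not anything from ffmpeg) comparing how many code values each
bit depth provides and how large one quantization step is relative to the
full range:

```python
# Code values per channel at each bit depth, and the size of one
# quantization step as a percentage of full range. A smaller step means
# smaller jumps between adjacent values, i.e. less visible banding.
for bits in (8, 10, 12):
    levels = 2 ** bits
    step_pct = 100.0 / (levels - 1)
    print(f"{bits}-bit: {levels:>5} levels, one step = {step_pct:.4f}% of range")
```

At 8 bits one step is about 0.39% of the range; at 10 bits it is four
times finer, which is what lets HDR push the clipping point up without
introducing visible banding.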
Now HDR comes in, which means, first of all, that you can represent pixel
values with higher precision: 1024 (10 bit) or 4096 (12 bit) different
values per color channel. In a nutshell, that is the basic idea of HDR.
So where does the "it simply makes the black more black, the white more
white" that a lot of customers believe in come in? This is where things
get complicated. When the creator decides where maximum brightness is
clipped, he can now choose much higher values, because much higher values
can be represented before a viewer notices any banding. But this changes
the distribution of the pixel values! Where SDR gave you an approximately
uniform distribution, with HDR most pixel values are centered around a few
values and only very few pixels (e.g. looking into sunlight, or fire)
occupy the tails of the distribution.

If this is displayed naively on your TV, everything will look dull and
low-contrast, and bright light will not be any brighter. Why is this?
Well, you typically set the maximum brightness of your TV based on SDR
content. What you would need to do is increase the brightness
dramatically; only then would you actually see the HDR image as it was
intended. But what is the correct brightness value? And if you then switch
to an SDR movie, do you have to reset the brightness again? Also, a
backlight that is too bright limits the ability to display really dark
scenes, which was also a goal of HDR. Problems upon problems...
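That non-uniform distribution of code values is exactly what the SMPTE ST
2084 (PQ) transfer function used by HDR10 is built around. A small sketch
of the PQ inverse EOTF (my own illustration, not ffmpeg code), mapping
absolute luminance in nits to a 0..1 signal value:

```python
# SMPTE ST 2084 (PQ) constants, as defined in the standard.
m1 = 2610 / 16384
m2 = 2523 / 4096 * 128
c1 = 3424 / 4096
c2 = 2413 / 4096 * 32
c3 = 2392 / 4096 * 32

def pq_encode(nits: float) -> float:
    """Absolute luminance in cd/m^2 (0..10000) -> PQ signal in 0..1."""
    y = (nits / 10000.0) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

# Show how unevenly the 10-bit code range is spent across luminance.
for nits in (0.1, 1, 10, 100, 1000, 10000):
    code = round(pq_encode(nits) * 1023)
    print(f"{nits:>7} nits -> 10-bit code {code}")
```

Running this shows that roughly half of the 10-bit code range is spent on
luminances below 100 nits, while only about a quarter of the codes cover
everything from 1000 up to 10000 nits -- the bright "tails" really do get
very few values.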
The solution the movie industry came up with was to control the maximum
brightness of your TV directly via metadata. If you have ever watched an
HDR movie, you will have noticed that the backlight is much more active,
going even higher than what you set on your TV. That is exactly because of
this metadata. Carried to extremes, you could even set the maximum
brightness on a per-frame / per-sample basis, which would further improve
the experience, since you can then tell the TV to dim the backlight in
very dark scenes. This is what the industry is pushing for and what this
metadata is all about. If that metadata were lost, your movie would look
dull and low-contrast, and the whites would not be whiter, the blacks not
blacker... See the initial posting.
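While this ticket is open, the static part of that metadata (the mastering
display color volume per SMPTE ST 2086, plus MaxCLL/MaxFALL) can at least
be re-attached by hand through libx265's master-display and max-cll
options. A sketch that builds the string x265 expects -- the helper
function and the sample mastering values below are mine, so substitute the
values your source actually carries:

```python
# Build an x265 "master-display" string. HEVC mastering-display SEI units:
# chromaticity coordinates in steps of 0.00002, luminance in steps of
# 0.0001 cd/m^2.
def master_display(g, b, r, wp, max_nits, min_nits):
    def xy(p):  # (x, y) chromaticity -> "(X,Y)" in integer SEI units
        return f"({round(p[0] / 0.00002)},{round(p[1] / 0.00002)})"
    lum = f"L({round(max_nits / 0.0001)},{round(min_nits / 0.0001)})"
    return f"G{xy(g)}B{xy(b)}R{xy(r)}WP{xy(wp)}{lum}"

# Example: BT.2020 primaries, D65 white point, 1000 / 0.0001 nit display.
s = master_display(g=(0.170, 0.797), b=(0.131, 0.046), r=(0.708, 0.292),
                   wp=(0.3127, 0.3290), max_nits=1000, min_nits=0.0001)
print(s)

# The result can then be passed to the encoder, for example (max-cll
# values here are placeholders -- take the real ones from your source):
#   ffmpeg -i in.mkv -c:v libx265 \
#     -x265-params "master-display=<s>:max-cll=1000,400" out.mkv
```

This is only a workaround for the static metadata; it does not help with
the per-frame dynamic metadata mentioned above.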
It is absolutely critical that this metadata is preserved; everyone needs
to understand this. If you encode an HDR movie with ffmpeg at the moment
and then delete the original, you are basically f***ed: the metadata is
lost and there is no way to recover it.
--
Ticket URL: <https://trac.ffmpeg.org/ticket/7037#comment:34>
FFmpeg <https://ffmpeg.org>
FFmpeg issue tracker
More information about the FFmpeg-trac
mailing list