[FFmpeg-devel] FFV1 Specification

Peter B. pb at das-werkstatt.com
Sun Apr 8 11:03:00 CEST 2012

On 04/07/2012 11:52 PM, Michael Niedermayer wrote:
> On Sat, Apr 07, 2012 at 08:36:27PM +0200, Peter B. wrote:
>> I've cloned the github repo and made a few minor style changes to the text.
>> I'm a complete git-newbie (only SVN experience so far), but from what
>> I've read, I could commit my changes locally and send a "pull request"?
> yes, well, you will need to push them to some public repo before they
> can be pulled
Ok. Will try to do that as quickly as possible.
I could just create a fork on github, right?

>> - what if the source was RGB?
> its losslessly converted to YCbCr
I always thought that converting between RGB and YCbCr isn't
mathematically (or numerically?) losslessly possible. I assume that's
the "magic" of the JPEG2000-RCT formula?
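For reference, the "magic" is that JPEG2000's reversible color transform (RCT) uses only integer adds and shifts, and Cb/Cr keep exactly the information that the floor in Y throws away, so the inverse is exact. A minimal sketch in Python (function names are mine, not from the spec):

```python
def rct_forward(r, g, b):
    """JPEG2000 reversible color transform (integer-only, lossless)."""
    y = (r + 2 * g + b) >> 2   # floor division by 4
    cb = b - g
    cr = r - g
    return y, cb, cr

def rct_inverse(y, cb, cr):
    """Exact inverse: recovers the original RGB triple."""
    g = y - ((cb + cr) >> 2)
    r = cr + g
    b = cb + g
    return r, g, b

# Sampled 8-bit RGB triples round-trip exactly:
assert all(
    rct_inverse(*rct_forward(r, g, b)) == (r, g, b)
    for r in range(0, 256, 17)
    for g in range(0, 256, 17)
    for b in range(0, 256, 17)
)
```

The price for losslessness is that Cb and Cr need one extra bit of range compared to the inputs.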

>> 3) Is "high level description" supposed to give an overview of how FFv1
>> works?
> yes
I'd rename it to "General Description" then, because I think the term
"high-level" is more "coder-speak" :)

If that paragraph should provide an overview of how FFv1 works, it would
be great to have the following things in there:

1) An overview of all "components" of the codec. Similar to a block
diagram of a circuit. I'll see if I can come up with a draft of what I
"think" would be in there:

[Source (YCbCr / RGB)]
[if(RGB): convert_to_JPEG2000RCT]
[split into planes (Y, Cb, Cr, Alpha)]
[Range coder] ...

Does my description make any sense to you?

2) Which compression algorithms/principles are used by FFv1?

The range coder and Huffman coding are mentioned, but it is unclear (to
me) when within the encoding chain they're applied - and to what.
Additionally, I don't understand how FFv1's compression is so effective
if the algorithms used are rather old (range coder: 1979, Huffman: 1952).
Why is it that these can compete with JPEG2000's heavy-lifting wavelet
transform?
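As far as I understand it, the entropy coder is only half of the story: most of the work is done by prediction and context modelling, so the old entropy coders only have to code small, highly skewed residuals. A sketch of the median (MED) predictor, as used in JPEG-LS and (to my understanding) FFv1, assuming a simple row-major greyscale plane with out-of-frame neighbours read as zero (helper names are mine):

```python
def med_predict(left, top, topleft):
    """Median-edge-detector predictor: the median of left, top,
    and the planar guess left + top - topleft."""
    return sorted([left, top, left + top - topleft])[1]

def residuals(plane):
    """Prediction residuals for a 2D list of pixel values."""
    h, w = len(plane), len(plane[0])
    get = lambda y, x: plane[y][x] if 0 <= y < h and 0 <= x < w else 0
    return [plane[y][x] - med_predict(get(y, x - 1), get(y - 1, x), get(y - 1, x - 1))
            for y in range(h) for x in range(w)]

# On a smooth gradient the predictor is almost exact, so the residuals
# stay tiny even though the pixel values grow - and tiny, skewed
# residuals are exactly what a 1979-vintage range coder handles well:
plane = [[x + 2 * y for x in range(8)] for y in range(8)]
res = residuals(plane)
```

So the answer may simply be that good modelling makes the age of the entropy coder mostly irrelevant.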

>> 5) Do you have any estimations about bit-error resilience of FFv1's stream?
> ive not tested it but i suspect bit errors are quite bad. ffv1 is not
> designed to handle that. It could be changed to handle bit errors
> much better but this would cause overhead both speed and compression
> wise. Is this something important for the likely users of ffv1?
> I would expect that bit errors are not a big issue on modern storage
> media, which corrects bit errors internally and the end user gets an
> error-free sector or no sector (or a heavily damaged sector)
That is indeed a controversial topic among video technicians:

Almost all codecs/formats that have become standards in the
digital-video domain provide means of handling bit errors. Personally, I
agree with you: I think bit-error resilience is a requirement that
originates from the idea of "classic" video media.
However, it is often thought of as a very important feature of a
professional video codec, because it makes people sleep better :)

Sure it'd be a nice thing, but I think it's more a requirement for
codecs used in different use cases, such as broadcasting over unreliable
channels (e.g. Satellite, ...). I presume FFv1's usage to be in domains
(e.g. video archiving), where the media are considered to be
bit-reliable. I personally promote multiple backups and sporadic
checksum verification.

To address that question raised by video technicians, it might be a
good thing to have this in the specs, so one knows what to expect -
and why.

