[FFmpeg-user] Mathematically lossless MotionJPEG2000 encoding possible?
dave at dericed.com
Mon Oct 27 21:58:09 CET 2014
On Oct 27, 2014, at 4:36 PM, Christoph Gerstbauer <christophgerstbauer at gmail.com> wrote:
> Hello David,
> using "ffplay -vcodec libopenjpeg -i ..." works.
> So I have to force decoding with libopenjpeg to get a correct output.
Another option is to pass --disable-decoder=jpeg2000 when configuring, which prevents its use entirely. In most encoding and decoding scenarios ffmpeg's native jpeg2000 encoder/decoder (named ‘jpeg2000’) is broken.
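To illustrate both options (a sketch; the filenames are placeholders, not from your setup):

```shell
# Force the libopenjpeg decoder instead of ffmpeg's broken native one.
# Placing -vcodec before -i selects the decoder for the input:
ffplay -vcodec libopenjpeg -i input.mxf
ffmpeg -vcodec libopenjpeg -i input.mxf -c:v ffv1 output.mkv

# Or build ffmpeg without the native decoder so it can never be picked:
./configure --disable-decoder=jpeg2000 --enable-libopenjpeg
make && make install
```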
> As I see it: ffv1 compresses (if not using slices) a little better than jpeg2000.
> But ffv1 is much faster than jpeg2000.
> Can you explain why jpeg2000 is so extremely slow and needs much more CPU load than ffv1?
> Where is the difference here? Both are wavelet transformation codecs, aren't they?
Others could explain this much better than I, though one correction: FFV1 is not a wavelet codec; it uses a median predictor with a range coder, which is part of why it is faster. Note that ffmpeg can recently do some threading with libopenjpeg encoding, so current speeds, though slow, are at least double what they were last year.
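A rough way to compare speeds yourself (a sketch; the filenames and thread count are placeholders):

```shell
# JPEG 2000 via libopenjpeg, using the recently added threading:
time ffmpeg -i input.mov -c:v libopenjpeg -threads 4 out_j2k.mkv

# The same source to FFV1 for comparison:
time ffmpeg -i input.mov -c:v ffv1 out_ffv1.mkv
```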
> I am very disappointed that jpeg2000 is a "standard" and ffv1 is not :/
> FFv1 would get much more acceptance in the "professional" video world if it were.
I think this subject has come up on the list before (further standardizing ffv1); maybe it's time to revive that thread.
> The reason why I want to test jpeg2000:
> We are always using ffv1, but we sometimes get "contrary wind" from the professional video world against ffv1 (like: "jpeg2000 is THE lossless codec...."). So we want to show compression rate/speed comparisons for 10bit pixel formats and above, to open their eyes.
It sounds like there are two contentions:
1) that there should only be one lossless codec in use professionally
2) that jpeg2000 should be that one lossless codec
If contention 1 is invalidated then contention 2 is not so meaningful. I don’t really see a good argument for a singular lossless codec. Archives are filled with many different film gauges, dozens of videotape formats, and a huge variety of digital media specifications; that variety exists because of the need to meet competing priorities (fast or slow, large or small, long term or temporary, open or closed).

Because of the nature of lossless codecs and ffmpeg’s support for frame-level checksums, it is feasible to encode around and around from uncompressed to jpeg2000 to huffyuv to ffv1 without loss (as long as significant characteristics are maintained). Either way, any organization utilizing lossless codecs for preservation (or, I suppose, any codec) should be able to ensure that they can maintain their ability to decode and assess that codec. If that maintenance becomes tenuous, when a codec becomes an obsolescence risk, there is a need to transcode losslessly so as to preserve in a safer environment.

In my opinion ffmpeg provides the tools needed to maintain, assess, transcode, and wrangle collections of either ffv1 or jpeg2000. I can understand that some may have a more supportive community around one codec or the other based on their professional network, but if considering the factors of speed, software support, and features, I think you’ve already seen the difference.
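The frame-level checksum workflow can be sketched like this (filenames are placeholders): the framemd5 muxer reports a checksum per decoded frame, so matching reports demonstrate that a transcode was lossless.

```shell
# Checksums of the decoded frames of the source:
ffmpeg -i source.mov -f framemd5 source.framemd5

# Transcode losslessly, e.g. to FFV1:
ffmpeg -i source.mov -c:v ffv1 copy.mkv

# Checksums of the decoded frames of the copy:
ffmpeg -i copy.mkv -f framemd5 copy.framemd5

# If significant characteristics were maintained, these match:
diff source.framemd5 copy.framemd5
```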
One further note in favor of ffv1 for preservation is its self-descriptiveness. FFV1 version 3 maintains information about field dominance, sample aspect ratio, and pixel format within the codec itself, whereas jpeg2000 is more dependent on standards like SMPTE 422M and on the container. FFV1 version 3 also mandates embedded CRC values per frame, which is a huge plus for long-term preservation.
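For reference, an FFV1 version 3 encode with those per-slice CRCs enabled looks something like this (the filenames are placeholders; -level 3 selects version 3 and -slicecrc 1 embeds the CRCs):

```shell
# Intra-only (-g 1) FFV1 version 3 with embedded CRCs, 4 slices:
ffmpeg -i input.mov -c:v ffv1 -level 3 -slicecrc 1 -slices 4 -g 1 output.mkv
```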