|Version 1 (modified by jakubk, 7 months ago) (diff)|
Using ffmpeg in a VFX pipeline
Every VFX pipeline needs a way of converting still frames into motion sequences that can be played back on a large screen or projector, whether for a formal review or just to see the resulting work in motion. It is perfectly possible to play back high-resolution frames directly, but such a setup requires an enormous amount of throughput bandwidth (a 2K sequence needs roughly 300 MB/s for seamless playback) and an even larger amount of storage space (a 2K frame averages about 12 MB as a 10-bit DPX, and about 14 MB as a 16-bit EXR). Encoding these frames into a single compressed video file makes the work quick to preview, portable, and much better suited to daily reviews. Full-resolution frames should still be used for final reviews and color-accurate final grades, but those are performed on specialty hardware/software such as a Mistika suite or a Nucoda Filmmaster attached to a high-performance SAN.
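The bandwidth figure can be sanity-checked with simple arithmetic (assuming 24 fps playback, which is not stated above and is my assumption here):

```shell
# Rough sanity check of the playback bandwidth quoted above.
# A 10-bit 2K DPX frame is ~12 MB; at an assumed 24 fps that is:
echo "$((12 * 24)) MB/s"   # 288 MB/s, roughly the 300 MB/s figure
```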
The basic requirements for generating movie clips in a VFX pipeline can be summed up in the following points:
- The resulting clip should be playable on all three major OSes used in VFX: Mac OS X, Windows and, most of all, Linux
- The codec and review player should allow frame-by-frame scrubbing of the clip
- There should be some level of compression; how much is obviously dictated by each studio's available storage space
- Portable devices such as iPads are becoming increasingly popular for film reviews, so the clip-encoding part of the VFX pipeline should ideally be capable of producing videos playable on these devices
- The color must match the original source frames as closely as possible
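As a starting point, a command along these lines covers most of the requirements above. The file names, frame rate and quality settings are illustrative, and it assumes an ffmpeg build with libx264 enabled:

```shell
# Encode a numbered DPX sequence into an H.264 MP4 preview clip.
# shot_%04d.dpx, 24 fps and CRF 18 are illustrative choices;
# yuv420p keeps the result playable on iPads and most desktop players.
ffmpeg -framerate 24 -i shot_%04d.dpx \
       -c:v libx264 -crf 18 -pix_fmt yuv420p \
       preview.mp4
```

For frame-accurate scrubbing, an all-intra encode (for example, adding `-g 1` to force every frame to be a keyframe) trades file size for seekability.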