[Ffmpeg-devel] upsampling of subsampled video data
Wed Sep 13 00:37:24 CEST 2006
* Attila Kinali (attila at kinali.ch) wrote:
> > Charles Poynton's book and website are good resources for this.
> Which book do you mean? he wrote more than one :)
"Digital Video and HDTV Algorithms and Interfaces"
In general, when you read anything that conflicts with Poynton,
consider Poynton correct :). (There is so much disinformation
about video on the internet, it's unbelievable.)
> Unfortunately, noise is something very difficult to generate if you
> have only digital hardware. And I somewhat doubt that it will make
> the image visually any better (unless the colour/luma gradients are
> calculated and dithering is adjusted accordingly, but this is very
> difficult in hardware)
I'm just talking about bumping the least-significant bit up or down
using error-diffusion dithering. I don't even do the full 2D error
diffusion; I just process each scanline separately and carry the error
from each pixel to the next horizontal pixel. It makes a big improvement
in the quality of gradients - so much so that I never allow my clients to
encode my animations from 8-bit RGB files anymore, since they will
inevitably use a dumb converter that just rounds to the nearest value
instead of dithering.
If you are going to feed the samples into a lossy codec, however, there
isn't much point in dithering. The codec looks at your nice
dithering and says "oh, that's just noise, get rid of it!" This
unfortunate interaction dooms most lossy 8-bit codecs to horrible
banding. In DV or MPEG-2, for instance, to get banding-free gradients you
would have to add a very large amount of noise - much more than one
least-significant bit - which looks bad and reduces codec
efficiency... I would love to see a lossy codec that does a better job
of handling banding/quantization issues.