[Ffmpeg-devel] upsampling of subsampled video data

Michael Niedermayer michaelni
Tue Sep 12 22:36:55 CEST 2006


On Tue, Sep 12, 2006 at 09:06:50PM +0200, Attila Kinali wrote:
> > > The quantization levels for YUV are different from RGB and
> > > requantizing the Y as RGB will introduce ugly banding.
> > 
> > One way to reduce banding is to use error-diffusion dithering. I get
> > pretty good results on 8-bit to 8-bit conversions, at the cost of
> > introducing some noise in the least significant bits. (I'd rather see
> > a little noise than a lot of banding :)
> Unfortunately, noise is something very difficult to generate
> if you have only digital hardware. And i somewhat doubt
> that it will make the image visually any better (unless
> the colour/luma gradients are calculated and dithering
> is adjusted accordingly, but this is very difficult
> in hardware)

do YUV->RGB with a little more precision than needed and do some ordered
dither (see the swscaler code, it's just something like (sample+tab[x&3][y&3])>>C)
this should be fairly simple in hw
for rgb15 and rgb16 there's a huge visual quality improvement from doing this
for the rgb24 case i am not sure though whether there will be much/any visible difference
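[A minimal sketch of the ordered-dither idea described above, not the actual swscaler code: it assumes a standard 4x4 Bayer threshold table and a hypothetical 12-bit intermediate sample (4 bits more precision than the 8-bit output), so the shift C is 4.]

```c
#include <stdint.h>

/* Standard 4x4 Bayer threshold matrix; entries span 0..15,
 * matching the 4 extra bits of precision being dropped. */
static const uint8_t dither_tab[4][4] = {
    { 0,  8,  2, 10},
    {12,  4, 14,  6},
    { 3, 11,  1,  9},
    {15,  7, 13,  5},
};

/* Reduce a 12-bit intermediate sample to 8 bits using ordered dither:
 * add the position-dependent threshold before shifting, i.e. exactly
 * the (sample + tab[x&3][y&3]) >> C pattern from the mail. */
static uint8_t dither_12_to_8(int sample12, int x, int y)
{
    int v = (sample12 + dither_tab[y & 3][x & 3]) >> 4;
    if (v > 255)
        v = 255;  /* clamp: the added threshold can push a near-max sample over */
    return (uint8_t)v;
}
```

Because each table entry 0..15 occurs exactly once per 4x4 tile, a constant input averages to the correct value over the tile: e.g. the 12-bit value 2056 (128.5 in 8-bit terms) comes out as 128 at half the positions and 129 at the other half. The table lookup and add are trivial to implement as fixed logic, which is why this suits hardware better than error diffusion.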


Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

In the past you could go to a library and read, borrow or copy any book
Today you'd get arrested for merely telling someone where the library is
