[FFmpeg-devel] inverse weighting tables for DV seem to be wrong

Michael Niedermayer michaelni
Sat Feb 28 05:10:15 CET 2009


On Fri, Feb 27, 2009 at 05:49:57PM -0800, Roman V. Shaposhnik wrote:
> On Fri, 2009-02-27 at 03:51 +0100, Michael Niedermayer wrote:
> > > I guess I'd be really glad if you can elaborate on this point. For DV,
> > > at least, if there's no quantization involved, then decoding is a 100%
> > > inverse of the encoding weighting. If there's quantization, things can
> > > get a little less straightforward, but still.
> > 
> > of course, so you store all coefficients without quantization and the
> > "ideal" weighting
> > but you know this can't be done because there is a limit on the bits,
> > thus you are limited to storing some approximation that needs fewer
> > bits
> 
> True. But as far as weighting tables are concerned, the only thing
> that quantization really does is change the probability distribution
> of levels, right? In the sense that the probability of some levels
> drops to 0 while that of others increases, because that's where the
> missing ones map now.
> 

> To restate -- the goal of designing optimal weighting/unweighting
> tables seems to be to minimize the error on the most probable 
> levels after the quantization. 

no

let me give you a hypothetical example
say a value 0-127 can be stored in 8 bits, while the value 128 needs
10 bits
you have 100 values to store, all equal to 128, and a budget of 800 bits
if you store 100 values of 127 in those 800 bits, the distortion is
1*1*100 = 100
if instead you store exact 128 values until you run out of the bit
budget, only 80 of them fit and the remaining 20 become 0; the
distortion is 128*128*20 = 327680
your result is worse by a factor of over 3000 in terms of sum of
squared errors
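to make the arithmetic explicit, here is the same toy example as a few
lines of python (the bit costs and values are the hypothetical ones from
above, not anything DV-specific):

```python
# 100 source values, all equal to 128; bit budget of 800 bits.
# Hypothetical costs: a value in 0..127 takes 8 bits, the value 128 takes 10.

values = [128] * 100
budget = 800

# Option A: approximate every 128 as 127 (8 bits each, fits exactly).
cost_a = 8 * len(values)                       # 800 bits
sse_a = sum((v - 127) ** 2 for v in values)    # 1 * 100 = 100

# Option B: store exact 128s (10 bits each) until the budget runs out,
# then store 0 for whatever is left over.
stored = budget // 10                          # only 80 values fit
sse_b = sum(v ** 2 for v in values[stored:])   # 128^2 * 20 = 327680

print(sse_a, sse_b, sse_b / sse_a)             # 100 327680 3276.8
```

the point being that minimizing error only on the most probable levels
ignores the rate side entirely; the cheap approximation wins by orders
of magnitude.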

also i would suggest that you read some papers (any paper, actually)
about rate distortion and/or quantization.

repeating this in a blog post would be pointless; there's plenty of
existing literature (note, though, that wikipedia is NOT literature)


[...]
> > and instead of tuning tables, near-optimal quantization is harder but
> > possible too, and it will lead to significant gains (it does for other
> > codecs ...)
> > the most obvious way to do it would be to first apply the common RD
> > trellis quantization to a group of 5 MBs

> > (there is IIRC no bit sharing
> > possible across these 5-MB groups)
> 
> There is. In DV all 5 MBs share the common bit-space of a single DIF
> block.

i think you misunderstood what i said
what i remember about DV is that each block has its own space; when that
overflows, there is space shared within each MB, and when that overflows,
there is space shared within each 5-MB group. but there is no such space
for 10-MB or 100-MB groups
is my understanding correct?
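to make sure we are talking about the same thing, here is a toy model of
the hierarchy i mean. everything here is an assumption for illustration
(the block area, the pass structure, the function name), not a reading of
the spec: each block fills its own fixed area first, its overflow spills
into unused space within the same MB, and what still overflows spills
into unused space within the 5-MB group; nothing crosses group borders.

```python
BLOCK_AREA = 100  # bits of fixed space per block; made-up number

def allocate(segment):
    """segment: list of MBs; each MB is a list of per-block bit demands.
    Returns the bits actually granted to each block."""
    # pass 1: every block fills its own fixed area
    granted = [[min(d, BLOCK_AREA) for d in mb] for mb in segment]
    # pass 2: overflow spills into unused space within the same MB
    for mi, mb in enumerate(segment):
        free = sum(BLOCK_AREA - g for g in granted[mi])
        for bi, d in enumerate(mb):
            take = min(d - granted[mi][bi], free)
            granted[mi][bi] += take
            free -= take
    # pass 3: remaining overflow spills into unused space in the whole group
    free = sum(len(mb) * BLOCK_AREA for mb in segment) - sum(map(sum, granted))
    for mi, mb in enumerate(segment):
        for bi, d in enumerate(mb):
            take = min(d - granted[mi][bi], free)
            granted[mi][bi] += take
            free -= take
    return granted

# 2 MBs x 2 blocks, demands in bits: the 150-bit block first takes the
# 10 spare bits of its own MB, then 20 more from the other MB's spare.
print(allocate([[120, 60], [90, 150]]))  # [[120, 60], [90, 130]]
```

if this matches what DV actually does, then the 5-MB group really is the
largest unit the trellis quantization could be run over.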

[...]
-- 
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Observe your enemies, for they first find out your faults. -- Antisthenes


