[FFmpeg-devel] [PATCH] encoder for adobe's flash ScreenVideo2 codec
Sun Dec 20 16:46:04 CET 2009
On Wed, Jul 22, 2009 at 08:19:41PM -0600, Joshua Warner wrote:
> On Wed, Jul 22, 2009 at 4:05 PM, Vitor Sessak<vitor1001 at gmail.com> wrote:
> > Then, you choose the one that has the smallest quantity (distortion +
> > lambda*rate). The reasoning behind that is better explained at
> > doc/rate_distortion.txt. The parameter lambda is found in frame->quality and
> > is passed from the command line by "-qscale" ("-qscale 2.3" =>
> > frame->quality == (int)(2.3*FF_LAMBDA_SCALE)). It is also a good starting
> > point for implementing rate control in the future (using VBR with a given
> > average bitrate gives better quality than CBR).
> > Note that what is explained in rate_distortion.txt is already what you are
> > doing with the s->dist parameter (s->dist == 8*lambda), so this "solves" the
> > problem of finding the optimum dist.
> > If the speed loss is not worth the price of trying both methods, I think
> > that s->use15_7 should be set based on frame->quality (by testing, on a
> > few samples, at which quality value using BGR starts being optimal on
> > average).
> > Unfortunately, the rate-distortion method does not solve the problem of
> > finding the optimal block size. How much do quality/bitrate depend on it?
why do you think RD cannot be applied to block sizes?
> Quality doesn't depend at all on the block size, but the bit rate
> depends a lot on it (I have seen the bit rate change by a factor of 4
> between different block sizes). Like I said before, I have tried
> different formulas for estimating the optimal block size, but none of
> them have worked consistently better than the 64x64 defaults. Brute
> force would be the obvious next step, but I think that would be
> prohibitively expensive, because several frames (probably at least 10%
> of the inter-key-frame distance) would have to be trial encoded to
> make a good assessment.
If you try 2 block sizes and each costs you 10%, that makes 20% extra time;
this does not seem prohibitively expensive to me.
Here's a simple example that would need 2x the encode time:
encode frame 1 with 64x64 and 128x128, pick the better; let's assume it's 64x64
encode frame 2 with 32x32 and 64x64, pick the better; let's assume it's 32x32
encode frame 3 with 16x16 and 32x32, pick the better; let's assume it's 32x32
encode frame 4 with 32x32 and 64x64, pick the better; let's assume it's 64x64
encode frame 5 with 64x64 and 128x128, pick the better; let's assume it's 128x128
encode frame 6 with 128x128 and 256x256, pick the better; let's assume it's 128x128
encode frame 7 with 64x64 and 128x128, pick the better; let's assume it's 128x128
encode frame 8 with 128x128 and 256x256, pick the better; let's assume it's 128x128
Also, the patch is full of float & double math that appears unneeded.
If it's unneeded it has to be removed, as floats make regression tests difficult.
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
I know you won't believe me, but the highest form of Human Excellence is
to question oneself and others. -- Socrates