[FFmpeg-devel] [Jack-Devel] [PATCH] libavdevice: JACK demuxer

Michael Niedermayer michaelni
Thu Mar 5 16:06:38 CET 2009


On Thu, Mar 05, 2009 at 01:15:21PM +0100, Fons Adriaensen wrote:
> On Thu, Mar 05, 2009 at 03:02:14AM +0100, Michael Niedermayer wrote:
> 
> > On Wed, Mar 04, 2009 at 09:31:44PM +0100, Fons Adriaensen wrote:
> >
> > > > well, the filter will take the first system time it gets as its
> > > > best estimate
> > > 
> > > there's no alternative at that time
> > 
> > this is not strictly true, though it certainly is not what I meant, but
> > there very well can be a systematic error, so that the first time + 5ms
> > might, as a hypothetical example, be a better estimate.
> 
> Might be, or not. With just one value you don't know.

One might simply "know" that there is an unaccounted buffer,
or that X CPU cycles are spent between the interrupt
and the call that gets the system time,
or that some processing done in the sound card causes a delay
of X.
As for how one could actually test for this: with some kind of
signal generator and 2 sound cards in one system, their relative delay
after all corrections could be measured. So in principle one just
needs to know the systematic error of a single sound card to test
others ...
But this is getting off topic.


> 
> If the sequence of measured times is t_i, then t_1 - t_0
> *could* give a first estimate of the true sample rate.
> Most sound cards (even the cheapest ones) are within
> 0.1% of the nominal value. Which means the random
> error on t_1 - t_0 (the jitter) will be much larger
> than the systematic one, and the sample rate computed
> from the first two values is useless. This will apply
> to any trick you may want to use to get a 'quick'
> value for the sample rate error.
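
(To put made-up but plausible numbers on it: a 1024-frame period at
48 kHz is about 21.3 ms, so 1 ms of scheduling jitter on a single
t_1 - t_0 already shows up as a roughly 5% apparent rate error,
against a real clock error of about 0.1%. A single difference indeed
tells you next to nothing.)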

I would suspect, though I don't know, that the sample rate error
stays the same (assuming a similar temperature), thus remembering
it from last time would make sense. The code that was
submitted, though, even explicitly overwrites the estimate with the
nominal value on reset() ... (not sure if the paper says anything about
this case)
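
Something along these lines is what I would have expected on reset
(just a sketch, the struct and field names are made up, it is not the
submitted code):

typedef struct TimeFilter {
    double cycle_time;      /* filtered time of the last period start */
    double period;          /* learned period, i.e. the sample rate error */
    double nominal_period;  /* period derived from the nominal sample rate */
    double feedback_factor; /* loop gain */
} TimeFilter;

void timefilter_reset_keep_rate(TimeFilter *tf)
{
    tf->cycle_time = 0; /* forget the absolute time base ... */
    /* ... but keep tf->period: the clock error of the card has not
     * changed just because we reset, so do not overwrite it with
     * tf->nominal_period */
}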


> 
> > > > and then "add" future times slowly into this to
> > > > compensate. That is, it weights the first sample
> > > > very differently from the following ones, which is
> > > > clearly not optimal.
> > > 
> > > What makes you think that ?
> > 
> > common sense
> 
> Which is usually defeated by exact analysis. There is
> nothing mysterious about this, it's control loop /
> filter theory that's at least 50 years old now.

Exact analysis involves the application of math, formal proofs, simulation
and tests with real data.
And when approximations/extra assumptions have to be used, one should keep
track of them and not pretend in the end that these extra assumptions
have not been made and that one's analysis based on them would still hold
if these "axioms" were not there.

Now, I surely can create a synthetic example where the optimal values
of the 2 parameters lie outside the set that can be generated from 1. Also, as
has been shown, adapting the parameters works better than not.
That makes one believe that some kind of extra assumptions have been made.
It also seems that more theory than actual testing has been applied, given
that my first naive attempt at adapting the parameter worked better.


> 
> > > > or in other words, the noisiness (or call it accuracy) of its internal state
> > > > will be very poor after the first sample, while after a hundred it will
> > > > be better.
> > > 
> > > Which is what matters. 
> > 
> > yes, but it would converge quicker were the factor not fixed
> 
> As long as the system is linear and time-invariant (you don't
> modify the parameters on the fly),

Ahh, the assumptions :)
So what makes you think the system is linear and time invariant?


> then for any given bandwidth
> the quickest convergence will be with critical damping. That is 
> again something any engineering student will learn in the first
> few months. It's basic maths, nothing else. If your common 
> sense tells you otherwise then your common sense is wrong.
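
(For reference, the textbook part is not in dispute: for second-order
error dynamics s^2 + b*s + c the damping ratio is zeta = b / (2*sqrt(c)),
so critical damping means b = 2*sqrt(c). The question is whether the
assumptions behind that result hold here.)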
> 
> You can make it converge quicker by using a higher bandwidth
> initially, or by using non-linear filter techniques which will
> be a lot more complicated, or sometimes by ad-hoc tricks.

> **But all of this is useless**. As long as the filter settles
> within a few seconds all is OK. 

It may be useless to you, but it surely isn't to us. Decreasing the
error in the timestamps by 50% is a lot, and I am sure our users
prefer not having to wait a few seconds for the filter to settle,
or alternatively not having somewhat jittery timestamps in the first
seconds. That is, if the container even has timestamps at all; if not,
one has to get creative once the filter has settled to fix things
up ...

Also there's the evil user who might ask us why we aren't using
a better filter if it's just a one-line change ...
Pointing them to some time-invariant approximation will not satisfy
them, and it shouldn't ...


>  
> > > > The filter, though, will add samples in IIR fashion while ignoring
> > > > this
> > > 
> > > It's called exponential averaging, which means recent
> > > samples have more weight than older ones. Without that
> > > a system can't be adaptive.
> > 
> > that isn't true, one can design a non-exponential filter that is
> > adaptive as well.
> 
> I did not say it has to be exponential. I said it has to give
> higher weight to more recent data.

I don't think this is strictly true either, not even when limiting oneself
to linear and time-invariant things ...

I surely could have a signal that consists of 2 alternating values
with noise, and a filter removing that noise would not use the
most recent sample (which is the other, uncorrelated value).

> 
> 
> > here's a simple example with the recently posted timefilter patch.
> > The code below will simulate random uncorrelated jitter and a sample rate
> > error, and it will find the best values for both parameters using a really
> > lame search.
> 
> Since your error statistics include the values during
> the initial settling time - which are completely
> irrelevant in this case - they are invalid. The only
> thing that matters is the long term performance.

The long-term performance of my suggestion is obviously the same
as that of yours; the difference is at the beginning, and that is quite
significant ...
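
For reference, the kind of test I mean looks roughly like this (a
standalone sketch with a stand-in 2nd order filter and made-up jitter
and search ranges, not the actual patch code or its API):

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define N 1000

/* stand-in 2nd order filter: predicts the next timestamp from the
 * current period estimate and corrects phase and period with the two
 * feedback factors f1 and f2; returns the rms error vs the true times */
static double run_filter(const double *noisy, const double *truth,
                         double nominal_period, double f1, double f2)
{
    double period = nominal_period;
    double cycle  = noisy[0];           /* first sample taken as-is */
    double err2   = 0;
    int i;

    for (i = 1; i < N; i++) {
        double e;
        cycle  += period;               /* predict */
        e       = noisy[i] - cycle;     /* measured minus predicted */
        cycle  += f1 * e;               /* correct phase */
        period += f2 * e;               /* correct period */
        err2   += (cycle - truth[i]) * (cycle - truth[i]);
    }
    return sqrt(err2 / (N - 1));
}

int main(void)
{
    double truth[N], noisy[N];
    double nominal_period = 1024.0 / 48000.0;        /* ~21.3 ms */
    double real_period    = nominal_period * 1.0005; /* 0.05% clock error */
    double best = 1e9, best_f1 = 0, best_f2 = 0;
    double f1, f2;
    int i;

    srand(1);
    for (i = 0; i < N; i++) {
        truth[i] = i * real_period;
        noisy[i] = truth[i] + 0.001 * (rand() / (double)RAND_MAX - 0.5);
    }

    /* really lame search over the two parameters */
    for (f1 = 0.01; f1 < 1.0; f1 += 0.01)
        for (f2 = 0.0001; f2 < 0.1; f2 *= 1.2) {
            double e = run_filter(noisy, truth, nominal_period, f1, f2);
            if (e < best) {
                best    = e;
                best_f1 = f1;
                best_f2 = f2;
            }
        }
    printf("best rms error %g at f1=%g f2=%g\n", best, best_f1, best_f2);
    return 0;
}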


> 
> I'm not going to waste anymore time on this, unless
> you can at least show you understand the basic theory
> and we have a common ground. 

I think we agree here:
keep your filter, I keep the one that works better,
and when I get really bored I will reread the theory behind
Kalman filters and optimally adapt the parameters. Until then,
mine still has half the error in actual tests, and I prefer
to give our users the code that performs best in reality rather than
the code that should perform best given linearity, time invariance and
a few other assumptions.

[...]
-- 
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Opposition brings concord. Out of discord comes the fairest harmony.
-- Heraclitus