[FFmpeg-devel] [Jack-Devel] [PATCH] libavdevice: JACK demuxer
Fri Mar 6 12:06:23 CET 2009
Michael Niedermayer wrote:
>> 1 - the oss demuxer timestamps the packets with gettimeofday() without filtering
>> 2 - the v4l demuxer uses gettimeofday() without filtering
> i know these are broken, and here is where your timefilter code will come in
> handy ...
To me they're not broken; they could be improved by filtering. The ones that
are broken are those that use the device clock, such as libdc1394, thus
generating timestamps that drift away from the system time. And the other
problem is that the demuxers do not follow a uniform timestamping mechanism,
so nothing guarantees that they play well together.
>> 3 - the v4l2 demuxer believes it uses the clock of the device, but the v4l2
>> documentation states this is the same as gettimeofday()
> do you have a link to the docs/src that says it calls gettimeofday() ?
"struct timeval timestamp :
For input streams this is the system time (as returned by the gettimeofday()
function) when the first data byte was captured.[...]"
>> 4 - the libdc1394 demuxer timestamps seem to be based on the video device clock:
>> dc1394->packet.pts = (dc1394->current_frame * 1000000) / (dc1394->fps);
>> As you can see, this is incoherent, and this is the current experience you are
>> giving to your users.
>> And now, you are fighting about the epsilon amount of jitter that could be
>> suppressed during the first couple of samples that are read from the device.
>> Nothing is perfect, and the current timefilter does provide an improvement. It
>> has been tested and benchmarked (see the link in my previous post).
>> This is what I call the Not Invented Here syndrome. The theory and/or the code
>> weren't developed by you or another member of your group, so you consider that
>> it's broken.
>> But it's not.
> The code performs very significantly worse on the first 1000 sampled time
> values than a naively changed variant which has a single line changed.
> I am just picking the best I can; that's not NIH, it's rather evolution.
> NIH would be rewriting perfectly fine code and ending up with
> nothing better. I am not planning to rewrite it, just change it to make it
> perform better.
I understand. I was just trying to say that time (once again, time ;) is
precious, and fixing the above-mentioned demuxers (and possibly others) may be
more important. Once they use a uniform timestamping mechanism, you could still
improve the timefilter in the future.
> also, 1000 sampled time values are, with let's say 1024 sound samples per
> packet, not just a few seconds but closer to a minute, assuming 22 kHz.
> and just for fun i tried 10000 time samples:
> at 0.0% sample rate error your code performs worse by a factor of 14
> at 0.1% sample rate error your code performs worse by a factor of 2
> i am not sure when you would consider the end of the first few samples
> to be, but i have the feeling my variant is asymptotically
> performing better.
> and really that makes sense: for the 0.0% sample rate error you try to
> beat the arithmetic mean with an exponentially decaying IIR filter; that
> can't work out.
Okay, maybe your filter performs better.
1 - Your test code is theoretical. How does your filter perform with real
devices, soundcards, etc.?
2 - You are measuring the jitter (or call it error). But what about the drift?
What if your filtered time slowly drifts away from the system time? For
instance, the device clock has by definition no jitter (the cycle period is
fixed), but it *always* drifts (forward or backward).
Please show us some graphs based on your filter, something similar to this: