[Ffmpeg-devel] [PATCH] fix mpeg4 lowres chroma bug and increase h264/mpeg4 MC speed
Fri Feb 16 21:42:57 CET 2007
On Wed, Feb 14, 2007 at 04:52:35PM -0800, Trent Piepho wrote:
> On Tue, 13 Feb 2007, Michael Niedermayer wrote:
> > On Mon, Feb 12, 2007 at 05:53:38PM -0800, Trent Piepho wrote:
> > > On Mon, 12 Feb 2007, Michael Niedermayer wrote:
> > > > On Mon, Feb 12, 2007 at 12:49:30PM +0100, Michael Niedermayer wrote:
> > > > > > > > Why do you discard some times in your TIMER code? Is the goal just to
> > > > > > > > discard those times in which an interrupt occurred?
> > > > > > >
> > > > > > > yes
> > > > > >
> > > > > > That's not what it's doing, there are far too many skips for that to
> > > > > > be the case.
> > > > >
> > > > if anyone has any ideas how we could detect if an interrupt/task switch
> > > > happened between START and STOP_TIMER please tell me ...
> > [...]
> > i think requiring users to modify the kernel to use START/STOP_TIMER is not a
> > good idea ...
> > rdpmc though might be worth a try ...
> You need at least a kernel module to be able to use rdpmc. There are two
> different systems out there for pmc on linux. Both require a patched kernel
> and create a device for controlling the PMCs. I thought this was too
> complex, so I wrote a simple kernel module that lets me turn on the pmcs
> without having to patch the kernel source or mess with any userspace tools.
> I've tested it for counting interrupts, and it works quite well.
is the code available somewhere?
> > > One run of one version of code will have some error from interrupts. The
> > > next run will have a different amount of error. The other version of the
> > > code will have error too when it's benchmarked. This is why you run the
> > > benchmarks many times. Then you can use statistics to make mathematically
> > > precise statements about the confidence of one version being faster than
> > > another despite the presence of measurement error. If difference between
> > > versions is so small and the error so large that the error overshadows the
> > > difference, then statistics will tell you that you can't say which is
> > > faster with much confidence. That's probably a good sign you're wasting
> > > your optimization efforts trying to decide which version to use.
> > the problem is that the errors are systematic and statistics cannot separate
> > them out from a routine which just really sometimes needs much more time
> > think of the following example:
> > a routine which needs 10000 cycles per run but once in 100 runs it needs
> > 1000000 cycles (due to some task switch and another app doing something
> > or maybe it really has to deal with more complex data)
> > suddenly your code looks as if it needs 20000 cycles ...
> If it's because in a representative dataset, once every 100 calls more
> complex data comes along, wouldn't it be correct to say the average speed
> is 20000 cycles?
of course that is the problem ...
> If you had another version that ran in only 1,000 cycles except that the 1
> in 100 hard data made it use 10,000,000 cycles, that version would be
> slower, would it not?
> If the extra 1,000,000 cycles are because of a task switch, you would expect
> that to be random, right?
NO, absolutely not, it depends on what runs currently (cron, xmms, ...) and what
these applications do; this will cause systematic errors and totally mess up
the measurements
if the system is 100% idle there are no task switches or interrupts to begin
with, but that's not realistic
also the amount of time (repeating benchmarks) and inconvenience (stopping all
applications and daemons) must be considered
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
There will always be a question for which you do not know the correct answer.