[FFmpeg-devel] [RFC] integer.h overhaul
michaelni at gmx.at
Fri Oct 12 05:28:58 CEST 2012
On Thu, Oct 11, 2012 at 10:47:36AM +0200, Stefano Sabatini wrote:
> On date Tuesday 2012-10-09 19:52:12 +0200, Michael Niedermayer encoded:
> > On Tue, Oct 09, 2012 at 07:15:37PM +0200, Stefano Sabatini wrote:
> > > On date Tuesday 2012-10-09 18:45:12 +0200, Michael Niedermayer encoded:
> > > > On Tue, Oct 09, 2012 at 05:01:28PM +0200, Stefano Sabatini wrote:
> > > > > Hi,
> > > > >
> > > > > I needed to extend big integer support to make it able to contain up
> > > > > to 2304 bits (144 16-bit words), so I reworked the API to avoid
> > > > > passing huge arrays by value.
> > > >
> > > > why do you need to work with such large integers ?
> > >
> > > > please explain the algorithm you try to implement
> > >
> > > Check the xface patch, it encodes a 48x48 bitmap as a big integer (so
> > > it is 48x48 = 2304 bits = 144 16-bit words; integer.c supports up to 8).
> > >
> > > Another possibility would be to implement custom routines in the xface
> > > code, but I thought sharing code with libavutil was a good thing.
> > >
> > > > why do you remove the pass by value?
> > > > Multiplying 2 such numbers will probably take around a million
> > > > CPU cycles with the current implementation, which was not designed for
> > > > this; copying such a number in pass by value will take maybe around a
> > > > hundred CPU cycles.
> > >
> > > > so this isn't making it faster, but it makes the interface quite
> > > > awkward to use, IMHO
> > >
> > > Yes, indeed I was not so sure about it. I can change that and/or
> > > reinstate the old interface (or leave it as it was); my only
> > > concern is raising the maximum number of supported digits (that's why
> > > I published this as an RFC).
> > I am not sure what's the best thing to do. One option would be to simply
> > move everything into the header and use code like:
> > #define AV_INTEGER_SIZE 123
> > #include "libavutil/integer.h"
> > Another would be to switch to references as you do but drop the upper
> > limit, so that the size of each number is decided when it is allocated.
> > But that seems overkill; we don't need these features currently ...
> Implemented the first suggestion (which has the nice property that no
> API/ABI is even exposed).
> The problem is that the code is too slow: xface
> takes several seconds to encode an image, which is not acceptable.
I don't think the original xface code is that slow (given its age, if I
don't misremember), so why don't you use that?
But I didn't search/look at the code, so maybe there's some problem I am
missing. integer.c/h isn't optimized to be used as an arithmetic coder.
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
I have often repented speaking, but never of holding my tongue.