[FFmpeg-devel] [RFC] use av_get_cpu_flags for real runtime CPU detection in swscale

Janne Grunau janne-ffmpeg at jannau.net
Wed Sep 8 18:38:02 CEST 2010


On Wed, Sep 08, 2010 at 05:24:02PM +0100, Måns Rullgård wrote:
> Janne Grunau <janne-ffmpeg at jannau.net> writes:
> 
> > Hi,
> >
> > The attached patch implements runtime CPU detection in libswscale. One
> > minor problem is that it changes behaviour for existing code which sets
> > individual flags but obviously not the new SWS_CPU_CAPS_FORCE. I think
> > that is acceptable since those flags have no effect with
> > !CONFIG_RUNTIME_CPUDETECT.
> >
> > Janne
> > diff --git a/swscale.h b/swscale.h
> > index 4e11c9a..ca63796 100644
> > --- a/swscale.h
> > +++ b/swscale.h
> > @@ -30,7 +30,7 @@
> >  #include "libavutil/avutil.h"
> >
> >  #define LIBSWSCALE_VERSION_MAJOR 0
> > -#define LIBSWSCALE_VERSION_MINOR 11
> > +#define LIBSWSCALE_VERSION_MINOR 12
> >  #define LIBSWSCALE_VERSION_MICRO 0
> >
> >  #define LIBSWSCALE_VERSION_INT  AV_VERSION_INT(LIBSWSCALE_VERSION_MAJOR, \
> > @@ -93,6 +93,7 @@ const char *swscale_license(void);
> >  #define SWS_CPU_CAPS_ALTIVEC  0x10000000
> >  #define SWS_CPU_CAPS_BFIN     0x01000000
> >  #define SWS_CPU_CAPS_SSE2     0x02000000
> > +#define SWS_CPU_CAPS_FORCE    0x00100000
> 
> What does the force flag mean?

It means: don't change the passed flags; see AV_CPU_FLAG_FORCE in libavutil/cpu.h.
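
For illustration, a minimal caller-side sketch (assuming the patch above is
applied; the sizes and pixel formats are just placeholders):

    #include <libswscale/swscale.h>

    /* Without SWS_CPU_CAPS_FORCE the caps below are discarded and
     * re-detected via av_get_cpu_flags(): */
    struct SwsContext *sws = sws_getContext(1920, 1080, PIX_FMT_YUV420P,
                                            1280,  720, PIX_FMT_YUV420P,
                                            SWS_BICUBIC | SWS_CPU_CAPS_MMX2,
                                            NULL, NULL, NULL);

    /* With SWS_CPU_CAPS_FORCE the caps are taken as given, e.g. to test
     * the plain MMX code path on an MMX2-capable machine: */
    struct SwsContext *sws_mmx = sws_getContext(1920, 1080, PIX_FMT_YUV420P,
                                                1280,  720, PIX_FMT_YUV420P,
                                                SWS_BICUBIC | SWS_CPU_CAPS_FORCE
                                                            | SWS_CPU_CAPS_MMX,
                                                NULL, NULL, NULL);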

> >  #define SWS_MAX_REDUCE_CUTOFF 0.002
> >
> > diff --git a/utils.c b/utils.c
> > index e9400f8..9375489 100644
> > --- a/utils.c
> > +++ b/utils.c
> > @@ -44,6 +44,7 @@
> >  #include "libavutil/avutil.h"
> >  #include "libavutil/bswap.h"
> >  #include "libavutil/pixdesc.h"
> > +#include "libavutil/cpu.h"
> >
> >  unsigned swscale_version(void)
> >  {
> > @@ -722,7 +723,26 @@ static int handle_jpeg(enum PixelFormat *format)
> >
> >  static int update_flags_cpu(int flags)
> >  {
> > -#if !CONFIG_RUNTIME_CPUDETECT //ensure that the flags match the compiled variant if cpudetect is off
> > +#if CONFIG_RUNTIME_CPUDETECT
> > +    int cpuflags;
> > +
> > +    if (!(flags & SWS_CPU_CAPS_FORCE)) {
> > +        flags &= ~(SWS_CPU_CAPS_MMX|SWS_CPU_CAPS_MMX2|SWS_CPU_CAPS_3DNOW|SWS_CPU_CAPS_ALTIVEC|SWS_CPU_CAPS_BFIN);
> > +
> > +        cpuflags = av_get_cpu_flags();
> > +
> > +        if (ARCH_X86 && cpuflags & AV_CPU_FLAG_MMX)
> > +            flags |= SWS_CPU_CAPS_MMX;
> > +        if (ARCH_X86 && cpuflags & AV_CPU_FLAG_MMX2)
> > +            flags |= SWS_CPU_CAPS_MMX2;
> > +        if (ARCH_X86 && cpuflags & AV_CPU_FLAG_3DNOW)
> >             flags |= SWS_CPU_CAPS_3DNOW;
> > +        if (ARCH_X86 && cpuflags & AV_CPU_FLAG_SSE2)
> > +            flags |= SWS_CPU_CAPS_SSE2;
> > +        if (ARCH_PPC && cpuflags & AV_CPU_FLAG_ALTIVEC)
> > +            flags |= SWS_CPU_CAPS_ALTIVEC;
> 
> Why not change libswscale to use the AV_CPU_FLAG_ values directly?
> That would avoid this mess entirely.

That would be an API change: the CPU flags are passed in the same parameter
as the other algorithm options, so they would conflict with the AV_CPU_FLAG_
values.

If we want library-using code to be able to force a specific implementation,
we would have to add a cpu_flags parameter. I wouldn't mind dropping that
ability, since it's imho only useful for easy testing of specific
implementations.
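
To make the conflict concrete, a small sketch (constant values as found in
the 2010 headers; it only exists to show the bit overlap):

    #include <stdio.h>
    #include <libavutil/cpu.h>
    #include <libswscale/swscale.h>

    int main(void)
    {
        /* SWS_BICUBIC and AV_CPU_FLAG_3DNOW both happen to be 0x0004, so
         * AV_CPU_FLAG_ values cannot simply be OR'ed into the same 'flags'
         * parameter that selects the scaling algorithm. */
        printf("SWS_BICUBIC       = 0x%08x\n", SWS_BICUBIC);
        printf("AV_CPU_FLAG_3DNOW = 0x%08x\n", AV_CPU_FLAG_3DNOW);
        /* The SWS_CPU_CAPS_* constants sit in the high bits instead
         * (e.g. SWS_CPU_CAPS_MMX is 0x80000000), which is why they can
         * share one parameter with the algorithm flags. */
        return 0;
    }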

Janne
