[FFmpeg-devel] [RFC] New swscale internal design prototype
Rémi Denis-Courmont
remi at remlab.net
Wed Mar 12 02:58:52 EET 2025
Le 10 mars 2025 15:14:46 GMT+02:00, Niklas Haas <ffmpeg at haasn.xyz> a écrit :
>On Sun, 09 Mar 2025 17:57:48 -0700 Rémi Denis-Courmont <remi at remlab.net> wrote:
>>
>>
>> Le 9 mars 2025 12:57:47 GMT-07:00, Niklas Haas <ffmpeg at haasn.xyz> a écrit :
>> >On Sun, 09 Mar 2025 11:18:04 -0700 Rémi Denis-Courmont <remi at remlab.net> wrote:
>> >> Hi,
>> >>
>> >> Le 8 mars 2025 14:53:42 GMT-08:00, Niklas Haas <ffmpeg at haasn.xyz> a écrit :
>> >> >https://github.com/haasn/FFmpeg/blob/swscale3/doc/swscale-v2.txt
>> >>
>> >> >I have spent the past week or so ironing out the details of this design.
>> >> >I wanted to post it here to gather some feedback on the approach. Where does
>> >> >it fall on the "madness" scale? Is the design of the new operations and
>> >> >optimizer comprehensible? Am I trying too hard to reinvent compilers? Are there any
>> >> >platforms where the high number of function calls per frame would be
>> >> >prohibitively expensive? What are the thoughts on the float-first approach? See
>> >> >also the list of limitations and improvement ideas at the bottom of my design
>> >> >document.
>> >>
>> >> Using floats internally may be fine if there's (almost) never any spillage, but that necessarily implies custom calling conventions. And it won't work with chunks as large as 32 pixels. On RVV 128-bit, you'd have only 4 vectors. On Arm NEON, it would be even worse, as scalars/constants need to be stored in vectors as well.
>> >
>> >I think that a custom calling convention is not as unreasonable as it may sound,
>> >and will actually be easier to implement than the standard calling convention
>> >since functions will not have to deal with pixel load/store, nor will there be
>> >any need for "fused" versions of operations (whose only purpose is to avoid
>> >the roundtrip through L1).
>> >
>> >The pixel chunk size is easily changed; it is a compile time constant and there
>> >are no strict requirements on it. If RISC-V (or any other platform) struggles
>> >with storing 32 floats in vector registers, we could go down to 16 (or even 8);
>> >the number 32 was merely chosen by benchmarking and not through any careful
>> >design consideration.
>>
>> It can't be a compile-time constant on RVV nor (if it's ever introduced) SVE, because they are scalable. I doubt that a compile-time constant will work well across all variants of x86 either, not that I'd know.
>
>It's my understanding that on existing RVV implementations, the number of
>cycles needed to execute an m4/m2 operation is roughly 4x/2x the cost of
>an equivalent m1 operation.
But that's exactly the problem! We want to use the *same* group multiplier regardless of the vector length, to obtain roughly optimal bandwidth. That means the number of elements will be proportional to the vector length. The multiplier depends on the element size and perhaps the register pressure of a given chunk's processing, not the vector length.
And with SVE2, it'll most probably work optimally with a 2x unroll (like NEON typically). This is more or less equivalent to RVV m2, and will also lead to a chunk size proportional to the hardware vector length.
If you calculate the chunk size based on the worst-case 128-bit vectors, then it'll work on 256-bit hardware but at only 50% of the possible speed, because half the CPU time will be wasted working on tail or masked elements.
>If this continues to be the case, the underlying VLEN of the implementation
>should not matter much, even with a compile time constant chunk size, as long
>as it does not greatly exceed 512.
No clue how you come to that conclusion. The maths don't add up here.