Hi all,
I have some questions about the implementation of vector32_is_highbit_set on Arm.
Below is the comment and the implementation for this function.
/*
* Exactly like vector8_is_highbit_set except for the input type, so it
* looks at each byte separately.
*
* XXX x86 uses the same underlying type for 8-bit, 16-bit, and 32-bit
* integer elements, but Arm does not, hence the need for a separate
* function. We could instead adopt the behavior of Arm's vmaxvq_u32(), i.e.
* check each 32-bit element, but that would require an additional mask
* operation on x86.
*/
#ifndef USE_NO_SIMD
static inline bool
vector32_is_highbit_set(const Vector32 v)
{
#if defined(USE_NEON)
return vector8_is_highbit_set((Vector8) v);
#else
return vector8_is_highbit_set(v);
#endif
}
#endif /* ! USE_NO_SIMD */
But I still don't understand why the vmaxvq_u32 intrinsic is not used on the Arm platform;
we already use the USE_NEON macro to distinguish the platforms.
In addition, according to the "Arm Neoverse N1 Software Optimization Guide",
the vmaxvq_u32 intrinsic has half the latency of vmaxvq_u8 and twice the throughput.
So I think we could just use vmaxvq_u32 directly in the Neon branch.
Any comments or feedback are welcome.