On Tue, Mar 11, 2025 at 05:35:10PM -0500, Sami Imseih wrote:
> I have not benchmarked the overhead, so maybe there is not much to
> be concerned about. However, it just seems to me that tracking the extra
> data for all cases just to only deal with corner cases does not seem like the
> correct approach. This is what makes variant A the most attractive
> approach.
I suspect that the overhead will be minimal for all the approaches I'm
seeing on this thread, but it would not hurt to double-check all that.
As the overhead of a single query jumbling is negligible compared to
the overall query processing, the quickest method I've used in this
area is a micro-benchmark with a hardcoded loop in JumbleQuery() and
some rusage calls to get a few more metrics.  This exaggerates the
cost of the query jumbling computation, but it's good enough to see a
difference once you take an average of the time taken for each loop.
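
For reference, a standalone sketch of that measurement pattern could
look like the following, where do_jumble_work() is just a placeholder
for the call being measured (in an actual test the loop and the
getrusage() calls would be patched into JumbleQuery() itself):

#include <stdio.h>
#include <sys/resource.h>
#include <sys/time.h>

#define LOOPS 100000

/* placeholder for the query-jumbling work being measured */
static void
do_jumble_work(void)
{
    volatile unsigned int h = 0;

    for (int i = 0; i < 1000; i++)
        h = h * 31 + i;
}

/* convert a struct timeval to microseconds */
static long long
tv_usec(const struct timeval *tv)
{
    return (long long) tv->tv_sec * 1000000LL + tv->tv_usec;
}

int
main(void)
{
    struct rusage before, after;
    long long   user, sys;

    getrusage(RUSAGE_SELF, &before);
    for (int i = 0; i < LOOPS; i++)
        do_jumble_work();
    getrusage(RUSAGE_SELF, &after);

    user = tv_usec(&after.ru_utime) - tv_usec(&before.ru_utime);
    sys = tv_usec(&after.ru_stime) - tv_usec(&before.ru_stime);

    printf("total: %lld us user, %lld us system\n", user, sys);
    printf("average per iteration: %.3f us\n",
           (double) (user + sys) / LOOPS);
    return 0;
}

The per-iteration average is what matters when comparing the variants,
as the absolute numbers depend on how much the loop exaggerates the
work.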
--
Michael