"Dann Corbit" <DCorbit@connx.com> writes:
>> I've still got a problem with these checks; I think they are a net
>> waste of cycles on average.
> The benchmarks say that they (order checks) are a good idea on average
> for ordered data, random data, and partly ordered data.
There are lies, damn lies, and benchmarks ;-)
The problem with citing a benchmark for this discussion is that a
benchmark can't tell you anything about real-world probabilities;
it only tells you about the probabilities occurring in the benchmark
case. You need to make the case that the benchmark reflects the
real world, which you didn't.
> If you trace the algorithms in a debugger you will be surprised at how
> often the partitions are ordered, even with random sets as input.
Well, I do agree that checking for orderedness on small partitions would
succeed more often than on larger partitions or the whole file --- but
the code-as-given checks all the way down. Moreover, the argument given
for spending these cycles is that insertion sort sucks on reverse-order
input ... where "sucks" means that it spends O(N^2) time. But it spends
O(N^2) time in the average case, too.
regards, tom lane