I was just looking at the behavior of src/port/qsort.c on the test case
that Jerry Sievers was complaining about in pgsql-admin this morning.
I found out what the real weak spot is: it's got nothing directly to do
with good or bad pivots, it's this code right here:
    if (swap_cnt == 0)
    {                           /* Switch to insertion sort */
        for (pm = (char *) a + es; pm < (char *) a + n * es; pm += es)
            for (pl = pm; pl > (char *) a && cmp(pl - es, pl) > 0;
                 pl -= es)
                swap(pl, pl - es);
        return;
    }
In other words, if qsort hits a subfile for which the chosen pivot is a
perfect pivot (no swaps are necessary), it switches to insertion sort.
Which is O(N^2). In Jerry's test case this happens on a subfile of
736357 elements, and you can say goodnight to that process ....
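Just to put a number on the O(N^2) claim, here is a minimal standalone
sketch of that insertion-sort loop, instrumented with a comparison
counter and fed a reverse-ordered array (the harness, element type, and
input are my own invention for illustration, not Jerry's data).  On n
elements it does n*(n-1)/2 comparisons, which at 736357 elements is on
the order of 2.7e11:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static long ncmp = 0;           /* counts comparator calls */

static int
cmp(const void *a, const void *b)
{
    ncmp++;
    return (*(const int *) a > *(const int *) b) -
        (*(const int *) a < *(const int *) b);
}

/* The insertion-sort loop quoted above, lifted out on its own. */
static void
isort(void *a, size_t n, size_t es,
      int (*cmp) (const void *, const void *))
{
    char       *pm,
               *pl;
    char        tmp[16];        /* assumes es <= 16, enough for this demo */

    for (pm = (char *) a + es; pm < (char *) a + n * es; pm += es)
        for (pl = pm; pl > (char *) a && cmp(pl - es, pl) > 0; pl -= es)
        {
            /* swap(pl, pl - es) */
            memcpy(tmp, pl, es);
            memcpy(pl, pl - es, es);
            memcpy(pl - es, tmp, es);
        }
}

int
main(void)
{
    size_t      n = 20000;
    int        *a = malloc(n * sizeof(int));
    size_t      i;

    for (i = 0; i < n; i++)
        a[i] = (int) (n - i);   /* reverse order: worst case */

    isort(a, n, sizeof(int), cmp);
    printf("n = %zu, comparisons = %ld (n*(n-1)/2 = %.0f)\n",
           n, ncmp, (double) n * (n - 1) / 2.0);
    free(a);
    return 0;
}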
What I'm thinking is that we ought to put a limit on this, ie not
switch to insertion sort if n is larger than 1000 or so:
- if (swap_cnt == 0)
+ if (swap_cnt == 0 && n < 1000)
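In context, the quoted block would then look something like this (just
a sketch; the 1000 cutoff is the ballpark figure above, not a tuned
number):

    if (swap_cnt == 0 && n < 1000)
    {                           /* Switch to insertion sort */
        for (pm = (char *) a + es; pm < (char *) a + n * es; pm += es)
            for (pl = pm; pl > (char *) a && cmp(pl - es, pl) > 0;
                 pl -= es)
                swap(pl, pl - es);
        return;
    }

A larger subfile would just fall through to the normal quicksort
recursion, where a bad guess costs another partitioning pass rather
than N^2 comparisons.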
I'm wondering exactly what the authors expected the insertion sort to
handle.  Does anyone have a copy of the paper that's referenced
in the code comment?
/* Qsort routine from Bentley & McIlroy's "Engineering a Sort Function". */
I tried looking for this at ACM but they seem not to have it.
regards, tom lane