Thread: Re: [HACKERS] sort on huge table
> I have compared current with 6.5 using a 1000000-tuple table (243MB)
> (I wanted to try a 2GB+ table but 6.5 does not work in this case).
> The result was strange in that current is *faster* than 6.5!
>
> RAID5
> current 2:29
> 6.5.2   3:15
>
> non-RAID
> current 1:50
> 6.5.2   2:13
>
> It seems my previous testing was done the wrong way, or the behavior
> of sorting might be different when the table size changes?

This new test case is not big enough to show cache memory contention,
and is thus faster with the new code. The 2GB test case was good,
because it shows what happens when cache memory becomes scarce.

Andreas
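[A rough sketch of the effect Andreas is describing, not taken from the
actual test setup: time two sequential passes over a data file. When
the file fits in the OS buffer cache, the second pass runs at memory
speed; once the file is much larger than RAM, both passes go to disk.
The file name and block size below are illustrative assumptions.]

/* cachetest.c -- sketch: compare a cold read pass with a
 * (hopefully cached) second pass over the same file. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

static double
read_pass(const char *path)
{
    char            buf[8192];
    struct timeval  t0, t1;
    FILE           *fp = fopen(path, "rb");

    if (fp == NULL)
    {
        perror(path);
        exit(1);
    }
    gettimeofday(&t0, NULL);
    while (fread(buf, 1, sizeof(buf), fp) == sizeof(buf))
        ;                       /* discard the data; we only time the I/O */
    gettimeofday(&t1, NULL);
    fclose(fp);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
}

int
main(void)
{
    const char *path = "bigtable.dat";  /* illustrative name, not real */

    printf("first pass:  %.1f s\n", read_pass(path));
    printf("second pass: %.1f s\n", read_pass(path));
    /* A 243MB file fits in RAM, so pass 2 is a cache hit and is fast.
     * A 2GB+ file cannot stay cached, so both passes hit the disk. */
    return 0;
}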
Zeugswetter Andreas SEV <ZeugswetterA@wien.spardat.at> writes:
> This new test case is not big enough to show cache memory contention,
> and is thus faster with the new code.

Cache memory contention? I don't think so. Take a look at the CPU
versus elapsed times in Tatsuo's prior report on the 2Gb case. I'm not
sure yet what's going on, but it's clear that the bottleneck is I/O
operations, not processor/memory speed.

			regards, tom lane
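[For reference, the kind of measurement Tom is pointing at can be made
with getrusage(): compare CPU seconds consumed against wall-clock
seconds. If elapsed time is several times the CPU time, the process is
waiting on I/O rather than burning cycles on memory traffic. A minimal
sketch; run_workload() is a placeholder assumption standing in for the
backend doing the big sort.]

/* cpuwall.c -- sketch: CPU time vs elapsed time for a workload. */
#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/resource.h>

static double
tv_sec(struct timeval *tv)
{
    return tv->tv_sec + tv->tv_usec / 1e6;
}

static void
run_workload(void)
{
    sleep(2);                   /* placeholder: mostly "I/O wait" */
}

int
main(void)
{
    struct timeval  w0, w1;
    struct rusage   ru;
    double          cpu, wall;

    gettimeofday(&w0, NULL);
    run_workload();
    gettimeofday(&w1, NULL);

    getrusage(RUSAGE_SELF, &ru);
    cpu = tv_sec(&ru.ru_utime) + tv_sec(&ru.ru_stime);
    wall = (w1.tv_sec - w0.tv_sec) + (w1.tv_usec - w0.tv_usec) / 1e6;

    printf("cpu: %.2f s, elapsed: %.2f s\n", cpu, wall);
    /* elapsed >> cpu  => I/O bound (Tom's reading of the 2Gb case);
     * elapsed ~= cpu  => CPU/memory bound (the cache-contention theory). */
    return 0;
}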