Dear Tom,
Thanks for your response.
----- Original Message -----
From: "Tom Lane" <tgl@sss.pgh.pa.us>
Sent: Monday, June 07, 2004 3:51 PM
> That's pretty hard to believe; particularly on modern machines, I'd
> think that moving it down would make more sense than moving it up.
> You're essentially asserting that the CPU time to process one tuple
> is almost half of the time needed to bring a page in from disk.
That is exactly what I had in mind. We found the 5x10krpm HW RAID 5 array
blazingly fast, while we were really disappointed with the CPU. E.g.:
* tar'ing 600MB took seconds; gzip'ing it took minutes.
* initdb ran so fast that I didn't have time to hit Ctrl+C because
I forgot a switch ;)
* dumping the DB in or out was far faster than running adddepend during
the 7.2-to-7.3 upgrade.
* IIRC, index scans returning ~26k of ~64k rows were faster than a seq scan
(the case I most suspect of being disk cache).
But whatever the case with my hardware -- could you tell me something
(even a search keyword ;) ) about my theoretical question, i.e. how the
cpu_*_cost parameters should relate to each other and to the page fetch costs?
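(For reference, these are the knobs I mean -- a minimal sketch of how I look at
them; the comments are just my reading of what each one charges for, not a
statement of how they ought to be related:)

SHOW cpu_tuple_cost;        -- CPU cost charged per tuple processed (0.01 by default)
SHOW cpu_index_tuple_cost;  -- CPU cost charged per index tuple processed
SHOW cpu_operator_cost;     -- CPU cost charged per operator/function call
SHOW random_page_cost;      -- the page fetch cost the CPU costs are weighed against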
> I suspect that your test cases were toy cases small enough to be
> fully cached and thus not incur any actual I/O ...
Dunno. The server has 1GB RAM; the full DB is ~100MB; the largest query was
~7k rows and involved at least two tables of >200k rows plus several smaller
ones. If that is a "toy case" for such hardware, I humbly accept your opinion.
BTW, its runtime improved from 53 to 48 seconds, all due to changing
cpu_tuple_cost. I ran the query at different cost settings, in quick succession:
run  cpu_tuple_cost  sec
#1   0.01            53
#2   0.4             50
#3   1.0             48
#4   1.0             48
#5   0.4             48
#6   0.01            53
For the second run I'd say disk cache, yes -- but what about the last one?
With the cache presumably warm by then, it still took 53 seconds, the same as
the first run, so the speedup must have come from a plan change rather than
caching (I can send the EXPLAIN ANALYZE results if you wish).
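For completeness, each timed run was essentially driven like this (a minimal
sketch -- the table and column names are stand-ins, not my real schema, and
SET only affects the current session):

SET cpu_tuple_cost = 1.0;           -- 0.01, 0.4 or 1.0 depending on the run
EXPLAIN ANALYZE
SELECT a.id, count(*)
FROM big_a a JOIN big_b b ON b.a_id = a.id
GROUP BY a.id;                      -- stand-in for my real ~7k-row query
RESET cpu_tuple_cost;               -- back to the default afterwards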
G.
%----------------------- cut here -----------------------%
\end