From: Aaron Werman
Subject: Re: possible improvement between G4 and G5
Msg-id: Law10-OE46Gr8Ljq7Xb0001858b@hotmail.com
In response to: Re: atrocious update performance ("Rosser Schwarz" <rschwarz@totalcardinc.com>)
List: pgsql-performance

----- Original Message -----
From: "Josh Berkus" <josh@agliodbs.com>
To: "Aaron Werman" <awerman2@hotmail.com>; "Qing Zhao" <qzhao@quotefx.net>;
"Tom Lane" <tgl@sss.pgh.pa.us>
Cc: <pgsql-performance@postgresql.org>
Sent: Tuesday, April 06, 2004 2:52 PM
Subject: Re: [PERFORM] possible improvement between G4 and G5


> Aaron,
>
> > I'm surprised by this thought. I tend to hit CPU bottlenecks more
> > often than I/O ones. In most applications, db I/O is a combination
> > of buffer misses and logging, which are both reasonably constrained.
>
> Not my experience at all.  In fact, the only times I've seen modern
> platforms max out the CPU was when:
> a) I had bad queries with bad plans, or
> b) I had reporting queries that did a lot of calculation for display
> (think OLAP).
>
> Otherwise, on the numerous servers I administrate, RAM spikes, and I/O
> bottlenecks, but the CPU stays almost flat.
>
> Of course, most of my apps are large databases (i.e. too big for RAM)
> with a heavy transaction-processing component.
>
> What kind of applications are you running?
>
> --
> -Josh Berkus
>  Aglio Database Solutions
>  San Francisco
>

<hot air>

I do consulting, so they're all over the place and tend to be complex. Very
few fit in RAM, but they are still heavily buffered. Almost all are backed
by very high-end I/O subsystems, with dozens of spindles, battery-backed
write-through cache, and gigs of buffers, which may be why I worry so much
about CPU. I have had this issue with multiple servers.

Consider an analysis db with 10G of data. Of that, 98% of the access is
reads and only 2% writes (that is normal for almost anything that is not
order entry, even transaction processing with thorough cross-validation).
Almost all the queries access 10%, or 1G, of the data. The reads traverse
b-trees averaging ~3 levels, with the first 2 levels certainly cached and
the last level often cached. Virtually all the I/O activity is logical
reads against the buffer cache. A system with 100 transactions, each on
average accessing 200 rows, does 98% of 200 rows x 100 transactions x 3
logical I/Os per read = 58,800 logical reads, of which maybe only a hundred
are actually physical reads. It also does 2% of 200 rows x 100 transactions
x (1 table logical write and say 2 index logical writes) = 1,200 logical
writes to the log, of which 100 are transaction-commit sync writes, and in
reality fewer than that because of queuing against the logs (there are also
1,200 logical writes deferred to checkpoint, which are likely to be only
about 40 physical writes because of page overlaps).
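
To make that arithmetic concrete, here it is as a quick sanity check you
can run in psql (the figures are exactly the ones from the paragraph
above):

    -- 200 rows/txn x 100 txns x ~3 b-tree levels, split 98% read / 2% write
    SELECT
        200 * 100 * 3                AS total_logical_ios,  -- 60,000
        0.98 * (200 * 100 * 3)       AS logical_reads,      -- 58,800
        0.02 * (200 * 100) * (1 + 2) AS logical_writes;     -- 1,200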

Transaction processing is a spectrum between activity logging and
database-centric design. The former, where actions are simply stored in the
database, is totally I/O bound, with the engine acting as a thin layer of
logical-to-physical mapping. Database-centric processing makes the engine a
functional server of discrete actions, and that is a big CPU hog.
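
As a rough sketch of the two ends of that spectrum (table and function
names here are made up for illustration):

    -- Activity logging: the engine is a thin logical-to-physical layer,
    -- so the cost is almost pure I/O.
    INSERT INTO user_actions (user_id, action, occurred_at)
    VALUES (42, 'page_view', now());

    -- Database-centric: the engine validates, looks up, and branches per
    -- action, e.g. inside a plpgsql function, which is mostly CPU work.
    SELECT process_order(42, 'SKU-1001', 3);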

What my CPU tends to be doing is a combination of general processing and
complex SQL processing: nested loops, sorting, hashing, triggers, and
stored procedures.
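
For instance, a query shaped like this (tables again hypothetical) can run
entirely out of buffer and still peg a CPU on the join, hash, and sort
steps:

    EXPLAIN ANALYZE
    SELECT c.region, sum(o.amount) AS total
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    GROUP BY c.region
    ORDER BY total DESC;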

I'm curious about your seeing flat CPU usage, which is not my experience.
Are your apps mature and stable?

</hot air>

/Aaron
