Re: performance config help - Mailing list pgsql-performance

From Craig Ringer
Subject Re: performance config help
Msg-id 4B4D53BF.4020905@postnewspapers.com.au
In response to Re: performance config help  (Bob Dusek <redusek@gmail.com>)
Responses Re: performance config help
List pgsql-performance
On 13/01/2010 2:01 AM, Bob Dusek wrote:

> The connections to postgres are happening on the localhost.  Our
> application server accepts connections from the network, and the
> application queries Postgres using a standard pg_pconnect on the localhost.

Well, that's a good reason to have all those CPUs - if your app server
runs on the same host as Pg.

> We've been logging long-running queries (200 ms).  That's how we know
> Postgres is degrading.  We don't see any queries showing up when we have
> 40 clients running.  But, we start seeing quite a bit show up after that.

It might be informative to see how quickly query times increase with
client count. You can probably figure this out by progressively lowering
your query time logging threshold.
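For example, the threshold is controlled by log_min_duration_statement. A sketch of stepping it down in postgresql.conf (the 200 ms starting value comes from your current setup; the lower values are just illustrative steps):

```
# postgresql.conf - log any statement running longer than this (in ms)
log_min_duration_statement = 200   # current setting
# then progressively lower it between test runs, e.g.:
# log_min_duration_statement = 100
# log_min_duration_statement = 50
```

You can also change it per-session with SET, or server-wide without a restart via a reload, so you don't need to bounce the server between measurements.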

> Each of the concurrent clients does a series of selects, inserts,
> updates, and deletes.  The requests would generally never update or
> delete the same rows in a table.  However, the requests do generally
> write to the same tables.  And, they are all reading from the same
> tables that they're writing to.

AFAIK none of that should interfere with each other, so long as they're
not working on the same sets of tuples.

> For the inserts, I imagine they are
> blocking on access to the sequence that controls the primary keys for
> the insert tables.

I doubt it. Sequences sit outside transactional rules for exactly that
reason. nextval(...) takes only an incredibly short time to obtain the
next value, and after that the sequence is immediately available to the
next caller - the lock isn't held for the rest of the transaction.
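You can see this non-transactional behaviour directly (sequence name here is hypothetical; the exact values returned depend on the sequence's state):

```sql
BEGIN;
SELECT nextval('mytable_id_seq');  -- consumes a value
ROLLBACK;
-- The rollback does NOT give the value back; the next call
-- returns a later value, so concurrent inserters never wait
-- on each other's transactions for sequence values.
SELECT nextval('mytable_id_seq');
```

The trade-off is gaps in the key sequence after rollbacks, which is normally harmless.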

> But, I'm not sure about locking beyond that.  When we delete from the
> tables, we generally delete where "clientid=X", which deletes all of the
> rows that a particular client inserted (each client cleans up its own
> rows after it finishes what its doing).  Would that be blocking inserts
> on that table for other clients?

Now would be a good time to start watching pg_locks - rather than
guessing, try to actually observe if there's lock contention.
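As a starting point, something like this run while the system is under load will show ungranted lock requests, i.e. backends actually waiting (a sketch; column availability varies a little between Pg versions):

```sql
SELECT locktype,
       relation::regclass AS relation,  -- NULL for non-relation locks
       mode,
       pid
FROM pg_locks
WHERE NOT granted;
```

If that query consistently returns no rows while queries are slow, lock contention probably isn't your problem and you should look elsewhere (CPU scheduling, I/O, context switching).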

I'd also consider looking into a connection pool so that as the number
of clients to your appserver increases you can keep the number of active
Pg connections at the "sweet spot" for your server to maximise overall
throughput.
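Since your app connects via pg_pconnect on localhost, an external pooler like PgBouncer is one easy option - point the app at the pooler's port instead of Pg's. A sketch of a pgbouncer.ini (database name, user, and pool size are placeholders you'd tune for your workload):

```
[databases]
; forward "mydb" to the local Postgres server
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction   ; return connections to the pool per-transaction
default_pool_size = 40    ; cap active Pg connections near your sweet spot
```

With pool_mode = transaction, hundreds of app-side connections can share a pool sized to whatever client count your earlier testing shows gives peak throughput.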

--
Craig Ringer
