Thread: getting better performance

getting better performance

From
Eugeny N Dzhurinsky
Date:
Hello!

I have a PostgreSQL server serving thousands of tables. Sometimes there are
queries which involve several tables.

In postgresql.conf I have these settings:

shared_buffers = 40000
work_mem = 8192
maintenance_work_mem = 16384
max_stack_depth = 2048

all other settings are left by default (except ones needed for pg_autovacuum).

Is there anything I can tune to get better performance?

--
Eugene N Dzhurinsky

Re: getting better performance

From
"A. Kretschmer"
Date:
On 06.07.2006 at 9:40:16 +0300, Eugeny N Dzhurinsky wrote the following:
> In postgresql.conf I have these settings:
>
> shared_buffers = 40000
> work_mem = 8192
> maintenance_work_mem = 16384
> max_stack_depth = 2048
>
> all other settings are left by default (except ones needed for pg_autovacuum).
>
> Is there anything I can tune to get better performance?

You can set "log_min_duration_statement" to log slow queries and then
analyse those queries.
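
For example, in postgresql.conf (the 250 ms threshold is just an
illustration; the value is in milliseconds, pick what counts as "slow"
for your workload):

```
log_min_duration_statement = 250   # log any statement running longer than 250 ms
```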


HTH, Andreas
--
Andreas Kretschmer    (contact: see header)
Heynitz:  035242/47215,      D1: 0160/7141639
GnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net
 ===    Schollglas Unternehmensgruppe    ===

Re: getting better performance

From
"Ivan Zolotukhin"
Date:
On 7/6/06, A. Kretschmer <andreas.kretschmer@schollglas.com> wrote:
> On 06.07.2006 at 9:40:16 +0300, Eugeny N Dzhurinsky wrote the following:
> > In postgresql.conf I have these settings:
> >
> > shared_buffers = 40000
> > work_mem = 8192
> > maintenance_work_mem = 16384
> > max_stack_depth = 2048
> >
> > all other settings are left by default (except ones needed for pg_autovacuum).
> >
> > Is there anything I can tune to get better performance?
>
> You can set "log_min_duration_statement" to log slow queries and then
> analyse those queries.

Once you have collected your logs, try PgFouine
http://pgfouine.projects.postgresql.org/
to understand which queries should be optimized and what the reason
for the poor performance is.
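
PgFouine needs the log lines in a format it can parse; roughly
something like the following (the exact prefix and command line are
from the PgFouine docs, so double-check them there):

```
# postgresql.conf (stderr logging)
log_min_duration_statement = 0           # log every statement with its duration
log_line_prefix = '%t [%p]: [%l-1] '

# then generate an HTML report, roughly:
#   php pgfouine.php -file /path/to/postgresql.log > report.html
```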

Re: getting better performance

From
Scott Marlowe
Date:
On Thu, 2006-07-06 at 01:40, Eugeny N Dzhurinsky wrote:
> Hello!
>
> I have a PostgreSQL server serving thousands of tables. Sometimes there are
> queries which involve several tables.

Do you add / remove tables a lot?  Could be you've got system catalog
bloat.

do you have autovacuum running?

What version of pgsql are you running?

What OS?

What file system?

What kind of machine are you using?

How much memory does it have?

How many disk drives?

Are you using RAID?  hardware / software?  battery backed cache or no?

Do you have one or two, or thousands of connections at a time?

Do you use connection pooling if you have lots of connections to handle?

There's a lot of info needed to make a decision on how to performance
tune a system.

Re: getting better performance

From
Eugeny N Dzhurinsky
Date:
On Thu, Jul 06, 2006 at 09:28:39AM -0500, Scott Marlowe wrote:
> On Thu, 2006-07-06 at 01:40, Eugeny N Dzhurinsky wrote:
> Do you add / remove tables a lot?  Could be you've got system catalog
> bloat.

Yes, almost every table is dropped and re-created every 3-5 days.

> do you have autovacuum running?

Yes.

> What version of pgsql are you running?

psql -V
psql (PostgreSQL) 8.0.0

> What OS?

CentOS release 3.7 (Final)

> What file system?

ext3

> What kind of machine are you using?

Pentium IV, 1.8 GHz

> How much memory does it have?

512 MB RAM

> How many disk drives?

single

> Are you using RAID?  hardware / software?  battery backed cache or no?

no

> Do you have one or two, or thousands of connections at a time?

something like 20 connections (peak).

> Do you use connection pooling if you have lots of connections to handle?

My application uses Jakarta Commons DBCP module.

--
Eugene Dzhurinsky

Re: getting better performance

From
"Merlin Moncure"
Date:
On 7/6/06, Eugeny N Dzhurinsky <bofh@redwerk.com> wrote:
> Hello!
>
> I have a PostgreSQL server serving thousands of tables. Sometimes there are
> queries which involve several tables.

> In postgresql.conf I have these settings:
>
> shared_buffers = 40000
> work_mem = 8192
> maintenance_work_mem = 16384
> max_stack_depth = 2048
>
> all other settings are left by default (except ones needed for pg_autovacuum).
>
> Is there anything I can tune to get better performance?

You may want to explore upping your FSM settings with that many tables.
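
A sketch of the FSM knobs in postgresql.conf (the numbers here are
purely illustrative; on 8.0, VACUUM VERBOSE prints the page and
relation counts actually needed, so size from that output):

```
max_fsm_relations = 10000   # should exceed the total number of tables and indexes
max_fsm_pages = 500000      # total data pages with free space to track
```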

Merlin

Re: getting better performance

From
Scott Marlowe
Date:
On Thu, 2006-07-06 at 10:11, Eugeny N Dzhurinsky wrote:
> On Thu, Jul 06, 2006 at 09:28:39AM -0500, Scott Marlowe wrote:
> > On Thu, 2006-07-06 at 01:40, Eugeny N Dzhurinsky wrote:
> > Do you add / remove tables a lot?  Could be you've got system catalog
> > bloat.
>
> Yes, almost every table is dropped and re-created every 3-5 days.

> > do you have autovacuum running?
>
> Yes.

Hopefully, that will keep the system catalogs from getting overly fat.
You still need to check their size every so often to make sure they're
not getting out of line.

You might want to run a VACUUM VERBOSE and see what it has to say.
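
One way to eyeball catalog size, as a sketch (relpages is in 8 kB
blocks, so e.g. 10000 pages is roughly 80 MB, far more than a few
thousand tables should need):

```sql
SELECT relname, relpages, reltuples
  FROM pg_class
 WHERE relname IN ('pg_class', 'pg_attribute', 'pg_depend', 'pg_type')
 ORDER BY relpages DESC;
```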

> > What version of pgsql are you running?
>
> psql -V
> psql (PostgreSQL) 8.0.0

You should update.  x.0.0 versions often have the nastiest data loss
bugs of any PostgreSQL release.  8.0 is up to 8.0.8 now.  The upgrades
are generally painless (back up anyway, just in case) and can be done
in place: just shut down postgres, rpm -Uvh the packages, and restart
postgres.

> > What OS?
>
> CentOS release 3.7 (Final)
>
> > What file system?
>
> ext3
>
> > What kind of machine are you using?
>
> Pentium IV, 1.8 GHz
>
> > How much memory does it have?
>
> 512 Mb RAM

That's kind of small for a database server.  If your data set is fairly
small it's alright, but if you're working on gigs of data in your
database, the more memory the better.
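
As an aside on those numbers: on 8.0, shared_buffers is counted in
pages (8 kB each at the default block size), so the shared_buffers =
40000 from the original post already claims well over half of that
512 MB; a quick back-of-the-envelope check:

```python
# shared_buffers on PostgreSQL 8.0 is in pages of 8 kB (default block size)
shared_buffers_pages = 40000
mb = shared_buffers_pages * 8 * 1024 / (1024 * 1024)
print(mb)  # 312.5 MB out of 512 MB
```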

> > How many disk drives?
>
> single

OTOH, if you're doing a lot of committing / writing to the hard drives,
a single disk drive is suboptimal.

Is this SCSI, PATA or SATA?  If it's [SP]ATA, then you've likely got no
real fsyncing, and while performance won't be a problem, reliability
should the machine crash would be.  If it's SCSI, then it could be a
bottleneck for writes.

> > Are you using RAID?  hardware / software?  battery backed cache or no?
>
> no

I'd recommend looking into it, unless you're CPU bound.  A decent RAID
controller with battery-backed cache and a pair of drives in a mirror
setup can be a marked improvement to start with, and you can add more
drives as time goes by if need be.

My guess is that you've got system catalog bloat.  You might have to
take the database down to single-user mode and run a vacuum on the
system catalogs from there.  That's how it was in the old days of 7.x
databases, anyway.

If you don't have system catalog bloat, then you're probably CPU /
memory bound right now.  Unless you're writing a lot, then you're
likely disk I/O bound, but I kinda doubt it.
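
A sketch of what that single-user vacuum might look like (the data
directory path and database name are placeholders; stop the regular
server first):

```
$ postgres -D /var/lib/pgsql/data mydb      # standalone backend
backend> VACUUM FULL pg_class;
backend> VACUUM FULL pg_attribute;
backend> VACUUM FULL pg_depend;
```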

Re: getting better performance

From
Markus Schaber
Date:
Hi, Eugeny,

Eugeny N Dzhurinsky wrote:

>> Do you add / remove tables a lot?  Could be you've got system catalog
>> bloat.
>
> Yes, almost each table is dropped and re-created in 3-5 days.

If you really recreate the same table, TRUNCATE may be a better
solution than dropping and re-creating it.
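
A sketch with a hypothetical table name -- TRUNCATE empties the table
but keeps its definition, so no new pg_class/pg_attribute rows are
churned:

```sql
TRUNCATE TABLE report_data;
-- instead of the catalog-churning pattern:
-- DROP TABLE report_data;
-- CREATE TABLE report_data ( ... );
```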

HTH,
Markus

--
Markus Schaber | Logical Tracking&Tracing International AG
Dipl. Inf.     | Software Development GIS

Fight against software patents in EU! www.ffii.org www.nosoftwarepatents.org