Partition table in 9.0.x? - Mailing list pgsql-performance

From AJ Weber
Subject Partition table in 9.0.x?
Date
Msg-id 50E74A33.5000204@comcast.net
In response to Re: Simple join doesn't use index  (Stefan Andreatta <s.andreatta@synedra.com>)
Responses Re: Partition table in 9.0.x?  (Jeff Janes <jeff.janes@gmail.com>)
List pgsql-performance
Hi all,

I have a table with about 73 million rows in it, and growing.  Running
9.0.x on a server that is unfortunately a little I/O-constrained.  Some
possibly pertinent settings:
default_statistics_target = 50
maintenance_work_mem = 512MB
constraint_exclusion = on
effective_cache_size = 5GB
work_mem = 18MB
wal_buffers = 8MB
checkpoint_segments = 32
shared_buffers = 2GB

The server has 12GB RAM and 4 cores, but is shared with a big webapp
running in Tomcat -- and I only have a RAID1 disk to work with.  Woe is me...

Anyway, this table is going to continue to grow, and it's used
frequently (read and write).  From what I've read, this table is a
candidate to be partitioned for performance and scalability.  I have
tested some scripts to build the child tables (via INHERITS) with their
CHECK constraints, and the trigger/function to route inserts.
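For anyone following along, the 9.0-era inheritance approach I'm testing looks roughly like this (table and column names are illustrative, not my actual schema):

```sql
-- One child per month, with a CHECK constraint so constraint_exclusion
-- can prune partitions that can't match the query's WHERE clause.
CREATE TABLE orders_2013_01 (
    CHECK (created_at >= DATE '2013-01-01'
       AND created_at <  DATE '2013-02-01')
) INHERITS (orders);

-- Routing function: send each new row to the matching child table.
CREATE OR REPLACE FUNCTION orders_insert_trigger()
RETURNS trigger AS $$
BEGIN
    IF NEW.created_at >= DATE '2013-01-01'
       AND NEW.created_at <  DATE '2013-02-01' THEN
        INSERT INTO orders_2013_01 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'No partition for created_at = %', NEW.created_at;
    END IF;
    RETURN NULL;  -- row was routed to a child; don't insert into the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER insert_orders_trigger
    BEFORE INSERT ON orders
    FOR EACH ROW EXECUTE PROCEDURE orders_insert_trigger();
```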

Am I doing the right thing by partitioning this?  If so, and I can
afford some downtime, is dumping the table via pg_dump and then loading
it back in the best way to do this?
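One in-database alternative I've considered instead of a full pg_dump/reload cycle (again, names are illustrative):

```sql
-- During the downtime window: rename the old table aside, create the new
-- empty parent + children + routing trigger, then let the trigger
-- distribute the existing rows into the partitions.
BEGIN;
ALTER TABLE orders RENAME TO orders_old;
-- ... CREATE the new parent "orders", its children, and the trigger here ...
INSERT INTO orders SELECT * FROM orders_old;
COMMIT;
-- Afterwards: ANALYZE the children, then DROP TABLE orders_old once verified.
```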

Should I run CLUSTER or VACUUM FULL after all is done?

Is there a major benefit if I can upgrade to 9.2.x in some way that I
haven't realized?

Finally, if anyone has any comments about my settings listed above that
might help improve performance, I thank you in advance.

-AJ

