Re: Transaction table - Mailing list pgsql-general

From Vick Khera
Subject Re: Transaction table
Date
Msg-id 2968dfd61003211230n1e1545ddte8a14c4b7ace1f81@mail.gmail.com
In response to Transaction table  (Deepa Thulasidasan <deepatulsidasan@yahoo.co.in>)
Responses Re: Transaction table  (John R Pierce <pierce@hogranch.com>)
Re: Transaction table  (Scott Marlowe <scott.marlowe@gmail.com>)
List pgsql-general
On Sat, Mar 20, 2010 at 4:47 AM, Deepa Thulasidasan
<deepatulsidasan@yahoo.co.in> wrote:
> transaction table to grow by 10 times in the near future. In this regard, we would like to know if this same structure of
> the transaction table and the indexing would be sufficient for quick retrieval of data or do we have to partition this
> table? If so what kind of partition would be suitable?


My experience has been that when tables approach the 100
million record mark, things tend to slow down.  Running REINDEX and
VACUUM on those tables also takes much longer, since you tend not to
have enough memory to do those operations efficiently.

I like to partition tables such that they end up having under 10
million records each.  I just (a few hours ago...) finished
partitioning and migrating the data from a single table that had about
120 million records into 100 partitions of about 1.2 million rows
each.  For this particular case, I just partitioned on a mod 100
operation of one of the ID keys on which I do the bulk of my searches.
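The mod-100 scheme described above could be sketched like this with PostgreSQL's inheritance-based partitioning (the mechanism available in the 8.x series); the table and column names here are invented for illustration, not taken from the actual schema:

CREATE TABLE transactions (
    id         bigint  NOT NULL,
    account_id bigint  NOT NULL,
    amount     numeric NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now()
);

-- One child table per residue class of account_id mod 100.
CREATE TABLE transactions_p00 (
    CHECK (account_id % 100 = 0)
) INHERITS (transactions);
-- ... repeat for _p01 through _p99 ...

CREATE INDEX transactions_p00_account_idx
    ON transactions_p00 (account_id);

For constraint exclusion to prune the other 99 children, queries have to repeat the same mod expression, e.g.:

SELECT * FROM transactions
 WHERE account_id = 12345
   AND account_id % 100 = 12345 % 100;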

As the two Scott M.'s recommended, figure out your usage patterns and
partition along those vectors to optimize those searches.  I would
not worry about optimizing the insert pattern.

You really *never* delete this data?  I would suspect then that having
a partitioning scheme where the number of partitions can grow over
time is going to be important to you.
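One way to get a partition count that grows over time is to partition on a date column instead of a fixed modulus, adding one child per period as needed. A hypothetical sketch, assuming an inheritance parent table named transactions with a created_at column (names illustrative):

-- One child per month; a new child is created before its first row arrives.
CREATE TABLE transactions_2010_03 (
    CHECK (created_at >= DATE '2010-03-01'
       AND created_at <  DATE '2010-04-01')
) INHERITS (transactions);

CREATE TABLE transactions_2010_04 (
    CHECK (created_at >= DATE '2010-04-01'
       AND created_at <  DATE '2010-05-01')
) INHERITS (transactions);

An insert trigger on the parent (or the application itself) routes rows to the current child, and old children can eventually be detached or dropped wholesale if you ever do decide to expire data.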
