Re: Optimal database table optimization method - Mailing list pgsql-general

From John R Pierce
Subject Re: Optimal database table optimization method
Date
Msg-id 4B9018F5.8090400@hogranch.com
Whole thread Raw
In response to Optimal database table optimization method  (Roger Tannous <roger.tannous@gmail.com>)
List pgsql-general
Roger Tannous wrote:
>
> Hello,
>
> I have a database table that is growing too big (few hundred million
> rows) that needs to be optimized, but before I get into partitioning
> it, I thought I'd ask about suggestions.
>
> Here is the usage:
>
>      0. The table contains about 10 columns of roughly 20 bytes each.
>
>      1. INSERTs are performed at a rate of hundreds per second.
>
>      2. SELECTs are performed based on column 'a' (where a = 'xxxx')
>         a few times per hour.
>
>      3. DELETEs are performed based on a DATE column (delete where
>         the date is older than 1 year), usually once per day.
>
> The key requirement is to speed up INSERT and SELECT statements, and
> be able to keep history data of 1 year back without locking the whole
> table down while deleting.
>
> I would guess that I must have two indexes, one on column 'a' and
> the other on the date field. Or is it possible to optimize both?
>
> Will there necessarily be a trade-off between SELECT speed and
> DELETE speed?
>
> Is partitioning the only solution? What are good strategies for
> partitioning such a table?
>
> I'm using a PostgreSQL 8.4 database.
>

We partition similar tables by date, typically by month. That way you
can simply drop an old month's table after updating the triggers to
route new data into the new month's table.
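On 8.4 this kind of partitioning is done with table inheritance plus an insert trigger on the parent; a minimal sketch follows. All names (`events`, `created`, the month boundaries) are illustrative, not taken from the thread, and in practice the trigger and child tables are rolled forward each month:

```sql
-- Hypothetical parent table; column names are illustrative only.
CREATE TABLE events (
    a       text NOT NULL,
    created date NOT NULL
    -- ... remaining columns ...
);

-- One child table per month. The CHECK constraint lets the planner
-- (with constraint_exclusion enabled) skip irrelevant partitions.
CREATE TABLE events_2010_03 (
    CHECK (created >= DATE '2010-03-01' AND created < DATE '2010-04-01')
) INHERITS (events);

-- Index each partition on the SELECT column.
CREATE INDEX events_2010_03_a_idx ON events_2010_03 (a);

-- Trigger function routing INSERTs on the parent into the
-- current month's partition.
CREATE OR REPLACE FUNCTION events_insert_trigger()
RETURNS trigger AS $$
BEGIN
    IF NEW.created >= DATE '2010-03-01'
       AND NEW.created < DATE '2010-04-01' THEN
        INSERT INTO events_2010_03 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for date %', NEW.created;
    END IF;
    RETURN NULL;  -- the row has already been stored in the child table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_partition_insert
    BEFORE INSERT ON events
    FOR EACH ROW EXECUTE PROCEDURE events_insert_trigger();

-- Retiring a year-old month is then a fast metadata operation,
-- with no long table lock and no bulk DELETE:
DROP TABLE events_2009_03;
```

Dropping a whole child table replaces the daily bulk DELETE entirely, which is what avoids locking and bloating the main table.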

