Re: Table Partitioning in Postgres: - Mailing list pgsql-general

From Bodanapu, Sravan
Subject Re: Table Partitioning in Postgres:
Date
Msg-id D9C90B51B105D511A3FB00508BFD70E2046DB416@mnmtkex1.nextelpartners.com
In response to Table Partitioning in Postgres:  ("Bodanapu, Sravan" <Sravan.Bodanapu@NextelPartners.com>)
Responses Re: Table Partitioning in Postgres:  (Jonathan Bartlett <johnnyb@eskimo.com>)
Re: Table Partitioning in Postgres:  (Curt Sampson <cjs@cynic.net>)
Re: Table Partitioning in Postgres:  ("Shridhar Daithankar<shridhar_daithankar@persistent.co.in>" <shridhar_daithankar@persistent.co.in>)
List pgsql-general

Thanks Curt!!! The data was actually taken out of an Oracle database and then loaded into the Postgres
database using bulk copy. Most of the tables were very large (around 20-30 million rows and around
200-300 columns each). In Oracle, these tables were partitioned into chunks to get maximum performance.

1.      When a table is created in Postgres, it always creates the datafile in the /pgdata/base/16975 or 16976 directory.

        What do 16975 and 16976 mean? Is there a way to have the datafiles (for tables/data/indexes) generated
        in different directories instead of a single one? If so, how?

2.      Is there a way to limit the size of a datafile (say, to 3GB)? Ingres has this concept, whereby you can
        span the data across different files.

3.      Could you suggest some tips for setting up a big database to achieve maximum performance?

Thanks and Regards,

- Sravan.

-----Original Message-----
From: Curt Sampson [mailto:cjs@cynic.net]
Sent: Thursday, February 13, 2003 7:25 AM
To: Bodanapu, Sravan
Cc: PGSQL General (E-mail)
Subject: Re: [GENERAL] Table Partitioning in Postgres:

On Tue, 11 Feb 2003, Bodanapu, Sravan wrote:

> We are trying to migrate a database from Oracle to Postgres which is about
> 150Gig.
> How do you set up and maintain big tables having around 20-30 million rows?
> Is there a way to set up table partitioning? How can I improve the Postgres
> database performance for such a big database?

I've set up tables with 500 million or more rows just as I would with
any other table. There is no table partitioning per se in postgres, but
you can always modify your application to use separate tables (which I
have also done for some large ones).
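The separate-tables approach Curt describes is often done with Postgres's table
inheritance. A minimal sketch follows; all table and column names are made up
for illustration, and note that in current Postgres the planner does not prune
children automatically, so the application must query the right child table
directly to see a benefit:

```sql
-- Hypothetical call-detail table split by month, mimicking Oracle
-- range partitions with one child table per month.
CREATE TABLE calls (
    call_id   integer,
    call_time timestamp,
    duration  integer
);

-- Each child inherits the parent's columns; the CHECK constraint
-- documents which rows belong in it.
CREATE TABLE calls_2003_01 (
    CHECK (call_time >= '2003-01-01' AND call_time < '2003-02-01')
) INHERITS (calls);

CREATE TABLE calls_2003_02 (
    CHECK (call_time >= '2003-02-01' AND call_time < '2003-03-01')
) INHERITS (calls);

-- The application inserts directly into the appropriate child.
INSERT INTO calls_2003_01 VALUES (1, '2003-01-15 10:00', 120);

-- Selecting from the parent scans the parent plus all children,
-- so "whole table" queries still work.
SELECT count(*) FROM calls;
```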

As for performance, that is so application-dependent that you probably
want to hire a consultant to help you out if you don't have time to
spend studying it yourself.

At the very least, for anything big like this, you'd want to spend
a week or two playing around with your database and application on
postgres before you even think about whether you want to convert or not.

cjs
--
Curt Sampson  <cjs@cynic.net>   +81 90 7737 2974   http://www.netbsd.org
    Don't you know, in this new Dark Age, we're all light.  --XTC
