Re: optimizing advice - Mailing list pgsql-general

From: Bret
Subject: Re: optimizing advice
Date:
Msg-id: 001b01ca72df$87df43f0$0d00a8c0@bjsworkstation
In response to: Re: optimizing advice (Scott Marlowe <scott.marlowe@gmail.com>)
List: pgsql-general
> -----Original Message-----
> From: pgsql-general-owner@postgresql.org
> [mailto:pgsql-general-owner@postgresql.org] On Behalf Of Scott Marlowe
> Sent: Tuesday, December 01, 2009 2:10 PM
> To: r.soerensen@mpic.de
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] optimizing advice
>
> 2009/12/1 Rüdiger Sörensen <r.soerensen@mpic.de>:
> > dear all,
> >
> > I am building a database that will be really huge and grow rapidly.
> > It holds data from satellite observations. Data is imported via a
> > java application. The import is organized via files that are parsed
> > by the application; each file holds the data of one orbit of the
> > satellite. One of the tables will grow by about 40,000 rows per
> > orbit, and there are roughly 13 orbits a day. The import of one day
> > (13 orbits) into the database takes 10 minutes at the moment. I will
> > have to import data back to the year 2000 or even older.
> > I think that there will be a performance issue when the table in
> > question grows, so I partitioned it using a timestamp column and one
> > child table per quarter. Unfortunately, the import of 13 orbits now
> > takes 1 hour instead of 10 minutes as before. I can live with that,
> > if the import time does not grow significantly as the table grows
> > further.
>
> I'm gonna guess you're using rules instead of triggers for
> partitioning?  Switching to triggers is a big help if you've
> got a large amount of data to import / store.  If you need
> some help on writing the triggers, shout back; I had to do
> this to our stats db this summer and it's been much faster
> with triggers.
>
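(A minimal sketch of the trigger approach Scott describes, assuming
8.x-style inheritance partitioning. The table and column names here,
obs, obs_time and the quarterly children, are hypothetical, not taken
from the thread.)

-- Hypothetical parent plus quarterly children; the CHECK constraints
-- let constraint_exclusion skip irrelevant children at query time.
CREATE TABLE obs (obs_time timestamptz NOT NULL, orbit_no int, value float8);
CREATE TABLE obs_2009q3 (CHECK (obs_time >= DATE '2009-07-01'
                            AND obs_time <  DATE '2009-10-01')) INHERITS (obs);
CREATE TABLE obs_2009q4 (CHECK (obs_time >= DATE '2009-10-01'
                            AND obs_time <  DATE '2010-01-01')) INHERITS (obs);

CREATE OR REPLACE FUNCTION obs_insert_trigger() RETURNS trigger AS $$
BEGIN
    -- Route each row to the child covering its quarter; test the
    -- newest quarter first, since fresh data nearly always lands there.
    IF NEW.obs_time >= DATE '2009-10-01' AND NEW.obs_time < DATE '2010-01-01' THEN
        INSERT INTO obs_2009q4 VALUES (NEW.*);
    ELSIF NEW.obs_time >= DATE '2009-07-01' AND NEW.obs_time < DATE '2009-10-01' THEN
        INSERT INTO obs_2009q3 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'obs_time % has no matching partition', NEW.obs_time;
    END IF;
    RETURN NULL;  -- NULL from a BEFORE trigger suppresses the insert into the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER obs_partition BEFORE INSERT ON obs
    FOR EACH ROW EXECUTE PROCEDURE obs_insert_trigger();

A rule rewrites every INSERT into one conditional INSERT per child, so
its overhead grows with the number of partitions; the trigger runs one
comparison ladder per row instead, which is typically where the
bulk-load speedup comes from.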

40,000 rows x 13 orbits x 365 days comes to 189,800,000 records per year..
Hope they are short records.
Not knowing what the reporting target is, perhaps break the orbits out
into separate servers (or at least separate databases) by month or year,
then query across them to build your research data on another server
(see the dblink sketch below)..

Scott..how does this compare to the stats db?
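(On the separate-databases idea above: contrib's dblink module is one
way to pull remote slices back to a central research server. The
connection string and table below are hypothetical; on 8.x you install
dblink by running the dblink.sql script shipped in contrib.)

-- Pull one quarter of data from a hypothetical per-year database.
-- dblink() returns SETOF record, so the column list must be spelled out.
SELECT *
FROM dblink('host=archive1 dbname=orbits_2009 user=research',
            'SELECT obs_time, orbit_no, value
               FROM obs
              WHERE obs_time >= ''2009-07-01''
                AND obs_time <  ''2009-10-01''')
     AS t(obs_time timestamptz, orbit_no int, value float8);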
