Re: What's size of your PostgreSQL Database? - Mailing list pgsql-general

From Mark Roberts
Subject Re: What's size of your PostgreSQL Database?
Date
Msg-id 1219254091.14010.67.camel@localhost
In response to Re: What's size of your PostgreSQL Database?  (Robert Gravsjö <robert@blogg.se>)
List pgsql-general
> Just out of curiosity, how do you replicate that amount of data?

When I started working here, we used Slony-I to replicate our aggregate
fact tables.  A little over a year ago our data volume had grown to the
point that Slony was regularly unable to keep up, and around that time
Slony hit us with a crazy rollback-based data-loss bug.  We elected to
move our aggregate fact tables off of Slony, but left the metadata
tables on it.

So I wrote a custom in-house replication engine into our aggregation
process.  Replication is accomplished in parallel via piped COPY
statements, and so far it has been fast enough to keep up with the data
volume.  Without getting into too much detail, an import process might
look like this:

- Obtain and transform data into a usable state
- Import the data, and earmark it to be reaggregated
- Reaggregate (summarize) the changed data
- Copy the new data to all databases in the replication set (see the
  sketch after this list)
- Integrate the new data into the aggregate fact tables
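
For the curious, a single per-table copy step might look roughly like
the following.  Table, column, and batch names are invented for
illustration; the two statements run in separate sessions against the
source and each replica, with the first statement's output piped into
the second.

-- On the source database: stream the freshly aggregated rows out.
COPY (
    SELECT fact_date, account_id, metric_id, metric_value
    FROM   agg_fact_daily
    WHERE  batch_id = 42   -- rows earmarked by the current import
) TO STDOUT;

-- On each replica in the replication set: load the stream straight in.
COPY agg_fact_daily (fact_date, account_id, metric_id, metric_value)
FROM STDIN;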

> I'd be very interested in a developer's view of running and maintaining a
> database this size.
> Mostly, what choices were made during development that might have been
> different on a smaller database.
> I'm also curious about the maintenance needed to keep a database this
> size healthy over time.

Ok, so all of this comes from what might be termed an "ETL Developer"
point of view.  I pretty much only work on the data warehouse's
import/summarization process and look into performance issues.

Design:
- Natural keys are pretty well strictly forbidden.  Don't make a VARCHAR
your primary key, please. ;-)
- The data determines the partition criteria, and the application layer
is partition aware (see the sketch after this list).
- It's important to have aggregate tables to support common queries.
Joins are ok, but repeatedly aggregating thousands of rows together on
the fly really takes too much time.
- Aggregation processes have to be designed with care.
- Parallel processing is important, especially if you ever have to
reprocess large amounts of data (for example, due to incorrect initial
data).
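
To make the partitioning and aggregate-table points concrete, here is a
rough sketch.  All names are invented, and it uses the declarative
partitioning syntax from later Postgres releases; an 8.x setup would use
inheritance and constraint exclusion instead.

-- A date-partitioned fact table (surrogate keys only, no natural keys).
CREATE TABLE fact_events (
    event_date   date    NOT NULL,
    account_id   integer NOT NULL,
    metric_id    integer NOT NULL,
    metric_value numeric NOT NULL
) PARTITION BY RANGE (event_date);

CREATE TABLE fact_events_2008_08
    PARTITION OF fact_events
    FOR VALUES FROM ('2008-08-01') TO ('2008-09-01');

-- A pre-aggregated summary table, so common queries don't have to
-- re-summarize thousands of detail rows on the fly.
CREATE TABLE agg_events_daily AS
SELECT event_date, account_id, metric_id, sum(metric_value) AS total_value
FROM   fact_events
GROUP BY event_date, account_id, metric_id;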



Maintenance:
- Autovacuum might be more trouble than it's worth.  We frequently have
mysteriously hung queries that are eventually traced back to being
blocked by autovacuum.  The application layer knows exactly what is
changing and when... it could *probably* take over this duty.
- Pg upgrades are a major PITA and require absurd amounts of downtime
for the data processing part of the warehouse.
- Queries that have been working for long periods of time and suddenly
stop working or hang are usually broken by statistics issues.
- Partitioning is important because it allows the aggregate tables to be
clustered, backed up, and archived individually (sketched below).
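
Roughly, that kind of maintenance might look like the following sketch.
Names are invented, and the per-table autovacuum setting uses the
storage-parameter syntax from newer releases; the 8.3-era equivalent was
the pg_autovacuum catalog.

-- Let the import process, which knows exactly what changed and when,
-- handle vacuuming instead of autovacuum.
ALTER TABLE agg_events_daily SET (autovacuum_enabled = false);

-- Run right after each batch is integrated so planner statistics stay
-- current and long-working queries don't suddenly pick bad plans.
VACUUM ANALYZE agg_events_daily;

-- Each partition can be clustered (and backed up or archived) on its
-- own; assumes an index on event_date with this name exists.
CLUSTER fact_events_2008_08 USING fact_events_2008_08_event_date_idx;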



If anyone wants to chat with me or direct me to resources about running
Postgres on distributed file systems (or other grid computing solutions)
please let me know.  Yes, I'm aware of the proprietary solutions, but
they've been ruled out on account of performance problems with
aggregates and absurd licensing costs.  Also, any grid computing
solution that I write can easily be applied across the company and thus
save us (quite literally) millions per year.

Again, if you'd like more information about any particular topic, just
ask. :)


-Mark

