Re: [pgsql-advocacy] Postgres and really huge tables - Mailing list pgsql-performance

From: Chris Mair
Subject: Re: [pgsql-advocacy] Postgres and really huge tables
Msg-id: 45AFE9D0.6060205@1006.org
In response to: Postgres and really huge tables (Brian Hurt <bhurt@janestcapital.com>)
Responses: Re: [pgsql-advocacy] Postgres and really huge tables ("Luke Lonergan" <llonergan@greenplum.com>),
           Re: [pgsql-advocacy] Postgres and really huge tables (Josh Berkus <josh@agliodbs.com>)
List: pgsql-performance
> Is there any experience with Postgresql and really huge tables?  I'm
> talking about terabytes (plural) here in a single table.  Obviously the
> table will be partitioned, and probably spread among several different
> file systems.  Any other tricks I should know about?
>
> We have a problem of that form here.  When I asked why postgres wasn't
> being used, the opinion that postgres would "just <expletive> die" was
> given.  Personally, I'd bet money postgres could handle the problem (and
> better than the ad-hoc solution we're currently using).  But I'd like a
> couple of replies of the form "yeah, we do that here- no problem" to
> wave around.

I've done a project using 8.1 on Solaris that had a table close
to 2 TB. The funny thing is that it worked just fine, even without
partitioning.
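
For reference, since the original question mentioned partitioning:
on 8.1 that means inheritance-based partitioning with CHECK
constraints and the new constraint_exclusion setting, plus
tablespaces to spread the pieces over several file systems. A
minimal sketch (all table, column, and path names are invented
for illustration):

CREATE TABLE events (
    id          bigint,
    customer_id int,
    created_at  date,
    payload     bytea
);

-- Each partition is a child table carrying a CHECK constraint
-- describing the range of rows it holds.
CREATE TABLE events_2007_01 (
    CHECK (created_at >= '2007-01-01' AND created_at < '2007-02-01')
) INHERITS (events);

CREATE TABLE events_2007_02 (
    CHECK (created_at >= '2007-02-01' AND created_at < '2007-03-01')
) INHERITS (events);

-- New in 8.1: let the planner skip child tables whose CHECK
-- constraint rules them out for a given WHERE clause.
SET constraint_exclusion = on;

-- Tablespaces (8.0+) let you put partitions on other file systems.
CREATE TABLESPACE big_disk LOCATION '/mnt/disk2/pgdata';
ALTER TABLE events_2007_02 SET TABLESPACE big_disk;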

But, then again: the size of a single record was huge too, ~50 KB.
So there weren't insanely many records: "just" something
on the order of tens of millions.

The queries only filtered on a few int fields, so the index for the
whole thing fit into RAM.
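
Concretely, that kind of setup is just a plain btree index on the
int column the queries filter on; 8.1's pg_relation_size() and
pg_size_pretty() make it easy to check whether the index could
plausibly fit in RAM. A minimal sketch, again with invented names
(this assumes a single unpartitioned table, as above):

-- Plain btree index on the int column the queries filter on.
CREATE INDEX big_table_customer_id_idx ON big_table (customer_id);

-- How big is the index itself? Compare against available RAM.
SELECT pg_size_pretty(pg_relation_size('big_table_customer_id_idx'));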

A lot of data, but not a lot of records... I don't know if that
counts as a valid data point. I guess the people at Greenplum and/or
Sun have more exciting stories ;)


Bye, Chris.



