Large tables (was: RAID 0 not as fast as expected) - Mailing list pgsql-performance

From Bucky Jordan
Subject Large tables (was: RAID 0 not as fast as expected)
Date
Msg-id 78ED28FACE63744386D68D8A9D1CF5D42099C7@MAIL.corp.lumeta.com
In response to Re: RAID 0 not as fast as expected  ("Luke Lonergan" <llonergan@greenplum.com>)
Responses Re: Large tables (was: RAID 0 not as fast as expected)
Re: Large tables (was: RAID 0 not as fast as expected)
List pgsql-performance
> Yes.  What's pretty large?  We've had to redefine large recently, now
> we're talking about systems with between 100TB and 1,000TB.
>
> - Luke

Well, I said large, not gargantuan :) - The largest would probably be
around a few TB, but the problem I have to deal with at the moment is
large numbers (potentially > 1 billion) of small records (hopefully I can
get each row down to a few int4's and an int2 or so) in a single table.
Currently we're testing for and targeting the 500M-record range, but the
design needs to scale to at least 2-3 times that.
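
To make the row-width math concrete, here's a minimal sketch of the kind
of table I mean; the table and column names are made up, not our actual
schema:

    -- Hypothetical narrow table: a few int4's plus an int2 per row.
    CREATE TABLE events (
        source_id  int4 NOT NULL,
        target_id  int4 NOT NULL,
        seen_at    int4 NOT NULL,  -- epoch seconds, to stay at 4 bytes
        flags      int2 NOT NULL
    );

    -- That's 14 bytes of user data per row; the per-tuple header and
    -- item pointer add a few dozen bytes on top, so 500M rows should
    -- land in the low tens of GB on disk before indexes.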

I read one of your presentations on very large databases in PG, and saw
mention of some tables over a billion rows, so that was encouraging. The
new table partitioning in 8.x will be very useful. What's the largest DB
you've seen to date on PG (in terms of total disk storage, and records
in the largest table(s))?
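
For reference, the 8.x partitioning I have in mind is the inheritance
plus CHECK constraint approach, roughly like the sketch below (again with
made-up names and range boundaries), so the planner can skip partitions
via constraint exclusion:

    -- The parent stays empty; each child carries a non-overlapping
    -- CHECK constraint on the range column.
    CREATE TABLE events_2006h2 (
        CHECK (seen_at >= 1151712000 AND seen_at < 1167609600)
    ) INHERITS (events);

    CREATE TABLE events_2007h1 (
        CHECK (seen_at >= 1167609600 AND seen_at < 1183248000)
    ) INHERITS (events);

    -- With constraint_exclusion on, a range query only touches the
    -- children whose CHECK constraints can match the WHERE clause.
    SET constraint_exclusion = on;
    SELECT count(*) FROM events
     WHERE seen_at >= 1167609600 AND seen_at < 1183248000;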

My question is at what point do I have to get fancy with those big
tables? From your presentation, it looks like PG can handle 1.2 billion
records or so as long as you write intelligent queries. (And normal PG
should be able to handle that, correct?)
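
By "intelligent queries" I mostly mean ones whose WHERE clauses let the
planner avoid reading the whole heap, e.g. (still using the hypothetical
schema above):

    -- An index on the lookup column keeps point queries from
    -- seq-scanning a billion-row heap (with inheritance, each child
    -- table needs its own index).
    CREATE INDEX events_source_idx ON events (source_id);

    -- EXPLAIN should show an index scan here, not a sequential scan.
    EXPLAIN SELECT count(*) FROM events WHERE source_id = 42;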

Also, does anyone know if/when any of the MPP stuff will be ported to
Postgres, or is the plan to keep that separate?

Thanks,

Bucky
