multi billion row tables: possible or insane?

Hi all,

I am doing research for a project of mine where I need to store several
billion values for a monitoring and historical tracking system for a big
computer system. My current estimate is that I have to store (somehow)
around 1 billion values each month (possibly more).
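
To put that in perspective (back-of-the-envelope, assuming the inserts
are spread evenly over the month):

    1,000,000,000 rows / (30 days * 86,400 s/day) ≈ 386 rows/second

sustained, around the clock, and real bursts would of course be higher.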

I was wondering if anyone has had any experience with this kind of data
volume in a PostgreSQL database, and how it affects database design and
optimization.

What would the important issues be when setting up a database this big,
and is it at all doable? Or would it be insane to think about storing up
to 5-10 billion rows in a PostgreSQL database?
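
To make the question a bit more concrete, the rough direction I have in
mind is splitting the data by month using table inheritance. This is
only a sketch; the table and column names are invented and I have not
benchmarked anything:

    CREATE TABLE samples (
        host_id    integer          NOT NULL,
        metric_id  integer          NOT NULL,
        sampled_at timestamptz      NOT NULL,
        value      double precision
    );

    -- One child table per month; the CHECK constraint documents which
    -- range each child holds and keeps each table and its indexes at a
    -- manageable size (roughly one month's billion rows per child).
    CREATE TABLE samples_2005_03 (
        CHECK (sampled_at >= '2005-03-01' AND sampled_at < '2005-04-01')
    ) INHERITS (samples);

    CREATE INDEX samples_2005_03_metric_time_idx
        ON samples_2005_03 (metric_id, sampled_at);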

The database's performance is important. There would be no use in
storing the data if queries take ages. Queries should be quite fast if
possible.
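
As an example of the kind of query that would need to stay fast (again
hypothetical, using the sketch above):

    -- Hourly averages for one metric over one week. By naming the
    -- monthly child table directly and filtering on the indexed
    -- (metric_id, sampled_at) columns, only a small slice of a single
    -- partition should ever be read.
    SELECT date_trunc('hour', sampled_at) AS hour,
           avg(value) AS avg_value,
           max(value) AS max_value
    FROM   samples_2005_03
    WHERE  metric_id = 42
      AND  sampled_at >= '2005-03-01'
      AND  sampled_at <  '2005-03-08'
    GROUP  BY 1
    ORDER  BY 1;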

I would really like to hear people's thoughts/suggestions or "go see a
shrink, you must be mad" statements ;)

Kind regards,

Ramon Bastiaans


