> -----Original Message-----
> From: pgsql-general-owner@postgresql.org
> [mailto:pgsql-general-owner@postgresql.org] On Behalf Of Dann Corbit
> Sent: Friday, January 28, 2005 12:01 PM
> To: William Yu; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Splitting queries across servers
>
>
> Suppose that you currently need 16 GB to cache everything.
> I would install (perhaps) 32 GB of RAM for the initial configuration.
>
Good point. I'll add memory as I need it.
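On the sizing side, that cache figure maps onto two standard postgresql.conf
settings. A minimal sketch, assuming a dedicated machine and a PostgreSQL
version that accepts memory-unit values in postgresql.conf (the numbers are
illustrative, not a recommendation):

    # postgresql.conf sketch (illustrative values for a dedicated 32 GB box)
    shared_buffers = 4GB           # PostgreSQL's own buffer cache
    effective_cache_size = 24GB    # planner's estimate of total cache (PG buffers + OS cache)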
> The price of memory drops exponentially, so buying later means paying
> much less for the same amount of RAM.
>
> The reason to double the RAM up front is the expense of upgrading: the
> labor and the downtime for the computer can be very significant. Doubling
> the RAM should give one or (hopefully) two years of safety margin.
Downtime is a big deal; however, I am planning to use replication with
pgpool.
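For what it's worth, the setup I have in mind looks roughly like this. A
minimal pgpool.conf sketch using pgpool-II-style parameter names; the host
names are illustrative, and the exact parameter names should be checked
against the sample pgpool.conf shipped with your pgpool version:

    # pgpool.conf sketch (pgpool-II-style names; hosts are illustrative)
    replication_mode  = on                # send every write to all backends
    load_balance_mode = on                # spread read-only queries across backends
    backend_hostname0 = 'db1.example.com'
    backend_port0     = 5432
    backend_weight0   = 1                 # relative share of load-balanced reads
    backend_hostname1 = 'db2.example.com'
    backend_port1     = 5432
    backend_weight1   = 1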
> If the database is expected to grow exponentially, then that is another
> issue. In such a case, if it can be cost-justified, install the largest
> amount of memory your budget allows.
We can't really forecast the growth curve. My bet is that we have a
short-term (6 months) need for 32 GB, so I'll just double that to 64 GB,
which should give us visibility for about a year. I hope!
I just realized I never asked this question: what is the maximum size of a
PostgreSQL DB? Can it be anything?
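In the meantime, a quick way to keep an eye on actual usage, assuming a
release that provides the pg_database_size() and pg_size_pretty() functions:

    -- report the current database's on-disk size, human-readable
    SELECT pg_size_pretty(pg_database_size(current_database()));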
Max