On Thu, 2008-12-11 at 20:09 -0500, Mag Gam wrote:
> Hello All,
>
> Running 8.3.4. My situation is a little unique. I am running on a 1
> core with 2GB of memory on Redhat Linux 5.2. My entire installation of
> pgsql is about 8GB (compressed) from pg_dump. I have 6 databases. The
> data keeps growing since I plan to add more fields to my database and
> it will increase dramatically.
The size of the compressed pg_dump is irrelevant to the size of the
database in normal operation. Not just because of the compression, but
also because indexes are not dumped, only the CREATE INDEX statements,
which could account for many gigs' worth of data that you are not
accounting for. It also does not account for dead tuples.
Either look at the size of your database on the filesystem itself, or
run this query to get a look at the size of each database:

SELECT datname, pg_size_pretty(pg_database_size(datname))
FROM pg_database;
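
If you want to see where that space actually goes, something along
these lines should work on 8.3 (run it in each database; it lists the
largest tables with and without their indexes, which is a rough sketch
rather than an exact accounting of bloat):

SELECT relname,
       pg_size_pretty(pg_relation_size(oid))       AS heap_size,
       pg_size_pretty(pg_total_relation_size(oid)) AS with_indexes
FROM pg_class
WHERE relkind = 'r'          -- ordinary tables only
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;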
> My goal is I don't want to use a lot of memory! My storage is fairly
> fast, I can do about 250Mb/sec (sustained). I would like to leverage
> my I/O instead of memory, even though I will suffer performance
> problems.
Before giving you suggestions, may I ask why? Memory is cheap these
days, and intentionally limiting it seems like a bad idea - especially
if you are expecting performance. This really may behave in ways you
don't expect, like saturating your I/O system.
That said, if you really really want to do this, set your shared_buffers
to a low value, bump up random_page_cost, and set work_mem and
maintenance_work_mem to lower values.
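
In postgresql.conf terms that might look roughly like this. The numbers
are only illustrative starting points for a 2GB box, not tested
recommendations, and shared_buffers needs a restart to take effect:

shared_buffers = 32MB         # keep the dedicated buffer cache small
work_mem = 1MB                # per-sort/hash memory, kept low to cap per-query use
maintenance_work_mem = 16MB   # used by VACUUM, CREATE INDEX, etc.
random_page_cost = 6.0        # above the 4.0 default, since little will be cached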
--
Brad Nicholson 416-673-4106
Database Administrator, Afilias Canada Corp.