Hello All,
Running 8.3.4. My situation is a little unique: I am running on a single
core with 2GB of memory on Red Hat Linux 5.2. My entire installation of
pgsql is about 8GB (compressed) from pg_dump, spread across 6 databases.
The data keeps growing, and since I plan to add more fields to my
database, it will increase dramatically.
My goal is to not use a lot of memory. My storage is fairly fast -- I can
sustain about 250MB/sec -- so I would like to lean on I/O instead of
memory, even though I will take some performance hit.
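For what it's worth, this is roughly the direction I had in mind for
postgresql.conf -- the values below are just illustrative guesses on my
part, not something I have tested:

    shared_buffers = 128MB          # keep PostgreSQL's own buffer cache small
    work_mem = 4MB                  # per-sort/per-hash memory, kept low
    maintenance_work_mem = 64MB     # VACUUM, CREATE INDEX, etc.
    effective_cache_size = 1GB      # hint to the planner about the OS cache
    max_connections = 20            # fewer backends, less per-backend memory

Does that look like a sane starting point for trading memory for I/O?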
Also, is it possible to make the database data/log files (the binary
files on disk) larger? I am thinking of making them 1GB each instead of
many small files, since my filesystem performs better with larger
files.
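In case it is not clear what I mean by the binary files, I am talking
about the files under the data directory, which I have been eyeballing
with something like this (paths from memory, so take them loosely):

    ls -lh $PGDATA/pg_xlog/        # the WAL segment files
    du -sh $PGDATA/base/*          # per-database data files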
Any ideas?
TIA