Hi again all,
I've tested postgres 7.3.4 on Linux version 2.4.17,
and this is what I found:
The initial instance took up 8372K, and this fluctuated
between roughly 8372K and 10372K, plus about 3500K for
every connection.
I did quite a few transactions on both connections, plus
a few vacuums and a pg_dump, and the total memory usage
didn't seem to go over 16M.
I set all the _buffers, _mem and _fsm settings to the minimum
and restarted every time, but this made no noticeable
difference to the total memory usage.
(I used a program called gmemusage to get these stats.)
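For reference, this is roughly the sort of thing I had in
postgresql.conf for the "minimum" runs (numbers are approximate and
from memory, so the exact minimums allowed in 7.3.4 may differ a bit):

   max_connections = 8
   shared_buffers = 16        # needs to be at least 2 x max_connections
   sort_mem = 64              # KB
   vacuum_mem = 1024          # KB
   wal_buffers = 4
   max_fsm_relations = 10
   max_fsm_pages = 1000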
On the same machine, I tested postgres 7.1.2 with basically
the same conf options (excluding the _fsm settings) and got the following:
The initial instance was 1772K and fluctuated up to roughly 4000K,
plus about 3400K for every connection.
Doing the same transactions, vacuum + pg_dump, total
memory usage didn't really go over 11M,
which was exactly what I needed.
Although I've lived through some of the shortcomings of
7.1.2, it is still very stable, and works perfectly for
what it is going to be used for.
Here again, I was only able to restrict things a little
by changing the configuration options; there was no major
difference in memory usage.
Regards
Stef
On Mon, 6 Oct 2003 09:55:51 +0200
Stef <svb@ucs.co.za> wrote:
=> Thanks for the replies,
=>
=> On Fri, 3 Oct 2003 11:08:48 -0700
=> Josh Berkus <josh@agliodbs.com> wrote:
=> => 1. Make sure that the WAL files (pg_xlog) are on a separate disk from the
=> => database files, either through mounting or symlinking.
=>
=> I'm not sure I understand how this helps?
=>
=> => 2. Tweak the .conf file for low vacuum_mem (1024?), but vacuum very
=> => frequently, like every 1-5 minutes. Spend some time tuning your
=> => max_fsm_pages to the ideal level so that you're not allocating any extra
=> => memory to the FSM.
=> =>
=> => 3. If your concern is *average* CPU/RAM consumption, and not peak load
=> => activity, increase wal_files and checkpoint_segments to do more efficient
=> => batch processing of pending updates at the cost of some disk space. If peak
=> => load activity is a problem, don't do this.
=> =>
=> => 4. Tune all of your queries carefully to avoid anything requiring a
=> => RAM-intensive merge join or CPU-eating calculated expression hash join, or
=> => similar computation-or-RAM-intensive operations.
=>
=> Thanks, I'll try some of these, and post the results.
=> The actual machines seem to be Pentium I machines
=> with 32M RAM. I've gathered that it is theoretically
=> possible, so now to go and try it.
=>
=> Regards
=> Stef
=>
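PS : If suggestion 1 just means relocating pg_xlog to another disk and
leaving a symlink behind, I assume it comes down to something like this
(paths are only examples, and the postmaster must be stopped first):

   pg_ctl -D /usr/local/pgsql/data stop
   mv /usr/local/pgsql/data/pg_xlog /mnt/disk2/pg_xlog
   ln -s /mnt/disk2/pg_xlog /usr/local/pgsql/data/pg_xlog
   pg_ctl -D /usr/local/pgsql/data start

I still don't see how it helps the memory usage, but it's easy enough
to test.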
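PPS : For suggestion 2, I take it a cron entry along these lines would
be close enough (the database name is just an example):

   # /etc/crontab -- vacuum every 5 minutes as the postgres user
   */5 * * * *   postgres   vacuumdb -q mydb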