Memory usage during vacuum - Mailing list pgsql-general

From: Shelby Cain
Subject: Memory usage during vacuum
Date:
Msg-id: 20040325145507.44739.qmail@web41609.mail.yahoo.com
Responses: Re: Memory usage during vacuum
List: pgsql-general
Version: PostgreSQL 7.4.1 on i686-pc-cygwin, compiled by GCC gcc (GCC) 3.3.1 (cygming special)

postgresql.conf settings:

tcpip_socket = true
max_connections = 16
shared_buffers = 2048           # min 16, at least max_connections*2, 8KB each
sort_mem = 2048                 # min 64, size in KB
vacuum_mem = 8192               # min 1024, size in KB
wal_buffers = 16                # min 4, 8KB each
checkpoint_segments = 9         # in logfile segments, min 1, 16MB each
effective_cache_size = 3000     # typically 8KB each
random_page_cost = 2            # units are one sequential page fetch cost
cpu_index_tuple_cost = 0.0001   # 0.001 (same)
default_statistics_target = 300 # range 1-1000
log_timestamp = true
stats_start_collector = true
stats_command_string = true
stats_block_level = true
stats_row_level = true
stats_reset_on_server_start = false
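
In case it matters, I double-checked from psql that the backend is
actually running with these values (quick check only; as far as I know
nothing is overridden per session or per database):

  SHOW shared_buffers;
  SHOW sort_mem;
  SHOW vacuum_mem;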

This is on a workstation, so I've purposely limited the amount of
memory PostgreSQL would use.  I had assumed that some combination of
(shared_buffers * 8KB) + vacuum_mem, plus a little overhead, would be
in the neighborhood of the maximum memory a backend uses during
VACUUM ANALYZE (rough arithmetic below).  However, I've noticed that
when it hits some very large tables the backend's memory usage soars
to 100+ MB.  I'm trying to keep PostgreSQL's memory usage under 40 MB
under all conditions so that other services and applications on the
machine don't grind to a halt due to swapping.  Is there any way to
achieve that?

Regards,

Shelby Cain

