I am trying a simple access of a table and get an out of memory error.
How do I avoid this issue? It seems I have some configuration setting
wrong. Our system has 24GB of memory and is dedicated to the postgres
database.

Background information:
aquarec=> explain analyze verbose select * from ens_memb;
                                                        QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------
 Seq Scan on ens_memb  (cost=0.00..719893.12 rows=32216212 width=62) (actual time=4.954..37513.377 rows=32216154 loops=1)
   Output: id, shefpc, bwmon, ewmon, pb, tb, cdate, vdate, source, tyr, val
 Total runtime: 39588.386 ms
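One thing worth noting before looking at the server settings: a plain `select *` here makes the client buffer all 32 million rows at once, which by itself can exhaust memory on the client side. A hedged sketch of batched retrieval through a server-side cursor (cursor name `ens_cur` is made up; the table is the one from the plan above):

```sql
-- Fetch the table in batches so the client never holds the full
-- 32-million-row result set in memory at once.
BEGIN;
DECLARE ens_cur CURSOR FOR SELECT * FROM ens_memb;
FETCH FORWARD 10000 FROM ens_cur;  -- repeat until FETCH returns no rows
CLOSE ens_cur;
COMMIT;
```

In psql the same effect can be had with `\set FETCH_COUNT 10000`, which makes psql use a cursor behind the scenes.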
#------------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#------------------------------------------------------------------------------
# - Memory -
shared_buffers = 6144MB                 # min 128kB
                                        # (change requires restart)
#temp_buffers = 8MB                     # min 800kB
max_prepared_transactions = 5           # zero disables the feature
                                        # (change requires restart)
# Note:  Increasing max_prepared_transactions costs ~600 bytes of shared memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
# It is not advisable to set max_prepared_transactions nonzero unless you
# actively intend to use prepared transactions.
work_mem = 48MB                         # min 64kB
maintenance_work_mem = 256MB            # min 1MB
#max_stack_depth = 2MB                  # min 100kB
# - Kernel Resource Usage -
#max_files_per_process = 1000           # min 25
                                        # (change requires restart)
#shared_preload_libraries = ''          # (change requires restart)
# - Cost-Based Vacuum Delay -
#vacuum_cost_delay = 0ms                # 0-100 milliseconds
#vacuum_cost_page_hit = 1               # 0-10000 credits
#vacuum_cost_page_miss = 10             # 0-10000 credits
#vacuum_cost_page_dirty = 20            # 0-10000 credits
#vacuum_cost_limit = 200                # 1-10000 credits
# - Background Writer -
#bgwriter_delay = 200ms                 # 10-10000ms between rounds
#bgwriter_lru_maxpages = 100            # 0-1000 max buffers written/round
#bgwriter_lru_multiplier = 2.0          # 0-10.0 multiplier on buffers scanned/round
# - Asynchronous Behavior -
#effective_io_concurrency = 1           # 1-1000. 0 disables prefetching
#------------------------------------------------------------------------------
# WRITE AHEAD LOG
#------------------------------------------------------------------------------
# - Settings -
#fsync = on                             # turns forced synchronization on or off
#synchronous_commit = on                # immediate fsync at commit
#wal_sync_method = fsync                # the default is the first option
                                        # supported by the operating system:
                                        #   open_datasync
                                        #   fdatasync
                                        #   fsync
                                        #   fsync_writethrough
                                        #   open_sync
#full_page_writes = on                  # recover from partial page writes
wal_buffers = 8MB                       # min 32kB
                                        # (change requires restart)
#wal_writer_delay = 200ms               # 1-10000 milliseconds
#commit_delay = 0                       # range 0-100000, in microseconds
#commit_siblings = 5                    # range 1-1000
# - Checkpoints -
checkpoint_segments = 32                # in logfile segments, min 1, 16MB each
#checkpoint_timeout = 5min              # range 30s-1h
#checkpoint_completion_target = 0.5     # checkpoint target duration, 0.0 - 1.0
#checkpoint_warning = 30s               # 0 disables
# - Archiving -
#archive_mode = off                     # allows archiving to be done
                                        # (change requires restart)
#archive_command = ''                   # command to use to archive a logfile segment
#archive_timeout = 0                    # force a logfile segment switch after this
                                        # number of seconds; 0 disables
#------------------------------------------------------------------------------
# QUERY TUNING
#------------------------------------------------------------------------------
# - Planner Method Configuration -
#enable_bitmapscan = on
#enable_hashagg = on
#enable_hashjoin = on
#enable_indexscan = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on
# - Planner Cost Constants -
#seq_page_cost = 1.0                    # measured on an arbitrary scale
random_page_cost = 3.0                  # same scale as above
#cpu_tuple_cost = 0.01                  # same scale as above
#cpu_index_tuple_cost = 0.005           # same scale as above
#cpu_operator_cost = 0.0025             # same scale as above
effective_cache_size = 12288MB
# - Genetic Query Optimizer -
#geqo = on
#geqo_threshold = 12
#geqo_effort = 5                        # range 1-10
#geqo_pool_size = 0                     # selects default based on effort
#geqo_generations = 0                   # selects default based on effort
#geqo_selection_bias = 2.0              # range 1.5-2.0
# - Other Planner Options -
default_statistics_target = 100 # range 1-10000
#constraint_exclusion = partition # on, off, or partition
#cursor_tuple_fraction = 0.1 # range 0.0-1.0
#from_collapse_limit = 8
#join_collapse_limit = 8                # 1 disables collapsing of explicit
                                        # JOIN clauses
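For scale, a quick back-of-the-envelope check using the numbers from the plan above (rows=32216154, width=62) shows why a single unbatched `select *` is heavy regardless of the server settings; the raw tuple data alone is around 2 GiB before any client-side row overhead:

```python
# Estimate the raw result-set size from the EXPLAIN output above:
# rows=32216154 at an average width of 62 bytes per row.
rows = 32_216_154
width_bytes = 62

est_bytes = rows * width_bytes
print(f"~{est_bytes / 2**30:.2f} GiB of raw tuple data before client overhead")
```

The client's in-memory representation of each row is typically several times larger than the on-disk width, so the working set can easily approach the machine's 24GB even with these settings.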