Re: Large database help - Mailing list pgsql-admin

From: Tom Lane
Subject: Re: Large database help
Date:
Msg-id: 6598.987977323@sss.pgh.pa.us
In response to: Large database help (xbdelacour@yahoo.com)
Responses: Re: Large database help (xbdelacour@yahoo.com)
           Re: Large database help (Bruce Momjian <pgman@candle.pha.pa.us>)
List: pgsql-admin
xbdelacour@yahoo.com writes:
> Hi everyone, I'm more or less new to PostgreSQL and am trying to setup a
> rather large database for a data analysis application. Data is collected
> and dropped into a single table, which will become ~20GB. Analysis happens
> on a Windows client (over a network) that queries the data in chunks across
> parallel connections. I'm running the DB on a dual gig p3 w/ 512 memory
> under Redhat 6 (.0 I think).

> I am setting 'echo 402653184 >/proc/sys/kernel/shmmax', which is being
> reflected in top. I also specify '-B 48000' when starting postmaster.

Hm.  384M shared memory request on a 512M machine.  I'll bet that the
kernel is deciding you don't need all that stuff in RAM, and is swapping
out chunks of the shared memory region to make room for processes and
its own disk buffering activity.  Try a more reasonable -B setting, like
maybe a quarter of your physical RAM, max.  There's no percentage in -B
large enough to risk getting swapped.  Moreover, any physical RAM that
does happen to be free will be exploited by the kernel for disk
buffering at its level, so you aren't really saving any I/O by
increasing Postgres' internal buffering.

BTW, what Postgres version are you using?

            regards, tom lane
