On Wed, 20 Oct 2004 08:00:55 +0100, Gary Doades <gpd@gpdnet.co.uk> wrote:
> Unlike many other database engines, the shared buffers of Postgres are
> not a private cache of the database data. They are a working area shared
> between all the backend processes. This area needs to be tuned for the
> number of connections and overall workload, *not* the amount of your database
> that you want to keep in memory. There is still lots of debate about what
> the "sweet spot" is. Maybe there isn't one, but its not normally 75% of
> RAM.
>
> If anything, the effective_cache_size needs to be 75% of (available)
> RAM as this is telling Postgres the amount of your database the *OS* is
> likely to cache in memory.
>
> Having said that, I think you will need to define "crawling". Is it
> updates/inserts that are slow? This may be triggers/rules/referential
> integrity checking, etc., that is slowing it. If it is selects that are slow, this
> may be due to incorrect indexes or sub-optimal queries. You need to show us
> what you are trying to do and what the results are.
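
(For context, a rough sketch of those two settings in postgresql.conf,
assuming a hypothetical box with 2 GB of RAM and the default 8 KB page
size -- the numbers are illustrative, not recommendations:

    # Working area shared by the backends; sized for connections and
    # workload, not for caching the whole database.
    # 10000 pages * 8 KB = ~80 MB (illustrative only).
    shared_buffers = 10000

    # Planner hint: how much of the database the OS is likely to cache.
    # ~75% of 2 GB = ~1.5 GB, expressed in 8 KB pages: 196608.
    effective_cache_size = 196608

)
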
It's slow due to several things happening all at once. There are a lot
of inserts and updates happening, and periodically a bulk insert of
500k - 1 million rows. I'm doing a vacuum analyze every hour due to
the volume of transactions, and a vacuum full every night. All this
has caused selects to become very slow. At times, a "select count(1)"
from a table will take several minutes. Selects shouldn't have to wait
on locks held by inserts/updates, should they?
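
Is there a way to confirm whether anything is actually waiting? As a
sketch, I was thinking of checking pg_locks for ungranted requests
(assuming the standard pg_locks view, whose relation column joins to
pg_class on oid):

    -- show lock requests that have not yet been granted, i.e. the waiters
    SELECT l.pid, c.relname, l.mode
    FROM pg_locks l
    LEFT JOIN pg_class c ON c.oid = l.relation
    WHERE NOT l.granted;
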
I would just like to do anything possible to help speed this up.
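To give the list something concrete, I can post plans like this (the
table name is a placeholder for one of ours):

    -- plan and actual timings for the slow count
    EXPLAIN ANALYZE SELECT count(1) FROM some_table;
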
-Josh