Re: Out of memory - Mailing list pgsql-general

From John R Pierce
Subject Re: Out of memory
Msg-id 4D9B6D1F.9050406@hogranch.com
In response to Out of memory  (Jeremy Palmer <JPalmer@linz.govt.nz>)
List pgsql-general
On 04/05/11 2:50 AM, Jeremy Palmer wrote:
> I've been having repeated troubles trying to get a PostgreSQL app to play nicely on Ubuntu. I recently posted a
> message on this list about an out of memory error and got a resolution by reducing the work_mem setting. However I'm
> now getting further out of memory issues during the same stage of the plpgsql function as mentioned before.
>
> The function itself is run as part of larger transaction which does the following:
>
> 1/ Maintains 104 tables (15 PostGIS tables), by loading and applying incremental table changes. A typical incremental
> load will maintain about 10,000 rows.
>
> 2/ When each one of these tables is updated, an after trigger is fired that maintains an associated table revision
> table.
>
> 3/ After all of the tables are maintained, a plpgsql function is called to build/maintain a set of de-normalised
> tables. These tables total about 20GB. Each one of these tables is compared against the previous table revision to
> determine its row changes. It's in this function that the out of memory exception is occurring.

a few random questions...

Does that all really have to be a single transaction?

Do you really need to use triggers for your revision tracking? Can't you
rely on your daily update cycle to set the revision information manually?
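As a minimal sketch of that trigger-free alternative (the table and
column names here are hypothetical, not from the original schema): after
each daily incremental load, the revision rows can be recorded once with
a single set-based statement instead of a per-row AFTER trigger.

```sql
-- Hypothetical: record this load's changes in one set-based statement
-- at the end of the daily batch, rather than firing a trigger per row.
INSERT INTO parcels_revision (parcel_id, revision_id, changed_at)
SELECT id,
       42,          -- this load's revision number, assigned by the batch job
       now()
FROM   parcels
WHERE  last_modified >= current_date;
```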

Is it really necessary to generate massive denormalized tables, rather
than using views to join the data?
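A sketch of the view-based alternative (again with hypothetical table
and column names): the join is evaluated at query time instead of being
materialised into a 20GB table.

```sql
-- Hypothetical: expose the joined data as a view instead of building
-- and diffing a large denormalised table.
CREATE VIEW parcel_summary AS
SELECT p.id,
       p.title_no,
       o.owner_name,
       r.revision_id
FROM   parcels p
JOIN   owners  o ON o.parcel_id = p.id
JOIN   parcels_revision r ON r.parcel_id = p.id;
```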


> shared_buffers = 512MB
> maintenance_work_mem = 512MB
> temp_buffers = 256MB
> work_mem = 1MB
> wal_buffers = 16MB
> effective_cache_size = 4094MB
>
> The size of the database is 350GB. The typical number of users connected to the database is 1 or 2. This database is
> used for loading external data, managing revision table information, and generating and outputting de-normalised
> datasets, so it does not have a high number of transactions running. Typically 1 large one per day.


with only 1-2 connections, you certainly could increase work_mem.
Alternately, this single giant transaction could set a larger work_mem
for itself, which would apply only to it.   Personally, given your 8GB
system and what you've described, I think I'd set the tuning parameters
something like...

    shared_buffers = 1GB
    maintenance_work_mem = 128MB
    temp_buffers = 64MB
    work_mem = 16MB
    wal_buffers = 16MB
    effective_cache_size = 4094MB
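The per-transaction work_mem override can be sketched like this (the
256MB figure and the function name are assumptions for illustration, not
tested values):

```sql
BEGIN;
-- Raise work_mem for this transaction only; the setting automatically
-- reverts at COMMIT or ROLLBACK.
SET LOCAL work_mem = '256MB';
-- ... run the large maintenance work here ...
SELECT maintain_denormalised_tables();  -- hypothetical function name
COMMIT;
```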


adjust effective_cache_size to somewhat less than the 'cached' value
shown in `free -m` after your system has been running for a while.
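For example, the page-cache figure can be read like this (assumes a
Linux box with procps `free`; on recent procps the relevant column of
the Mem: row is labelled "buff/cache", on older versions "cached"):

```shell
# After the system has been running for a while, show memory use in MiB:
free -m

# Extract the buff/cache column of the Mem: row (recent procps layout);
# set effective_cache_size somewhat below this steady-state value.
free -m | awk '/^Mem:/ {print $6}'
```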
