Re: Getting an out of memory failure.... (long email) - Mailing list pgsql-general

From Gaetano Mendola
Subject Re: Getting an out of memory failure.... (long email)
Date
Msg-id cjbvfd$mbe$1@floppy.pyrenet.fr
Whole thread Raw
In response to Re: Getting an out of memory failure.... (long email)  (Sean Shanny <shannyconsulting@earthlink.net>)
List pgsql-general
Sean Shanny wrote:
> Tom,
>
> The Analyze did in fact fix the issue.  Thanks.
>
> --sean

Given that you are using pg_autovacuum, you have to consider
a few points:

1) There is a buggy version out there that will not analyze big tables.
2) pg_autovacuum can fall short in scenarios with big tables that are
    not heavily updated or inserted into.

For point 1) I suggest checking your logs to see how the total row
count of your table is displayed; a correct version shows the row
count as a float:
     [2004-09-28 17:10:47 CEST]   table name: empdb."public"."user_logs"
     [2004-09-28 17:10:47 CEST]      relid: 17220;   relisshared: 0
     [2004-09-28 17:10:47 CEST]      reltuples: 5579780.000000;  relpages: 69465
     [2004-09-28 17:10:47 CEST]      curr_analyze_count: 171003; curr_vacuum_count: 0
     [2004-09-28 17:10:47 CEST]      last_analyze_count: 165949; last_vacuum_count: 0
     [2004-09-28 17:10:47 CEST]      analyze_threshold: 4464024; vacuum_threshold: 2790190

For point 2) I suggest scheduling an ANALYZE via cron during the day.
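As a minimal sketch of that cron approach: the snippet below builds a crontab line that analyzes one table every 6 hours. The database name "empdb" and table "public.user_logs" are taken from the log excerpt above; the schedule and any connection options are assumptions to adjust for your installation.

```shell
#!/bin/sh
# Hypothetical crontab entry: run ANALYZE on a specific large table
# every 6 hours, so pg_autovacuum's threshold logic is not relied on.
# Install it with: crontab -e  (database/table names are assumptions)
CRON_LINE="0 */6 * * * psql -d empdb -c 'ANALYZE public.user_logs;'"
echo "$CRON_LINE"
```

Analyzing just the big, rarely-updated table keeps the cron job cheap compared to a database-wide ANALYZE.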



Regards
Gaetano Mendola

