Re: Deleting millions of rows - Mailing list pgsql-performance

From: Scott Marlowe
Subject: Re: Deleting millions of rows
Date:
Msg-id: dcc563d10902021433v5398d5deo809c9ba0ae9a38e5@mail.gmail.com
In response to: Re: Deleting millions of rows (Brian Cox <brian.cox@ca.com>)
List: pgsql-performance
On Mon, Feb 2, 2009 at 3:01 PM, Brian Cox <brian.cox@ca.com> wrote:

> In production, the table on which I ran DELETE FROM grows constantly with
> old data removed in bunches periodically (say up to a few 100,000s of rows
> [out of several millions] in a bunch). I'm assuming that auto-vacuum/analyze
> will allow Postgres to maintain reasonable performance for INSERTs and
> SELECTs on it; do you think that this is a reasonable assumption?

Yes. As long as you're deleting a small enough percentage that the table
doesn't get bloated (100k out of several million is a good ratio), autovacuum
is running, AND you have enough FSM entries to track the dead tuples, you're
gold.
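
A quick sanity check for both conditions (sketch only; "your_table" is a
placeholder, and this assumes an 8.3-era server where the free space map is
still a fixed-size, server-wide setting):

    -- Is autovacuum actually visiting the table, and how many dead rows remain?
    SELECT relname, last_autovacuum, last_autoanalyze, n_dead_tup
      FROM pg_stat_user_tables
     WHERE relname = 'your_table';

    -- Current FSM limits (max_fsm_pages / max_fsm_relations in postgresql.conf;
    -- changing them needs a server restart on 8.3):
    SHOW max_fsm_pages;
    SHOW max_fsm_relations;

    -- A database-wide VACUUM VERBOSE ends with a summary of how many FSM page
    -- slots are needed vs. configured; if the number needed exceeds
    -- max_fsm_pages, raise it, otherwise freed space goes untracked and the
    -- table bloats.
    VACUUM VERBOSE;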
