Re: Deleting millions of rows - Mailing list pgsql-performance

From Jerry Champlin
Subject Re: Deleting millions of rows
Msg-id 05d401c9856a$9c8b4410$d5a1cc30$@com
In response to Deleting millions of rows  (Brian Cox <brian.cox@ca.com>)
List pgsql-performance
Brian:

One approach we use for large tables is to partition them and then drop
partitions as the data becomes obsolete.  That way you never face a
massive DELETE in the first place.  Our general rule is to never delete
rows from a large table, because row-by-row deletion is too slow; we
have found this to be the preferred approach regardless of database
platform.  A sketch of what this looks like is below.
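
For what it's worth, here is a minimal sketch of how that looks on 8.3,
where partitioning is done with table inheritance and constraint
exclusion.  The parent table ts_defects comes from your message; the
"created" timestamp column and the monthly ranges are assumptions for
illustration:

-- One child table per month; the CHECK constraint tells the planner
-- which rows live here.  The "created" column is hypothetical.
CREATE TABLE ts_defects_2009_02 (
    CHECK (created >= DATE '2009-02-01'
       AND created <  DATE '2009-03-01')
) INHERITS (ts_defects);

-- Let the planner skip child tables whose CHECK constraint rules
-- them out of a given query.
SET constraint_exclusion = on;

-- When the month's data is obsolete, drop the whole child table.
-- This is a near-instant catalog operation, unlike DELETE, and it
-- leaves no dead rows behind for VACUUM to clean up.
DROP TABLE ts_defects_2009_02;

New rows get routed to the right child table, typically with an INSERT
trigger or rule on the parent.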

-Jerry

Jerry Champlin|Absolute Performance Inc.|Mobile: 303-588-2547


-----Original Message-----
From: pgsql-performance-owner@postgresql.org
[mailto:pgsql-performance-owner@postgresql.org] On Behalf Of Brian Cox
Sent: Monday, February 02, 2009 11:18 AM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Deleting millions of rows

I'm using 8.3.5. Table ts_defects has 48M rows. Through psql:
delete from ts_defects;
Result: out of memory/Can't allocate size: 32
I then did 10 or so deletes to get rid of the rows. Afterwards,
inserts into or queries on this table performed significantly more
slowly. I tried a vacuum analyze, but this didn't help. To fix this, I
dumped and restored the database.

1) why can't postgres delete all rows in a table if it has millions of rows?
2) is there any other way to restore performance other than restoring
the database?

Thanks,
Brian

