
From: Craig Ringer
Subject: Re: Embedded VACUUM
Date:
Msg-id: 4E6495DE.2060601@ringerc.id.au
In response to: Embedded VACUUM (C Pond <cpondwork@yahoo.com>)
List: pgsql-performance
On 3/09/2011 8:25 AM, C Pond wrote:
> I'm running a labour-intensive series of queries on a medium-sized dataset
> (~100,000 rows) with geometry objects and both gist and btree indices.
>
> The queries are embedded in plpgsql, and have multiple updates, inserts and
> deletes to the tables as well as multiple selects which require the indices
> to function correctly for any kind of performance.
>
> My problem is that I can't embed a vacuum analyze to reset the indices and
> speed up processing, and the queries get slower and slower as the un-freed
> space builds up.
>
> From my understanding, transaction commits within batches are not allowed
> (so no vacuum embedded within queries). Are there plans to change this? Is
> there a way to reclaim dead space for tables that have repeated inserts,
> updates and deletes on them?
Not, AFAIK, until the transaction doing the deletes/updates commits, and
until any older SERIALIZABLE transactions and any older running
READ COMMITTED statements have finished as well.

This is one of the areas where Pg's lack of true stored procedures bites
you. You'll need to do the work via an out-of-process helper over a
regular connection, or do your work via dblink to achieve the same effect.
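For the dblink route, something along these lines might work (an untested
sketch; it assumes the dblink contrib module is installed, and the connection
name, connection string and table name are placeholders for your own setup):

    -- Inside your plpgsql function, between batches of work:
    PERFORM dblink_connect('maint', 'dbname=' || current_database());
    -- The second connection runs VACUUM in its own autocommit
    -- transaction, so the "VACUUM cannot run inside a transaction
    -- block" restriction doesn't apply there.
    PERFORM dblink_exec('maint', 'VACUUM ANALYZE my_table');
    PERFORM dblink_disconnect('maint');

Keep the caveat above in mind, though: rows deleted or updated by your own
still-open transaction stay visible to it, so that VACUUM can only reclaim
space left behind by transactions that have already committed.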

--
Craig Ringer
