Re: poor performance when recreating constraints on large tables - Mailing list pgsql-performance

From: Claudio Freire
Subject: Re: poor performance when recreating constraints on large tables
Date: Wed, Jun 8, 2011
Msg-id: BANLkTi=3w8+sDbYZeDz7Mpj_QMEErEURVQ@mail.gmail.com
In response to: poor performance when recreating constraints on large tables (Mike Broers <mbroers@gmail.com>)
List: pgsql-performance
---------- Forwarded message ----------
From: Claudio Freire <klaussfreire@gmail.com>
Date: Wed, Jun 8, 2011 at 11:57 PM
Subject: Re: [PERFORM] poor performance when recreating constraints on
large tables
To: Samuel Gendler <sgendler@ideasculptor.com>


On Wed, Jun 8, 2011 at 9:57 PM, Samuel Gendler
<sgendler@ideasculptor.com> wrote:
> Sure, but if it is a query that is slow enough for a time estimate to be
> useful, odds are good that stats that are that far out of whack would
> actually be interesting to whoever is looking at the time estimate, so
> showing some kind of 'N/A' response once things have gotten out of whack
> wouldn't be unwarranted.  Not that I'm suggesting that any of this is a
> particularly useful exercise.  I'm just playing with the original thought
> experiment suggestion.

There's a trick to get exactly that:

Do an EXPLAIN, fetch the expected row count for the result set, then
add a dummy sequence and a dummy field to the result set:
"nextval(...) AS progress".

Now, you probably won't get to read the progress column itself, but
that doesn't matter. Open up another transaction and query the
sequence there. Sequences are non-transactional.

All the smarts about figuring out the expected result set's size
remain in the application, which is fine by me.
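A minimal sketch of the idea in SQL (the table, column, and sequence
names here are made up for illustration):

    -- One-time setup: a plain (non-temporary) sequence, so a second
    -- session can see it.
    CREATE SEQUENCE progress_seq;

    -- Step 1: get the planner's estimated row count for the query,
    -- e.g. "Seq Scan on big_table  (cost=... rows=1000000 ...)".
    EXPLAIN SELECT * FROM big_table WHERE status = 'pending';

    -- Step 2: reset the counter, then run the real query with the
    -- dummy progress column added; nextval() is bumped once per row
    -- produced.
    SELECT setval('progress_seq', 1, false);
    SELECT nextval('progress_seq') AS progress, t.*
      FROM big_table t
     WHERE status = 'pending';

    -- Step 3: from another session, poll the sequence. Sequence
    -- updates are visible immediately, outside any transaction:
    SELECT last_value FROM progress_seq;
    -- progress fraction ~= last_value / estimated rows

Note the sequence can't be TEMPORARY if a second session is to poll
it, since temporary objects are only visible to the session that
created them.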
