On Wed, Apr 26, 2006 at 05:24:27PM -0500, Wes wrote:
> On 4/25/06 12:24 PM, "Tom Lane" <tgl@sss.pgh.pa.us> wrote:
>
> > I'm inclined to think that the right solution is to fix UpdateStats and
> > setRelhasindex so that they don't use simple_heap_update, but call
> > heap_update directly and cope with HeapTupleUpdated (by looping around
> > and trying the update from scratch).
>
> Is there a verdict on what can/should/will be done for this? As far as I
> can tell from all this, there appears to be no workaround (even kludgy)
> other than to not build indexes in parallel - not an attractive option.
>
> If I'm only building two indexes simultaneously, what would happen if I
> tried to lock pg_class in the shorter index build transaction? Besides
> seeming like a bad idea...

Try running the first index build by itself and then running the rest
in parallel. Once pg_class holds an exact tuple count, the later builds
should find nothing to change and skip the update, so the conflict
hopefully won't happen. If you already know the exact tuple count you
could also try updating pg_class manually beforehand, though that's not
exactly a supported option...
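
For example (only a sketch: 'mytable' and the row count are
placeholders, and as noted this kind of direct catalog edit is
unsupported, so only try it as a superuser when you really do know the
exact count):

    -- See what pg_class currently thinks about the table.
    SELECT relname, relpages, reltuples
      FROM pg_class
     WHERE oid = 'mytable'::regclass;

    -- Unsupported hack: hand-set the exact row count before the
    -- parallel index builds start.
    UPDATE pg_class
       SET reltuples = 123456789   -- your known exact row count
     WHERE oid = 'mytable'::regclass;
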
Another possibility would be to patch the code so that if the tuple
count found by CREATE INDEX is within X percent of what's already in
pg_class, it skips the update. Since there's already code that checks
whether the count is an exact match, the patch should be pretty simple,
and the community might well accept it.
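
The actual change would of course live in the backend near the code Tom
mentioned, but the test it would apply is just a relative-difference
comparison. In SQL terms, with a made-up 10% threshold and placeholder
table name and count, it amounts to roughly:

    -- Purely illustrative: 12300000 stands for the count CREATE INDEX
    -- just arrived at; 0.10 (10%) is an arbitrary example threshold.
    SELECT abs(12300000 - reltuples) <= reltuples * 0.10 AS close_enough
      FROM pg_class
     WHERE oid = 'mytable'::regclass;
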
BTW, why are you limiting yourself to 2 indexes at once? I'd expect that
for a table larger than memory you'd be better off building all the
indexes at once so that everything runs off a single sequential scan.
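
To try that, just issue all of the CREATE INDEX statements from
separate sessions at the same time; with placeholder table/column
names it looks like this:

    -- Run each statement in its own psql session/connection, started
    -- at roughly the same time.
    -- session 1:
    CREATE INDEX i_mytable_a ON mytable (a);
    -- session 2:
    CREATE INDEX i_mytable_b ON mytable (b);
    -- session 3:
    CREATE INDEX i_mytable_c ON mytable (c);
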
--
Jim C. Nasby, Sr. Engineering Consultant jnasby@pervasive.com
Pervasive Software http://pervasive.com work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461