Re: Large index operation crashes postgres - Mailing list pgsql-general

From Frans Hals
Subject Re: Large index operation crashes postgres
Date
Msg-id 39af1ed21003261643k676b96c3s11bfeb9ecbc875d4@mail.gmail.com
In response to Re: Large index operation crashes postgres  (Frans Hals <fhals7@googlemail.com>)
List pgsql-general
The index mentioned below was created in a few minutes without problems.
I dropped it and created it again. It uses around 36 % of memory while
creating; after completion, postmaster stays at 26 %.
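
In case it helps reproduce the numbers, one way to watch a backend's
memory while the build runs is roughly this (a sketch for a Unix-like
system; the ps flags are what I use on Linux and may differ elsewhere):

    -- In the session that will run CREATE INDEX, note its backend PID first:
    SELECT pg_backend_pid();

    -- Then, from a shell, watch that backend's resident and virtual size
    -- while the index builds:
    --   watch -n 5 'ps -o pid,rss,vsz,cmd -p <backend pid>'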


> I'm not sure what you have in mind for generating a self-contained
> test that exhibits similar bloat.
> I have started an index creation on my data without calling PostGIS
> functions, just to keep it busy:
> <CREATE INDEX idx_placex_sector ON placex USING btree
> (substring(geometry,1,100), rank_address, osm_type, osm_id);>
> This is now running against the 50,000,000 rows in placex. I will
> update you on the memory usage it takes.
>
>> Can you generate a self-contained test case that exhibits similar bloat?
>> I would think it's probably not very dependent on the specific data in
>> the column, so a simple script that constructs a lot of random data
>> similar to yours might be enough, if you would rather not show us your
>> real data.
>>
>>                        regards, tom lane
>>
>
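
Regarding the self-contained test case: a rough sketch of what I could
put together with random data follows. The table and column definitions
(placex_test, a bytea stand-in for the geometry column, the row count)
are only an approximation of placex, not the real schema:

    CREATE TABLE placex_test (
        osm_id       bigint,
        osm_type     char(1),
        rank_address smallint,
        geometry     bytea        -- stand-in for the PostGIS geometry column
    );

    -- Fill it with pseudo-random rows; scale the series up toward
    -- 50,000,000 to approach the size of the real table.
    INSERT INTO placex_test
    SELECT i,
           'N',
           (random() * 30)::smallint,
           decode(repeat(md5(i::text), 8), 'hex')   -- 128 random-looking bytes
    FROM generate_series(1, 1000000) AS s(i);

    -- Same shape of index as the one quoted above, so the memory behaviour
    -- should be comparable if the bloat is not data-dependent.
    CREATE INDEX idx_placex_test_sector ON placex_test USING btree
        (substring(geometry, 1, 100), rank_address, osm_type, osm_id);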
