Re: very very slow inserts into very large table - Mailing list pgsql-performance

From Mark Thornton
Subject Re: very very slow inserts into very large table
Date
Msg-id 5004687B.8080908@optrak.com
In response to Re: very very slow inserts into very large table  (Claudio Freire <klaussfreire@gmail.com>)
Responses Re: very very slow inserts into very large table  (Claudio Freire <klaussfreire@gmail.com>)
List pgsql-performance
On 16/07/12 20:08, Claudio Freire wrote:
> On Mon, Jul 16, 2012 at 3:59 PM, Mark Thornton <mthornton@optrak.com> wrote:
>> 4. The most efficient way for the database itself to do the updates would be
>> to first insert all the data in the table, and then update each index in
>> turn having first sorted the inserted keys in the appropriate order for that
>> index.
> Actually, it should create a temporary btree index and merge[0] them.
> Only worth it if there are really a lot of rows.
>
> [0] http://www.ccs.neu.edu/home/bradrui/index_files/parareorg.pdf
I think 93 million would qualify as a lot of rows. However, does any
available database (commercial or open source) use this optimisation?
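The batch strategy described in the quoted message — append all the rows first, then refresh each index by sorting the new keys in that index's order and merging them in a single sequential pass — can be sketched in miniature. This is only an illustrative model (the `bulk_insert` helper and the dict-based index layout are invented for the sketch, not PostgreSQL internals); real btree maintenance works on pages, not Python lists:

```python
import heapq

def bulk_insert(table, indexes, new_rows):
    """Append new_rows to table, then bring each index up to date by
    sorting the new (key, row_position) pairs in that index's key order
    and merging the sorted run into the existing sorted entries."""
    start = len(table)
    table.extend(new_rows)
    for index in indexes:
        key = index["key"]
        # Sort only the newly inserted keys, per index.
        new_entries = sorted(
            (key(row), start + i) for i, row in enumerate(new_rows)
        )
        # Merge two already-sorted runs in one sequential pass.
        index["entries"] = list(heapq.merge(index["entries"], new_entries))

# Demo: a tiny "table" of (id, name) rows with two indexes.
table = [(1, "b"), (3, "a")]
indexes = [
    {"key": lambda r: r[0], "entries": [(1, 0), (3, 1)]},   # on id
    {"key": lambda r: r[1], "entries": [("a", 1), ("b", 0)]},  # on name
]
bulk_insert(table, indexes, [(2, "c"), (0, "d")])
```

The point of the sketch is the access pattern: each index is touched once, sequentially, instead of taking one random descent per row per index, which is where the slowdown on a 93-million-row table comes from.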

Mark


