Re: How to insert a bulk of data with unique-violations very fast - Mailing list pgsql-performance

From Scott Marlowe
Subject Re: How to insert a bulk of data with unique-violations very fast
Date
Msg-id AANLkTimwLi6PkFYltlyXecsDQaI6yD3y22WLmWdgwe_R@mail.gmail.com
In response to How to insert a bulk of data with unique-violations very fast  (Torsten Zühlsdorff <foo@meisterderspiele.de>)
List pgsql-performance
On Tue, Jun 1, 2010 at 9:03 AM, Torsten Zühlsdorff
<foo@meisterderspiele.de> wrote:
> Hello,
>
> I have a set of unique data with about 150,000,000 rows. Regularly I get a
> list of data containing many times more rows than the already stored
> set, often around 2,000,000,000 rows. Within these rows are many duplicates,
> and often the already stored data as well.
> I want to store only the entries that are not already present, and I do
> not want to store duplicates. Example:

The standard method in PostgreSQL is to load the data into a temp table,
then insert into the main table only those rows that do not already
exist there (INSERT ... SELECT ... WHERE NOT EXISTS).
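A minimal sketch of that approach. The table and column names ("items", "id") and the file path are illustrative, not from the thread; adjust them to the real schema:

```sql
-- Staging table with the same shape as the target, but no constraints,
-- so the bulk load cannot fail on duplicates.
CREATE TEMP TABLE items_staging (LIKE items INCLUDING DEFAULTS);

-- Bulk-load the incoming file (server-side path; use \copy in psql
-- for a client-side file).
COPY items_staging FROM '/path/to/bulk_data.csv' WITH (FORMAT csv);

-- Insert only rows not already present, de-duplicating the staging
-- data itself with DISTINCT ON.
INSERT INTO items
SELECT DISTINCT ON (id) *
FROM items_staging s
WHERE NOT EXISTS (
    SELECT 1 FROM items i WHERE i.id = s.id
);
```

Note that this thread predates PostgreSQL 9.5; on modern versions, INSERT ... ON CONFLICT DO NOTHING can replace the WHERE NOT EXISTS subquery.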
