Re: [NOVICE] Normalizing Unnormalized Input - Mailing list pgsql-novice

From David G. Johnston
Subject Re: [NOVICE] Normalizing Unnormalized Input
Date
Msg-id CAKFQuwbEMiORC8cAm3AmvQGdSYG9usBA541DsDC1zKN1JvV-Ww@mail.gmail.com
In response to [NOVICE] Normalizing Unnormalized Input  (Stephen Froehlich <s.froehlich@cablelabs.com>)
Responses Re: [NOVICE] Normalizing Unnormalized Input  (Stephen Froehlich <s.froehlich@cablelabs.com>)
List pgsql-novice
On Tue, Jun 20, 2017 at 3:50 PM, Stephen Froehlich
<s.froehlich@cablelabs.com> wrote:
> The part of the problem that I haven’t solved conceptually yet is how to
> normalize the incoming data.

The specifics of the data matter but...if at all possible I do something like:

BEGIN
CREATE TEMP TABLE tt
COPY tt FROM STDIN
INSERT NEW RECORDS into t FROM tt - one statement (per target table)
UPDATE EXISTING RECORDS in t USING tt - one statement (per target table)
END
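As a concrete sketch of that flow (assuming a hypothetical target table t(id, val) keyed on id; adjust columns to the real data):

```sql
BEGIN;

-- Staging table; ON COMMIT DROP cleans it up automatically
CREATE TEMP TABLE tt (id int, val text) ON COMMIT DROP;

-- Bulk-load the raw input into the staging table
COPY tt FROM STDIN;

-- Insert records that don't exist in the target yet
INSERT INTO t (id, val)
SELECT tt.id, tt.val
FROM tt
LEFT JOIN t ON t.id = tt.id
WHERE t.id IS NULL;

-- Update records that already exist in the target
UPDATE t
SET val = tt.val
FROM tt
WHERE t.id = tt.id;

COMMIT;
```

One statement per target table for each of the insert and update steps, as described above.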

I don't get why (or how) you'd "rename the table into a temp table"...

It's nice that we've added upsert, but it seems more useful for streaming
than for batch.  At scale you should try to avoid collisions in the
first place.
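For contrast, the streaming-style upsert would look like this (same hypothetical t(id, val), which needs a unique constraint on id for ON CONFLICT to target):

```sql
INSERT INTO t (id, val)
VALUES (1, 'example')
ON CONFLICT (id) DO UPDATE
SET val = EXCLUDED.val;
```

This resolves each collision row-by-row as it arrives, rather than separating the new and existing sets up front.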

Temporary table names only need to be unique within the session.

The need for indexes on the temporary table is usually limited, since
the goal is to move large subsets of it around all at once.

David J.

