Re: Insert into on conflict, data size upto 3 billion records - Mailing list pgsql-general

From Rob Sargent
Subject Re: Insert into on conflict, data size upto 3 billion records
Date
Msg-id 6075918d-c07d-7a29-aecc-95e0b160033a@gmail.com
In response to Re: Insert into on conflict, data size upto 3 billion records  (Karthik K <kar6308@gmail.com>)
List pgsql-general

On 2/15/21 12:22 PM, Karthik K wrote:
> Yes, I'm using \copy to load the batch table.
> 
> With the new design we are doing, we expect fewer updates and more 
> inserts going forward. One of the target columns I'm updating is 
> indexed, so I will drop the index and try it out. Also, from your 
> suggestion above, splitting the ON CONFLICT into a separate insert 
> and update is performant, but in order to split the records into 
> batches (low, high) I first need to do a count of the primary key 
> on the batch tables.
> 
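For reference, the split the thread describes can be sketched roughly as below. The table and column names (target, batch_table, id, val) are hypothetical stand-ins for the poster's actual schema; the idea is to update existing rows first, then insert only the rows with no match, instead of one INSERT ... ON CONFLICT over everything:

```sql
-- Hypothetical schema: target(id PRIMARY KEY, val), staged rows
-- loaded into batch_table via \copy.
BEGIN;

-- Update rows that already exist in the target.
UPDATE target t
SET    val = b.val
FROM   batch_table b
WHERE  t.id = b.id;

-- Insert only the staged rows with no matching target row.
INSERT INTO target (id, val)
SELECT b.id, b.val
FROM   batch_table b
LEFT JOIN target t ON t.id = b.id
WHERE  t.id IS NULL;

COMMIT;
```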
> 
I don't think you need a count per se. If you know the approximate 
range of keys in the incoming/batch data (or better, the actual min 
and max), you can derive the batch boundaries from that directly, 
without counting rows.
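A minimal sketch of that idea, again with hypothetical names (batch_table, id): one cheap min/max query gives the key span, and each pass then touches only one slice of it. A min/max over an indexed key is far cheaper than a COUNT(*), which must scan every row:

```sql
-- Find the key span of the staged data (cheap if id is indexed).
SELECT min(id) AS lo, max(id) AS hi FROM batch_table;

-- Then step from lo to hi in slices of a chosen width, e.g.:
UPDATE target t
SET    val = b.val
FROM   batch_table b
WHERE  t.id = b.id
  AND  b.id >= 0            -- slice lower bound (example values)
  AND  b.id < 10000000;     -- slice upper bound
```

The slice width is a tuning knob: smaller slices mean smaller transactions and less lock/WAL pressure per pass, at the cost of more passes.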


