Insert into on conflict, data size up to 3 billion records - Mailing list pgsql-general

From Karthik Kumar Kondamudi
Subject Insert into on conflict, data size up to 3 billion records
Date
Msg-id CAD-twtSfABMBH3ODxJiKdh6FHBtB0UuXn4mN-xwnC7tb=Cphjg@mail.gmail.com
Responses Re: Insert into on conflict, data size up to 3 billion records
Hi, 

I'm looking for suggestions on how I can improve the performance of the below merge statement. We have a batch process that loads data into the _batch tables using Postgres, and the task is to update the main target tables where a record already exists and insert it otherwise. Sometimes these batch tables can reach 5 billion records. Here is the current scenario:

target_table_main has 700,070,247 records and is hash partitioned into 50 chunks; it has an index on logical_ts. The batch table has 2,715,020,546 records, close to 3 billion, so I'm dealing with a huge set of data and looking to do this in the most efficient way.
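For reference, the upsert pattern in question looks roughly like the sketch below. The column names (id, payload) and the conflict target are assumptions for illustration, not taken from the post; note that on a hash-partitioned table the unique constraint backing ON CONFLICT must include the partition key.

```sql
-- Hypothetical sketch: upsert from the staging batch table into the
-- hash-partitioned main table. "id" and "payload" are assumed columns;
-- "id" is assumed to be the partition key with a unique constraint.
INSERT INTO target_table_main (id, payload, logical_ts)
SELECT id, payload, logical_ts
FROM   target_table_batch
ON CONFLICT (id)
DO UPDATE SET payload    = EXCLUDED.payload,
              logical_ts = EXCLUDED.logical_ts
-- Only overwrite rows the batch actually supersedes.
WHERE  target_table_main.logical_ts < EXCLUDED.logical_ts;
```

At this scale a single statement like this runs in one transaction; a common variation is to drive it in chunks (for example, by ranges of id) so each transaction stays small.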

Thank you
