Thread: Bulk DML performance

Bulk DML performance

From: bill.poole@ymail.com

Hello! I’m building a system that needs to insert/update batches of millions of rows (using INSERT .. ON CONFLICT (…) DO UPDATE) in a single database transaction, where each row is about 1.5 kB. The system produces about 3 million rows (about 4.5 GB) of data in about 5 seconds, but PostgreSQL takes about 35 seconds to insert that data and about 55 seconds to update it. This is the case both on my local dev machine and on a large AWS Aurora PostgreSQL instance (db.r8g.16xlarge with 64 vCPUs, 512 GB RAM and 30 Gbps of network bandwidth).

 

The following INSERT .. ON CONFLICT (…) DO UPDATE statement inserts/updates 3 million rows with only 9 bytes per row and takes about 8 seconds on the first run (to insert the rows) and about 14 seconds on subsequent runs (to update the rows), even though it only inserts about 27 MB of data (3 million rows at 9 bytes per row). However, after the first run, SELECT pg_size_pretty(pg_total_relation_size('test')) reports the table size as 191 MB, and after the second run it reports 382 MB (another 191 MB added).

 

CREATE TABLE test (

  id bigint PRIMARY KEY,

  text1 text

);

 

INSERT INTO test (id, text1)

SELECT generate_series, 'x'

FROM generate_series(1, 3000000)

ON CONFLICT (id) DO UPDATE

SET text1 = 'x';

 

If PostgreSQL is writing 191 MB on the first run and 382 MB on each subsequent run, then PostgreSQL is only writing about 28 MB/s. However, PostgreSQL is also able to write about 4.5 GB in about 35 seconds (as stated above), which is about 128 MB/s, so the performance constraint seems to depend more on the number of rows written than on the size of each row.

 

Furthermore, deleting the rows takes about 18 seconds to perform (about 4 seconds longer than the time taken to update the rows):

 

DELETE FROM test

WHERE id in (

  SELECT * FROM generate_series(1, 3000000)

)

 

It seems like it should be possible to do better than this on modern hardware, but I don’t have enough knowledge of the inner workings of PostgreSQL to know whether my instinct is correct on this, so I thought I’d raise the question with the experts.

 

Thanks!

Bill

Re: Bulk DML performance

From: Laurenz Albe
On Thu, 2025-03-13 at 12:05 +0800, bill.poole@ymail.com wrote:

> The following INSERT .. ON CONFLICT (…) DO UPDATE statement inserts/updates
> 3 million rows with only 9 bytes per row and takes about 8 seconds on first
> run (to insert the rows) and about 14 seconds on subsequent runs (to update
> the rows),

I can confirm these times on my not so new laptop with local NVME.
That's the time it takes if you have an index on the table and want to be
crash safe.

>           but is only inserting 27 MB of data (3 million rows with 9 bytes
> per row); although after the first run,
> SELECT pg_size_pretty(pg_total_relation_size('test')) reports the table size
> as 191 MB and after the second run reports the table size as 382 MB (adding
> another 191 MB).

That is unavoidable, because PostgreSQL adds a new version for each row to the
table.  To avoid that kind of bloat, you'd have to update in smaller batches
and run VACUUM between those to free the "dead" row versions.
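
For illustration, a rough sketch (the batch size is arbitrary, and since VACUUM cannot run
inside a transaction block, the loop has to be driven from the client in separate transactions):

INSERT INTO test (id, text1)
SELECT g, 'x'
FROM generate_series(1, 500000) AS g
ON CONFLICT (id) DO UPDATE SET text1 = EXCLUDED.text1;

VACUUM test;

-- then the next batch: generate_series(500001, 1000000), VACUUM again, and so on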

> CREATE TABLE test (
>   id bigint PRIMARY KEY,
>   text1 text
> );
>  
> INSERT INTO test (id, text1)
> SELECT generate_series, 'x'
> FROM generate_series(1, 3000000)
> ON CONFLICT (id) DO UPDATE
> SET text1 = 'x';
>  
> If PostgreSQL is writing 191 MB on the first run and 382 MB on each subsequent
> run, then PostgreSQL is only writing about 28 MB/s. Although PostgreSQL is
> also able to write about 4.5 GB in about 35 seconds (as stated above), which
> is about 128 MB/s, so it seems the performance constraint depends on the
> number of rows inserted more than the size of each row.

It is the index maintenance that is killing you.
Without the primary key, the first insert takes under 1.8 seconds here.
But you need the primary key index if you want to use INSERT ... ON CONFLICT.
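If you want to reproduce that comparison, something like this (the table name test_nopk is
just for illustration):

CREATE TABLE test_nopk (id bigint, text1 text);  -- same columns, no primary key

\timing on
INSERT INTO test_nopk (id, text1)
SELECT generate_series, 'x'
FROM generate_series(1, 3000000);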

The update has to do even more work, so it is slower.

If you don't need crash safety, you could use UNLOGGED tables and be somewhat
faster (5.8 seconds for the initial INSERT here).
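For example (sketch only):

CREATE UNLOGGED TABLE test_unlogged (
  id bigint PRIMARY KEY,
  text1 text
);

-- or convert an existing table (this rewrites the whole table):
ALTER TABLE test SET UNLOGGED;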

Essentially, the moderate performance is the price you are paying for
data integrity (crash safety) and consistency (primary key).

> Furthermore, deleting the rows takes about 18 seconds to perform (about 4
> seconds longer than the time taken to update the rows):
>  
> DELETE FROM test
> WHERE id in (
>   SELECT * FROM generate_series(1, 3000000)
> )

Well, that is not a great statement.

The following takes only 1.6 seconds here:

DELETE FROM test WHERE id BETWEEN 1 AND 3000000;

And if you want to delete all rows, TRUNCATE is very, very fast.

> It seems like it should be possible to do better than this on modern
> hardware, but I don’t have enough knowledge of the inner workings of
> PostgreSQL to know whether my instinct is correct on this, so I thought
> I’d raise the question with the experts.

With the table as it is you won't get better performance if you want the
features that a relational database provides.

To get better performance, the best I can think of is to parallelize
loading the data until you saturate CPU, disk or hit internal contention
in the database.

Yours,
Laurenz Albe



RE: Bulk DML performance

From: bill.poole@ymail.com
Thanks Laurenz for taking the time to look at this.

> That is unavoidable, because PostgreSQL adds a new version for each row to the table.  To avoid that kind of bloat,
> you'd have to update in smaller batches and run VACUUM between those to free the "dead" row versions.

Yes, I would expect the number of blocks/MBs to double due to executing an update, but it is noteworthy that inserting
27 MB of data into a newly created table creates 191 MB of data including the index and 127 MB of data excluding the
index.
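
For reference, the heap/index split can be seen with:

SELECT pg_size_pretty(pg_relation_size('test'))       AS heap,
       pg_size_pretty(pg_indexes_size('test'))        AS indexes,
       pg_size_pretty(pg_total_relation_size('test')) AS total;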

> It is the index maintenance that is killing you.

Yes, but as you say I need the index to perform the INSERT ... ON CONFLICT (...) DO UPDATE. Can you help me understand
why performing 3 million lookups on a b-tree index with all pages cached in memory takes so long?

> If you don't need crash safety, you could use UNLOGGED tables and be somewhat faster (5.8 seconds for the initial
> INSERT here).

Sadly, that is not an option for me. I'm building a production system.

> Well, that is not a great statement.

Understood, but I was highlighting the performance of deleting 3 million rows identified by 3 million IDs, as opposed
to deleting rows in a given range of IDs or deleting the whole table. It seems like deleting 3 million rows identified
by 3 million IDs should be faster than updating 3 million rows (also identified by 3 million IDs).

> With the table as it is you won't get better performance if you want the features that a relational database
> provides.

Sorry to hear that. I had hoped there was room to improve this performance.

> To get better performance, the best I can think of is to parallelize loading the data until you saturate CPU, disk or
> hit internal contention in the database.

Sadly, I cannot do that because I need all rows to be inserted in a single database transaction, which I cannot do over
multiple database connections.

Regards,
Bill

Re: Bulk DML performance

From: Laurenz Albe
On Thu, 2025-03-13 at 18:13 +0800, bill.poole@ymail.com wrote:
>
> it is noteworthy that inserting 27 MB of data into a newly created table creates
> 191 MB of data including the index and 127 MB of data excluding the index.

PostgreSQL has a lot of overhead per row.

>
> Can you help me understand why performing 3 million lookups on a b-tree index
> with all pages cached in memory takes so long?

It is probably not the lookup, but the *modification* of the index that is slow.

>
> It seems like deleting 3 million rows identified by 3 million IDs should be
> faster than updating 3 million rows (also identified by 3 million IDs).

It should be, yes.
To find out where the time is spent, use EXPLAIN (ANALYZE, BUFFERS) on the
statement.
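
For example:

EXPLAIN (ANALYZE, BUFFERS)
DELETE FROM test
WHERE id IN (SELECT * FROM generate_series(1, 3000000));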

> > To get better performance, the best I can think of is to parallelize loading
> > the data until you saturate CPU, disk or hit internal contention in the database.
>
> Sadly, I cannot do that because I need all rows to be inserted in a single
> database transaction, which I cannot do over multiple database connections.

Then the best you can do is to use COPY rather than INSERT.
It will perform better (but now vastly better).
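If you still need the ON CONFLICT semantics, a common workaround (just a sketch, untested here)
is to COPY into a staging table and merge from there:

CREATE UNLOGGED TABLE test_stage (id bigint, text1 text);

COPY test_stage (id, text1) FROM STDIN;  -- or \copy from the client

INSERT INTO test (id, text1)
SELECT id, text1 FROM test_stage
ON CONFLICT (id) DO UPDATE SET text1 = EXCLUDED.text1;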

Yours,
Laurenz Albe



Re: Bulk DML performance

From: Laurenz Albe
On Thu, 2025-03-13 at 12:28 +0100, I wrote:
> Then the best you can do is to use COPY rather than INSERT.
> It will perform better (but now vastly better).

Sorry, I meant "*not* vastly better".

Yours,
Laurenz Albe



RE: Bulk DML performance

From: bill.poole@ymail.com
> PostgreSQL has a lot of overhead per row.

Okay, thanks. I'm not actually too worried about this since in my scenario, each row is about 1.5 kB, so the % overhead
is negligible.

> It is probably not the lookup, but the *modification* of the index that is slow.

Yes, that makes sense for the original 3 million inserts, but when I perform the update of the 3 million rows, the index
doesn't change - they are all HOT updates.
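
For reference, the HOT share can be checked with:

SELECT n_tup_upd, n_tup_hot_upd
FROM pg_stat_user_tables
WHERE relname = 'test';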

> Then the best you can do is to use COPY rather than INSERT. It will perform better (but [not] vastly better).

I need to perform a merge (INSERT ... ON CONFLICT ... DO UPDATE) on the data, so sadly I cannot use COPY.

I have discovered that for some reason, performing the original insert without the ON CONFLICT clause is twice as
fast as performing the original insert with an ON CONFLICT ... DO UPDATE clause, completing in 4 seconds instead of 8.
That seems strange to me because I wouldn't have thought it would be doing any additional work, since there is a unique
constraint on the primary key, so each inserted value would need to be checked in either case, and there is no extra
work to be done in either case.

INSERT INTO test (id, text1)
SELECT generate_series, 'x'
FROM generate_series(1, 3000000)

It remains 4 seconds even when adding a clause to not insert duplicates.

INSERT INTO test (id, text1)
SELECT generate_series, 'x'
FROM generate_series(1, 3000000)
WHERE NOT EXISTS (
  SELECT 1
  FROM test
  WHERE id = generate_series
)

Furthermore, I have found that performing an UPDATE rather than an INSERT ... ON CONFLICT ... DO UPDATE is slower,
completing in 16 seconds instead of 14 seconds.

UPDATE test
SET text1 = 'x'
FROM generate_series(1, 3000000)
WHERE test.id = generate_series

This also now means that updating 3 million rows takes 4x longer than inserting those rows. Do we expect updates to be
4x slower than inserts?

Regards,
Bill


Re: Bulk DML performance

From: Renan Alves Fonseca
Hello,
Regarding the additional time for UPDATE, you can try the following:

CREATE TABLE test3 (
  id bigint PRIMARY KEY,            
  text1 text
) WITH (fillfactor=30);

My local test gives me almost the same time for INSERT (first insert) and UPDATES (following upserts).
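
If the table already exists, the same storage parameter can be set afterwards (sketch; it only affects newly written pages until the table is rewritten):

ALTER TABLE test SET (fillfactor = 30);
VACUUM FULL test;  -- optional rewrite so existing pages respect the new fillfactor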

Regarding the overall problem, there is always room for improvement. I did a quick test with partitions, and I found out that Postgres will not parallelize the upserts for us. One solution could be to partition the records at the application level, creating one connection per partition. On the DB side, the partitions can be implemented as standard tables (using a union view on top of them) or actual partitions of a main table. However, this solution does not strictly respect the "one single transaction" constraint...
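
The "actual partitions" variant could look like this (sketch only; 4 partitions chosen arbitrarily, one connection would then upsert the IDs that hash to each partition):

CREATE TABLE test_part (
  id bigint PRIMARY KEY,
  text1 text
) PARTITION BY HASH (id);

CREATE TABLE test_part_0 PARTITION OF test_part FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE test_part_1 PARTITION OF test_part FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE TABLE test_part_2 PARTITION OF test_part FOR VALUES WITH (MODULUS 4, REMAINDER 2);
CREATE TABLE test_part_3 PARTITION OF test_part FOR VALUES WITH (MODULUS 4, REMAINDER 3);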

Regards,
Renan Fonseca


RE: Bulk DML performance

From: bill.poole@ymail.com

Thanks Renan! Reducing the fill factor has improved my update performance and I am now seeing the same time for updates as inserts.

 

I look forward to any advancements PostgreSQL may make to improve the performance of bulk DML operations. It would be amazing if they could be parallelized in the future.

 

Best,

Bill

 


Re: Bulk DML performance

From: Álvaro Herrera
On 2025-Mar-13, bill.poole@ymail.com wrote:

> I need to perform a merge (INSERT ... ON CONFLICT ... DO UPDATE) on
> the data, so sadly I cannot use COPY.
> 
> I have discovered that for some reason, performing the original insert
> without the ON CONFLICT statement is twice as fast as performing the
> original insert with an ON CONFLICT ... DO UPDATE clause, completing
> in 4 seconds instead of 8. That seems strange to me because I wouldn't
> have thought it would be doing any additional work since a unique
> constraint is on the primary key, so each inserted value would need to
> be checked in either case, and there is no extra work to be done in
> either case.

As I recall, INSERT .. ON CONFLICT UPDATE needs to _insert_ a value
in the index prior to inserting the heap tuple, to guarantee uniqueness
in the face of potentially concurrent inserters of the same value.  Maybe
have a look at the WAL produced by the operation with "pg_waldump -z" to
get some idea of the volume of each type of record.
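
For a first rough number from SQL alone (total WAL volume rather than a per-record breakdown),
you can also compare WAL positions before and after the statement:

-- before the statement:
SELECT pg_current_wal_lsn();   -- note the value, e.g. '0/3000000'
-- after the statement:
SELECT pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), '0/3000000'));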


Maybe you could try to use MERGE rather than INSERT .. ON CONFLICT
UPDATE.  The concurrency modelling there is different, and it will
probably have lower overhead.  But you may need to lock the table
explicitly to prevent concurrency problems.
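
Something along these lines (sketch; MERGE needs PostgreSQL 15 or later):

BEGIN;
LOCK TABLE test IN EXCLUSIVE MODE;  -- only if concurrent writers are a concern

MERGE INTO test AS t
USING (SELECT g AS id, 'x' AS text1 FROM generate_series(1, 3000000) AS g) AS s
ON t.id = s.id
WHEN MATCHED THEN
  UPDATE SET text1 = s.text1
WHEN NOT MATCHED THEN
  INSERT (id, text1) VALUES (s.id, s.text1);

COMMIT;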

-- 
Álvaro Herrera        Breisgau, Deutschland  —  https://www.EnterpriseDB.com/
"Doing what he did amounts to sticking his fingers under the hood of the
implementation; if he gets his fingers burnt, it's his problem."  (Tom Lane)



Re: Bulk DML performance

From: Greg Sabino Mullane
On Mon, Mar 17, 2025 at 4:19 AM <bill.poole@ymail.com> wrote:

> Can you help me understand why performing 3 million lookups on a b-tree index with all pages cached in memory takes so long?

It's not the lookup, it's writing the 3 million rows (and in this particular upsert case, deleting 3 million, then inserting 3 million)

>> Well, that is not a great statement.

> Understood, but I was highlighting the performance of deleting 3 million rows identified by 3 million IDs, as opposed to deleting rows in a given range of IDs or deleting the whole table. It seems like deleting 3 million rows identified by 3 million IDs should be faster than updating 3 million rows (also identified by 3 million IDs).

It should indeed be faster. But keep in mind a delete immediately after that upsert now has twice as many rows to walk through as the upsert did. Also, a subselect like your original query can lead to a large nested loop. Try another variant such as this one:

with ids as (select x from generate_series(1, 3_000_000) x) delete from test using ids where id=x;

>> With the table as it is you won't get better performance if you want the features that a relational database provides.

> Sorry to hear that. I had hoped there was room to improve this performance.

If pure upsert performance is the goal, remove the unique index and store a timestamp along with your inserted data. Back to pure inserts again! (and a few new downsides). When querying, only use the version of the row with the highest timestamp.
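
A sketch of that approach (names are illustrative):

CREATE TABLE test_versions (
  id bigint NOT NULL,
  text1 text,
  inserted_at timestamptz NOT NULL DEFAULT clock_timestamp()
);

-- pure inserts, no conflict handling needed
INSERT INTO test_versions (id, text1)
SELECT generate_series, 'x'
FROM generate_series(1, 3000000);

-- readers pick the latest version per id
SELECT DISTINCT ON (id) id, text1
FROM test_versions
ORDER BY id, inserted_at DESC;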

Other random ideas:

* remove or consolidate columns you don't need, or can store in another table
* pre-filter the rows in the app, so you can do a pure-insert (or COPY) of known-to-be-new rows, then upsert the remaining rows
* use the smallest data types possible
* avoid or minimize toasted values
* pack your columns efficiently (e.g. reorder for 8 byte blocks)
* put the indexes on a ram-based tablespace
* boost your work_mem (for things like giant deletes which build hashes)
* revisit unlogged tables and partitioning

Cheers,
Greg

--
Enterprise Postgres Software Products & Tech Support

Re: Bulk DML performance

From: Renan Alves Fonseca
Hi,
Here are some observations.

On Mon, Mar 17, 2025 at 09:19, <bill.poole@ymail.com> wrote:

> > PostgreSQL has a lot of overhead per row.
>
> Okay, thanks. I'm not actually too worried about this since in my scenario, each row is about 1.5 kB, so the % overhead is negligible.
>
> > It is probably not the lookup, but the *modification* of the index that is slow.
>
> Yes, that makes sense for the original 3 million inserts, but when I perform the update of the 3 million rows, the index doesn't change - they are all HOT updates.

Using "perf" I can see that the overhead is indeed due to index lookup when we do HOT updates.
 
> > Then the best you can do is to use COPY rather than INSERT. It will perform better (but [not] vastly better).
>
> I need to perform a merge (INSERT ... ON CONFLICT ... DO UPDATE) on the data, so sadly I cannot use COPY.
>
> I have discovered that for some reason, performing the original insert without the ON CONFLICT clause is twice as fast as performing the original insert with an ON CONFLICT ... DO UPDATE clause, completing in 4 seconds instead of 8. That seems strange to me because I wouldn't have thought it would be doing any additional work, since there is a unique constraint on the primary key, so each inserted value would need to be checked in either case, and there is no extra work to be done in either case.


In the plain INSERT case there is no conflict handling to prepare for: the unique index is simply enforced as each entry is inserted, and a duplicate raises an error that aborts the whole operation.
 
> INSERT INTO test (id, text1)
> SELECT generate_series, 'x'
> FROM generate_series(1, 3000000)
>
> It remains 4 seconds even when adding a clause to not insert duplicates.
>
> INSERT INTO test (id, text1)
> SELECT generate_series, 'x'
> FROM generate_series(1, 3000000)
> WHERE NOT EXISTS (
>   SELECT 1
>   FROM test
>   WHERE id = generate_series
> )


In this case, we are not checking for duplicates inside the input dataset. If you can guarantee, at the application level, that there are no duplicates, this seems a good speedup. Perhaps the MERGE command...
 
> Furthermore, I have found that performing an UPDATE rather than an INSERT ... ON CONFLICT ... DO UPDATE is slower, completing in 16 seconds instead of 14 seconds.
>
> UPDATE test
> SET text1 = 'x'
> FROM generate_series(1, 3000000)
> WHERE test.id = generate_series
>
> This also now means that updating 3 million rows takes 4x longer than inserting those rows. Do we expect updates to be 4x slower than inserts?


It is not the update that is slower. It is the attached where clause that makes it slower. Try:
UPDATE test SET text1='x';

In my tests, the update of non-indexed columns is slightly faster than an insert.
 
Regards,
Renan Fonseca