Re: Recommendations on improving the insert on conflict do nothing performance - Mailing list pgsql-general

From Muhammad Usman Khan
Subject Re: Recommendations on improving the insert on conflict do nothing performance
Date
Msg-id CAPnRvGu_Zdf7rq+5EjRdcGZHwd711g=Cg2EpUZk6QwqKWKxzHw@mail.gmail.com
In response to Recommendations on improving the insert on conflict do nothing performance  (Durgamahesh Manne <maheshpostgres9@gmail.com>)
Responses Re: Recommendations on improving the insert on conflict do nothing performance
List pgsql-general
Hi,
You can use the following approaches for optimization:
  • Instead of inserting one row at a time, batch rows into multi-row (bulk) inserts; this reduces the per-statement and per-transaction overhead
  • Partitioning can improve write performance by splitting the data into smaller, more manageable chunks
  • Tune the PostgreSQL configuration, for example:
    work_mem = '16MB'
    shared_buffers = '8GB'
    effective_cache_size = '24GB'
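
The bulk-insert suggestion can be sketched as a single multi-row INSERT that keeps the ON CONFLICT DO NOTHING clause. Table and column names are taken from the schema quoted below; the row values and batch size are purely illustrative:

```sql
-- Batch many rows into one statement instead of ~1600 single-row inserts/sec.
-- Parse, plan, and commit overhead is then paid once per batch, not once per row.
INSERT INTO dictionary (lang, tid, sportid, brandid, translatedtext, objecttype, basetid)
VALUES
    ('en', 't1', 's1', 'b1', 'Hello',   'label', 'bt1'),
    ('de', 't1', 's1', 'b1', 'Hallo',   'label', 'bt1'),
    ('fr', 't1', 's1', 'b1', 'Bonjour', 'label', 'bt1')
ON CONFLICT DO NOTHING;
```

From the application side this usually means accumulating rows and sending one parameterized multi-row statement per batch (or using COPY for very large loads) instead of one round trip per row.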
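
Likewise, a minimal sketch of the partitioning suggestion. Hash partitioning on lang (already the leading column of the primary key) is one possible scheme, chosen here only for illustration; the table name dictionary_part and the modulus are assumptions:

```sql
-- Hypothetical partitioned version of the table: HASH on lang spreads writes
-- across smaller child tables, each with smaller, shallower indexes.
CREATE TABLE dictionary_part (
    lang text NOT NULL,
    tid  text NOT NULL,
    -- ... remaining columns as in the original dictionary table ...
    PRIMARY KEY (lang, tid)
) PARTITION BY HASH (lang);

CREATE TABLE dictionary_part_0 PARTITION OF dictionary_part
    FOR VALUES WITH (MODULUS 4, REMAINDER 0);
-- ...and similarly for remainders 1 through 3.
```

Note that on a partitioned table the partition key (lang here) must be part of the primary key, which this schema already satisfies.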

On Wed, 11 Sept 2024 at 13:50, Durgamahesh Manne <maheshpostgres9@gmail.com> wrote:
Hi 
    insert into dictionary(lang,tid,sportid,brandid,translatedtext,objecttype,basetid) values ($1,$2,$3,$4,$5,$6,$7) on conflict do nothing  
 8 vCPUs and 32 GB RAM
   Calls per second: 1600, at 42% CPU utilization
   Max per call: 33.62 ms
   Avg per call: 0.17 ms
                                                   Table "dictionary.dictionary"
     Column     |           Type           | Collation | Nullable | Default  | Storage  | Compression | Stats target | Description
----------------+--------------------------+-----------+----------+----------+----------+-------------+--------------+-------------
 lang           | text                     |           | not null |          | extended |             |              |
 tid            | text                     |           | not null |          | extended |             |              |
 basetid        | text                     |           | not null |          | extended |             |              |
 sportid        | text                     |           |          |          | extended |             |              |
 brandid        | text                     |           | not null |          | extended |             |              |
 translatedtext | text                     |           |          |          | extended |             |              |
 objecttype     | text                     |           |          |          | extended |             |              |
 createdat      | timestamp with time zone |           | not null | now()    | plain    |             |              |
 modified       | timestamp with time zone |           | not null | now()    | plain    |             |              |
 modifiedby     | text                     |           | not null | ''::text | extended |             |              |
 version        | integer                  |           | not null | 0        | plain    |             |              |
Indexes:
    "pk_dictionary" PRIMARY KEY, btree (lang, tid)
    "idx_dictionary_basetid" btree (basetid)
    "idx_dictionary_brandid" btree (brandid)
    "idx_dictionary_objecttype" btree (objecttype)
    "idx_dictionary_sportid" btree (sportid)
Triggers:
    i_dictionary_createdat BEFORE INSERT ON dictionary FOR EACH ROW EXECUTE FUNCTION update_createdat_col()
    i_dictionary_modified BEFORE INSERT OR UPDATE ON dictionary FOR EACH ROW EXECUTE FUNCTION update_modified_col()
Access method: heap
How do we improve this query's performance without using more CPU?

Regards,
Durga Mahesh
