Re: Powerfailure caused a reduction in INSERT performance of version 12 database. - Mailing list pgsql-sql
| From | Frank Komsic |
|---|---|
| Subject | Re: Powerfailure caused a reduction in INSERT performance of version 12 database. |
| Date | |
| Msg-id | BL3PR05MB92361892FBBCBC62EE5ED73BB38C2@BL3PR05MB9236.namprd05.prod.outlook.com |
| In response to | Re: Powerfailure caused a reduction in INSERT performance of version 12 database. (Frank Komsic <komsicf@shoeicanada.com>) |
| List | pgsql-sql |
Hi Steve,
Additional info with a limit of records to display.
"QUERY PLAN"
"Index Scan Backward using noprod_pkey on prod.noprod (cost=0.41..124667.02 rows=2148 width=216) (actual time=0.048..2618.970 rows=2259 loops=1)"
" Output: id, line, noprod, ds, de, dst, det, quota, lab, wsr, nb, comment, metal, noprod_type, product, running, rate_min, rate_max"
"Planning Time: 0.117 ms"
"Execution Time: 2619.433 ms"
Why does the plan estimate 2148 rows while the actual (analyze) part shows 2259 rows?
Thanks
Frank
From: Frank Komsic <komsicf@shoeicanada.com>
Date: Monday, August 19, 2024 at 12:27 PM
To: Steve Midgley <science@misuse.org>
Cc: pgsql-sql <pgsql-sql@lists.postgresql.org>
Subject: Re: Powerfailure caused a reduction in INSERT performance of version 12 database.
Hi Steve,
Here is an output of EXPLAIN ANALYZE
"QUERY PLAN"
"Limit (cost=0.40..6131.45 rows=100 width=217) (actual time=0.047..2425.721 rows=100 loops=1)"
" -> Index Scan Backward using noprod_pkey on noprod (cost=0.40..123970.26 rows=2022 width=217) (actual time=0.045..2425.681 rows=100 loops=1)"
"Planning Time: 0.103 ms"
"Execution Time: 2425.779 ms"
The execution time is an issue. It reports 2425 ms here, but in practice it is more like 4500 ms. Also, I do not understand width=217 when there are only 19 columns?
Thanks
Frank Komsic
From: Frank Komsic <komsicf@shoeicanada.com>
Date: Monday, August 19, 2024 at 12:09 PM
To: Steve Midgley <science@misuse.org>
Cc: pgsql-sql <pgsql-sql@lists.postgresql.org>
Subject: Re: Powerfailure caused a reduction in INSERT performance of version 12 database.
Hi Steve,
Thank you for your suggestions.
Steve wrote:
I'm far from an expert in this area, but running EXPLAIN or EXPLAIN ANALYZE seems like a useful thing to share with the group. Then I wonder if running VACUUM ANALYZE would be useful? Maybe the planner is doing something weird.
I have done a VACUUM ANALYZE and a VACUUM FULL on the questionable table with marginal improvement, but it still seems slower than before. I tried EXPLAIN ANALYZE and it shows the query is slow for the number of records. I also REINDEXed the index, with little success.
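(For reference, the maintenance steps described above were along these lines; the table and index names come from the plan in this thread, and the connection options are placeholders:)

```shell
# Placeholder connection options; adjust to your environment.
psql -d proddb <<'SQL'
-- Routine vacuum plus fresh planner statistics on the affected table.
VACUUM (VERBOSE, ANALYZE) prod.noprod;
-- Rewrite the whole table to reclaim space; takes an ACCESS EXCLUSIVE lock.
VACUUM FULL prod.noprod;
-- Rebuild the primary-key index.
REINDEX INDEX prod.noprod_pkey;
SQL
```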
I'd also check if you lost any indexes you need during the bad day?
How do I check that?
Also be sure your system performance stats are correct - are you using all the cores and RAM that you expect? Are the disk temporary space and swap performing normally?
I need to check and verify that… I do not have historical data on the performance. Is there a way to get historical stats, or does it require third-party software?
Are other tables unaffected, somewhat affected, or in the same situation as this table?
It seems other tables are fine, as they do not have triggers on them. The data table with millions of records seems to visualize 100 k records faster than the table in question.
Currently I have stopped all updates and the table visualizes in a little over 2 seconds. Previously, while the updates were running, it took 4 to 7 seconds to visualize.
I'd recommend, if your environment can tolerate debugging, reuploading this table's data into an identical table and seeing if the problem exists there too.
I am not an expert in PostgreSQL. We have lost our programmer and do not have a fallback plan for now. I have been educating myself on PostgreSQL administration; this problem just seems a bit unusual compared to the training I had.
How can I reupload the data into an identical table?
Also if you dump the entire database can you reload in a new server and replicate there?
Yes, the ultimate way to verify. I gather it would be a pg_basebackup and then a restore?
I hope this helps,
Steve
From: Steve Midgley <science@misuse.org>
Date: Monday, August 19, 2024 at 10:49 AM
To: Frank Komsic <komsicf@shoeicanada.com>
Cc: pgsql-sql <pgsql-sql@lists.postgresql.org>
Subject: Re: Powerfailure caused a reduction in INSERT performance of version 12 database.
On Mon, Aug 19, 2024, 7:34 AM Frank Komsic <komsicf@shoeicanada.com> wrote:
Hi,
Not sure if this is the place to write to.
We have a database running on PostgreSQL version 12.2. We experienced an extended power failure, and our backup systems did not work as intended, resulting in a sudden shutdown of the database server (Windows 2012). After restarting, the system performed some automated recovery and the database was functional. Unfortunately, the database slowed down significantly, particularly on the INSERT commands in an UploadData application we run on our production machines, which uploads 139 columns of data.
The data is inserted into partitioned tables, and a trigger function updates additional tables. I have noticed that although one of these additional tables is small (2500 records with 18 columns), it takes about 4-6 seconds to visualize it using pgAdmin 8.10. The data in this table is being updated by multiple UploadData applications, so the date columns of the last 10 records are constantly being updated.
Visualizing the data excluding the latest 15 records, it appears within 1 second or less. The latest 15 records can take 2-4 seconds to visualize. Visualizing the entire table takes 5-7 seconds. This is too long if it happens for every record that is inserted; if every record costs 2 seconds, the application can never keep up with the updates.
I performed a full vacuum on this table with little improvement.
Is there anything I can do to see what is going on?
Thank you
Frank Komsic
Shoei Canada Corporation
514-949-6319
I'm far from an expert in this area, but running EXPLAIN or EXPLAIN ANALYZE seems like a useful thing to share with the group. Then I wonder if running VACUUM ANALYZE would be useful? Maybe the planner is doing something weird.
I'd also check whether you lost any indexes you need during the bad day.
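Something along these lines would list what indexes actually exist on the table, and flag any left invalid by an interrupted build (the schema-qualified name prod.noprod is an assumption based on the rest of the thread):

```shell
psql -d proddb <<'SQL'
-- All indexes currently defined on the table.
SELECT indexname, indexdef
FROM pg_indexes
WHERE schemaname = 'prod' AND tablename = 'noprod';

-- Any indexes marked invalid (e.g. from an interrupted CREATE INDEX CONCURRENTLY).
SELECT c.relname AS invalid_index
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
WHERE NOT i.indisvalid;
SQL
```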
Also be sure your system performance stats are correct - are you using all the cores and RAM that you expect? Are the disk temporary space and swap performing normally?
Are other tables unaffected, somewhat affected, or in the same situation as this table?
I'd recommend, if your environment can tolerate debugging, reuploading this table's data into an identical table and seeing if the problem exists there too.
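A sketch of that test, with a hypothetical copy named noprod_copy (the ORDER BY column is an assumption; use whatever the slow query actually orders by):

```shell
psql -d proddb <<'SQL'
-- Clone the table's definition (columns, defaults, indexes, constraints), then its data.
CREATE TABLE prod.noprod_copy (LIKE prod.noprod INCLUDING ALL);
INSERT INTO prod.noprod_copy SELECT * FROM prod.noprod;
ANALYZE prod.noprod_copy;
-- Time the same kind of scan against the fresh copy.
EXPLAIN ANALYZE SELECT * FROM prod.noprod_copy ORDER BY id DESC LIMIT 100;
SQL
```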
Also, if you dump the entire database, can you reload it on a new server and replicate the problem there?
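For a logical dump-and-reload test, something like this (host and database names are placeholders):

```shell
# Dump the whole database in custom format.
pg_dump -h oldhost -d proddb -Fc -f proddb.dump
# Recreate and restore it on a test server.
pg_restore -h testhost -d postgres --create proddb.dump
```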
I hope this helps,
Steve