Re: Powerfailure caused a reduction in INSERT performance of version 12 database. - Mailing list pgsql-sql

From Frank Komsic
Subject Re: Powerfailure caused a reduction in INSERT performance of version 12 database.
Date
Msg-id BY3PR05MB7971812AB14E81C7AAD0B76BB3932@BY3PR05MB7971.namprd05.prod.outlook.com
In response to Re: Powerfailure caused a reduction in INSERT performance of version 12 database.  (Frank Komsic <komsicf@shoeicanada.com>)
List pgsql-sql

Hi,

I finally figured out my problem with the slow queries.  It was related to replication.  I had a total of three replication slots on the main server.  Only one of the three was functional; the other two were not running (one belonged to a test instance converted to PostgreSQL 16 from the base 12.2, the other was simply off).

After I dropped one of the stale replication slots and started the other replica VM, which then caught up to the stream, the database started to function normally.
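
For anyone who hits the same thing: an inactive replication slot pins its restart_lsn, so the server keeps accumulating WAL for it (and, depending on settings, old row versions), which can degrade performance over time. A minimal sketch for checking this - the slot name in the DROP is a placeholder:

    -- List slots and how much WAL each one is holding back
    SELECT slot_name, active,
           pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
    FROM pg_replication_slots;

    -- Drop a slot that is no longer needed
    SELECT pg_drop_replication_slot('stale_slot_name');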

Have any of you experienced this kind of query slowdown due to replicas that are far behind?

Thanks


Frank Komsic

From: Frank Komsic <komsicf@shoeicanada.com>
Date: Monday, August 19, 2024 at 1:16 PM
To: Steve Midgley <science@misuse.org>
Cc: pgsql-sql <pgsql-sql@lists.postgresql.org>
Subject: Re: Powerfailure caused a reduction in INSERT performance of version 12 database.

Thank you Steve,

I will try a few of the suggestions and see what works.

Best regards

Frank Komsic

From: Steve Midgley <science@misuse.org>
Date: Monday, August 19, 2024 at 12:37 PM
To: Frank Komsic <komsicf@shoeicanada.com>
Cc: pgsql-sql <pgsql-sql@lists.postgresql.org>
Subject: Re: Powerfailure caused a reduction in INSERT performance of version 12 database.

On Mon, Aug 19, 2024 at 9:08 AM Frank Komsic <komsicf@shoeicanada.com> wrote:

Hi Steve,

Thank you for your suggestions.

Steve wrote:

I'm far from an expert in this area, but running EXPLAIN or EXPLAIN ANALYZE seems like a useful thing to share with the group. Then I wonder if running VACUUM ANALYZE would be useful? Maybe the planner is doing something weird.

I have done a VACUUM ANALYZE and a VACUUM FULL on the questionable table with marginal improvement, but it still seems slower than before.  I tried EXPLAIN ANALYZE and it does show the query is slow for the number of records.  I also REINDEXed the index, with little success.
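
For reference, this is roughly what I ran - the table name and query are placeholders:

    VACUUM (ANALYZE, VERBOSE) my_table;
    VACUUM FULL my_table;   -- rewrites the whole table; takes an exclusive lock
    REINDEX TABLE my_table;
    EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM my_table;  -- BUFFERS shows the I/O work done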

I'd also check whether you lost any indexes you need during the bad day.

How do I check that?
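
(One way, assuming you have a record of which indexes should exist, is to list what is currently defined and compare - the table name here is a placeholder:

    SELECT indexname, indexdef
    FROM pg_indexes
    WHERE tablename = 'my_table';

or simply \d my_table in psql.)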

Also, be sure your system performance stats are correct - are you using all the cores and RAM that you expect? Are the disk, temporary space, and swap performing normally?

I need to see and verify that… I do not have historical data on the performance.  Is there a way to get historical stats, or does that require third-party software?
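
(PostgreSQL itself keeps only cumulative counters, not a time series, but the bundled pg_stat_statements extension at least accumulates per-query timings. A minimal sketch - the extension must also be listed in shared_preload_libraries and the server restarted:

    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

    -- Top 10 statements by average time (the column is mean_exec_time on v13+)
    SELECT query, calls, mean_time
    FROM pg_stat_statements
    ORDER BY mean_time DESC
    LIMIT 10;

For genuine history you would sample this periodically or use third-party monitoring.)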

Are other tables unaffected, somewhat affected, or in the same situation as this table?

It seems the other tables are fine, as they do not have triggers on them.  A data table with millions of records visualizes 100 k records faster than the table in question.

Currently I have stopped all updates, and the table visualizes in a little over 2 seconds.  Previously, while the updates were running, it took 4 to 7 seconds to visualize.

I'd recommend, if your environment can tolerate debugging, reloading this table's data into an identical table to see if the problem exists there too.

I am not an expert in PostgreSQL.  We have lost our programmer and do not have a fallback plan for now.  I have been educating myself on PostgreSQL administration; this problem just seems a bit unusual compared to the training I had.

How can I reupload the data into an identical table?
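
(A common sketch, with placeholder table names - note that LIKE ... INCLUDING ALL copies indexes, defaults, and constraints but not triggers, which is convenient for this test:

    CREATE TABLE my_table_copy (LIKE my_table INCLUDING ALL);
    INSERT INTO my_table_copy SELECT * FROM my_table;

Then run the same query against my_table_copy.)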

Also, if you dump the entire database, can you reload it on a new server and reproduce the problem there?

Yes, that would be the ultimate way to verify it.  I gather it would be a pg_basebackup and then a restore?

I hope this helps, 

Steve 

I am not a postgres admin expert, but if I had this issue, I'd use pg_dump and pg_restore. I'm not familiar with pg_basebackup, but maybe it is more robust. Basically I'd run pg_dump (iirc "-F c" will dump in custom/native format), install the SAME version of postgres onto a machine with similar hardware, create a new database with the same name, then run pg_restore to get the data back into that.
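
Concretely, that might look something like this - the database name and dump file are placeholders:

    pg_dump -F c -f mydb.dump mydb   # custom-format dump on the old server
    createdb mydb                    # on the new server, same PostgreSQL version
    pg_restore -d mydb mydb.dump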

Regarding your EXPLAIN ANALYZE: again, I'm no expert, but I really don't understand how a table with 2500 rows and 18 columns can take 2.5s to enumerate. That remains a very mysterious thing to me. But pulling the data to a new server/database and verifying that the same problem occurs there seems wise.

But regarding the "width" - I believe that's the planner's estimated average row size, i.e. all the selected columns together, measured in bytes, but I could be wrong.

The fact that there is a trigger on this table suggests there may be a locking issue interfering with the search. If you can copy the data to a new location, you can disable the trigger to verify whether that's a big part of the problem.
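
Disabling and re-enabling a trigger for such a test is a one-liner each way - the table and trigger names are placeholders (DISABLE TRIGGER ALL also works):

    ALTER TABLE my_table DISABLE TRIGGER my_trigger;
    -- run and time the query here, then restore the trigger
    ALTER TABLE my_table ENABLE TRIGGER my_trigger;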

Steve
