Re: Need help identifying a periodic performance issue. - Mailing list pgsql-performance

From Robert Creager
Subject Re: Need help identifying a periodic performance issue.
Date
Msg-id B0A746BE-AAD4-4689-B9D2-B25A57D7BC0C@spectralogic.com
In response to Re: Need help identifying a periodic performance issue.  (Robert Creager <robertc@spectralogic.com>)
List pgsql-performance


> On Nov 24, 2021, at 4:15 PM, Justin Pryzby <pryzby@telsasoft.com> wrote:
>
> On Wed, Nov 24, 2021 at 10:44:12PM +0000, Robert Creager wrote:
>> I forgot, I had reloaded postgres, but had not restarted our app, so the connections wouldn’t have that plan setting on them. Redoing now.
>
> Are you sure?  GUC changes should be applied for existing sessions, right?
>
> Would you send the logs surrounding the slow COPY ?
> Specifically including the autovacuum logs.

Here are the log lines from the 5 minutes leading up to the 2-minute COPY operation. There is no vacuum activity; the previous autovacuum happened 20 minutes earlier, on a different table.
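
For reference, the kind of logging that captures this detail can be enabled with standard GUCs; a minimal sketch (the thresholds below are illustrative, not necessarily what we run):

    -- Log any statement slower than 1s and every autovacuum/autoanalyze run.
    -- Values are illustrative; requires superuser and a config reload.
    ALTER SYSTEM SET log_min_duration_statement = '1s';
    ALTER SYSTEM SET log_autovacuum_min_duration = 0;
    SELECT pg_reload_conf();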



>
>> We are at it again.  I have a DELETE operation that’s taking 48 minutes so far.
>
> Before, you had slow COPY due to FKs.  Now you have a slow DELETE, which you
> only alluded to before.

Yeah, I hadn’t been able to reproduce it previously with logging/DTrace enabled. This time I was able to look at the query plan as it was happening.

And we’ve run across another problem query, which is also hitting that ds3.blob table.

INFO Nov 25 05:30:05,787 [WorkLogger] | Still in progress after 30 minutes: [IomDriverWorker] SQL: SELECT * FROM ds3.s3_object_property WHERE (key = 'x-amz-meta-o-spectra-backup-start-date' AND EXISTS (SELECT * FROM ds3.s3_object WHERE id = ds3.s3_object_property.object_id AND ((EXISTS (SELECT * FROM ds3.bucket WHERE id = ds3.s3_object.bucket_id AND (name LIKE 'Spectra%')) AND NOT EXISTS (SELECT * FROM ds3.blob WHERE object_id = ds3.s3_object.id AND (EXISTS (SELECT * FROM ds3.job...  (MonitoredWorkManager$WorkLogger.run:84)
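
To capture the plan a long-running session like that is actually using, auto_explain can be loaded per session; a sketch, assuming superuser (the 30-minute threshold just matches the WorkLogger warning above):

    -- Log the full plan of anything in this session that runs past 30 minutes.
    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = '30min';
    SET auto_explain.log_analyze = on;  -- include actual rows/timings in the plan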

>
>> So how do we avoid this query plan? Do we need to start doing explicit analyzes after every delete?
>
> If your DELETE is deleting the entire table, then I think you should VACUUM
> anyway (or else the next inserts will bloat the table).

We’re not necessarily deleting the entire table; we don’t know, it’s customer driven. In general, the table used for the COPY will not see a lot of deletes. This case is from the test group, which is deleting a lot of data.
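
If a run does turn out to delete most of a table, a manual pass along these lines right after the bulk DELETE would cover both the bloat and the stale statistics (ds3.blob is just the table from this thread; we don’t run this today):

    -- Reclaim dead tuples and refresh planner statistics after a bulk DELETE.
    VACUUM (ANALYZE, VERBOSE) ds3.blob;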

>
> But first, I believe Thomas was suggesting to put plan_cache_mode back to its
> default, and (for testing purposes) try issuing DISCARD PLANS.

Ok, I’ll do that now and see what happens with the COPY.
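
Concretely, I take that to be the following (assuming the earlier change was made with ALTER SYSTEM; if it was set per role or per database, the RESET belongs there instead):

    -- Put plan_cache_mode back to its default (auto) and reload...
    ALTER SYSTEM RESET plan_cache_mode;
    SELECT pg_reload_conf();

    -- ...then, in each pooled application session before the COPY:
    DISCARD PLANS;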

>
> On Fri, Nov 19, 2021 at 10:08:02AM +1300, Thomas Munro wrote:
>> Just to understand what's going on, it'd be interesting to know if the
>> problem goes away if you *just* inject the DISCARD PLANS statement
>> before running your COPYs, but if that doesn't help it'd also be
>> interesting to know what happens if you ANALYZE each table after each
>> COPY.  Are you running any explicit ANALYZE commands?  How long do
>> your sessions/connections live for?
>
> --
> Justin
>
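
For Thomas’s second variant, the per-table ANALYZE after each COPY would look roughly like this (ds3.s3_object is an example table from our schema; the COPY stands in for the one our driver issues over the wire):

    -- Bulk load, then refresh statistics immediately so the first queries
    -- after the load don't plan against stale row-count estimates.
    COPY ds3.s3_object FROM STDIN;
    ANALYZE ds3.s3_object;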
