Re: [PERFORM] Odd sudden performance degradation related to temp object churn - Mailing list pgsql-performance

From Jeremy Finzel
Subject Re: [PERFORM] Odd sudden performance degradation related to temp object churn
Date
Msg-id CAMa1XUgPEUD0-r3RRoAk9H2fLKseYCLz2Rx_y3q6sGwHe3Y+qw@mail.gmail.com
In response to Re: [PERFORM] Odd sudden performance degradation related to temp object churn  (Scott Marlowe <scott.marlowe@gmail.com>)
Responses Re: [PERFORM] Odd sudden performance degradation related to temp object churn
List pgsql-performance
On Mon, Aug 14, 2017 at 3:01 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
On Mon, Aug 14, 2017 at 1:53 PM, Jeremy Finzel <finzelj@gmail.com> wrote:
> This particular db is on 9.3.15.  Recently we had a serious performance
> degradation related to a batch job that creates 4-5 temp tables and 5
> indexes.  It is a really badly written job but what really confuses us is
> that this job has been running for years with no issue remotely approaching
> this one.  We are also using pgpool.
>
> The job would kick off with 20-30 similar queries running at once.  The
> thing normally takes only 30ms or so to run - it only operates on 1 customer
> at a time (yes, it's horribly written).  All of a sudden the cluster started
> thrashing and performance seriously degraded.  We tried a number of things
> with no success:
>
> Analyzed the whole database
> Turned off full logging
> Turned off synchronous commit
> Vacuumed several of the catalog tables
> Checked if we had an abnormal high amount of traffic this time - we didn't
> No abnormal disk/network issues (we would have seen much larger issues if
> that had been the case)
> Tried turning down the number of app nodes running
>
> What ended up completely resolving the issue was converting the query to use
> CTEs instead of temp tables (a sketch of that conversion follows this quoted
> message).  That means we avoided the disk writes, the catalog churn, and the
> useless indexes.  However, we are baffled as to why this could make such a
> big difference when we had no issue like this before, and we have seen no
> systematic performance degradation in our system.
>
> Any insights would be greatly appreciated, as we are concerned not knowing
> the root cause.
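
To make the conversion described above concrete, here is a minimal before/after sketch, using hypothetical table and column names rather than the real job:

    -- Hypothetical "before": a per-customer temp table plus an index, created by
    -- 20-30 concurrent jobs, which churns pg_class/pg_attribute/pg_index and
    -- writes the temp relation to disk.
    CREATE TEMP TABLE tmp_customer_orders AS
        SELECT order_id, order_total
        FROM orders
        WHERE customer_id = 42;
    CREATE INDEX ON tmp_customer_orders (order_id);

    SELECT sum(order_total) FROM tmp_customer_orders;

    -- Hypothetical "after": the same work expressed as a CTE in one statement,
    -- so there are no catalog entries, no on-disk temp relation, and no
    -- throwaway index.
    WITH customer_orders AS (
        SELECT order_id, order_total
        FROM orders
        WHERE customer_id = 42
    )
    SELECT sum(order_total) FROM customer_orders;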

How are your disks set up? One big drive with everything on it?
Separate disks for pg_xlog, pg's data dir, and the OS logging? IO
contention is one of the big killers of db performance.

It's one SAN volume (SSD) for the data and WAL files, but logging, memory spilling, and archived xlogs go to a local SSD disk.
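
A quick way to double-check where each of those pieces actually lives (a small sketch using standard settings; an empty temp_tablespaces means sort/hash spills land under the data directory):

    SHOW data_directory;    -- main data files (the SAN volume in this setup)
    SHOW temp_tablespaces;  -- empty means spills go to <data_directory>/base/pgsql_tmp
    SHOW log_directory;     -- a relative path here is under the data directory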
 
Logging likely isn't your problem, but yeah, you don't need to log
ERRYTHANG to see the problem either. Log long-running queries, temp
usage, buffer usage, query plans on slow queries, stuff like that.
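
A hedged postgresql.conf sketch of that kind of logging (the thresholds are illustrative, not recommendations):

    # postgresql.conf
    log_min_duration_statement = 250   # log any statement running longer than 250ms
    log_lock_waits = on                # log waits longer than deadlock_timeout
    track_io_timing = on               # adds I/O timing to pg_stat views and EXPLAIN (ANALYZE, BUFFERS)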

You've likely hit a "tipping point" in terms of data size. Either it's
caused the query planner to make a bad decision, or you're spilling to
disk a lot more than you used to.

Be sure to log temporary stuff with log_temp_files = 0 in your
postgresql.conf and then look for temporary files in your logs. I bet
you've started spilling into the same place as your temp tables are
going, and by default that's your data directory. Adding another drive
and moving pgsql's temp table space to it might help.
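
A sketch of that suggestion, assuming a hypothetical mount point for the extra drive (on 9.3 these settings are edited in postgresql.conf by hand, since ALTER SYSTEM only arrives in 9.4):

    -- run once as a superuser; the directory must already exist and be owned by postgres
    CREATE TABLESPACE temp_spill LOCATION '/mnt/local_ssd/pg_temp';

    # postgresql.conf
    log_temp_files = 0               # log every temp file created, with its size
    temp_tablespaces = 'temp_spill'  # temp tables and sort/hash spill files go here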

We would not have contention between disk spilling and temp tables because of what I described above: they go to two different places.  Also, I neglected to mention that we turned on auto_explain during this crisis and found that the query plan was good; it was just taking forever due to thrashing that started seconds after we kicked off the batches.  I did NOT turn on log_analyze or log_timing, but it was enough to see there was no apparent query plan regression.  Also, we had no change in the performance or plan after re-analyzing all tables.
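
For reference, the auto_explain setup described above might look roughly like this (a sketch; the parameters are the module's standard ones, but the threshold is illustrative):

    # postgresql.conf
    shared_preload_libraries = 'auto_explain'   # requires a restart (or use LOAD 'auto_explain' per session)
    auto_explain.log_min_duration = 100         # in ms; log plans of anything slower than this
    auto_explain.log_analyze = off              # actual row counts/runtimes were left off here
    auto_explain.log_timing = off               # only has an effect when log_analyze is on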
 
Also try increasing work_mem (but don't go crazy; it's per sort, so it can
multiply fast on a busy server).

We are already at 400MB, and this query was using memory in the low-KB range because it is very small (1-20 rows of data per temp table, and no expensive selects with missing indexes or anything).
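
For concreteness, the multiplication warned about above, using this thread's numbers (a worst-case sketch that assumes each concurrent query has a single sort or hash node using its full allowance):

    work_mem x concurrent queries = 400MB x 30 = ~12GB of potential sort/hash memory,
    and several times that if each query has multiple sort or hash nodes.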
 
Also log your query plans or run explain / explain analyze on the slow
queries to see what they're doing that's so expensive.

Yes, we did do that, and there was nothing remarkable about the plans when we ran them in production.  All we saw was that over time, the actual execution time (along with everything else on the entire system) slowed down more and more as thrashing increased.  But we found no evidence of a plan regression.
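
For reference, the kind of plan capture being compared here, with actual runtimes and buffer counts (hypothetical query):

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT order_id, order_total
    FROM orders
    WHERE customer_id = 42;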

Thank you!  Any more feedback is much appreciated.
