
From: David Rowley
Subject: Re: strange slow query - lost lot of time somewhere
Date:
Msg-id: CAApHDvq3CVRtSrQQ0Ym5fX6vBXoPwhY+Yg5ecX5pPgX7_Yzd4Q@mail.gmail.com
In response to: Re: strange slow query - lost lot of time somewhere (Pavel Stehule <pavel.stehule@gmail.com>)
List: pgsql-hackers
On Tue, 3 May 2022 at 17:02, Pavel Stehule <pavel.stehule@gmail.com> wrote:
> On Tue, 3 May 2022 at 6:57, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> You sure there's not something taking an exclusive lock on one of these
>> tables every so often?
>
> I am almost sure.  I can see this issue only when I set a higher
> work_mem; I don't see this issue in other cases.

hmm, I don't think the query being blocked on a table lock would cause
this behaviour. As far as I know, all table locks should be obtained
before the "Execution Time" timer starts in EXPLAIN ANALYZE.  However,
locks are obtained on indexes at executor startup, so if there was some
delay in obtaining a lock on an index, it would show up this way.  I
just don't know of anything that obtains a conflicting lock on an index
without also taking the same conflicting lock on the table that the
index belongs to.
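
To be clear about what I mean by "at executor startup": index relations
are opened and locked inside ExecInitNode(), after the "Execution Time"
timer is already running.  A simplified sketch (not the exact source;
the lock mode shown is just illustrative):

    /*
     * Inside ExecInitNode() -> ExecInitIndexScan(): the index relation is
     * opened and locked here.  The "Execution Time" timer has already
     * started, so any wait on this lock is counted as execution time.
     */
    indexstate->iss_RelationDesc = index_open(node->indexid, AccessShareLock);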

I do agree that the perf report does indicate that the extra time is
taken up by some large memory allocation. I just can't quite see how
that would happen in Memoize, given that estimate_num_groups() clamps
the distinct estimate to the number of input rows, which is 91 in both
cases in your problem query.
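
The clamp I mean is roughly this (paraphrased from memory, not the
exact selfuncs.c source):

    /*
     * Near the end of estimate_num_groups(): the distinct-group estimate
     * is clamped so it can never exceed the number of input rows, which
     * is why I'd expect the Memoize hash table to be sized for at most
     * 91 entries in your case.
     */
    numdistinct = ceil(numdistinct);

    if (numdistinct > input_rows)
        numdistinct = input_rows;
    if (numdistinct < 1.0)
        numdistinct = 1.0;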

Are you able to run the Memoize query in psql with \watch 0.1 for a
few seconds while you do:

perf record --call-graph dwarf --pid <pid> sleep 2

and then send along the perf report?

I locally hacked build_hash_table() in nodeMemoize.c to make the hash
table 100 million elements (a sketch of that hack is below, after the
profile), and my perf report for a trivial Memoize query comes up as:

+   99.98%     0.00%  postgres  postgres           [.] _start
+   99.98%     0.00%  postgres  libc.so.6          [.] __libc_start_main_alias_2 (inlined)
+   99.98%     0.00%  postgres  libc.so.6          [.] __libc_start_call_main
+   99.98%     0.00%  postgres  postgres           [.] main
+   99.98%     0.00%  postgres  postgres           [.] PostmasterMain
+   99.98%     0.00%  postgres  postgres           [.] ServerLoop
+   99.98%     0.00%  postgres  postgres           [.] BackendStartup (inlined)
+   99.98%     0.00%  postgres  postgres           [.] BackendRun (inlined)
+   99.98%     0.00%  postgres  postgres           [.] PostgresMain
+   99.98%     0.00%  postgres  postgres           [.] exec_simple_query
+   99.98%     0.00%  postgres  postgres           [.] PortalRun
+   99.98%     0.00%  postgres  postgres           [.] FillPortalStore
+   99.98%     0.00%  postgres  postgres           [.] PortalRunUtility
+   99.98%     0.00%  postgres  postgres           [.] standard_ProcessUtility
+   99.98%     0.00%  postgres  postgres           [.] ExplainQuery
+   99.98%     0.00%  postgres  postgres           [.] ExplainOneQuery
+   99.95%     0.00%  postgres  postgres           [.] ExplainOnePlan
+   87.87%     0.00%  postgres  postgres           [.] standard_ExecutorStart
+   87.87%     0.00%  postgres  postgres           [.] InitPlan (inlined)
+   87.87%     0.00%  postgres  postgres           [.] ExecInitNode
+   87.87%     0.00%  postgres  postgres           [.] ExecInitNestLoop
+   87.87%     0.00%  postgres  postgres           [.] ExecInitMemoize
+   87.87%     0.00%  postgres  postgres           [.] build_hash_table (inlined) <----
+   87.87%     0.00%  postgres  postgres           [.] memoize_create (inlined)
+   87.87%     0.00%  postgres  postgres           [.] memoize_allocate (inlined)
+   87.87%     0.00%  postgres  postgres           [.] MemoryContextAllocExtended
+   87.87%     0.00%  postgres  postgres           [.] memset (inlined)
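
For reference, the hack was just forcing a huge initial size in
build_hash_table(), roughly like this (paraphrased from memory, not the
exact patch I ran):

    /* nodeMemoize.c */
    static void
    build_hash_table(MemoizeState *mstate, uint32 size)
    {
        /* Make a guess at a reasonable size when given an invalid one. */
        if (size == 0)
            size = 1024;

        /* XXX local hack: force a massively oversized initial hash table */
        size = 100000000;

        /* memoize_create() allocates and zeroes the bucket array up front,
         * hence the MemoryContextAllocExtended/memset in the profile above. */
        mstate->hashtable = memoize_create(mstate->tableContext, size, mstate);
    }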

Failing that, are you able to pg_dump these tables and load them into
a PostgreSQL instance that you can play around with and patch, provided
you can actually recreate the problem on that instance?

David


