Thread: Query runtime differences- trying to understand why.
I have a question about the execution time of a query and the fluctuations I'm seeing. I enabled auto_trace to capture some actual executions of the query as issued by an app; two of them are below. The top one took 1.697 milliseconds, the second one 31.241 milliseconds. Note that the query has hints in it and that the top execution did not obey the hint to use the pk index on table product. That table is very small, so full scanning it isn't really a problem. Looking at the steps in the plans below, I do not believe the difference in runtime is due to plan differences.
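For reference, the log lines below look like auto_explain output; assuming that is the capture mechanism behind what I'm calling auto_trace, session-level settings along these lines would produce them (the persistent equivalents live in the Aurora DB parameter group, including shared_preload_libraries = 'auto_explain'):

    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = 0;  -- log the plan of every completed statement
    SET auto_explain.log_analyze = on;      -- include actual times and row counts
    SET auto_explain.log_buffers = on;      -- include shared block hit counts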
Here are my observations for the top query/plan:
- The query found 318 rows when scanning the entry_guid_mod_idx index on table entry. It did 336 or 337 shared block hits for that step, and the step took 0.619 milliseconds.
- I believe it nested looped to the full scan of table product 318 times. That step took 0.812 milliseconds (time including the prior step) and shows 337 shared block hits at that point; the plan as a whole did 841. I believe it found 318 rows that matched.
- The LockRows step: this comes from the SELECT FOR UPDATE SKIP LOCKED. I believe this step shows it returned 0 rows, and its runtime was 1.696 milliseconds. This implies all 318 rows that matched the query were locked, as illustrated just below.
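To unpack that last point: SKIP LOCKED silently drops every candidate row that another transaction already holds a row lock on, so LockRows can emit 0 rows even though the scan below it found hundreds. A minimal two-session sketch against the same entry table (simplified predicate, hypothetical data):

    -- session 1: lock every candidate row and keep the transaction open
    BEGIN;
    SELECT entry_guid FROM bchalertdemo.entry WHERE status_code = 1 FOR UPDATE;

    -- session 2: same candidates, but each one is already locked, so SKIP LOCKED
    -- discards them all and the LIMIT 1 query comes back with zero rows
    SELECT entry_guid
      FROM bchalertdemo.entry
     WHERE status_code = 1
     ORDER BY priority
       FOR UPDATE SKIP LOCKED
     LIMIT 1;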
Observations for the bottom query/plan:
- The query found 291 rows when scanning the entry_guid_mod_idx index on table entry. It did 313 or 315 shared block hits for that step, and the step took 0.977 milliseconds.
- I believe it nested looped to the index scan pkproduct 291 times. That step took 1.491 milliseconds (time including the prior step) and shows 315 shared block hits at that point; the plan as a whole did 1365. I believe it found 291 rows that matched.
- The LockRows step: again, I believe this comes from the SELECT FOR UPDATE SKIP LOCKED. Like the top query, this also shows it returned 0 rows, which implies to me that all 291 rows were locked. But here we can see the runtime was 31.239 milliseconds.
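Comparing the two plans, the shared block hits attributable to the LockRows step itself (the plan total minus what the nested loop and its children already account for) are quite different:

    top plan:    841  - 337 =  504 shared hits while locking/skipping 318 rows
    bottom plan: 1365 - 315 = 1050 shared hits while locking/skipping 291 rows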
So my question: assuming my understanding above is correct, both plans show minimal time spent getting to the LockRows step. The top query skipped more rows than the bottom query, yet the bottom query spent significantly more time in the LockRows step. Why would that be the case?
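For reference, subtracting the nested loop time from the LockRows time in each plan gives the time spent just attempting and skipping the row locks:

    top plan:    1.695 ms  - 0.812 ms =  0.883 ms for 318 rows  (~0.003 ms per row)
    bottom plan: 31.238 ms - 1.491 ms = 29.747 ms for 291 rows  (~0.102 ms per row)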
Thanks in advance.
Steve
2023-11-29 18:12:45 UTC:10.210.202.108(50098):alertu@pgbatchshared01aq:[15251]:LOG: duration: 1.697 ms plan:
Query Text: /*+ NestLoop(t1 t2) IndexScan(t2 pkproduct) */ SELECT T1.ENTRY_GUID FROM bchalertdemo.ENTRY T1,bchalertdemo.PRODUCT T2 WHERE T1.NEXT_RUN_DATE<$1 AND T1.STATUS_CODE=1 AND T2.ACTIVE_STATUS != 'F' AND (END_DATE IS NULL OR ($2 < END_DATE)) AND (($3 NOT BETWEEN SUSPEND_DATE AND RESUME_DATE) OR SUSPEND_DATE IS NULL OR RESUME_DATE IS NULL) AND T1.PRODUCT_CODE = T2.PRODUCT_CODE AND T1.LAST_RUN_DATE<$4 AND T2.PRODUCT_VALUE IN ($5) AND MOD(ABS(HASHTEXT(T1.ENTRY_GUID)),150) = $6 ORDER BY T1.PRIORITY, T1.ENTRY_FREQUENCY_CODE, T1.NEXT_RUN_DATE FOR UPDATE SKIP LOCKED LIMIT 1
Limit (cost=0.42..4.63 rows=1 width=64) (actual time=1.695..1.696 rows=0 loops=1)
Buffers: shared hit=841
-> LockRows (cost=0.42..1393.48 rows=331 width=64) (actual time=1.695..1.695 rows=0 loops=1)
Buffers: shared hit=841
-> Nested Loop (cost=0.42..1390.17 rows=331 width=64) (actual time=0.032..0.812 rows=318 loops=1)
Join Filter: (t1.product_code = t2.product_code)
Buffers: shared hit=337
-> Index Scan using entry_guid_mod_idx on entry t1 (cost=0.42..1384.18 rows=331 width=63) (actual time=0.016..0.619 rows=318 loops=1)
Index Cond: ((mod(abs(hashtext((entry_guid)::text)), 150) = 141) AND (next_run_date < '2023-11-29 12:12:38.535'::timestamp without time zone) AND (last_run_date < '2023-11-29 12:12:38.535'::timestamp without time zone)
AND (status_code = '1'::numeric))
Filter: (((end_date IS NULL) OR ('2023-11-29 12:12:38.535'::timestamp without time zone < end_date)) AND (('2023-11-29 12:12:38.535'::timestamp without time zone < suspend_date) OR ('2023-11-29 12:12:38.535'::timestamp
without time zone > resume_date) OR (suspend_date IS NULL) OR (resume_date IS NULL)))
Buffers: shared hit=336
-> Materialize (cost=0.00..1.02 rows=1 width=20) (actual time=0.000..0.000 rows=1 loops=318)
Buffers: shared hit=1
-> Seq Scan on product t2 (cost=0.00..1.01 rows=1 width=20) (actual time=0.013..0.013 rows=1 loops=1)
Filter: ((active_status <> 'F'::bpchar) AND ((product_value)::text = 'bchalert-poc'::text))
Buffers: shared hit=1
2023-11-29 18:12:45 UTC:10.210.202.108(50006):alertu@pgbatchshared01aq:[15226]:LOG: duration: 31.241 ms plan:
Query Text: /*+ NestLoop(t1 t2) IndexScan(t2 pkproduct) */ SELECT T1.ENTRY_GUID FROM bchalertdemo.ENTRY T1,bchalertdemo.PRODUCT T2 WHERE T1.NEXT_RUN_DATE<$1 AND T1.STATUS_CODE=1 AND T2.ACTIVE_STATUS != 'F' AND (END_DATE IS NULL OR ($2 < END_DATE)) AND (($3 NOT BETWEEN SUSPEND_DATE AND RESUME_DATE) OR SUSPEND_DATE IS NULL OR RESUME_DATE IS NULL) AND T1.PRODUCT_CODE = T2.PRODUCT_CODE AND T1.LAST_RUN_DATE<$4 AND T2.PRODUCT_VALUE IN ($5) AND MOD(ABS(HASHTEXT(T1.ENTRY_GUID)),150) = $6 ORDER BY T1.PRIORITY, T1.ENTRY_FREQUENCY_CODE, T1.NEXT_RUN_DATE FOR UPDATE SKIP LOCKED LIMIT 1
Limit (cost=0.55..5.50 rows=1 width=64) (actual time=31.238..31.239 rows=0 loops=1)
Buffers: shared hit=1365
-> LockRows (cost=0.55..366.92 rows=74 width=64) (actual time=31.237..31.238 rows=0 loops=1)
Buffers: shared hit=1365
-> Nested Loop (cost=0.55..366.18 rows=74 width=64) (actual time=0.044..1.491 rows=291 loops=1)
Buffers: shared hit=315
-> Index Scan using entry_guid_mod_idx on entry t1 (cost=0.42..363.72 rows=74 width=63) (actual time=0.035..0.977 rows=291 loops=1)
Index Cond: ((mod(abs(hashtext((entry_guid)::text)), 150) = $6) AND (next_run_date < $1) AND (last_run_date < $4) AND (status_code = '1'::numeric))
Filter: (((end_date IS NULL) OR ($2 < end_date)) AND (($3 < suspend_date) OR ($3 > resume_date) OR (suspend_date IS NULL) OR (resume_date IS NULL)))
Buffers: shared hit=313
-> Memoize (cost=0.14..0.32 rows=1 width=20) (actual time=0.001..0.001 rows=1 loops=291)
Cache Key: t1.product_code
Cache Mode: logical
Hits: 290 Misses: 1 Evictions: 0 Overflows: 0 Memory Usage: 1kB
Buffers: shared hit=2
-> Index Scan using pkproduct on product t2 (cost=0.12..0.31 rows=1 width=20) (actual time=0.005..0.006 rows=1 loops=1)
Index Cond: (product_code = t1.product_code)
Filter: ((active_status <> 'F'::bpchar) AND ((product_value)::text = ($5)::text))
Buffers: shared hit=2
On 29.11.23 21:25, Dirschel, Steve wrote:
>
> I have a question on the execution time of a query and the
> fluctuations I'm seeing. I enabled auto_trace to capture some actual
> executions of a query by an app. Below are 2 executions of it. The
> top one took 1.697 milliseconds. The 2nd one took 31.241
> milliseconds. Note the query has hints in it
>

What database are you using? PostgreSQL doesn't have hints...

Regards, Andreas

--
Andreas Kretschmer - currently still (garden leave)
Technical Account Manager (TAM)
www.enterprisedb.com
Sorry. DB is Aurora Postgres 14.6. Hints are enabled via the extension pg_hint_plan. But my question isn't so much about the hints as about the execution plan, where the time is going, and why skipping locked rows appears to take a lot more time in one execution of the query than in another.

Regards

-----Original Message-----
From: Andreas Kretschmer <andreas@a-kretschmer.de>
Sent: Wednesday, November 29, 2023 3:10 PM
To: pgsql-general@lists.postgresql.org
Subject: [EXT] Re: Query runtime differences- trying to understand why.
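In case it helps anyone reproducing this, a minimal sketch of how pg_hint_plan is typically enabled (on Aurora the shared_preload_libraries entry is set through the DB cluster parameter group), with the hint comment syntax matching the captured query text above:

    -- load the library for this session (or preload it server-wide)
    LOAD 'pg_hint_plan';

    -- hints are ordinary block comments placed ahead of the statement
    /*+ NestLoop(t1 t2) IndexScan(t2 pkproduct) */
    SELECT t1.entry_guid
      FROM bchalertdemo.entry t1, bchalertdemo.product t2
     WHERE t1.product_code = t2.product_code
     LIMIT 1;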