RE: Big performance slowdown from 11.2 to 13.3 - Mailing list pgsql-performance

From ldh@laurent-hasson.com
Subject RE: Big performance slowdown from 11.2 to 13.3
Date
Msg-id MN2PR15MB2560EFE01CAB993E5CA1153185E49@MN2PR15MB2560.namprd15.prod.outlook.com
In response to Re: Big performance slowdown from 11.2 to 13.3  (Vijaykumar Jain <vijaykumarjain.github@gmail.com>)
Responses Re: Big performance slowdown from 11.2 to 13.3  (Vijaykumar Jain <vijaykumarjain.github@gmail.com>)
List pgsql-performance

I am not sure I understand this parameter well enough, but it is currently at its default value of 1000. I have read Robert's post (http://rhaas.blogspot.com/2018/06/using-forceparallelmode-correctly.html) and could play with those parameters, but I am unsure whether what you are describing will unlock this 2GB limit.
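For reference, a sketch of the knobs being discussed (assuming the "default value of 1000" refers to parallel_setup_cost, whose stock default is 1000, and that the 2GB figure is the limit on per-node hash memory raised earlier in the thread; on 13.x, hash_mem_multiplier is the setting that lets hash aggregation use more than plain work_mem before spilling):

```sql
-- Session-level, safe to experiment with:
SHOW parallel_setup_cost;       -- default 1000: planner's cost of launching workers
SHOW work_mem;                  -- per-node sort/hash memory budget

-- New in PostgreSQL 13: hash-based nodes may use
-- work_mem * hash_mem_multiplier before spilling to disk,
-- so the hash aggregation budget can be raised without
-- raising work_mem itself:
SET hash_mem_multiplier = 4.0;  -- illustrative value only
```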

From: Vijaykumar Jain <vijaykumarjain.github@gmail.com>
Sent: Thursday, July 22, 2021 16:32
To: ldh@laurent-hasson.com
Cc: Justin Pryzby <pryzby@telsasoft.com>; pgsql-performance@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3

Just asking, I may be completely wrong.

Is this query parallel safe?

Can we force parallel workers, by setting a low parallel_setup_cost or otherwise, so that the plan uses Gather and Partial HashAggregate nodes?

I am just assuming that more workers doing things in parallel would require less disk spill per (partial) hash aggregate, with a Gather at the end.
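One way to test that idea is to make parallel plans look artificially cheap to the planner. A minimal sketch, using standard PostgreSQL planner GUCs at the session level (the specific values are illustrative, not tuned recommendations):

```sql
-- Make parallelism essentially free from the planner's perspective,
-- so it will choose Gather + Partial HashAggregate wherever the
-- query is parallel safe:
SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;
SET min_parallel_table_scan_size = 0;
SET max_parallel_workers_per_gather = 4;

-- Then check whether the plan actually went parallel and how much
-- each worker spilled:
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;  -- the query under discussion
```

Note that these settings only influence plan choice; if the query contains parallel-restricted functions or constructs, the planner will still refuse to parallelize it.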

I did some runs in my demo environment, not with the same query but with some GROUP BY aggregates over around 25M rows, and it showed reasonable results, not too far off.

This was pg14 on Ubuntu.
