Re: Unable to execute Query in parallel for partitioned table - Mailing list pgsql-general

From Brajendra Pratap
Subject Re: Unable to execute Query in parallel for partitioned table
Date
Msg-id CABdERiNDam2oEy5bw9N8ctujHT3CAx=QN_2CoKw1vXsW8mFu3A@mail.gmail.com
In response to Unable to execute Query in parallel for partitioned table  (Brajendra Pratap <brajendra.pratap767@gmail.com>)
Responses Re: Unable to execute Query in parallel for partitioned table
List pgsql-general
Hi Albe,

Thank you so much for the information. I will check this and get back to you if any help is required.

I have a doubt: why didn't parallelism work here? Could you please guide me?

Thank you so much again.

On Thu, 11 Feb, 2021, 1:23 PM Laurenz Albe, <laurenz.albe@cybertec.at> wrote:
On Thu, 2021-02-11 at 05:09 +0530, Brajendra Pratap wrote:
> I am unable to execute the query below in parallel. Please suggest how I can achieve parallelism here.
>
> select count(*) over () VIEWALLROWCOUNT,abc.*,ROW_NUMBER()  OVER (order by trn_transaction_date desc ) AS RowNumber from (
> select * from transactions where trn_store_date_id=20201202) abc;
>
> Query plan is as mentioned below :-
>
> explain analyze select count(*) over () VIEWALLROWCOUNT,abc.*,ROW_NUMBER()  OVER (order by trn_transaction_date desc ) AS RowNumber from (
> select * from transactions where trn_store_date_id=20201218) abc;
> LOG:  duration: 25820.176 ms  statement: explain analyze select count(*) over () VIEWALLROWCOUNT,abc.*,ROW_NUMBER()  OVER (order by trn_transaction_date desc ) AS RowNumber from (
> select * from transactions where trn_store_date_id=20201218) abc;
>                                                                                               QUERY PLAN
> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>  WindowAgg  (cost=4474843.51..4498066.81 rows=774110 width=21297) (actual time=21455.495..25241.738 rows=795190 loops=1)
>    ->  WindowAgg  (cost=4474843.51..4488390.44 rows=774110 width=21289) (actual time=10588.494..15311.865 rows=795190 loops=1)
>          ->  Sort  (cost=4474843.51..4476778.79 rows=774110 width=21281) (actual time=10588.422..11771.300 rows=795190 loops=1)
>                Sort Key: transactions.trn_transaction_date DESC
>                Sort Method: external merge  Disk: 1496856kB
>                ->  Result  (cost=0.00..270640.32 rows=774110 width=21281) (actual time=0.117..4504.159 rows=795190 loops=1)
>                      ->  Append  (cost=0.00..262899.22 rows=774110 width=21281) (actual time=0.094..1449.532 rows=795190 loops=1)
>                            ->  Seq Scan on transactions  (cost=0.00..0.00 rows=1 width=47554) (actual time=0.019..0.019 rows=0 loops=1)
>                                  Filter: (trn_store_date_id = 20201218)
>                            ->  Index Scan using idx_202012_trn_store_date_id on transactions_202012  (cost=0.56..259028.67 rows=774109 width=21281) (actual time=0.074..1357.764 rows=795190 loops=1)
>                                  Index Cond: (trn_store_date_id = 20201218)
>  Planning Time: 116.472 ms
>  Execution Time: 25676.098 ms
>
> Note: We tried different options (max_worker_processes, max_parallel_workers, max_parallel_workers_per_gather, max_parallel_maintenance_workers) to get it to execute in parallel, but no luck.

I don't think parallelization will help you here. The plan shows that the scan itself is fast (the Append returns all 795190 rows in about 1.4 seconds); almost all of the time goes into the Sort, which spills about 1.5GB to disk, and into the two WindowAgg nodes. Window functions over the whole result set run in a single process above any Gather node, so extra workers would not remove the bottleneck.
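If you want to confirm that the planner can parallelize the scan at all, you can make parallel plans look cheap for one session and re-run EXPLAIN. This is a diagnostic sketch, not a tuning recommendation:

    SET max_parallel_workers_per_gather = 4;
    SET parallel_setup_cost = 0;
    SET parallel_tuple_cost = 0;
    SET min_parallel_table_scan_size = 0;
    EXPLAIN ANALYZE
    SELECT count(*) OVER () AS viewallrowcount, abc.*,
           row_number() OVER (ORDER BY trn_transaction_date DESC) AS rownumber
    FROM (SELECT * FROM transactions WHERE trn_store_date_id = 20201218) abc;

Even if a Gather node then appears under the Append, the Sort and WindowAgg nodes above it still run in a single backend, so the total runtime should barely change.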

Your problem is probably the "abc.*" in the SELECT list.

The rows must be really wide (the plan reports width=21281, and the sort spills about 1.5GB
to disk), so it takes a long time to fetch and sort them.  Try selecting only the columns you need.
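For example, something like this, where trn_id and trn_amount stand in for whichever columns you actually display:

    SELECT count(*) OVER () AS viewallrowcount,
           abc.trn_id, abc.trn_amount, abc.trn_transaction_date,
           row_number() OVER (ORDER BY trn_transaction_date DESC) AS rownumber
    FROM (SELECT trn_id, trn_amount, trn_transaction_date
          FROM transactions
          WHERE trn_store_date_id = 20201218) abc;

With rows a few hundred bytes wide instead of 21kB, the sort may fit in work_mem instead of spilling to disk.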

Alternatively, add a LIMIT clause.  Do you really need all 800000 rows?
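If this feeds a paginated display, the usual pattern keeps the total count from the window function while fetching only one page; untested, with 100 as an arbitrary page size:

    SELECT count(*) OVER () AS viewallrowcount, abc.*,
           row_number() OVER (ORDER BY trn_transaction_date DESC) AS rownumber
    FROM (SELECT * FROM transactions WHERE trn_store_date_id = 20201218) abc
    ORDER BY trn_transaction_date DESC
    LIMIT 100;

The count(*) OVER () is evaluated over the whole result set before the LIMIT is applied, so viewallrowcount still reports the full row count.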

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
