Re: Query takes around 15 to 20 min over 20Lakh rows - Mailing list pgsql-general

From: Tom Lane
Subject: Re: Query takes around 15 to 20 min over 20Lakh rows
Date:
Msg-id: 2441707.1630624966@sss.pgh.pa.us
In response to: Re: Query takes around 15 to 20 min over 20Lakh rows  ("David G. Johnston" <david.g.johnston@gmail.com>)
Responses: Re: Query takes around 15 to 20 min over 20Lakh rows  (Shubham Mittal <mittalshubham30@gmail.com>)
List: pgsql-general
"David G. Johnston" <david.g.johnston@gmail.com> writes:
> On Thu, Sep 2, 2021 at 3:16 PM Shubham Mittal <mittalshubham30@gmail.com>
> wrote:
>> Please help in optimizing this query. I need to actually generate reports
>> daily using this query. It takes almost 15 to 20 min to execute this query
>> due to joins.

> Use jsonb_populate_recordset (or one of its siblings) to get rid of as many
> of these key-based value extraction operations as possible and build a
> table from the contents of the jsonb.
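
A sketch of what that could look like (table, type, and key names here are
invented for illustration; adjust to the actual jsonb layout).  Instead of
repeating data->>'key' extractions for each field, declare a row type once
and let jsonb_populate_recordset expand the array of objects into rows:

    -- Hypothetical schema: events.payload is a jsonb array of objects
    -- like [{"id": 1, "status": "done", "created_at": "2021-09-01"}, ...]
    CREATE TYPE event_row AS (id int, status text, created_at timestamptz);

    SELECT e.some_column, r.*
    FROM events e,
         jsonb_populate_recordset(NULL::event_row, e.payload) AS r;

Each field is then extracted exactly once per object, rather than once per
expression that references it.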

While those duplicative extractions sure look inefficient, it's not
clear from the (lack of) given facts whether that's the main cost,
or whether the complicated FROM clause is producing a bad plan.
I'd suggest first looking at EXPLAIN ANALYZE output to verify which
plan step(s) are slow.  If it's the final output step that's expensive,
then yes the next step is to optimize the extractions.
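
Concretely, that means running something like

    EXPLAIN (ANALYZE, BUFFERS)
    <your report query here>;

and comparing the "actual time" figures on each plan node to see where the
15-20 minutes are actually going.  (BUFFERS is optional but usually worth
including.)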

Otherwise, see

https://wiki.postgresql.org/wiki/Slow_Query_Questions

            regards, tom lane


