
From Tom Lane
Subject Re: long-running query - needs tuning
Date
Msg-id 10905.1187895379@sss.pgh.pa.us
In response to long-running query - needs tuning  (Kevin Kempter <kevin@kevinkempterllc.com>)
List pgsql-performance
Kevin Kempter <kevin@kevinkempterllc.com> writes:
>  Merge Join  (cost=17118858.51..17727442.30 rows=155 width=90)
>    Merge Cond: ("outer".customer_id = "inner".customer_id)
>    ->  GroupAggregate  (cost=17118772.93..17727347.34 rows=155 width=8)
>          ->  Sort  (cost=17118772.93..17270915.95 rows=60857208 width=8)
>                Sort Key: con.customer_id
>                ->  Seq Scan on dat_user_contacts con  (cost=0.00..7332483.08 rows=60857208 width=8)
>    ->  Sort  (cost=85.57..88.14 rows=1026 width=74)
>          Sort Key: dat_customer_mailbox_counts.customer_id
>          ->  Seq Scan on dat_customer_mailbox_counts  (cost=0.00..34.26 rows=1026 width=74)

The planner, at least, thinks that all the time will go into the sort
step.  Sorting 60M rows is gonna take awhile :-(.  What PG version is
this?  (8.2 has noticeably faster sort code than prior releases...)
What have you got work_mem set to?
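
For reference, both are quick to check, and work_mem can be raised for just
the current session before re-running the query.  (The 256MB below is only
an example figure, not a recommendation for this particular box.)

    SELECT version();
    SHOW work_mem;
    SET work_mem = '256MB';        -- session-local; pick what the machine can afford
    EXPLAIN ANALYZE SELECT ... ;   -- the original query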

Bad as the sort is, I suspect that the real problem is the
count(distinct) operator, which is going to require *another*
sort-and-uniq step for each customer_id group --- and judging by
the rowcount estimates, at least some of those groups must be
pretty large.  (AFAIR this time is not counted in the planner
estimates.)  Again, work_mem would have an effect on how fast
that goes.
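
One way to sidestep the per-group distinct sort is to deduplicate in a
subquery first, so the outer aggregate becomes a plain count().  A sketch
along those lines (the contact_id column is a guess, since only customer_id
shows up in the plan):

    SELECT customer_id, count(*) AS n_contacts
      FROM (SELECT DISTINCT customer_id, contact_id
              FROM dat_user_contacts) AS dedup
     GROUP BY customer_id;

Whether that actually wins depends on how much the DISTINCT shrinks the
60M-row input before the grouping step.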

            regards, tom lane
