Re: hash aggregation - Mailing list pgsql-performance

From Sergey Konoplev
Subject Re: hash aggregation
Date
Msg-id CAL_0b1tHHHGz=umGWm=FXkSkt+yCJh-Cb10BJLoD-i3mwvNeaQ@mail.gmail.com
In response to hash aggregation  (Korisk <Korisk@yandex.ru>)
Responses Re: hash aggregation  (Korisk <Korisk@yandex.ru>)
List pgsql-performance
On Wed, Oct 10, 2012 at 9:09 AM, Korisk <Korisk@yandex.ru> wrote:
> Hello! Is it possible to speed up the plan?
>  Sort  (cost=573977.88..573978.38 rows=200 width=32) (actual time=10351.280..10351.551 rows=4000 loops=1)
>    Output: name, (count(name))
>    Sort Key: hashcheck.name
>    Sort Method: quicksort  Memory: 315kB
>    ->  HashAggregate  (cost=573968.24..573970.24 rows=200 width=32) (actual time=10340.507..10341.288 rows=4000 loops=1)
>          Output: name, count(name)
>          ->  Seq Scan on public.hashcheck  (cost=0.00..447669.16 rows=25259816 width=32) (actual time=0.019..2798.058 rows=25259817 loops=1)
>                Output: id, name, value
>  Total runtime: 10351.989 ms

AFAIU there is no query-optimization solution for this: counting every row requires scanning the whole table.

It may be worth creating a table hashcheck_stat (name, cnt) and
incrementing/decrementing the cnt values with triggers, if you need to
get the counts fast.
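A minimal sketch of that approach (the table hashcheck and column name come from the original post; hashcheck_stat, cnt, and the function/trigger names are assumptions, and the ON CONFLICT syntax requires PostgreSQL 9.5 or later):

```sql
-- Summary table holding one counter per name.
CREATE TABLE hashcheck_stat (
    name text PRIMARY KEY,
    cnt  bigint NOT NULL DEFAULT 0
);

-- Trigger function: bump the counter on INSERT/UPDATE of the new name,
-- and decrement it for the old name on UPDATE/DELETE.
CREATE OR REPLACE FUNCTION hashcheck_stat_maintain() RETURNS trigger AS $$
BEGIN
    IF TG_OP IN ('INSERT', 'UPDATE') THEN
        INSERT INTO hashcheck_stat (name, cnt) VALUES (NEW.name, 1)
        ON CONFLICT (name) DO UPDATE SET cnt = hashcheck_stat.cnt + 1;
    END IF;
    IF TG_OP IN ('UPDATE', 'DELETE') THEN
        UPDATE hashcheck_stat SET cnt = cnt - 1 WHERE name = OLD.name;
    END IF;
    RETURN NULL;  -- AFTER trigger: return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER hashcheck_stat_trg
    AFTER INSERT OR UPDATE OF name OR DELETE ON hashcheck
    FOR EACH ROW EXECUTE PROCEDURE hashcheck_stat_maintain();
```

The original query then becomes an index scan over the small summary table, e.g. SELECT name, cnt FROM hashcheck_stat ORDER BY name; the trade-off is extra write overhead on hashcheck and contention on hot counter rows.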

--
Sergey Konoplev

a database and software architect
http://www.linkedin.com/in/grayhemp

Jabber: gray.ru@gmail.com Skype: gray-hemp Phone: +14158679984

