Re: SELECT AND AGG huge tables - Mailing list pgsql-performance

From Matheus de Oliveira
Subject Re: SELECT AND AGG huge tables
Msg-id CAJghg4KfcHHELARj4S4Lp4cOVcBddRXdLCRYJrXRtMBPYb+Zvw@mail.gmail.com
In response to SELECT AND AGG huge tables  (houmanb <houman@gmx.at>)
List pgsql-performance
On Mon, Oct 15, 2012 at 5:59 PM, houmanb <houman@gmx.at> wrote:
Dear all,
We have a DB containing transactional data.
There are about 50 to 100 million rows in one huge table.
We are using PostgreSQL 9.1.6 on Linux with an SSD card on PCIe, giving us
constant seek times.

A typical SELECT (see below) takes about 200 seconds. Since the database is the
backend for a web-based reporting facility, response times of 200 to 500 seconds
or more are not acceptable to the customer.

Is there any way to speed up select statements like this:

SELECT
   SUM(T.x),
   SUM(T.y),
   SUM(T.z),
   AVG(T.a),
   AVG(T.b)
FROM T
WHERE
   T.creation_date=$SOME_DATE
GROUP BY
   T.c;

There is an index on T.c. But would it help to partition the table by T.c?
It should be mentioned that T.c is actually a foreign key to a table containing
a tiny number of rows (15 rows representing different companies).

How selective is T.creation_date? It looks like an index on this column would serve this query better than the one on T.c (the planner could use both, of course). The same goes for partitioning: partition by date rather than by T.c, something like per-month or per-year partitions. A sketch of both ideas follows.
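As a minimal sketch only, assuming the table and column names from the query above (T, creation_date, c); adjust names, types and date ranges to the real schema:

CREATE INDEX t_creation_date_idx ON T (creation_date);

-- Or a composite index matching the WHERE + GROUP BY, so the planner can
-- read one day's rows already ordered by company:
CREATE INDEX t_creation_date_c_idx ON T (creation_date, c);

-- PostgreSQL 9.1 has no declarative partitioning, so partitioning by date
-- means inheritance plus CHECK constraints, e.g. one child table per month
-- (inserts also need a trigger or application logic to route rows):
CREATE TABLE t_2012_10 (
    CHECK (creation_date >= DATE '2012-10-01'
       AND creation_date <  DATE '2012-11-01')
) INHERITS (T);

-- With constraint_exclusion = partition (the default), a query filtered on
-- creation_date scans only the child tables whose CHECK constraint matches.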
 
my postgresql.conf is actually the default one, except that we increased
work_mem to 128MB


How much memory do you have? Could you increase shared_buffers?

Also, with an SSD you could decrease random_page_cost a bit, for example:
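Purely as an illustration (not tuned recommendations), assuming a dedicated server with around 16GB of RAM; the right values depend on your hardware and workload:

# postgresql.conf - illustrative starting points
shared_buffers = 4GB           # often around 25% of RAM on a dedicated server
effective_cache_size = 12GB    # rough estimate of memory available for caching
work_mem = 128MB               # you have already raised this
random_page_cost = 2.0         # below the default of 4.0 to reflect SSD random reads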

See [1].


Regards.
-- 
Matheus de Oliveira
PostgreSQL Database Analyst
Dextra Sistemas - MPS.Br level F!
www.dextra.com.br/postgres
