Dear all,
We have a DB containing transactional data.
There are about *50 to 100 million* rows in one *huge* table.
We are using PostgreSQL 9.1.6 on Linux with an *SSD card on PCIe* providing
us constant seek times.
A typical SELECT (see below) takes about 200 secs. As the database is the
backend for a web-based reporting facility, response times of 200 to 500 or
even more secs are not acceptable to the customer.
Is there any way to speed up select statements like this:
SELECT
T.c,
SUM(T.x),
SUM(T.y),
SUM(T.z),
AVG(T.a),
AVG(T.b)
FROM T
WHERE
T.creation_date=$SOME_DATE
GROUP BY
T.c;
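
For illustration, this is roughly what I have in mind (index and date values
are just made up for the example); would an index matching the WHERE clause
like this be expected to help, and is EXPLAIN ANALYZE the right way to check
whether the planner uses it?

```sql
-- Illustrative only: an index on the filter column, so the planner can
-- fetch one day's rows instead of scanning the whole table.
CREATE INDEX t_creation_date_idx ON T (creation_date);

-- Check whether the planner actually uses the index:
EXPLAIN ANALYZE
SELECT T.c, SUM(T.x), SUM(T.y), SUM(T.z), AVG(T.a), AVG(T.b)
FROM T
WHERE T.creation_date = '2012-10-01'
GROUP BY T.c;
```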
There is an index on T.c. But would it help to partition the table by T.c?
It should be mentioned that T.c is actually a foreign key to a table
containing a tiny number of rows (15 rows representing different companies).
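
If partitioning by T.c is worth trying, I assume on 9.1 it would be the
inheritance-based setup, roughly like the sketch below (child table names
and company ids are hypothetical):

```sql
-- Hypothetical 9.1-style partitioning by company, relying on
-- constraint exclusion to skip irrelevant child tables.
CREATE TABLE t_company_1 (CHECK (c = 1)) INHERITS (T);
CREATE TABLE t_company_2 (CHECK (c = 2)) INHERITS (T);
-- ... one child table per company (15 in total), plus a trigger
-- on T to route INSERTs to the matching child.
```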
My postgresql.conf is actually the default one, except that we increased
work_mem to 128MB.
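
Concretely, the only non-default line is:

```
# postgresql.conf -- only setting changed from the defaults
work_mem = 128MB
```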
Thanks in advance
Houman
--
View this message in context: http://postgresql.1045698.n5.nabble.com/SELECT-AND-AGG-huge-tables-tp5728306.html
Sent from the PostgreSQL - performance mailing list archive at Nabble.com.