Re: Read performance on Large Table - Mailing list pgsql-admin

From Ravi Krishna
Subject Re: Read performance on Large Table
Date
Msg-id CACER=P3SRMpoVdTZJ=TNc0OWeg_kJCnQH0fCnvT8bQ6TpxWCYw@mail.gmail.com
In response to Read performance on Large Table  (Kido Kouassi <jjkido@gmail.com>)
List pgsql-admin
Looks like the table currently has no index, so every query has to do a full table scan. Please post your typical queries and we will suggest which columns to index.
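For example, if the queries mostly filter on the timestamp column, a minimal sketch would look like this (the column and table names are taken from the table definition quoted below; adjust them to the real query patterns):

```sql
-- Assumes attr3 (the timestamp) is the main filter column.
-- CONCURRENTLY avoids a lock that would block the ongoing inserts,
-- at the cost of a slower index build.
CREATE INDEX CONCURRENTLY idx_table_attr3 ON "table" (attr3);
```
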

On Thu, May 21, 2015 at 10:54 AM, Kido Kouassi <jjkido@gmail.com> wrote:
Hello Admin,

I am looking to build a database to log activities on my system

Currently I have the following  table:

CREATE TABLE table
(
  attr1 bigint,
  attr2 text,
  attr3 timestamp without time zone,
  attr4 integer,
  attr5 character varying(50),
  attr6 character varying(50),
  attr7 character varying(50),
  attr8 character varying(50),
  attr9 character varying(50),
  attr10 character varying(50)
)

Over 3 months the table has grown to 600+ million rows.

We mainly do inserts; we almost never do updates or deletes.

However, the table is not very useful as it stands, because any select we run takes a very long time (hours) to complete.


Question 1: Is slowness to be expected with such a big table?
Question 2: Is there anything I am not doing that I should do to get better performance?

I am thinking about two solutions, but I wanted to ask more experienced people before implementing them:

1: Break the table into multiple smaller tables, one for each week.
2: Create an index on the timestamp column, since it is the one we use the most in queries.
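A minimal sketch of option 1, weekly partitioning, as it is typically done with table inheritance (all names here are illustrative, and the week boundaries are an assumption):

```sql
-- Parent table holds no rows; each child carries one week,
-- enforced by a CHECK constraint on the timestamp column.
CREATE TABLE activity_2015w21 (
    CHECK (attr3 >= '2015-05-18' AND attr3 < '2015-05-25')
) INHERITS ("table");

-- Index each child on the timestamp so range scans stay small.
CREATE INDEX ON activity_2015w21 (attr3);
```

With constraint_exclusion enabled, queries that filter on attr3 are planned against only the matching weekly children rather than the whole data set.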

Thank you in advance for your help,

--Kido K.


Note:
 explain analyse select count(*) from table where attr5='value' and attr3 > '05/19/2015 00:00:00' and attr3 < '05/21/2015 00:00:00';

Has been running for 30 minutes and counting....
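For that particular query, a sketch of a single index matching both predicates would put the equality column first and the range column second (index name is illustrative):

```sql
-- attr5 (equality) first, attr3 (range) second, so the index
-- can satisfy both WHERE clauses in one scan.
CREATE INDEX CONCURRENTLY idx_table_attr5_attr3
    ON "table" (attr5, attr3);
```
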

