Re: Hi Community - Mailing list pgsql-admin

From: Naresh Soni
Subject: Re: Hi Community
Date:
Msg-id: CADg8u6=smhK4+DREEMKUSM+hDnbr2FAGE4t-1vomB383ERsYtw@mail.gmail.com
In response to: Re: Hi Community (Kevin Grittner <kgrittn@ymail.com>)
Responses: Re: Hi Community (Scott Ribe <scott_ribe@elevated-dev.com>)
           Re: Hi Community (Kevin Grittner <kgrittn@ymail.com>)
List: pgsql-admin

Hi Kevin,

Thanks for your response.

So you mean postgres can handle such a huge number of records by default, without any fine-tuning required, except that we will need to use indexing for searching?
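
As a concrete sketch of the indexing part, assuming a hypothetical events
table whose names are all invented for the example:

    -- Hypothetical table receiving roughly 1 million rows per day.
    CREATE TABLE events (
        event_id    bigserial PRIMARY KEY,
        case_id     integer NOT NULL,
        event_time  timestamptz NOT NULL,
        description text
    );

    -- Index the columns that searches actually filter on.
    CREATE INDEX events_case_id_idx ON events (case_id);

    -- EXPLAIN shows whether a given search uses the index.
    EXPLAIN SELECT * FROM events WHERE case_id = 12345;

For a selective predicate like this, the planner should choose an index
scan; the thing to avoid on a table of this size is a sequential scan over
hundreds of millions of rows.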

On 02-Feb-2015 8:03 PM, "Kevin Grittner" <kgrittn@ymail.com> wrote:
> Naresh Soni <jmnaresh@gmail.com> wrote:
>
>> This is my first question on the list, I wanted to ask if
>> postgres can handle multiple millions of records? For example,
>> there will be 1 million records per table per day, so 365
>> million per year.
>
> Yes, I have had hundreds of millions of rows in a table without
> performance problems. If you want to see such a table in action,
> go to the following web site, bring up a court case, and click the
> "Court Record Events" button. Last I knew the table containing
> court record events had about 450 million rows, with no
> partitioning. The total database was 3.5 TB.
>
> http://wcca.wicourts.gov/
>
>> If yes, then please elaborate.
>
> You will want indexes on columns used in the searches. Depending
> on details you have not provided, it might be beneficial to
> partition the table. Do not consider partitioning to be some
> special magic which always makes things faster, though -- it can
> easily make performance much worse if it is not a good fit.
>
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
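
As an illustration of the partitioning caveat above, here is a minimal
sketch using the table-inheritance mechanism PostgreSQL provided at the
time. It assumes the hypothetical events table sketched earlier, and it is
not a claim that partitioning fits this workload:

    -- One child table per month, with a CHECK constraint so
    -- constraint exclusion can skip irrelevant partitions at
    -- query time. Assumes the hypothetical events parent table
    -- defined above.
    CREATE TABLE events_2015_02 (
        CHECK (event_time >= '2015-02-01' AND event_time < '2015-03-01')
    ) INHERITS (events);

    -- Children do not inherit the parent's indexes, so create
    -- them on each child.
    CREATE INDEX events_2015_02_event_time_idx
        ON events_2015_02 (event_time);

    -- A BEFORE INSERT trigger on events would be needed to route
    -- new rows into the correct child; omitted here for brevity.

Whether this helps depends on whether queries and maintenance (for
example, dropping old months) line up with the partition key; as noted
above, a poor fit can make performance worse.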
