Re: splitting data into multiple tables - Mailing list pgsql-performance

From nair rajiv
Subject Re: splitting data into multiple tables
Date
Msg-id d67ff5e61001251639k777ba5fey97e4af95f539619@mail.gmail.com
In response to Re: splitting data into multiple tables  (Craig James <craig_james@emolecules.com>)
Responses Re: splitting data into multiple tables  (Andres Freund <andres@anarazel.de>)
List pgsql-performance


On Tue, Jan 26, 2010 at 1:01 AM, Craig James <craig_james@emolecules.com> wrote:
> Kevin Grittner wrote:
>> nair rajiv <nair331@gmail.com> wrote:
>>> I found there is a table which will approximately have 5 crore
>>> entries after data harvesting.
>>> Is it advisable to keep so much data in one table ?
>>
>> That's 50,000,000 rows, right?
>
> You should remember that words like lac and crore are not English words, and most English speakers around the world don't know what they mean.  Thousand, million, billion and so forth are the English words that everyone knows.



Oh, I am sorry, I wasn't aware of that.
I repost my query below with the suggested changes.



Hello,

          I am working on a project that will extract structured content from Wikipedia
and put it into our database. Before loading the data I wrote a script to estimate the
number of rows every table would have once the data is in, and I found that one table
will have approximately 50,000,000 rows after data harvesting.
Is it advisable to keep so much data in one table?
          I have read about 'partitioning' a table. Another idea I have is to break the table into
different tables once the number of rows in a table reaches a certain limit, say 1,000,000.
For example, dividing a table 'datatable' into 'datatable_a', 'datatable_b', each having 1,000,000 rows.
I need advice on whether I should go for partitioning or the approach I have thought of.
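
          Just to make the idea concrete, here is a rough sketch of the inheritance-based
partitioning described in the PostgreSQL documentation, as I understand it. The 'id' key and
'payload' column are made-up placeholders and the ranges are only illustrative:

CREATE TABLE datatable (
    id       bigint NOT NULL,
    payload  text
);

-- Child tables hold non-overlapping ranges of id
CREATE TABLE datatable_a (
    CHECK ( id >= 0        AND id < 10000000 )
) INHERITS (datatable);

CREATE TABLE datatable_b (
    CHECK ( id >= 10000000 AND id < 20000000 )
) INHERITS (datatable);

CREATE INDEX datatable_a_id ON datatable_a (id);
CREATE INDEX datatable_b_id ON datatable_b (id);

-- Route inserts on the parent to the appropriate child table
CREATE OR REPLACE FUNCTION datatable_insert_trigger()
RETURNS trigger AS $$
BEGIN
    IF NEW.id < 10000000 THEN
        INSERT INTO datatable_a VALUES (NEW.*);
    ELSE
        INSERT INTO datatable_b VALUES (NEW.*);
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER datatable_insert
    BEFORE INSERT ON datatable
    FOR EACH ROW EXECUTE PROCEDURE datatable_insert_trigger();

-- With constraint exclusion enabled, the planner can skip child tables
-- whose CHECK constraint rules them out for a given query
SET constraint_exclusion = on;

My understanding is that with this setup, queries on 'datatable' that filter on id only touch
the relevant child tables, which is what partitioning would buy us over simply creating
unrelated tables like 'datatable_a' and 'datatable_b' by hand.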
          We have an HP server with 32GB RAM and 16 processors. The storage has 24TB of disk
space (1TB per disk), configured as RAID-5. It would be great if we could know which parameters in the
postgres configuration file we should change so that the database makes maximum use of the server we have,
for example parameters that would increase the speed of inserts and selects.
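
          To show what I mean, these are the kinds of postgresql.conf settings I have been reading
about. The values below are only rough guesses for a 32GB machine taken from general tuning
advice, not something we have tested:

shared_buffers = 8GB              # roughly 25% of RAM is a commonly suggested starting point
effective_cache_size = 24GB       # approximate size of the OS filesystem cache
work_mem = 64MB                   # per sort/hash operation, so kept modest
maintenance_work_mem = 1GB        # speeds up CREATE INDEX after bulk loads
checkpoint_segments = 32          # fewer, larger checkpoints during heavy inserts
wal_buffers = 16MB

For the initial load itself, I understand that COPY is much faster than individual INSERTs and
that building indexes after the data is loaded is faster than maintaining them during the load.
Please correct me if any of this is wrong for our kind of workload.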


Thank you in advance
Rajiv Nair 

