Table performance with millions of rows - Mailing list pgsql-performance

From Robert Blayzor
Subject Table performance with millions of rows
Date
Msg-id 7DF18AB9-C4A4-4C28-957D-12C00FCB5F71@inoc.net
Responses Re: Table performance with millions of rows (partitioning)  (Justin Pryzby <pryzby@telsasoft.com>)
List pgsql-performance
Question on large tables…


When should one consider table partitioning vs. just stuffing 10 million rows into one table?

I currently have CDRs (call detail records) being inserted into a table at a rate of over 100,000 a day, so the table grows large quickly.


At some point I’ll want to prune these records out, so being able to drop or truncate a table in one shot makes child-table partitions attractive (sketched below).
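
Something like this is what I have in mind, using the declarative partitioning added in PostgreSQL 10; the table and column names below are made-up placeholders, not my actual schema:

  -- Parent table, range-partitioned by call time (hypothetical schema).
  CREATE TABLE cdr (
      call_id    bigint      NOT NULL,
      call_time  timestamptz NOT NULL,
      duration   integer
  ) PARTITION BY RANGE (call_time);

  -- One child partition per month.
  CREATE TABLE cdr_2018_01 PARTITION OF cdr
      FOR VALUES FROM ('2018-01-01') TO ('2018-02-01');
  CREATE TABLE cdr_2018_02 PARTITION OF cdr
      FOR VALUES FROM ('2018-02-01') TO ('2018-03-01');

  -- Inserts route to the matching child automatically.
  INSERT INTO cdr (call_id, call_time, duration) VALUES (1, now(), 42);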


From a pure data warehousing standpoint, what are the dos and don’ts of keeping such large tables?

Other notes…
- This table is never updated, only appended to (CDRs)
- Right now a daily SQL job deletes records older than X days, which is costly, purging ~100k rows at a time (compared below)
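
For comparison, the purge today is roughly the first statement below; with partitions the same retention policy becomes a near-instant metadata operation (names as in the sketch above, and the 90-day window is just an example value for X):

  -- Current approach: row-by-row delete; generates WAL and dead tuples
  -- that autovacuum has to clean up afterwards.
  DELETE FROM cdr WHERE call_time < now() - interval '90 days';

  -- Partitioned approach: drop the expired month in one shot,
  -- reclaiming disk space immediately.
  DROP TABLE cdr_2018_01;
  -- or detach it first if the data should be kept around:
  ALTER TABLE cdr DETACH PARTITION cdr_2018_01;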



--
inoc.net!rblayzor
XMPP: rblayzor.AT.inoc.net
PGP:  https://inoc.net/~rblayzor/

