Re: Table performance with millions of rows (partitioning) - Mailing list pgsql-performance

From Justin Pryzby
Subject Re: Table performance with millions of rows (partitioning)
Date
Msg-id 20171228012009.GI4172@telsasoft.com
In response to Table performance with millions of rows  (Robert Blayzor <rblayzor.bulk@inoc.net>)
Responses Re: Table performance with millions of rows (partitioning)
List pgsql-performance
On Wed, Dec 27, 2017 at 07:54:23PM -0500, Robert Blayzor wrote:
> Question on large tables…
> 
> When should one consider table partitioning vs. just stuffing 10 million rows into one table?

IMO, whenever the benefits of constraint exclusion, DROP instead of DELETE, or
seq scans limited to individual children justify the minor administrative
overhead of partitioning.  Note that partitioning may be implemented as direct
insertion into child tables, or may involve triggers or rules.
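
As a minimal sketch (table and column names are hypothetical), PostgreSQL 10's
declarative range partitioning routes inserts into the parent to the matching
child automatically, and direct insertion into a child table also works:

```sql
-- Parent table partitioned by call timestamp
CREATE TABLE cdr (
    call_id    bigint,
    call_time  timestamptz NOT NULL,
    duration   interval
) PARTITION BY RANGE (call_time);

-- One child per month
CREATE TABLE cdr_2017_12 PARTITION OF cdr
    FOR VALUES FROM ('2017-12-01') TO ('2018-01-01');

-- Insert via the parent (routed automatically) ...
INSERT INTO cdr VALUES (1, '2017-12-27 19:54:23-05', '00:01:30');
-- ... or directly into the child
INSERT INTO cdr_2017_12 VALUES (2, '2017-12-28 08:00:00-05', '00:00:45');
```

On pre-10 releases the same layout is built with table inheritance plus a
trigger or rule on the parent to redirect inserts.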

> I currently have CDR’s that are injected into a table at the rate of over
> 100,000 a day, which is large.
> 
> At some point I’ll want to prune these records out, so being able to just
> drop or truncate the table in one shot makes child table partitions
> attractive.

That's one of the major use cases for partitioning (DROP rather than DELETE,
thus avoiding the subsequent vacuum+analyze of the deleted rows).
https://www.postgresql.org/docs/10/static/ddl-partitioning.html#DDL-PARTITIONING-OVERVIEW
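
Pruning old data then becomes a near-instant metadata operation rather than a
row-by-row DELETE (assuming the hypothetical monthly cdr_2017_12 child from a
scheme like the one above):

```sql
-- Detach and discard an entire month in one shot
DROP TABLE cdr_2017_12;

-- Or keep the child table around but discard its rows:
TRUNCATE cdr_2017_12;
```

Either way, no dead tuples are left behind, so there is nothing for VACUUM to
reclaim afterwards.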

Justin

