Thread: table partitioning performance

table partitioning performance

From: "Colin Taylor"
Hi there, we've partitioned a table (using 8.2) by day, due to the 50TB of data (500k row size, 100G rows) we expect to store in it in a year.
Our performance on inserts and selects against the master table is disappointing: 10x slower (with only 1 partition constraint) than what we get by going to the partitioned table directly. Browsing the list, I get the impression this is just a case of too many partitions? Would we be better off doing partitions of partitions?

Any other advice or pointers to more information on dealing with these sorts of scales would be appreciated.

thanks
Colin.

Re: table partitioning performance

From: "Luke Lonergan"
Colin,


On 1/6/07 8:37 PM, "Colin Taylor" <colin.taylor@gmail.com> wrote:

> Hi there, we've partitioned a table (using 8.2) by day, due to the 50TB of
> data (500k row size, 100G rows) we expect to store in it in a year.
> Our performance on inserts and selects against the master table is
> disappointing: 10x slower (with only 1 partition constraint) than what we
> get by going to the partitioned table directly. Browsing the list, I get
> the impression this is just a case of too many partitions? Would we be
> better off doing partitions of partitions?

Can you post an "explain analyze" of your query here so we can see what's
going on?

- Luke



Re: table partitioning performance

From: "Steven Flatt"
On 1/6/07, Colin Taylor <colin.taylor@gmail.com> wrote:
> Hi there, we've partitioned a table (using 8.2) by day, due to the 50TB of data (500k row size, 100G rows) we expect to store in it in a year.
> Our performance on inserts and selects against the master table is disappointing: 10x slower (with only 1 partition constraint) than what we get by going to the partitioned table directly.
 
Are you implementing table partitioning as described at: http://developer.postgresql.org/pgdocs/postgres/ddl-partitioning.html ?
 
If yes, and if I understand your partitioning "by day" correctly, then you have one base/master table with 366 partitions (inherited/child tables).  Does each of these partitions have check constraints, and does your master table use rules to redirect inserts to the appropriate partition?  I guess I don't understand your "only 1 partition constraint" comment.
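
For reference, the setup described in those docs looks roughly like this (an untested sketch; the table and column names here are made up):

CREATE TABLE measurements (
    logdate  date NOT NULL,
    payload  text
);

-- one child per day, constrained to exactly that day
CREATE TABLE measurements_20070108 (
    CHECK (logdate = DATE '2007-01-08')
) INHERITS (measurements);

-- a rule on the master redirects matching inserts to the child
CREATE RULE measurements_ins_20070108 AS
ON INSERT TO measurements
WHERE (logdate = DATE '2007-01-08')
DO INSTEAD
    INSERT INTO measurements_20070108 VALUES (NEW.logdate, NEW.payload);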
 
We use partitioned tables extensively and we have observed linear performance degradation on inserts as the number of rules on the master table grows (i.e. number of rules = number of partitions).  We had to come up with a solution that didn't have a rule per partition on the master table.  Just wondering if you are observing the same thing.
 
Selects shouldn't be affected in the same way, theoretically, if you have constraint_exclusion enabled.
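
One quick sanity check, reusing the hypothetical tables from the sketch above:

SET constraint_exclusion = on;
EXPLAIN SELECT * FROM measurements WHERE logdate = DATE '2007-01-08';
-- with exclusion on, the plan should scan only measurements_20070108;
-- with it off, every child table shows up in the plan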
 
Steve
 

Re: table partitioning performance

From: "Merlin Moncure"
On 1/7/07, Colin Taylor <colin.taylor@gmail.com> wrote:
> Hi there, we've partitioned a table (using 8.2) by day, due to the 50TB of
> data (500k row size, 100G rows) we expect to store in it in a year.
> Our performance on inserts and selects against the master table is
> disappointing: 10x slower (with only 1 partition constraint) than what we
> get by going to the partitioned table directly. Browsing the list, I get
> the impression this is just a case of too many partitions? Would we be
> better off doing partitions of partitions?
>
> Any other advice or pointers to more information on dealing with these
> sorts of scales would be appreciated.

as others have stated, something is not set up correctly.  table
partitioning with constraint exclusion should be considerably faster
in situations where the planner can optimize for it (select queries
are case dependent, but inserts are not).

also, I would like to speak for everybody else here and ask for as
much detail as possible about the hardware and software challenges you
are solving :-)  in particular, I am curious how you arrived at 500k
row size.

merlin

Re: table partitioning performance

From: "Simon Riggs"
On Mon, 2007-01-08 at 15:02 -0500, Steven Flatt wrote:
> On 1/6/07, Colin Taylor <colin.taylor@gmail.com> wrote:
>         Hi there, we've partitioned a table (using 8.2) by day, due to
>         the 50TB of data (500k row size, 100G rows) we expect to store
>         in it in a year.
>         Our performance on inserts and selects against the master
>         table is disappointing: 10x slower (with only 1 partition
>         constraint) than what we get by going to the partitioned table
>         directly.
>
> Are you implementing table partitioning as described at:
> http://developer.postgresql.org/pgdocs/postgres/ddl-partitioning.html ?
>
> If yes, and if I understand your partitioning "by day" correctly, then
> you have one base/master table with 366 partitions (inherited/child
> tables).  Does each of these partitions have check constraints, and does
> your master table use rules to redirect inserts to the appropriate
> partition?  I guess I don't understand your "only 1 partition
> constraint" comment.
>
> We use partitioned tables extensively and we have observed linear
> performance degradation on inserts as the number of rules on the
> master table grows (i.e. number of rules = number of partitions).  We
> had to come up with a solution that didn't have a rule per partition
> on the master table.  Just wondering if you are observing the same
> thing.

If you are doing date range partitioning it should be fairly simple to
load data into the latest table directly. That was the way I originally
intended for it to be used. The rules approach isn't something I'd
recommend as a bulk loading option, and it's a lot more complex anyway.
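
In the terms of the sketch upthread, that just means something like (file path made up):

-- bypasses the master table and its rules entirely
COPY measurements_20070108 FROM '/data/feed/2007-01-08.dat';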

> Selects shouldn't be affected in the same way, theoretically, if you
> have constraint_exclusion enabled.

Selects can incur parsing overhead if there are a large number of
partitions. That will be proportional to the number of partitions, at
present.

--
  Simon Riggs
  EnterpriseDB   http://www.enterprisedb.com



Re: table partitioning performance

From: "Jim C. Nasby"
On Mon, Jan 08, 2007 at 03:02:24PM -0500, Steven Flatt wrote:
> We use partitioned tables extensively and we have observed linear
> performance degradation on inserts as the number of rules on the master
> table grows (i.e. number of rules = number of partitions).  We had to come
> up with a solution that didn't have a rule per partition on the master
> table.  Just wondering if you are observing the same thing.

Except for the simplest partitioning cases, you'll be much better off
using a trigger on the parent table to direct inserts/updates/deletes to
the children. As a bonus, using a trigger makes it a lot more realistic
to deal with an update moving data between partitions.
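
An untested sketch of what I mean, reusing the hypothetical measurements table from upthread (note that the column list is hardcoded):

CREATE OR REPLACE FUNCTION measurements_ins_trig() RETURNS trigger AS $$
BEGIN
    -- route the row to the child table named after its date
    EXECUTE 'INSERT INTO measurements_'
         || to_char(NEW.logdate, 'YYYYMMDD')
         || ' VALUES (' || quote_literal(NEW.logdate) || ', '
         || quote_literal(NEW.payload) || ')';
    RETURN NULL;  -- suppress the insert into the parent itself
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurements_ins BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE measurements_ins_trig();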
--
Jim Nasby                                            jim@nasby.net
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)

Re: table partitioning performance

From: "Steven Flatt"
On 1/9/07, Simon Riggs <simon@2ndquadrant.com> wrote:
> If you are doing date range partitioning it should be fairly simple to
> load data into the latest table directly. That was the way I originally
> intended for it to be used. The rules approach isn't something I'd
> recommend as a bulk loading option, and it's a lot more complex anyway.
The problem we have with blindly loading all data into the latest table is that some data (< 5%, possibly even much less) is actually delivered "late" and belongs in earlier partitions.  So we still needed the ability to send data to an arbitrary partition.
 
Steve
 

Re: table partitioning performance

From: "Steven Flatt"
On 1/10/07, Jim C. Nasby <jim@nasby.net> wrote:
> Except for the simplest partitioning cases, you'll be much better off
> using a trigger on the parent table to direct inserts/updates/deletes to
> the children. As a bonus, using a trigger makes it a lot more realistic
> to deal with an update moving data between partitions.
 
In our application, data is never moved between partitions.
 
The problem I found with triggers is the non-robustness of the PLpgSQL record data type.  For example, in an "on insert" trigger, I can't determine the fields of the NEW record unless I hard-code the column names into the trigger.  This makes it hard to write a generic trigger that I can use for all our partitioned tables.  It would have been somewhat of a pain to write a separate trigger for each of our partitioned tables.
 
For that and other reasons, we moved some of the insert logic up to the application level in our product.
 
Steve
 

Re: table partitioning performance

From: "Jim C. Nasby"
On Wed, Jan 10, 2007 at 04:39:06PM -0500, Steven Flatt wrote:
> On 1/10/07, Jim C. Nasby <jim@nasby.net> wrote:
> >
> >Except for the simplest partitioning cases, you'll be much better off
> >using a trigger on the parent table to direct inserts/updates/deletes to
> >the children. As a bonus, using a trigger makes it a lot more realistic
> >to deal with an update moving data between partitions.
>
>
> In our application, data is never moved between partitions.
>
> The problem I found with triggers is the non-robustness of the PLpgSQL
> record data type.  For example, in an "on insert" trigger, I can't determine
> the fields of the NEW record unless I hard code the column names into the
> trigger.  This makes it hard to write a generic trigger that I can use for
> all our partitioned tables.  It would have been somewhat of a pain to write
> a separate trigger for each of our partitioned tables.
>
> For that and other reasons, we moved some of the insert logic up to the
> application level in our product.

Yeah, I think the key there would be to produce a function that wrote
the function for you.
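
Something along these lines, say -- an untested sketch that assumes the partition key is a date column named "logdate" and that children are named <parent>_YYYYMMDD, as in the examples upthread:

CREATE OR REPLACE FUNCTION make_ins_trigger(tbl text) RETURNS void AS $$
DECLARE
    vals text := '';
    col  record;
BEGIN
    -- build the source text "quote_literal(NEW.col1) || ', ' || ..."
    -- from the catalogs, so no column names are hand-written
    FOR col IN
        SELECT a.attname
        FROM   pg_class c
        JOIN   pg_attribute a ON a.attrelid = c.oid
        WHERE  c.relname = tbl AND a.attnum > 0 AND NOT a.attisdropped
        ORDER  BY a.attnum
    LOOP
        IF vals <> '' THEN
            vals := vals || $j$ || ', ' || $j$;
        END IF;
        vals := vals || 'quote_literal(NEW.' || quote_ident(col.attname) || ')';
    END LOOP;

    -- write the routing trigger function, then attach it to the parent
    EXECUTE 'CREATE OR REPLACE FUNCTION ' || tbl || '_ins_trig() RETURNS trigger AS '
         || quote_literal($b$BEGIN
                EXECUTE 'INSERT INTO $b$ || tbl || $b$_'
                     || to_char(NEW.logdate, 'YYYYMMDD')
                     || ' VALUES (' || $b$ || vals || $b$ || ')';
                RETURN NULL;
            END;$b$)
         || ' LANGUAGE plpgsql';

    EXECUTE 'CREATE TRIGGER ' || tbl || '_ins BEFORE INSERT ON ' || tbl
         || ' FOR EACH ROW EXECUTE PROCEDURE ' || tbl || '_ins_trig()';
END;
$$ LANGUAGE plpgsql;

SELECT make_ins_trigger('measurements');

That generates essentially the hand-written trigger from upthread, but reading the column list out of pg_attribute sidesteps the NEW-record introspection problem, at the cost of re-running the generator whenever a table definition changes.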
--
Jim Nasby                                            jim@nasby.net
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)

Re: table partitioning performance

From: "Simon Riggs"
On Wed, 2007-01-10 at 16:00 -0500, Steven Flatt wrote:
> On 1/9/07, Simon Riggs <simon@2ndquadrant.com> wrote:
>         If you are doing date range partitioning it should be fairly
>         simple to
>         load data into the latest table directly. That was the way I
>         originally
>         intended for it to be used. The rules approach isn't something
>         I'd
>         recommend as a bulk loading option, and it's a lot more complex
>         anyway.
> The problem we have with blindly loading all data into the latest
> table is that some data (< 5%, possibly even much less) is actually
> delivered "late" and belongs in earlier partitions.  So we still
> needed the ability to send data to an arbitrary partition.

Yes, I understand the problem.

COPY is always going to be faster than INSERTs anyhow, and COPY can't
target views, nor does it utilise rules. The best current way to handle
this is to set up a client-side program that pre-qualifies the data and
feeds it to multiple simultaneous COPY commands.

--
Next section aimed at pgsql-hackers, relates directly to above:


My longer term solution looks like this:

1. Load all data into a newly created table (optimised by a newly
submitted patch for 8.3), then add that table as a new partition.

2. Use a newly created, permanent "errortable" into which rows that
don't match constraints or have other formatting problems would be put.
Following the COPY you would then run an INSERT ... SELECT to load the
remaining rows from the errortable into their appropriate tables. The
INSERT statement could target the parent table, so that the rules to
distribute the rows would be applied appropriately. When all of that
has happened, drop the errortable. This would allow the database to
apply its constraints accurately without aborting the load when a
constraint error occurs.
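
In outline, the flow would look something like this (the errortable syntax is hypothetical, since none of it exists yet; table names reuse the examples upthread):

-- 1. bulk load; rows that fail constraints would land in load_errors
--    (hypothetical COPY option, not in any released version)
COPY measurements_20070108 FROM '/data/feed/2007-01-08.dat'
    WITH ERRORS TO load_errors;

-- 2. re-route the stragglers through the parent so the rules fire
INSERT INTO measurements SELECT * FROM load_errors;

-- 3. done with the stragglers
DROP TABLE load_errors;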

In the use case you outline this would provide a fast path for 95% of
the data load, plus a straightforward mechanism for the remaining 5%.

We discussed this on hackers earlier, though we had difficulty with
handling unique constraint errors, so the idea was shelved. The
errortable part of the concept was sound, however.
http://archives.postgresql.org/pgsql-hackers/2005-11/msg01100.php
James William Pye had a similar proposal
http://archives.postgresql.org/pgsql-hackers/2006-02/msg00120.php

The current TODO says
"Allow COPY to report error lines and continue
This requires the use of a savepoint before each COPY line is processed,
with ROLLBACK on COPY failure."

The TODO actually has two parts to it, each of which is separately
implementable:
1. load errors to a table (all errors apart from uniqueness violations)
2. do something sensible with unique violation ERRORs

IMHO part (1) can be implemented without Savepoints, which testing has
shown (see James' results) would not be an acceptable solution for bulk
data loading. So (1) can be implemented fairly easily, whereas (2)
remains an issue that we have no acceptable solution for, as yet.

Can we agree to splitting the TODO into two parts? That way we stand a
chance of getting at least some functionality in this important area.

--
  Simon Riggs
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] table partitioning performance

From: "Jim C. Nasby"
On Thu, Jan 11, 2007 at 12:15:50PM +0000, Simon Riggs wrote:
> On Wed, 2007-01-10 at 16:00 -0500, Steven Flatt wrote:
> > On 1/9/07, Simon Riggs <simon@2ndquadrant.com> wrote:
> >         If you are doing date range partitioning it should be fairly
> >         simple to
> >         load data into the latest table directly. That was the way I
> >         originally
> >         intended for it to be used. The rules approach isn't something
> >         I'd
> >         recommend as a bulk loading option, and it's a lot more complex
> >         anyway.
> > The problem we have with blindly loading all data into the latest
> > table is that some data (< 5%, possibly even much less) is actually
> > delivered "late" and belongs in earlier partitions.  So we still
> > needed the ability to send data to an arbitrary partition.
>
> Yes, I understand the problem.
>
> COPY is always going to be faster than INSERTs anyhow, and COPY can't
> target views, nor does it utilise rules. The best current way to handle
> this is to set up a client-side program that pre-qualifies the data and
> feeds it to multiple simultaneous COPY commands.
>
> --
> Next section aimed at pgsql-hackers, relates directly to above:

I'm wondering if you see any issues with COPYing into a partitioned
table that's using triggers instead of rules to direct data to the
appropriate tables?

BTW, I think improved COPY error handling would be great, and might
perform better than triggers, once we have it...
--
Jim Nasby                                            jim@nasby.net
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)

Re: [HACKERS] table partitioning performance

From: "Simon Riggs"
On Thu, 2007-01-11 at 15:01 -0600, Jim C. Nasby wrote:

> I'm wondering if you see any issues with COPYing into a partitioned
> table that's using triggers instead of rules to direct data to the
> appropriate tables?

The data demographics usually guide you towards what to do.

You could COPY into the table that would receive most rows and use
before triggers to INSERT into the other tables, rather than into the
main one. I'd be surprised if that was very fast for an even
distribution, though. It could well be faster if you have a small
number of rows going into a large number of targets, because that would
be quicker than re-scanning a temp table repeatedly just to extract a
few rows each time.
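
In the terms of the sketches upthread, that would look something like this (untested; it assumes the bulk of the feed belongs in the 2007-01-08 partition):

CREATE OR REPLACE FUNCTION redirect_late_rows() RETURNS trigger AS $$
BEGIN
    IF NEW.logdate = DATE '2007-01-08' THEN
        RETURN NEW;   -- fast path: the row stays in this partition
    END IF;
    -- straggler: send it to the partition it actually belongs in
    EXECUTE 'INSERT INTO measurements_'
         || to_char(NEW.logdate, 'YYYYMMDD')
         || ' VALUES (' || quote_literal(NEW.logdate) || ', '
         || quote_literal(NEW.payload) || ')';
    RETURN NULL;      -- the row was re-routed, so drop it here
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurements_late BEFORE INSERT ON measurements_20070108
    FOR EACH ROW EXECUTE PROCEDURE redirect_late_rows();

COPY fires per-row triggers (though not rules), so only the minority of late rows pays the dynamic-SQL cost.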

--
  Simon Riggs
  EnterpriseDB   http://www.enterprisedb.com