> -----Original Message-----
> From: pgsql-novice-owner@postgresql.org [mailto:pgsql-novice-
> owner@postgresql.org] On Behalf Of Majid Azimi
> Sent: Friday, December 17, 2010 11:59 AM
> To: PostgreSQL - Novice
> Subject: [NOVICE] Very Large Table Partitioning
>
> Hi guys.
>
> Here is our problem:
>
> If we save every user's records in a single table, we end up with a
> very large table, maybe 10 TB+, so we are considering table
> partitioning. But that raises another problem:
>
> If we partition the table per user, we end up with a huge number of
> tables (maybe more than 100,000) holding only about 10,000 records
> each. Is this a good idea? Is there any limit on the number of tables?
As Tom (the authority on this) said, it would be bad to have hundreds or more
partitions. You might base the partitions on a range of users instead. We have
had mixed success doing this: sometimes it works well; other times we found it
didn't work so well and had to rework it all a different way.
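For example, here is a rough sketch of range partitioning by user id with
inheritance (table and column names are made up for illustration; adjust to
your schema):

    CREATE TABLE users_data (
        user_id  integer NOT NULL,
        payload  text
    );

    -- one child per range of user ids, e.g. one million users each
    CREATE TABLE users_data_0 (
        CHECK ( user_id >= 0 AND user_id < 1000000 )
    ) INHERITS (users_data);

    CREATE TABLE users_data_1 (
        CHECK ( user_id >= 1000000 AND user_id < 2000000 )
    ) INHERITS (users_data);

    CREATE INDEX users_data_0_user_id_idx ON users_data_0 (user_id);
    CREATE INDEX users_data_1_user_id_idx ON users_data_1 (user_id);

That keeps you at tens or hundreds of partitions instead of 100,000+. You
would also need an insert trigger (or rules) on the parent to route new rows
into the right child.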
>
> The table structure does not lend itself to partitioning any better
> way. Would it be a good idea to add a column like "date inserted" and
> partition per year, for example?
Do you ever roll off old data? Table partitioning right now is very useful
when you need to remove large amounts of data, since dropping an entire child
partition is much cheaper than doing a DELETE FROM XXXX WHERE YYYY, which
carries the maintenance costs (vacuum, index bloat) of a large delete. But if
you don't already have a date field, I suspect this might not be your use
case.
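Roughly, assuming yearly children named like records_2009 inheriting from a
parent partitioned on a date_inserted column (names are hypothetical):

    -- with partitioning: removing a year of data is nearly instant
    DROP TABLE records_2009;
    -- or detach it from the parent but keep the data around:
    ALTER TABLE records_2009 NO INHERIT records;

    -- without partitioning: a huge delete, plus the vacuum and
    -- index bloat costs that come with it
    DELETE FROM records WHERE date_inserted < '2010-01-01';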
Do you do updates to existing data?
Do you have any low-cardinality columns now? Sometimes these point to what to
partition on. I know you said that the table structure doesn't lend itself to
partitioning, but maybe there is a pattern you haven't considered that could.
It would typically be something that the majority of queries have in their
filter clauses, since that is what lets the planner skip partitions (see the
sketch below).
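For instance, suppose you partitioned on a low-cardinality region column that
most queries filter on (again, hypothetical names). With a CHECK constraint on
each child, constraint exclusion lets the planner skip the children that can't
match:

    -- 'partition' is the default setting in 8.4 and later
    SET constraint_exclusion = partition;

    -- only the child whose CHECK constraint allows region = 'EU'
    -- gets scanned; EXPLAIN will show the others excluded
    EXPLAIN SELECT * FROM users_data WHERE region = 'EU';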
-Mark