On Thu, May 21, 2015 at 9:18 AM, Scott Ribe <scott_ribe@elevated-dev.com> wrote:
> On May 21, 2015, at 9:05 AM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
>>
>> I've done a lot of partitioning of big data sets in postgresql and if
>> there's some common field, like data, that makes sense to partition
>> on, it can be a huge win.
>
> Indeed. I recently did it on exactly this kind of thing, a log of activity. And the common queries weren’t slow at all.
>
> But if I wanted to upgrade via dump/restore with minimal downtime, rather than set up Slony or try my luck with pg_upgrade, I could dump the historical partitions, drop those tables, then dump/restore, then restore the historical partitions at my convenience. (In this particular db, history is unusually huge compared to the live data.)

I use an interesting method to set up partitioning. I set up my
triggers, then insert the data in chunks from the master table back
into itself:
insert into master_table select * from only master_table limit 10000;
and run that over and over. To the application the data is all still
in the same "table", but it's slowly moving into the partitions
without interruption.
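One caveat worth spelling out: the insert on its own doesn't remove the
originals from the parent, so the batch also has to be deleted from the
parent or the same rows get picked up again on the next pass. A rough
sketch of one way to do both in a single statement (assumes 9.1+ for
writable CTEs and a partition trigger that returns NULL so redirected
rows never land back in the parent; the table name and batch size are
just placeholders):

WITH moved AS (
    -- pull one batch of not-yet-partitioned rows out of the parent only
    DELETE FROM ONLY master_table
    WHERE ctid IN (SELECT ctid FROM ONLY master_table LIMIT 10000)
    RETURNING *
)
-- re-insert through the parent so the partition trigger routes each row
INSERT INTO master_table SELECT * FROM moved;

Run in a loop, the parent gradually empties while the application keeps
querying the same table name.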
Note: ALWAYS use triggers for partitioning. Rules are way too slow.
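For anyone who hasn't written one, a minimal sketch of such a trigger
might look like this (the column, date range, and child table names are
made up for illustration; route on whatever field you actually
partition by):

CREATE OR REPLACE FUNCTION master_table_insert_trigger()
RETURNS trigger AS $$
BEGIN
    -- route each row to the child table that covers its date
    IF NEW.created_at >= DATE '2015-05-01'
       AND NEW.created_at < DATE '2015-06-01' THEN
        INSERT INTO master_table_2015_05 VALUES (NEW.*);
    ELSE
        INSERT INTO master_table_default VALUES (NEW.*);
    END IF;
    -- returning NULL keeps the row out of the parent table itself
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER master_table_partition_trigger
    BEFORE INSERT ON master_table
    FOR EACH ROW EXECUTE PROCEDURE master_table_insert_trigger();

The children (master_table_2015_05 and so on) would be created
beforehand with CREATE TABLE ... INHERITS (master_table), plus CHECK
constraints so constraint exclusion can skip them at query time.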