Thread: Convert an Existing Table to a Partitioned Table in PG10
I have a large table that needs converting to the native partitioning available in PG10, and I'm looking for a strategy to safely move the data. I was considering a CREATE TABLE new (LIKE old) PARTITION BY RANGE (my_timestamp), but I figured Postgres would execute that before I could create the child tables. Do the new table and all of its child tables need to be created first, and then the data moved over?
If I use the strategy of creating the new table and child tables first, is it faster to populate the child tables directly, or to copy the data into the new parent table and let the native partitioning route the rows to the correct child partition?
One other question: my table gets updated every minute, and I use a trigger to add a subset of the data to another table. Will that still work?
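For what it's worth, the conversion described above can be sketched roughly like this in PG10. Table and column names here are hypothetical, and note that LIKE copies column definitions only; indexes and constraints would need recreating on each partition:

```sql
-- Rough sketch with made-up names, not a drop-in script.
-- 1. Create the new partitioned parent (LIKE copies the column list).
CREATE TABLE events_new (LIKE events INCLUDING DEFAULTS)
    PARTITION BY RANGE (my_timestamp);

-- 2. Create the child partitions BEFORE loading any data.
CREATE TABLE events_2017 PARTITION OF events_new
    FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');
CREATE TABLE events_2018 PARTITION OF events_new
    FOR VALUES FROM ('2018-01-01') TO ('2019-01-01');

-- 3. Either insert through the parent and let tuple routing place rows...
INSERT INTO events_new SELECT * FROM events;
-- ...or load each child directly, which skips the routing overhead.

-- 4. Swap names once the copy is verified; keep the old table as a backup.
BEGIN;
ALTER TABLE events RENAME TO events_old;
ALTER TABLE events_new RENAME TO events;
COMMIT;
```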
Thanks in advance,
On Sat, Jun 23, 2018 at 6:42 PM Ravi Krishna <srkrishna@yahoo.com> wrote:
Thanks. I used Ashutosh Bapat's blog post as a guideline.
For others attempting this: I first created my parent and child structure, then inserted the data from the original table into the child tables. Once I was happy with my new structure I dropped the original table (which is still backed up, just in case).
I found a couple of areas I'd missed when reading about partitions. First, no primary key. That was a big surprise. Second, the triggers I use to catch information going into the table needed to be pointed at the child tables. I also learned that the range partition bounds I used on a timestamp needed to have fractional seconds. I used a range of 2017-01-01 00:00:00 to 2017-23:59:59, which failed when I attempted to add a record that had a timestamp of 2017-23:59:59. Adding a fractional second to the range solved the problem.
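The trigger point above can be sketched like this: in PG10 a row-level trigger cannot be attached to the partitioned parent, so the same trigger has to be created on every child partition. Trigger, function, and table names here are made up for illustration:

```sql
-- Hypothetical names throughout. PG10 requires row-level triggers
-- on each child partition, not on the partitioned parent.
CREATE TRIGGER copy_subset_2017
    AFTER INSERT ON events_2017
    FOR EACH ROW EXECUTE PROCEDURE copy_subset_to_summary();

CREATE TRIGGER copy_subset_2018
    AFTER INSERT ON events_2018
    FOR EACH ROW EXECUTE PROCEDURE copy_subset_to_summary();
```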
Clifford
On Sun, Jul 1, 2018 at 12:15 AM, Clifford Snow <clifford@snowandsnow.us> wrote:
> I also leaned that my range partition value I used on a timestamp needed to
> have fractional seconds. I used a range of 2017-01-01 00:00:00 to
> 2017-23:59:59 which failed when I attempted to add a record that had a
> timestamp of 2017-23:59:59. Adding a fractional second to the range solved
> the problem.

Your problem probably comes from using closed intervals. Timestamps are like real numbers, and partitions on real numbers are best done using half-open intervals. You can cover the real line using non-overlapping half-open intervals, but you cannot do it with non-overlapping open or closed ones.

Assuming you are using yearly range partitions (2017-23:59:59 should be 2017-12-31 23:59:59), to use closed intervals you have to rely on "real" second numbers being stored in the computer with finite precision (so you can, say, add nines up to the microseconds, and pray it does not change to picoseconds in a future release). If you use half-open intervals (2017-01-01 00:00:00 <= ts < 2018-01-01 00:00:00) the problem is much easier. You can even drop the HMS (2017-01-01 <= ts < 2018-01-01) and it will work, even if the system picks a different HMS value for each year, as you use the same value for an interval's start as for the previous one's end. And if timestamp supported defaulting the month and day like it does with HMS (which it does not), you could even drop those. I think Postgres does not use leap seconds, but if it did, '2016-12-31 23:59:60.9999999999999' would not need to be remembered in the half-open style.

This is a general problem, not a Postgres or timestamp-related one. Anything which has decimals is generally better partitioned with half-open intervals. With integer-like things (like dates, which are countable) it does not matter that much; I use half-open for easier upgrading if I need to, but you can translate between open, closed, and half-open.

Francisco Olarte.
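Francisco's half-open convention also makes generating a run of yearly partitions mechanical, since each interval's start equals the previous one's end. A small sketch, again with a hypothetical parent table name:

```sql
-- Hypothetical parent table "events". Each partition's upper bound is
-- the next partition's lower bound, so the half-open intervals tile
-- the timeline with no gaps, no overlaps, and no trailing-9s guesswork.
DO $$
DECLARE
    y int;
BEGIN
    FOR y IN 2015..2018 LOOP
        EXECUTE format(
            'CREATE TABLE events_%s PARTITION OF events
                 FOR VALUES FROM (%L) TO (%L)',
            y, make_date(y, 1, 1), make_date(y + 1, 1, 1));
    END LOOP;
END $$;
```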
On 1 July 2018 at 10:15, Clifford Snow <clifford@snowandsnow.us> wrote:
> I also leaned that my range partition value I used on a timestamp needed to
> have fractional seconds. I used a range of 2017-01-01 00:00:00 to
> 2017-23:59:59 which failed when I attempted to add a record that had a
> timestamp of 2017-23:59:59. Adding a fractional second to the range solved
> the problem.

Please be aware that with RANGE partitions the upper bound is non-inclusive. The lower bound is inclusive.

If you want a 2017 partition, then FOR VALUES FROM ('2017-01-01') TO ('2018-01-01') will allow all 2017 timestamps and only 2017 timestamps.

You've no need to consider the precision of the type and how many 9's you add to anything here.

--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
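David's point can be checked directly: with inclusive FROM and exclusive TO bounds, the boundary value is rejected from the 2017 partition and routed to the next one. Table names below are hypothetical:

```sql
-- Hypothetical tables. FROM is inclusive, TO is exclusive, so the
-- last 2017 instant lands in p2017 and midnight 2018 lands in p2018.
CREATE TABLE ts_data (ts timestamp NOT NULL) PARTITION BY RANGE (ts);
CREATE TABLE p2017 PARTITION OF ts_data
    FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');
CREATE TABLE p2018 PARTITION OF ts_data
    FOR VALUES FROM ('2018-01-01') TO ('2019-01-01');

INSERT INTO ts_data VALUES ('2017-12-31 23:59:59.999999');  -- routed to p2017
INSERT INTO ts_data VALUES ('2018-01-01 00:00:00');         -- routed to p2018
```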
David,
Thanks for the suggestion. That really simplifies creating the RANGE.
For all, I'm pretty much a postgresql novice, but I've tried to document what I've learned in the hopes that it can help someone else.
You can read my blog post at https://osm_seattle.snowandsnow.us/articles/Partitioning-Postgresql/
Clifford
On Sun, Jul 1, 2018 at 2:23 PM David Rowley <david.rowley@2ndquadrant.com> wrote:
On 1 July 2018 at 10:15, Clifford Snow <clifford@snowandsnow.us> wrote:
> I also leaned that my range partition value I used on a timestamp needed to
> have fractional seconds. I used a range of 2017-01-01 00:00:00 to
> 2017-23:59:59 which failed when I attempted to add a record that had a
> timestamp of 2017-23:59:59. Adding a fractional second to the range solved
> the problem.
Please be aware that with RANGE partitions the upper bound is
non-inclusive. The lower bound is inclusive.
If you want a 2017 partition, then FOR VALUES FROM ('2017-01-01') TO
('2018-01-01') will allow all 2017 timestamps and only 2017
timestamps.
You've no need to consider precision of the type and how many 9's you
add to anything here.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On 02/07/18 01:43, Clifford Snow wrote:
> David,
> Thanks for the suggestion. That really simplifies creating the RANGE.
>
> For all, I'm pretty much a postgresql novice, but I've tried to document
> what I've learned in the hopes that it can help someone else.
>
> You can read my blog post
> at https://osm_seattle.snowandsnow.us/articles/Partitioning-Postgresql/

Please consider adding your PostgreSQL-related posts to Planet.

https://planet.postgresql.org/add.html

--
Vik Fearing +33 6 46 75 15 36
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Vik,
I'd be happy to add my blog to planet.postgresql.org, but many of my articles are not on PostgreSQL. I'm using GitHub Pages for my blog, and I do have tags for each article. Is there some way to filter on those tags?
Clifford
On Sun, Jul 1, 2018 at 5:13 PM Vik Fearing <vik.fearing@2ndquadrant.com> wrote:
On 02/07/18 01:43, Clifford Snow wrote:
> David,
> Thanks for the suggestion. That really simplifies creating the RANGE.
>
> For all, I'm pretty much a postgresql novice, but I've tried to document
> what I've learned in the hopes that it can help someone else.
>
> You can read my blog post
> at https://osm_seattle.snowandsnow.us/articles/Partitioning-Postgresql/
Please consider adding your PostgreSQL-related posts to Planet.
https://planet.postgresql.org/add.html
--
Vik Fearing +33 6 46 75 15 36
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
On Sun, Jul 1, 2018 at 07:09:33PM -0700, Clifford Snow wrote:
> Vik,
> I'd be happy to add my blog to planet.postgresql.org but many of my articles are
> not on PostgreSQL. I'm using GitHub Pages for my blog and I do have tags for
> each article. Is there some way to filter on those tags?

When you register your blog, there is an 'Authorfilter' field, but that only filters authors, which I assume looks at the RSS "author" tag. It doesn't seem it can filter on category tags. The recommended solution is to create a feed on your end just for that category:

https://wiki.postgresql.org/wiki/Planet_PostgreSQL

    All blogs should be about PostgreSQL or closely related technologies.
    If you want to blog about other things as well, please put your
    PostgreSQL-specific posts in a separate category/tag/label, and use
    the feed for this category only for Planet PostgreSQL.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +