Re: moving data between tables causes the db to overwhelm the system - Mailing list pgsql-performance

From Robert Schnabel
Subject Re: moving data between tables causes the db to overwhelm the system
Date
Msg-id 4A9D1486.9010603@missouri.edu
In response to Re: moving data between tables causes the db to overwhelm the system  (Pierre Frédéric Caillaud <lists@peufeu.com>)
List pgsql-performance
> If you want to partition your huge data set by "time", and the data
> isn't already ordered by "time" on disk, you could do this :
>
> SET work_mem TO something very large like 10GB since you got 32GB RAM,
> check your shared buffers etc first;
> CREATE TABLE tmp AS SELECT * FROM bigTable ORDER BY "time"; <- huge
> sort, will take some time
>
> SET maintenance_work_mem TO something very large;
> CREATE INDEX tmp_time ON tmp( "time" );
>
> CREATE TABLE partition1 AS SELECT * FROM tmp WHERE "time" BETWEEN
> beginning AND end;
> (repeat...)
>
> Since tmp is clustered on "time" you'll get a nice fast bitmap-scan,
> and you won't need to seq-scan N times (or randomly index-scan) bigTable.
>
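For concreteness, a minimal sketch of the quoted recipe in actual SQL might
look like the following; the table name bigTable, the "time" column, and the
partition names and range bounds are placeholders, not a real schema:

    -- one big sort up front, then cheap range scans off the sorted copy
    SET work_mem = '10GB';              -- sized per the quote's 32GB box; adjust to taste
    CREATE TABLE tmp AS
        SELECT * FROM bigTable ORDER BY "time";

    SET maintenance_work_mem = '2GB';   -- speeds up the index build
    CREATE INDEX tmp_time ON tmp ("time");
    ANALYZE tmp;

    CREATE TABLE partition_2009_01 AS
        SELECT * FROM tmp
        WHERE "time" >= '2009-01-01' AND "time" < '2009-02-01';
    -- repeat the last statement for each partition range
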
I went through the same exercise a couple of months ago with a table that
had ~1.7 billion rows.  I used a similar approach to the one described
above, except that I skipped the tmp table and put the ORDER BY on each
SELECT against the bigTable that fed the inserts (I didn't have many of
them).  My data was structured such that this was easier than doing one
huge sort.  In any event, it worked great and my smaller partitions are
much, much faster.
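
A rough sketch of that variant, again with made-up table, column, and
partition names, would be something like:

    -- no intermediate sorted copy; each INSERT ... SELECT carries its own ORDER BY
    CREATE TABLE partition_2009_01 (LIKE bigTable INCLUDING DEFAULTS);

    INSERT INTO partition_2009_01
        SELECT * FROM bigTable
        WHERE "time" >= '2009-01-01' AND "time" < '2009-02-01'
        ORDER BY "time";
    -- repeat per partition; each sort covers only one range
    -- instead of the whole table at once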

Bob
