Thread: table move across databases
Hi,

Is there a way to move a table between databases?

The reason I ask is that I have a growing 'log' table which is only ever
inserted to; no rows are ever deleted, updated, or replaced. It is indexed
by the time of insert. I'd like to archive this table off to another
database so that vacuum and pg_dump don't take so long on the 'live'
database. I'm taking a nightly backup, and this data never changes, so it
doesn't need to be archived as often. If there is a way to do all this, or
something similar, without rotating the table to a different database, then
I would like to hear it (the only other way I can think of is to use the -t
option of pg_dump for each of the other tables, but that's too ugly).

many thanks in advance

,dunc
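For reference, the contents of a single table can be moved to another database with COPY alone: write the rows out to a flat file from the live database, then recreate the table and load the file in the archive database (pg_dump -t piped into psql does the same job from the shell). A minimal sketch, assuming the table is simply called log; the column definitions, index name, and file path are placeholders, not taken from the thread:

-- in the live database: write the table out to a flat file
-- (COPY to a server-side file normally requires superuser;
--  psql's \copy is the client-side equivalent)
COPY log TO '/tmp/log.copy';

-- in the archive database: recreate the table, load the file, re-index
CREATE TABLE log (
    logged_at  timestamp,   -- placeholder columns; use the real definition
    message    text
);
COPY log FROM '/tmp/log.copy';
CREATE INDEX log_logged_at_idx ON log (logged_at);

Whether the archived copy needs the index at all depends on whether the archive database will actually be queried by insert time.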
On Mon, 2002-07-22 at 22:59, Duncan Sargeant wrote:
> Hi,
>
> Is there a way to move a table between databases?
>
> The reason I ask is that I have a growing 'log' table which is only ever
> inserted to; no rows are ever deleted, updated, or replaced. It is indexed
> by the time of insert. I'd like to archive this table off to another
> database so that vacuum and pg_dump don't take so long on the 'live'
> database. I'm taking a nightly backup, and this data never changes, so it
> doesn't need to be archived as often. If there is a way to do all this, or
> something similar, without rotating the table to a different database, then
> I would like to hear it (the only other way I can think of is to use the -t
> option of pg_dump for each of the other tables, but that's too ugly).

1. Create two views: one defining the data you want to keep, and one
   defining the data you want to archive off.
2. COPY off each view into separate files.
3. Drop the original table, then recreate it, sans index.
4. COPY the data you want to keep back into the newly emptied table.
5. Re-index the table.
6. Now that you've shrunk the table, an on-line procedure (possibly
   PL/pgSQL) to archive off old data on a periodic basis will run much
   faster.

-- 
+-----------------------------------------------------------------+
| Ron Johnson, Jr.        Home: ron.l.johnson@cox.net              |
| Jefferson, LA  USA                                               |
|                                                                  |
| "The greatest dangers to liberty lurk in insidious encroachment  |
|  by men of zeal, well-meaning, but without understanding."       |
|   Justice Louis Brandeis, dissenting, Olmstead v US (1928)       |
+-----------------------------------------------------------------+
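A rough SQL rendering of those steps, assuming the table is called log and is indexed on a logged_at timestamp column; the cutoff date, column list, and file paths are placeholders. One wrinkle: COPY reads only from real tables, not from views, so each view is materialised into a scratch table before being written out:

-- 1. two views: what to keep, and what to archive off
CREATE VIEW log_keep    AS SELECT * FROM log WHERE logged_at >= '2002-07-01';
CREATE VIEW log_archive AS SELECT * FROM log WHERE logged_at <  '2002-07-01';

-- 2. COPY each set out to its own file, via scratch tables
SELECT * INTO log_keep_tmp    FROM log_keep;
SELECT * INTO log_archive_tmp FROM log_archive;
COPY log_keep_tmp    TO '/tmp/log_keep.copy';
COPY log_archive_tmp TO '/tmp/log_archive.copy';
DROP VIEW log_keep;
DROP VIEW log_archive;

-- 3. drop the original table and recreate it without its index
DROP TABLE log;
CREATE TABLE log (
    logged_at  timestamp,   -- placeholder columns; use the real definition
    message    text
);

-- 4. load the kept rows back into the emptied table
COPY log FROM '/tmp/log_keep.copy';

-- 5. re-create the index
CREATE INDEX log_logged_at_idx ON log (logged_at);

-- the scratch tables can go once the files are verified
DROP TABLE log_keep_tmp;
DROP TABLE log_archive_tmp;

The table is empty and unindexed between steps 3 and 4, so this is best done in a quiet maintenance window.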
On Tue, 2002-07-23 at 15:59, Duncan Sargeant wrote:
> Hi,
>
> Is there a way to move a table between databases?
>
> The reason I ask is that I have a growing 'log' table which is only ever
> inserted to; no rows are ever deleted, updated, or replaced. It is indexed
> by the time of insert. I'd like to archive this table off to another
> database so that vacuum and pg_dump don't take so long on the 'live'
> database. I'm taking a nightly backup, and this data never changes, so it
> doesn't need to be archived as often. If there is a way to do all this, or
> something similar, without rotating the table to a different database, then
> I would like to hear it (the only other way I can think of is to use the -t
> option of pg_dump for each of the other tables, but that's too ugly).

What I do myself for a similar situation is to:

################################
use Pg;
use POSIX qw(strftime);

# database connect removed

$today_date = strftime( "%Y-%m-%d", localtime );
$todaytable = strftime( "%Y%m%d", localtime );

# Copy every row older than today into a new, dated archive table
$query  = "CREATE TABLE archive_$todaytable AS ";
$query .= "SELECT * ";
$query .= "FROM history ";
$query .= "WHERE date(history.d_time) < '$today_date';";
$result = $conn->exec( $query );

# Remove all of that stuff from the history table
$query  = "DELETE FROM history ";
$query .= "WHERE date(history.d_time) < '$today_date';";
$result = $conn->exec( $query );
################################

This gives me a separate table with the old records. I can then use
pg_dump ... -t archive_YYMMDD to dump that table out, and finally drop it.

I do this before the table has too much in it, so that vacuuming it out is
less of an issue, although that will be much less problematic when I
upgrade to 7.2 in the next month or so.

I hope this gives you some ideas,
                                        Andrew.
--
--------------------------------------------------------------------
Andrew @ Catalyst .Net.NZ Ltd,  PO Box 11-053, Manners St,  Wellington
WEB: http://catalyst.net.nz/          PHYS: Level 2, 150-154 Willis St
DDI: +64(4)916-7201    MOB: +64(21)635-694    OFFICE: +64(4)499-2267
       Are you enrolled at http://schoolreunions.co.nz/ yet?
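One possible refinement to a script like the one above (not something Andrew's mail itself does): run the CREATE TABLE ... AS and the DELETE inside a single transaction, so that a failure partway through leaves everything exactly as it was, rather than an archive table created but the matching DELETE never run. A sketch in plain SQL, with the date, table names, and database name written out as examples:

BEGIN;

-- everything older than today goes into a dated archive table
CREATE TABLE archive_20020723 AS
    SELECT *
    FROM history
    WHERE date(history.d_time) < '2002-07-23';

-- and the same rows are then removed from the live history table
DELETE FROM history
    WHERE date(history.d_time) < '2002-07-23';

COMMIT;

-- afterwards, from the shell, dump the archive table and drop it once
-- the dump has been checked:
--     pg_dump -t archive_20020723 livedb > archive_20020723.sql
--     psql livedb -c "DROP TABLE archive_20020723;"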
Andrew McMillan wrote on Tue July 23, at 20:49 +1200:
> What I do myself for a similar situation is to:
>
> This gives me a separate table with the old records. I can then use
> pg_dump ... -t archive_YYMMDD to dump that table out, and finally drop
> it.
>
> I do this before the table has too much in it, so that vacuuming it out
> is less of an issue, although that will be much less problematic when I
> upgrade to 7.2 in the next month or so.
>
> I hope this gives you some ideas,

Thanks Andrew, it does, although it does frustrate me that I can't keep the
archived data online without copying all the data twice. At least I get a
backup out of one of them.

,dunc