Thread: Copying data from table to table (cloned tables)
Could anyone suggest the best way to copy data from one table to another in the same database? The reason I'm asking is that the first table is becoming very big, and the old data has no reason to stay there, so I created a cloned table without indexes and constraints (lighter), and now I would like to copy 'some' data from the first table to the second (data older than 3 months; I have a timestamp column).
In other words, I have a table called 'hotqueues' where I store freshly queued messages. Once messages are treated, they stay in 'hotqueues', but with a flag indicating that they aren't queued for treatment anymore.
This way, data will sit there forever, slowing down any searches on that table. The solution was to copy old messages to another table called 'coldqueues' that has the same structure as 'hotqueues' but is lighter (without constraints and indexes).
How can I copy this data with zero data loss?
Thanks for any help you can provide.
FZ> [quoted message trimmed]
A simple solution:
INSERT INTO coldqueues SELECT * FROM hotqueues WHERE ...;
See documentation page: http://www.postgresql.org/docs/8.1/static/sql-insert.html
DAQ
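Filled in for the OP's case, with the three-month cutoff on the timestamp column, that might look like this (a sketch only; the `treated` and `msg_timestamp` column names are illustrative, adjust them to the real schema):

```sql
BEGIN;

-- Copy treated messages older than 3 months into the archive table.
INSERT INTO coldqueues
SELECT * FROM hotqueues
 WHERE treated = true
   AND msg_timestamp < now() - interval '3 months';

COMMIT;
```

Wrapping it in a transaction means the copy either completes fully or not at all, which is most of what "100% data-loss free" requires.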
Use the INTO keyword on a SELECT statement:
SELECT *
INTO coldqueues
FROM hotqueues
WHERE treatment_flag = 'DONE';
DELETE FROM hotqueues WHERE treatment_flag = 'DONE';
You will need to drop the coldqueues table first if you've already created it, as SELECT INTO creates the table for you. Note that indexes and constraints are not copied from the source table, which is what you want here anyway. Don't forget to re-apply permissions etc. to the new table if required.
Another way, if you don't want to drop the table, is to add the TEMP keyword, copy the temporary table out to a file, then copy that file back into the real coldqueues table (note that COPY to/from a server-side file requires superuser privileges):
SELECT *
INTO TEMP coldqueues_temp
FROM hotqueues
WHERE treatment_flag = 'DONE';
COPY coldqueues_temp TO '/tmp/coldqueues.sql';
COPY coldqueues FROM '/tmp/coldqueues.sql';
DELETE FROM hotqueues WHERE treatment_flag = 'DONE';
This avoids having to drop the table first, which you won't want to do on future archive runs. Once you close the session you have open, the "coldqueues_temp" table will automatically be dropped for you.
Hope this helps,
-p
-----Original Message-----
From: pgsql-novice-owner@postgresql.org [mailto:pgsql-novice-owner@postgresql.org] On Behalf Of Fourat Zouari
Sent: Tuesday, 10 October 2006 09:52
To: pgsql-novice@postgresql.org; pgsql-admin@postgresql.org
Subject: [NOVICE] Copying data from table to table (cloned tables)
[quoted message trimmed]
On Mon, Oct 09, 2006 at 05:02:07PM -0700, Bricklen Anderson wrote:
> If you just want to copy the data across to the other table:
> begin;
> insert into table2 select * from table1 where <some criteria>;
> commit;
>
> if you also want to remove that same data from table1:
> begin;
> insert into table2 select * from table1 where <some criteria>;
> delete from table1 where <same criteria as above>;
> commit;
You need to be careful with this method. For what the OP wants to do it would probably work, but not always. The problem is that in some scenarios, <same criteria as above> won't necessarily return the same set of rows.
Starting in 8.2 you'll be able to do something like
DELETE FROM table1 WHERE ... RETURNING *;
The RETURNING * will return all the data that the command deleted.
In older versions, your best bet is to store the data you're moving in a temporary table, and then use that to delete the exact rows.
--
Jim Nasby jim@nasby.net
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
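The pre-8.2 temp-table approach described above could be sketched like this (assuming hotqueues has a primary key column, called `id` here for illustration, and a timestamp column called `msg_timestamp`; both names are assumptions):

```sql
BEGIN;

-- Stash the rows to move in a temp table so the exact same set
-- is used for both the insert and the delete.
SELECT * INTO TEMP moved_rows
  FROM hotqueues
 WHERE msg_timestamp < now() - interval '3 months';

INSERT INTO coldqueues SELECT * FROM moved_rows;

-- Delete precisely the rows that were copied, by primary key,
-- rather than re-evaluating the timestamp criteria.
DELETE FROM hotqueues
 WHERE id IN (SELECT id FROM moved_rows);

COMMIT;
```

Because the delete is keyed off the stashed rows rather than the original WHERE clause, rows that become eligible between the two statements can't be deleted without having been copied.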
On Tue, Oct 10, 2006 at 10:24:57AM -0500, Jim C. Nasby wrote:
> > If you just want to copy the data across to the other table:
> > begin;
> > insert into table2 select * from table1 where <some criteria>;
> > commit;
> >
> > if you also want to remove that same data from table1:
> > begin;
> > insert into table2 select * from table1 where <some criteria>;
> > delete from table1 where <same criteria as above>;
> > commit;
I forgot to mention you could also use a serializable transaction with
your method...
BEGIN ISOLATION LEVEL SERIALIZABLE;
INSERT ...
DELETE ...
COMMIT;
Just remember you'll need to deal with the possibility of a 'could not
serialize' error.
--
Jim Nasby jim@nasby.net
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
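Filled in for the hot/cold queue case, the serializable version might look like this (again, the `msg_timestamp` column name is only illustrative):

```sql
BEGIN ISOLATION LEVEL SERIALIZABLE;

INSERT INTO coldqueues
SELECT * FROM hotqueues
 WHERE msg_timestamp < now() - interval '3 months';

-- Under SERIALIZABLE, this WHERE clause sees the same snapshot
-- as the INSERT above, so it matches the same set of rows.
DELETE FROM hotqueues
 WHERE msg_timestamp < now() - interval '3 months';

COMMIT;
```

If the server reports a "could not serialize access" error, the application should roll back and retry the whole transaction.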
It makes the transaction highly isolated.
http://www.postgresql.org/docs/8.1/interactive/transaction-iso.html
--
Brandon Aiken
CS/IT Systems Engineer
From: pgsql-novice-owner@postgresql.org [mailto:pgsql-novice-owner@postgresql.org] On Behalf Of Fourat Zouari
Sent: Wednesday, October 11, 2006 7:30 AM
To: Jim C. Nasby
Cc: Bricklen Anderson; pgsql-novice@postgresql.org; pgsql-admin@postgresql.org
Subject: Re: [NOVICE] [ADMIN] Copying data from table to table (cloned tables)
What are the benefits of using a serialized transaction?
On 10/10/06, Jim C. Nasby <jim@nasby.net> wrote:
[quoted message trimmed]
On Wed, Oct 11, 2006 at 13:30:22 +0200, Fourat Zouari <fourat@gmail.com> wrote:
> what's the benefits using a serialized trans ?
All of the queries in the transaction see a consistent view of the database, so two identical select queries will return the same information within a serialized transaction (as long as the current transaction doesn't make any changes between the two selects that would affect their output).