Re: How many insert + update should one transaction handle? - Mailing list pgsql-general

From Yonatan Ben-Nes
Subject Re: How many insert + update should one transaction handle?
Date
Msg-id 433840BF.9070906@canaan.co.il
In response to Re: How many insert + update should one transaction handle? (Robert Treat <xzilla@users.sourceforge.net>)
Responses Re: How many insert + update should one transaction handle?
List pgsql-general
Robert Treat wrote:
> On Fri, 2005-09-23 at 14:53, Dawid Kuroczko wrote:
>
>>On 9/23/05, Yonatan Ben-Nes <da@canaan.co.il> wrote:
>>
>>Hi all,
>>
>>Every few days I need to DELETE all of the content of few tables and
>>INSERT new data in them.
>>The amount of new data is about 5 million rows and each row gets about 3
>>queries (INSERT + UPDATE).
>>
>
> <snip>
>
>>Or you could even try (haven't tested it):
>>BEGIN;
>>CREATE new_table;
>>SELECT INTO new_table * FROM temp_table;
>>DROP TABLE table;
>>ALTER TABLE new_table RENAME TO table;
>>COMMIT; -- leaving you with fresh 5mln new tuples table
>>...with a risk of losing all the changes made to the old table after BEGIN;
>>
>
>
> yeah, i was thinking
>
> create newtable;
> ~~ load data into newtable
>
> begin;
>   drop oldtable;
>   alter table newtable rename to oldtable
> commit;
>
> this separates the data loading piece from the piece where you promote
> the data to live data, plus then the time you have to hold the
> transaction open is only for the drop and rename, which will be quite
> fast.
>
> the only potential issues would be making sure you don't have FK/View
> type issues, but it doesn't sound like it would apply here.
>
>
> Robert Treat

Sorry everyone for not responding... I just didn't know that the
discussion had continued :)

Anyway, I saw the idea:
BEGIN;
CREATE new_table;
SELECT INTO new_table * FROM temp_table;
DROP TABLE table;
ALTER TABLE new_table RENAME TO table;
COMMIT;

Where, if I understood correctly, "table" is the final table,
"temp_table" is the table that receives all the processing and at the
end of it holds 10 million deleted tuples and 5 million active ones, and
finally "new_table" is the recipient of all of the active tuples from
"temp_table".
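
For my own clarity, here is a runnable sketch of that idea in actual
PostgreSQL syntax (the table names are just placeholders for mine, and
CREATE TABLE ... AS replaces the CREATE + SELECT INTO pair):

BEGIN;
-- copy only the live tuples of the work table into a fresh table
CREATE TABLE new_table AS SELECT * FROM temp_table;
-- swap the fresh table in for the old one
DROP TABLE final_table;
ALTER TABLE new_table RENAME TO final_table;
COMMIT;

Note that CREATE TABLE ... AS copies only the data, so any indexes would
have to be recreated on new_table before the swap.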

It's looking quite promising to me, but I did a little check and saw that
between the DROP TABLE command and the COMMIT I get a lock on the table
(obvious, but problematic for a 24/7 site), so I'm wondering how much
time such a transaction will take from the DROP command onward?

If users will be unable to access the table only for some extremely small
amount of time (less than a second, obviously), then even though I don't
like it much, it is better than running a VACUUM FULL, which would slow
down my whole server for a considerable amount of time...
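
Along the lines of Robert's suggestion, the lock window can presumably be
kept down to just the DROP + RENAME by doing the data load outside the
transaction, so the old table stays readable while the copy runs (again a
sketch with placeholder names):

-- load outside any long transaction; readers of final_table are not blocked
CREATE TABLE new_table AS SELECT * FROM temp_table;
-- (recreate indexes on new_table here, if any)

-- the swap itself only touches catalog entries, so the exclusive lock is brief
BEGIN;
DROP TABLE final_table;
ALTER TABLE new_table RENAME TO final_table;
COMMIT;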

So does anyone know how much time such a process takes? (I tried to
EXPLAIN ANALYZE it with no success :)).
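
(If I understand it correctly, EXPLAIN ANALYZE only works on plannable
statements like SELECT/INSERT/UPDATE/DELETE, not on DDL such as DROP
TABLE or ALTER TABLE, which would explain the failure. One way to time
the swap instead is psql's \timing switch, e.g.:

\timing
BEGIN;
DROP TABLE final_table;  -- placeholder name for the live table
ALTER TABLE new_table RENAME TO final_table;
COMMIT;

\timing then prints the elapsed time of each statement.)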

Thanks a lot everyone!
   Ben-Nes Yonatan
