Unless you specify otherwise, every INSERT carries its own transaction
begin/commit. That's a lot of overhead for a single insert, no? Why
not use one transaction per, say, 1000 inserts? That would strike a
nice balance between safety and efficiency.
Pseudocode for the insert:

BEGIN TRANSACTION;
FOR i IN 1..200000 LOOP
    INSERT INTO viewfoo (x) VALUES (i);
    IF i % 1000 = 0 THEN
        COMMIT TRANSACTION;
        BEGIN TRANSACTION;
    END IF;
END LOOP;
COMMIT TRANSACTION;
This approach should speed things up dramatically.
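As a generic illustration of the batching idea (a hedged sketch only: SQLite stands in for any SQL database, and the table name and batch size are carried over from the pseudocode above, not from a real schema), committing once per batch instead of once per row looks like:

```python
import sqlite3

def batched_insert(conn, n_rows, batch_size=1000):
    """Insert n_rows integers, committing once every batch_size rows."""
    cur = conn.cursor()
    for i in range(1, n_rows + 1):
        cur.execute("INSERT INTO viewfoo (x) VALUES (?)", (i,))
        if i % batch_size == 0:
            conn.commit()  # end the current transaction; sqlite3 implicitly
                           # begins a new one at the next statement
    conn.commit()          # flush the final partial batch, if any

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE viewfoo (x INTEGER)")
batched_insert(conn, 200000)
```

Note one caveat for this particular thread: a PL/pgSQL function runs entirely inside its caller's transaction and cannot itself issue COMMIT, so this kind of batching would have to be driven from the client side rather than from inside functionFoo().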
>>> "Karen Hill" <karen_hill22@yahoo.com> 2/6/2007 2:39 PM >>>
On Feb 5, 9:33 pm, t...@sss.pgh.pa.us (Tom Lane) wrote:
> "Karen Hill" <karen_hil...@yahoo.com> writes:
> > I have a pl/pgsql function that is inserting 200,000 records for
> > testing purposes. What is the expected time frame for this operation
> > on a pc with 1/2 a gig of ram and a 7200 RPM disk?
>
> I think you have omitted a bunch of relevant facts. Bare INSERT is
> reasonably quick:
>
> regression=# create table foo (f1 int);
> CREATE TABLE
> regression=# \timing
> Timing is on.
> regression=# insert into foo select x from generate_series(1,200000) x;
> INSERT 0 200000
> Time: 5158.564 ms
> regression=#
>
> (this on a not-very-fast machine) but if you weigh it down with a ton
> of index updates, foreign key checks, etc, it could get slow ...
> also you haven't mentioned what else that plpgsql function is doing.
>
The postgres version is 8.2.1 on Windows. The pl/pgsql function is
inserting to an updatable view (basically two tables).
CREATE TABLE foo1
(
) ;
CREATE TABLE foo2
(
);
CREATE VIEW viewfoo AS
(
);
CREATE RULE ruleFoo AS ON INSERT TO viewfoo DO INSTEAD
(
);
CREATE OR REPLACE FUNCTION functionFoo() RETURNS VOID AS $$
BEGIN
    FOR i IN 1..200000 LOOP
        INSERT INTO viewfoo (x) VALUES (i);
    END LOOP;
END;
$$ LANGUAGE plpgsql;
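The actual column definitions were elided above and can't be reconstructed from the post, but a minimal hypothetical version of the two-table updatable-view pattern (all column names invented) would look something like:

```sql
-- Hypothetical reconstruction; the original schema was elided.
CREATE TABLE foo1 (id serial PRIMARY KEY, x integer);
CREATE TABLE foo2 (foo1_id integer REFERENCES foo1(id), y integer);

CREATE VIEW viewfoo AS
    SELECT f1.id, f1.x, f2.y
    FROM foo1 f1 JOIN foo2 f2 ON f2.foo1_id = f1.id;

-- Each INSERT into the view is rewritten into two table inserts.
CREATE RULE ruleFoo AS ON INSERT TO viewfoo DO INSTEAD (
    INSERT INTO foo1 (x) VALUES (NEW.x);
    INSERT INTO foo2 (foo1_id, y) VALUES (currval('foo1_id_seq'), NEW.y);
);
```

This is also where Tom's point applies: each view insert becomes multiple table inserts plus a foreign-key check, which can easily account for the slowdown.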