On Tue, 2007-02-06 at 12:01 -0500, Merlin Moncure wrote:
> On 2/6/07, Scott Marlowe <smarlowe@g2switchworks.com> wrote:
> > On Tue, 2007-02-06 at 10:40, Merlin Moncure wrote:
> > > On 2/6/07, Scott Marlowe <smarlowe@g2switchworks.com> wrote:
> > > > On Mon, 2007-02-05 at 18:35, Karen Hill wrote:
> > > > > I have a pl/pgsql function that is inserting 200,000 records for
> > > > > testing purposes.  What is the expected time frame for this
> > > > > operation on a PC with half a gig of RAM and a 7200 RPM disk?  The
> > > > > processor is a 2 GHz CPU.  So far I've been sitting here for about
> > > > > 2 million ms (over half an hour) waiting for it to complete, and
> > > > > I'm not sure how many inserts PostgreSQL is doing per second.
> > > >
> > > > That really depends.  Doing 200,000 inserts as individual
> > > > transactions will be fairly slow.  Since PostgreSQL generally runs
> > > > in autocommit mode, if you didn't expressly begin a transaction you
> > > > are in fact committing each inserted row as its own transaction,
> > > > paying the cost of a commit for every single row.
> > >
> > > I think the OP is doing the inserts inside a pl/pgsql loop... a
> > > single transaction is implied there.
> >
> > Yeah, I noticed that about 10 seconds after hitting send... :)
>
> actually, I get the stupid award too, because an RI check against an
> unindexed column is not possible :) (that problem haunts deletes, not
> inserts).
Sure it's possible -- the unindexed column just has to be on the child
side.  (PostgreSQL insists that the referenced column carry a unique or
primary key constraint, so the parent side is always indexed.)

CREATE TABLE parent (col1 int4 PRIMARY KEY);
-- insert many millions of rows into parent
CREATE TABLE child (col1 int4 REFERENCES parent(col1));
-- insert many millions of rows into child
DELETE FROM parent WHERE ...;
-- runs very very slowly: each deleted row fires an RI check that has
-- to scan all of child, because child.col1 has no index.
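
The usual remedy, for anyone who finds this in the archives (the index
name is my own invention):

CREATE INDEX child_col1_idx ON child (col1);

With that in place, the RI check on each parent delete becomes an index
probe instead of a full scan of child.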
- Mark Lewis
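
P.S.  For the archives, the difference Scott describes upthread looks
roughly like this; the table and function names are made up for
illustration.

CREATE TABLE test_tbl (id int4);

-- 200,000 separate transactions: in autocommit mode each INSERT is
-- implicitly wrapped in its own BEGIN/COMMIT, so every row waits for
-- its own WAL flush.
INSERT INTO test_tbl VALUES (1);
INSERT INTO test_tbl VALUES (2);
-- ...and so on, one commit per row.

-- One transaction: a single commit (and a single WAL flush) at the end.
BEGIN;
INSERT INTO test_tbl VALUES (1);
INSERT INTO test_tbl VALUES (2);
-- ...
COMMIT;

-- A pl/pgsql function body always runs inside a single transaction
-- anyway, which is Merlin's point -- the looped inserts don't pay a
-- per-row commit cost:
CREATE OR REPLACE FUNCTION load_test_rows() RETURNS void AS $$
BEGIN
    FOR i IN 1..200000 LOOP
        INSERT INTO test_tbl VALUES (i);
    END LOOP;
END;
$$ LANGUAGE plpgsql;

SELECT load_test_rows();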