Re: BUG #4204: COPY to table with FK has memory leak - Mailing list pgsql-hackers

From:      Gregory Stark
Subject:   Re: BUG #4204: COPY to table with FK has memory leak
Msg-id:    873ao2qoer.fsf@oxford.xeocode.com
Responses: Re: BUG #4204: COPY to table with FK has memory leak  (Tom Lane <tgl@sss.pgh.pa.us>)
           Re: BUG #4204: COPY to table with FK has memory leak  (Decibel! <decibel@decibel.org>)
List:      pgsql-hackers
[moving to -hackers]

"Tom Lane" <tgl@sss.pgh.pa.us> writes:

> "Tomasz Rybak" <bogomips@post.pl> writes:
>> I tried to use COPY to import 27M rows to table:
>> CREATE TABLE sputnik.ccc24 (
>>         station CHARACTER(4) NOT NULL REFERENCES sputnik.station24 (id),
>>         moment INTEGER NOT NULL,
>>         flags INTEGER NOT NULL
>> ) INHERITS (sputnik.sputnik);
>> COPY sputnik.ccc24(id, moment, station, strength, sequence, flags)
>> FROM '/tmp/24c3' WITH DELIMITER AS ' ';
>
> This is expected to take lots of memory because each row-requiring-check
> generates an entry in the pending trigger event list.  Even if you had
> not exhausted memory, the actual execution of the retail checks would
> have taken an unreasonable amount of time.  The recommended way to do
> this sort of thing is to add the REFERENCES constraint *after* you load
> all the data; that'll be a lot faster in most cases because the checks
> are done "in bulk" using a JOIN rather than one-at-a-time.
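
(For the archives, the recommended pattern would look roughly like the sketch
below, reusing the table from the report; the constraint name is just
illustrative, and ADD FOREIGN KEY will pick a default name if you leave it out:)

    -- create the table with no FK, so COPY fires no RI triggers
    CREATE TABLE sputnik.ccc24 (
            station CHARACTER(4) NOT NULL,
            moment INTEGER NOT NULL,
            flags INTEGER NOT NULL
    ) INHERITS (sputnik.sputnik);

    -- bulk load first
    COPY sputnik.ccc24(id, moment, station, strength, sequence, flags)
    FROM '/tmp/24c3' WITH DELIMITER AS ' ';

    -- then add the constraint, which validates all rows in one bulk pass
    ALTER TABLE sputnik.ccc24
            ADD CONSTRAINT ccc24_station_fkey
            FOREIGN KEY (station) REFERENCES sputnik.station24 (id);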

Hm, it occurs to me that we could still do a join against the pending trigger
event list... I wonder how feasible it would be to store the pending trigger
event list in a temporary table instead of in RAM.
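
Concretely, the check I have in mind is the same join-style validation you get
when the constraint is added after the load -- purely illustrative SQL here,
not what the backend would literally run, and shown against the whole target
table rather than a spooled event list:

    -- find any FK value with no match in the referenced table
    SELECT fk.station
    FROM sputnik.ccc24 fk
    LEFT JOIN sputnik.station24 pk ON pk.id = fk.station
    WHERE pk.id IS NULL
    LIMIT 1;   -- any row returned means a broken reference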

-- 
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's On-Demand Production Tuning

