Kevin Brown <kevin@sysexperts.com> writes:
> Slavisa Garic wrote:
>> Using the pg module in Python, I am trying to run the COPY command to
>> populate a large table. I am using it to replace INSERT, which takes a
>> few hours to add 70000 entries, whereas COPY takes a minute and a half.
> That difference in speed seems quite large.  Too large.  Are you batching
> your INSERTs into transactions (you should be in order to get good
> performance)?  Do you have a ton of indexes on the table?  Does it have
> triggers on it, or something similar (if so, COPY may well wind up doing
> the wrong thing, since the triggers won't fire for the rows it
> inserts)?
COPY *does* fire triggers, and has done so for quite a few releases.
My bet is that the issue is failing to batch individual INSERTs into
transactions.  On a properly-set-up machine you can't get more than one
transaction commit per client per disk revolution, since each commit has
to wait for its WAL record to reach the platter, so the penalty for
trivial transactions like single INSERTs is pretty steep.
        regards, tom lane