I added BEGIN and COMMIT around the inserts as shown below and executed it from pgAdmin for 100,000 rows. It ran in just 1 minute.
BEGIN;
INSERT INTO TABLE VALUES(....);
INSERT INTO TABLE VALUES(....);
...
COMMIT;
However, when I run the same script through psql by passing it in as a file (as shown below), it still takes a long time. Am I doing anything wrong? How can I get the psql run to finish within a minute as well?
psql -h host -U user -p Port -d database < INSERT_FILE.sql
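If the INSERT file is too large to wrap by hand, a small preprocessing script can insert the BEGIN/COMMIT pairs for you before the file is fed to psql. This is a minimal sketch, not production code; the `batch_transactions` name and the batch size of 1000 are my own choices:

```python
def batch_transactions(statements, batch_size=1000):
    """Wrap every batch_size statements in a BEGIN ... COMMIT block."""
    out = []
    for i in range(0, len(statements), batch_size):
        out.append("BEGIN;")
        out.extend(statements[i:i + batch_size])
        out.append("COMMIT;")
    return out


if __name__ == "__main__":
    import sys
    # Read raw INSERT statements from stdin, emit batched SQL on stdout,
    # e.g.:  python batch.py < INSERT_FILE.sql | psql -h host -U user -d database
    stmts = [line.rstrip("\n") for line in sys.stdin if line.strip()]
    print("\n".join(batch_transactions(stmts)))
```

Note also that `psql --single-transaction` (or `-1`) wraps the whole file in one transaction without editing the file at all, which may be enough on its own if you do not expect any insertion failures.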
aditya desai <admad123@gmail.com> writes:
> One of the service-layer apps is inserting millions of records into a
> table, but one row at a time. Although COPY is the fastest way to import
> a file into a table, the application has a requirement of processing a
> row and then inserting it into the table. Is there any way this INSERT
> can be tuned by increasing parameters? It is taking almost 10 hours for
> just 2.2 million rows. The table does not have any indexes or triggers.
Using a prepared statement for the INSERT would help a little bit. What would help more, if you don't expect any insertion failures, is to group multiple inserts per transaction (i.e., put BEGIN ... COMMIT around each batch of 100 or 1000 or so insertions). There's not going to be any magic bullet that lets you get away without changing the app, though.
It's quite possible that network round trip costs are a big chunk of your problem, in which case physically grouping multiple rows into each INSERT command (... or COPY ...) is the only way to fix it. But I'd start with trying to reduce the transaction commit overhead.
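Grouping rows into a single multi-row INSERT, as suggested above, means building statements of the form `INSERT INTO t VALUES (...), (...), ...;` so each network round trip carries many rows. A minimal sketch of generating such a statement (the helper name is my own, and real code should use the client library's parameter binding rather than string formatting, to avoid SQL injection and quoting bugs):

```python
def multirow_insert(table, rows):
    """Build one INSERT statement carrying all of `rows`.

    Values are interpolated naively here for illustration only; use
    parameter binding in real code.
    """
    values = ",".join("(%s)" % ",".join(str(v) for v in row) for row in rows)
    return "INSERT INTO %s VALUES %s;" % (table, values)
```

For example, `multirow_insert("t", [(1, 2), (3, 4)])` produces a single statement inserting both rows, so 1000 rows cost one round trip instead of 1000.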