Thread: slow loop inserts?

slow loop inserts?

From: Ezequiel Lovelle
Date: May 15, 2011

Hi, I'm new to Postgres and I have the following question.

I have a PHP program that makes 100,000 inserts into my database:
auto-increment numbers inserted into a table with 5 columns.
The script takes about 4 minutes when run from a webserver.
Is that a normal time?

How could I reduce this time for a loop of inserts?

When I turn off fsync I get much better performance, but that is not ideal in case of a power failure.
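
(For reference, that test was just this one-line change in postgresql.conf, with everything else left as in the config below:)

# fsync-off test only; not crash-safe on power loss
fsync = off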

 

Hardware: 2 × 1 TB 7200 rpm disks in software RAID 1 (gmirror)

8 GB RAM

CPU: Intel Quad Core 2.4 GHz

OS: FreeBSD 8.2

Postgres version: 9.0.4

 

My Postgres config:


listen_addresses = '*'
wal_level = archive
fsync = on
archive_mode = on
archive_command = 'exit 0'
maintenance_work_mem = 480MB
checkpoint_completion_target = 0.5
effective_cache_size = 5632MB
work_mem = 40MB
wal_buffers = 16MB
checkpoint_segments = 30
shared_buffers = 1920MB
max_connections = 40

 

The execution time of my script:

[root@webserver ~]# time php script.php

real    4m54.846s
user    0m2.695s
sys     0m1.775s

 

My script:

<?php
// Connect to the database (placeholder connection parameters).
$conn = pg_connect("host=host port=port dbname=db user=user password=pass")
    or die("Could not connect...");

// 100,000 single-row INSERTs, one per iteration.
for ($var = 1; $var <= 100000; $var++) {
    $sql = "INSERT INTO server (aa, bb, cc, dd, ee) VALUES ('$var','$var','$var','$var','$var')";
    pg_query($conn, $sql);
}
?>

My dd test:

#time sh -c "dd if=/dev/zero of=/tmp/test count=500000 && fsync"
500000+0 records in
500000+0 records out
256000000 bytes transferred in 2.147917 secs (119185237 bytes/sec)
usage: fsync file ...

real    0m2.177s
user    0m0.188s
sys     0m0.876s
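
(Back-of-the-envelope, in case the numbers help; this is my own arithmetic from the 4m55s run above:)

100000 inserts / 294.8 s  =  ~340 commits per second
7200 rpm / 60             =  120 platter rotations per second

so if each single-row commit really had to wait for the platters I would expect well under 340 per second; I'm not sure whether the drives' write caches come into play here.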

 

Thanks, any help will be well received.

Re: slow loop inserts?

From: Dan Birken
Date:
Try wrapping all your inserts in a transaction:

pg_query('BEGIN');
// your inserts
pg_query('COMMIT');

That way you won't have to sync each of those inserts to disk, which should provide a huge speedup. Of course this means your 100,000 inserts will be all or nothing, but it seems like that should be fine in this case.
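
Roughly, applied to your script it would look like this (keeping your table and placeholder connection settings; untested sketch):

<?php
$conn = pg_connect("host=host port=port dbname=db user=user password=pass")
    or die("Could not connect...");

// One transaction around the whole loop: the WAL is forced to disk once
// at COMMIT instead of once per INSERT.
pg_query($conn, 'BEGIN');
for ($var = 1; $var <= 100000; $var++) {
    pg_query($conn, "INSERT INTO server (aa, bb, cc, dd, ee) VALUES ('$var','$var','$var','$var','$var')");
}
pg_query($conn, 'COMMIT');
?>

On top of that, a prepared statement (pg_prepare / pg_execute) would avoid re-parsing the same INSERT 100,000 times, but the transaction is the big win here.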

-Dan
