Thread: pgsql inserts problem
Hi, I have a (big) problem with PostgreSQL when making lots of inserts per second. I have a tool that generates ~2500 lines of output per second. I wrote a Perl script that opens a pipe to that tool, reads every line, and inserts the data. I tried both autocommit and batched variants (committing the inserts every 60 seconds), and the problem persists: only ~15% of the lines are inserted into the database. The same script, modified to insert the same data into a similar table in a MySQL database, inserts 100%. I also dropped the indexes on various columns, just to make sure their overhead wasn't too big (but I do need those indexes, because I'll be doing lots of SELECTs on that table). I tried both variants: connecting to a host over TCP and connecting locally through the PostgreSQL server's socket (/tmp/.s.PGSQL.5432). Where could the problem be?

I'm using PostgreSQL 7.4 devel snapshots 20030628 and 20030531. Some of the settings are:

shared_buffers = 520
max_locks_per_transaction = 128
wal_buffers = 8
max_fsm_relations = 30000
max_fsm_pages = 482000
sort_mem = 131072
vacuum_mem = 131072
effective_cache_size = 10000
random_page_cost = 2
On Wed, Aug 27, 2003 at 15:50:32 +0300, Tarhon-Onu Victor <mituc@iasi.rdsnet.ro> wrote:
>
> The problem is that only ~15% of the lines are inserted into
> the database. The same script modified to insert the same data in a
> similar table created in a MySQL database inserts 100%.

Did you check the error status for the records that weren't entered?

My first guess is that you have some bad data you are trying to insert.
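To illustrate the suggestion above, here is a minimal sketch of what checking the error status might look like in Perl with DBI. The database name, table, column, and the generator's command line are all hypothetical; the point is only that every failed execute() gets logged with the server's error message instead of being silently dropped.

```perl
#!/usr/bin/perl
# Hypothetical sketch: pipe lines from the generating tool into
# PostgreSQL, logging every row the server rejects. Names below
# (dbname, table, column, tool path) are made up for illustration.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect(
    "dbi:Pg:dbname=logs", "user", "pass",
    { AutoCommit => 1, RaiseError => 0, PrintError => 0 }
) or die "connect: $DBI::errstr";

my $sth = $dbh->prepare("INSERT INTO events (line) VALUES (?)");

open(my $tool, "-|", "/usr/local/bin/generator") or die "pipe: $!";

my ($total, $failed) = (0, 0);
while (my $line = <$tool>) {
    chomp $line;
    $total++;
    unless ($sth->execute($line)) {
        # The crucial part: record WHY a row was refused.
        warn "insert failed: " . $dbh->errstr . " [line: $line]\n";
        $failed++;
    }
}
close $tool;
$dbh->disconnect;
print "inserted ", $total - $failed, " of $total lines ($failed failures)\n";
```

One caveat if you batch commits instead of using AutoCommit: in PostgreSQL, once one statement in a transaction fails, every subsequent statement in that transaction fails until you roll back, so a single bad row can take its whole 60-second batch with it. That alone could account for losing a large fraction of the rows.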
> > The problem is that only ~15% of the lines are inserted into
> > the database. The same script modified to insert the same data in a
> > similar table created in a MySQL database inserts 100%.
>
> Did you check the error status for the records that weren't entered?
>
> My first guess is that you have some bad data you are trying to insert.

I wouldn't be surprised; in most cases MySQL will just insert a zero instead of failing :P

Chris
On Wednesday 27 August 2003 13:50, Tarhon-Onu Victor wrote:
>
> shared_buffers = 520
> max_locks_per_transaction = 128
> wal_buffers = 8
> max_fsm_relations = 30000
> max_fsm_pages = 482000
> sort_mem = 131072
> vacuum_mem = 131072
> effective_cache_size = 10000
> random_page_cost = 2

Slightly off-topic, but I think your tuning settings are a bit out of balance. You've got only 4MB allocated to shared_buffers but 128MB allocated to sort_mem, and only 80MB to effective_cache_size? Your settings might be right, but you'd need a very strange set of circumstances.

As for PG silently discarding inserts, your best bet might be to write a short Perl script that reproduces the problem. Without that, people are likely to be sceptical - if PG tended to do this sort of thing, none of us would use it.

--
Richard Huxton
Archonet Ltd
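For reference, a more balanced postgresql.conf sketch for a hypothetical machine with ~512MB of RAM (illustrative numbers only, not a recommendation for the poster's unknown hardware). In 7.4, shared_buffers and effective_cache_size are counted in 8kB pages and sort_mem/vacuum_mem in kB:

```
shared_buffers = 8192          # 64MB, up from 520 (~4MB)
sort_mem = 8192                # 8MB per sort, down from 128MB
vacuum_mem = 32768             # 32MB
effective_cache_size = 40000   # ~320MB, roughly what the OS caches
wal_buffers = 8
random_page_cost = 2
```

Note that sort_mem is allocated per sort, not per connection, so a large value multiplies quickly under concurrent load, while effective_cache_size is only a planner hint and should reflect the OS cache size reported by free or top.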