Thread: pgsql inserts problem

pgsql inserts problem

From: Tarhon-Onu Victor
    Hi,

    I have a (big) problem with PostgreSQL when making lots of
inserts per second. I have a tool that generates output at ~2500
lines per second. I wrote a script in Perl that opens a pipe to that
tool, reads every line, and inserts the data.
    I tried both variants, with and without autocommit (in the latter
case the inserts were committed every 60 seconds), and the problem
persists.
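The loop described above can be sketched as follows (a minimal sketch in Python, with the standard-library sqlite3 module standing in for the actual Perl/DBD::Pg setup, which isn't shown; the table name and line format are invented for the example). The point is that every row's insert either succeeds or raises, and commits happen in batches:

```python
import sqlite3

def load_lines(conn, lines, batch_size=1000):
    """Insert one row per input line, committing every batch_size rows.
    A malformed line raises instead of being silently dropped."""
    cur = conn.cursor()
    inserted = 0
    for line in lines:
        ts, value = line.split()          # raises ValueError on bad input
        cur.execute("INSERT INTO samples (ts, value) VALUES (?, ?)",
                    (ts, int(value)))
        inserted += 1
        if inserted % batch_size == 0:
            conn.commit()
    conn.commit()                         # flush the final partial batch
    return inserted

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (ts TEXT, value INTEGER)")
# simulate ~2500 lines of tool output
n = load_lines(conn, ("t%d 42" % i for i in range(2500)), batch_size=500)
```

With this shape, a count mismatch between lines read and rows committed is impossible to miss.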

    The problem is that only ~15% of the lines are inserted into
the database. The same script, modified to insert the same data into a
similar table in a MySQL database, inserts 100% of them.

    I also dropped the indexes on various columns, just to make sure
the overhead is not too big (but I also need those indexes because
I'll run lots of SELECTs on that table).
    I tried both variants: connecting to a host over TCP and locally
through the PostgreSQL server's socket (/tmp/s.PGSQL.5432).

    Where could the problem be?

    I'm using postgresql 7.4 devel snapshot 20030628 and 20030531.
Some of the settings are:

shared_buffers = 520
max_locks_per_transaction = 128
wal_buffers = 8
max_fsm_relations = 30000
max_fsm_pages = 482000
sort_mem = 131072
vacuum_mem = 131072
effective_cache_size = 10000
random_page_cost = 2

--
Any views or opinions presented within this e-mail are solely those of
the author and do not necessarily represent those of any company, unless
otherwise expressly stated.

Re: pgsql inserts problem

From: Bruno Wolff III
On Wed, Aug 27, 2003 at 15:50:32 +0300,
  Tarhon-Onu Victor <mituc@iasi.rdsnet.ro> wrote:
>
>     The problem is that only ~15% of the lines are inserted into
> the database. The same script modified to insert the same data in a
> similar table created in a MySQL database inserts 100%.

Did you check the error status for the records that weren't entered?

My first guess is that you have some bad data you are trying to insert.
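One hedged illustration of how lines can vanish with no database error: if the loading script catches parse failures and moves on, the bad lines never reach the server at all, so there is nothing for the database to report (Python with sqlite3; the table name and data are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")

lines = ["1", "2", "oops", "4", "bad", "6"]

inserted = skipped = 0
for line in lines:
    try:
        conn.execute("INSERT INTO t (n) VALUES (?)", (int(line),))
        inserted += 1
    except ValueError:
        skipped += 1    # these lines never reached the database,
                        # so no INSERT error was ever raised
conn.commit()
```

Checking the insert's error status is not enough if the bad rows are filtered out before the insert ever runs.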

Re: pgsql inserts problem

From: Christopher Kings-Lynne
> > The problem is that only ~15% of the lines are inserted into
> > the database. The same script modified to insert the same data in a
> > similar table created in a MySQL database inserts 100%.
>
> Did you check the error status for the records that weren't entered?
>
> My first guess is that you have some bad data you are trying to insert.

I wouldn't be surprised; MySQL will just insert a zero instead of failing in
most cases :P

Chris


Re: pgsql inserts problem

From: Tarhon-Onu Victor
On Wed, 27 Aug 2003, Bruno Wolff III wrote:

> Did you check the error status for the records that weren't entered?
>
> My first guess is that you have some bad data you are trying to insert.

    Of course I checked the error status for every insert; there is
no error. It seems like in my case the Postgres server cannot handle so
many inserts per second, so some of the lines are not being parsed and
their data not inserted into the database.
    I don't know where the problem is: in the DBD::Pg Perl DBI
driver, or in my PostgreSQL server settings not being optimal.


Re: pgsql inserts problem

From: Shridhar Daithankar
On 27 Aug 2003 at 15:50, Tarhon-Onu Victor wrote:

>
>     Hi,
>
>     I have a (big) problem with PostgreSQL when making lots of
> inserts per second. I have a tool that is generating an output of ~2500
> lines per second. I wrote a script in Perl that opens a pipe to that
> tool, reads every line and inserts data.
>     I tried both variants, with and without autocommit (the inserts
> were committed every 60 seconds), and the problem persists.

Assuming one record per line, you are committing after 150K records; that's
not good.

Try committing every 5 seconds. And open more than one connection; that will
certainly improve performance. After all, concurrency is the biggest asset of
PostgreSQL.

Fiddle around with the combinations and see which works best for you.
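The commit-every-few-seconds idea can be sketched as a time-based batcher (Python, with sqlite3 standing in for Postgres; the tiny interval just keeps the demo fast, and the table name is invented):

```python
import sqlite3
import time

def insert_with_timed_commits(conn, rows, interval=5.0):
    """Commit whenever `interval` seconds have passed since the last
    commit, instead of accumulating a minute's worth of rows in one
    open transaction."""
    cur = conn.cursor()
    last_commit = time.monotonic()
    for row in rows:
        cur.execute("INSERT INTO t (n) VALUES (?)", (row,))
        if time.monotonic() - last_commit >= interval:
            conn.commit()
            last_commit = time.monotonic()
    conn.commit()    # flush whatever is left in the final batch

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
insert_with_timed_commits(conn, range(1000), interval=0.01)
```

Spreading the same work across several such connections is then a matter of partitioning the input stream between them.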

Bye
 Shridhar

--
Mencken and Nathan's Ninth Law of The Average American: The quality of a
champagne is judged by the amount of noise the cork makes when it is popped.


Re: pgsql inserts problem

From: Christopher Kings-Lynne
> Of course I checked the error status for every insert; there is
> no error. It seems like in my case the Postgres server cannot handle so
> many inserts per second, so some of the lines are not being parsed and
> their data not inserted into the database.

That sounds extremely unlikely.  Postgres is not one to fail without any
sort of error.  There's something else that is the problem.  More than
likely, it's a problem in your code.

Chris


Re: pgsql inserts problem

From: Richard Huxton
On Wednesday 27 August 2003 13:50, Tarhon-Onu Victor wrote:
>
> shared_buffers = 520
> max_locks_per_transaction = 128
> wal_buffers = 8
> max_fsm_relations = 30000
> max_fsm_pages = 482000
> sort_mem = 131072
> vacuum_mem = 131072
> effective_cache_size = 10000
> random_page_cost = 2

Slightly off-topic, but I think your tuning settings are a bit out. You've got
4MB allocated to shared_buffers but 128MB allocated to sort_mem? And only
80MB to effective_cache_size? Your settings might be right, but you'd need a
very strange set of circumstances.
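For reference, the unit arithmetic behind that observation, sketched in Python (shared_buffers and effective_cache_size are counted in 8 kB pages; sort_mem and vacuum_mem in kB):

```python
PAGE_KB = 8  # PostgreSQL uses 8 kB pages

shared_buffers_mb = 520 * PAGE_KB / 1024        # ~4 MB of shared buffers
sort_mem_mb = 131072 / 1024                     # 128 MB, and that's *per sort*
effective_cache_mb = 10000 * PAGE_KB / 1024     # ~78 MB assumed OS cache
```

So each concurrent sort is allowed 32x the entire shared buffer pool, which is the imbalance being pointed out.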

As for PG silently discarding inserts, your best bet might be to write a short
Perl script to reproduce the problem. Without that, people are likely to be
sceptical - if PG tended to do this sort of thing, none of us would use it.

--
  Richard Huxton
  Archonet Ltd