Thread: Postgres tuning?

From: Simon Windsor
Hi

I am in the process of converting a small multi-user application from
MySQL, and most queries are performing better. The only noticeable
exception is a batch load, which runs at half the speed of the MySQL version.

What are the basic parameters I should be focusing on for best
performance?

- sort_mem
- shared_buffers

Many thanks

Simon

--
Simon Windsor
Email: simon.windsor@cornfield.org.uk
Tel: 01454 617689
Mob: 07960 321599




Re: Postgres tuning?

From: "Scott Marlowe"
On Tue, 2004-06-29 at 15:30, Simon Windsor wrote:
> Hi
>
> I am in the process of converting a small multi-user application from
> MySQL, and most queries are performing better. The only noticeable
> exception is a batch load, which is half the speed of MySQL version.
>
> What are the basic parameters I should be focusing on for best
> performance ?
>
> - sort_mem
> - shared_buffers

How does the batch insert work?  Just one insert after another?  Try
wrapping them in a transaction:

begin;
insert into ... (repeat a few thousand to million times)
commit;


Re: Postgres tuning?

From: Wes (wespvp@syntegra.com)
On 6/29/04 4:30 PM, "Simon Windsor" <simon.windsor@cornfield.org.uk> wrote:

> I am in the process of converting a small multi-user application from
> MySQL, and most queries are performing better. The only noticeable
> exception is a batch load, which is half the speed of MySQL version.

If you're talking about loading up an array and telling Postgres to insert
the whole array with a single INSERT, you can't do that.  You have to insert
one record at a time.  I wish it were possible - I could really use it.

The closest thing is COPY.  I've been told COPY does such a bulk load.  The
down side of COPY is that you have to know the column order - ok for initial
loads, but dangerous for application usage.

> begin;
> insert into ... (repeat a few thousand to million times)
> commit;

This does not accomplish the bulk load - it only makes all of the inserts
part of a single transaction for atomic commit or rollback.

Wes


Re: Postgres tuning?

From: Paul Thomas
On 29/06/2004 22:30 Simon Windsor wrote:
> Hi
>
> I am in the process of converting a small multi-user application from
> MySQL, and most queries are performing better. The only noticeable
> exception is a batch load, which is half the speed of MySQL version.
>
> What are the basic parameters I should be focusing on for best
> performance ?
>
> - sort_mem
> - shared_buffers

How does your batch load work? Is it a series of inserts? If it is, then
you should try wrapping a bunch of your inserts within a single transaction
and then committing. If you issue an insert without explicitly using a
transaction, PG will effectively wrap the insert within a BEGIN...COMMIT
block and you will incur the transaction overhead on each insert.
You'll find that inserting rows in batches of 100 or more within a
transaction really speeds things up.
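As a sketch of the difference (table and values are illustrative):

-- Autocommit: each statement is its own transaction, so the
-- per-transaction overhead is paid on every row:
INSERT INTO orders VALUES (1, 'widget');
INSERT INTO orders VALUES (2, 'gadget');

-- Batched: the same overhead is paid once per batch:
BEGIN;
INSERT INTO orders VALUES (1, 'widget');
INSERT INTO orders VALUES (2, 'gadget');
-- ... a hundred or more rows ...
COMMIT;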

HTH


--
Paul Thomas
+------------------------------+---------------------------------------------+
| Thomas Micro Systems Limited | Software Solutions for Business             |
| Computer Consultants         | http://www.thomas-micro-systems-ltd.co.uk   |
+------------------------------+---------------------------------------------+

Re: Postgres tuning?

From: "Dann Corbit"
Using the COPY command:
http://techdocs.postgresql.org/techdocs/usingcopy.php
http://www.postgresql.com/docs/7.4/static/sql-copy.html

Using the COPY API:
http://www.postgresql.com/docs/7.4/static/libpq-copy.html

> -----Original Message-----
> From: pgsql-general-owner@postgresql.org
> [mailto:pgsql-general-owner@postgresql.org] On Behalf Of
> wespvp@syntegra.com
> Sent: Tuesday, June 29, 2004 4:02 PM
> To: Simon Windsor; Postgres List
> Subject: Re: [GENERAL] Postgres tuning?
>
>
> On 6/29/04 4:30 PM, "Simon Windsor"
> <simon.windsor@cornfield.org.uk> wrote:
>
> > I am in the process of converting a small multi-user
> application from
> > MySQL, and most queries are performing better. The only noticeable
> > exception is a batch load, which is half the speed of MySQL version.
>
> If you're talking about loading up an array and telling
> Postgres to insert the whole array with a single INSERT,
> you can't do that.  You have to insert one record at a
> time.  I wish it were possible - I could really use it.
>
> The closest thing is COPY.  I've been told COPY does such a
> bulk load.  The down side of COPY is that you have to know
> the column order - ok for initial loads, but dangerous for
> application usage.
>
> > begin;
> > insert into ... (repeat a few thousand to million times) commit;
>
> This does not accomplish the bulk load - it only makes all of
> the inserts part of a single transaction for atomic commit or
> rollback.
>
> Wes

Re: Postgres tuning?

From: Alvaro Herrera
On Tue, Jun 29, 2004 at 06:01:51PM -0500, wespvp@syntegra.com wrote:

> The closest thing is COPY.  I've been told COPY does such a bulk load.  The
> down side of COPY is that you have to know the column order - ok for initial
> loads, but dangerous for application usage.

As of 7.3 you can use COPY with a column list.
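For example (names illustrative):

-- The column list decouples the file's field order from the
-- table definition:
COPY orders (item, id) FROM '/tmp/orders.dat' WITH DELIMITER '\t';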

--
Alvaro Herrera (<alvherre[a]dcc.uchile.cl>)
"You knock on that door or the sun will be shining on places inside you
that the sun doesn't usually shine" (en Death: "The High Cost of Living")