Re: Hardware performance for large updates - Mailing list pgsql-sql

From Josh Berkus
Subject Re: Hardware performance for large updates
Date
Msg-id web-1635967@davinci.ethosmedia.com
In response to Re: Hardware performance for large updates  (Joe Conway <mail@joeconway.com>)
List pgsql-sql
Joe,

> I think we'd need more information to be of any help -- schema,
> functions, explain output, etc.

Yeah, I know.   I'm just looking for general tips here ... I need to do
the actual optimization interactively.    

Particularly, the difficulty is that this application gets many small
requests during the day (100 simultaneous users) and shares a server
with Apache.   So I have to be concerned about how much memory each
connection soaks up during the day.   At night, the maintenance tasks
run a few really massive procedures.

So I should probably restart Postgres with different settings at night,
hey?
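
(Or, rather than a full restart, I suppose the nightly scripts could just
raise the per-session settings for themselves -- a rough sketch, assuming
the maintenance jobs run through an ordinary SQL session and that sort_mem
is settable with SET on this version:

    -- at the top of the nightly maintenance script:
    -- sort_mem is per-connection, so bumping it here costs the
    -- daytime Apache connections nothing
    SET sort_mem = 65536;    -- value is in KB, so this is 64MB for the big sorts
    -- ... run the massive procedures / VACUUM ANALYZE here ...
    RESET sort_mem;          -- back to the postgresql.conf value

shared_buffers, of course, would still need a postmaster restart to change.)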

> I do think you probably could increase Shared Buffers, as 256 is
> pretty small. There's been a lot of debate over the best setting. The
> usual guidance is start at 25% of physical RAM (16384 == 128MB if you
> have 512MB RAM), then tweak to optimize performance for your
> application and hardware. 

Hmmm... how big is a shared buffer, anyway?   I'm having trouble
finding actual numbers in the docs.
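
(Though I suppose I can work it backwards from your numbers: 128MB spread
over 16384 buffers is one 8K disk block per buffer -- i.e. BLCKSZ -- so the
arithmetic for your 25% starting point would be roughly:

    # rough arithmetic, assuming the stock 8K block size (BLCKSZ)
    # 512MB RAM x 25% = 128MB; 128MB / 8K per buffer = 16384 buffers
    shared_buffers = 16384

unless the server was compiled with a non-default block size.)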

> You might also bump sort mem up a bit
> (maybe to 2048). Again, I would test using my app and hardware to get
> the best value. 

> Are you on a Linux server -- if so I found that
> fdatasync works better than (the default) fsync for wal_sync_method.

Yes, I am.   Any particular reason why fdatasync works better?
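
(For my own notes, pulling your suggestions together, the daytime
postgresql.conf would look something like the sketch below -- exact values
still to be tested against the app and hardware, as you say:

    # daytime settings for the web/Apache workload -- a starting point, not tuned
    shared_buffers = 16384        # ~128MB, assuming this box really has 512MB RAM
    sort_mem = 2048               # 2MB per sort, in KB; ~100 connections could
                                  # still add up to ~200MB if they all sort at once
    wal_sync_method = fdatasync   # instead of the default fsync, per your Linux tip

and then the nightly job bumps sort_mem for itself as sketched above.)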

Thanks a lot!

-Josh Berkus


