Thread: Optimisation help

Optimisation help

From
dforums
Date:
Hello,

We have a Quad Xeon server with 8 GB of RAM and a 750 GB SATA II drive.

A PostgreSQL database of 10 GB.

I have several treatments every 2 minutes that select, insert and update
thousands of rows in a table. It takes a lot of time (0.3300 ms per line)
just to check whether a 15-character string is present and to decide
whether to update it under a few constraints.

I suppose the main problem is the database server settings.

These are my settings:

max_connections = 256
shared_buffers = 1500                   # min 16 or max_connections*2, 8KB each
temp_buffers = 500                      # min 100, 8KB each
max_prepared_transactions = 100

work_mem = 22000                        # min 64, size in KB
maintenance_work_mem = 500000           # min 1024, size in KB
max_stack_depth = 8192

max_fsm_pages = 100000                  # min max_fsm_relations*16, 6 bytes each
max_fsm_relations = 5000

vacuum_cost_delay = 50                  # 0-1000 milliseconds
vacuum_cost_page_hit = 1000             # 0-10000 credits
vacuum_cost_page_miss = 1000            # 0-10000 credits
vacuum_cost_page_dirty = 120            # 0-10000 credits
vacuum_cost_limit = 2000                # 0-10000 credits

# - Background writer -

bgwriter_delay = 50                     # 10-10000 milliseconds between rounds
bgwriter_lru_percent = 1.0              # 0-100% of LRU buffers scanned/round
bgwriter_lru_maxpages = 25              # 0-1000 buffers max written/round
bgwriter_all_percent = 0.333            # 0-100% of all buffers scanned/round
bgwriter_all_maxpages = 50              # 0-1000 buffers max written/round

wal_buffers = 16                        # min 4, 8KB each
commit_delay = 500                      # range 0-100000, in microseconds
commit_siblings = 50                    # range 1-1000

# - Checkpoints -

checkpoint_segments = 50                # in logfile segments, min 1, 16MB each
checkpoint_timeout = 1800               # range 30-3600, in seconds
checkpoint_warning = 180

effective_cache_size = 2048             # typically 8KB each
random_page_cost = 3

Shared memory is set to:
/proc/sys/kernel/shmmax = 256000000

Could you help, please...

tx

David
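
PS: the per-line treatment boils down to something like this (a sketch with
made-up table and column names, the real schema is more complex):

  -- check whether the 15-character code is already present
  SELECT id FROM tracking WHERE code = 'ABC123DEF456GHI';

  -- if it is, and the constraints allow it, update the row...
  UPDATE tracking SET counter = counter + 1, updated_at = now()
   WHERE code = 'ABC123DEF456GHI';

  -- ...otherwise insert it
  INSERT INTO tracking (code, counter, updated_at)
  VALUES ('ABC123DEF456GHI', 1, now());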

Re: Optimisation help

From
Alan Hodgson
Date:
On Tuesday 04 March 2008, dforums <dforums@vieonet.com> wrote:
>  Hello
>
>
>  We have a Quad Xeon server with 8 GB of RAM and a 750 GB SATA II drive
>
>
>  I suppose the main problem is the database server settings.

No, the problem is your hard drive is too slow. One drive can only do maybe
150 seeks per second.

Oh, and updates in PostgreSQL are expensive. But mostly I'd say it's your
drive.

--
Alan

Re: Optimisation help

From
dforums
Date:
Thanks for your reply.

I do not have more information on disk speed; I'll get it later.

But my main fear is that for now the database is only 10 GB, and I will
have to grow it tenfold over the next six months. I'm afraid these
problems will get worse.

Regards

David

Alan Hodgson wrote:
> On Tuesday 04 March 2008, dforums <dforums@vieonet.com> wrote:
>>  Hello
>>
>>
>>  We have a Quad Xeon server with 8 GB of RAM and a 750 GB SATA II drive
>>
>>
>>  I suppose the main problem is the database server settings.
>
> No, the problem is your hard drive is too slow. One drive can only do maybe
> 150 seeks per second.
>
> Oh, and updates in PostgreSQL are expensive. But mostly I'd say it's your
> drive.
>

Re: Optimisation help

From
dforums
Date:
Regarding updates: I have around 10000 updates over a span of 10 minutes.

Is there a setting to optimise updates?

regards

david

Alan Hodgson wrote:
> On Tuesday 04 March 2008, dforums <dforums@vieonet.com> wrote:
>>  Hello
>>
>>
>>  We have a Quad Xeon server with 8 GB of RAM and a 750 GB SATA II drive
>>
>>
>>  I suppose the main problem is the database server settings.
>
> No, the problem is your hard drive is too slow. One drive can only do maybe
> 150 seeks per second.
>
> Oh, and updates in PostgreSQL are expensive. But mostly I'd say it's your
> drive.
>

Re: Optimisation help

From
Greg Smith
Date:
On Tue, 4 Mar 2008, dforums wrote:

> max_connections = 256
> shared_buffers = 1500                   # min 16 or max_connections*2, 8KB each
> work_mem = 22000                        # min 64, size in KB
> effective_cache_size = 2048             # typically 8KB each

Well, you're giving the main database server a whopping 1500*8K=12MB of
space to work with.  Meanwhile you're giving each of the 256 clients up to
22MB of work_mem, which means they can use 5.6GB total.  This is quite
backwards.

Increase shared_buffers to something like 250000 (2GB), decrease work_mem
to at most 10000 and probably lower, and raise effective_cache_size to
something like 5GB=625000.  Whatever data you've collected about
performance with your current settings is pretty much meaningless with
only giving 12MB of memory to shared_buffers and having a tiny setting for
effective_cache_size.
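
To see what the running server actually ends up with after the change, a
quick sanity check from psql might look like this (a sketch; the expected
value is just the shared_buffers figure suggested above):

   SHOW shared_buffers;        -- should report 250000 once the new value is live
   SHOW work_mem;
   SHOW effective_cache_size;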

Oh, and make sure you ANALYZE your tables regularly.

> random_page_cost = 3

And you shouldn't be playing with that until you've got the memory usage
to something sane.

Also, you didn't mention what version of PostgreSQL you're using.  You'll
need 8.1 or later to have any hope of using 8GB of RAM effectively on a
4-core system.
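
If you're not sure which version you're on, the server itself can tell you:

   SELECT version();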

> But my main fear is that for now the database is only 10 GB, and I will
> have to grow it tenfold over the next six months. I'm afraid these
> problems will get worse.

It's very unlikely you will be able to get good performance on a 100GB
database with a single SATA drive.  You should be able to get great
performance with the current size though.

> Regarding updates: I have around 10000 updates over a span of 10
> minutes.  Is there a setting to optimise updates?

10000 updates / 600 seconds = 17 updates/second.  That's trivial; even a
single boring drive can get 100/second.  As someone already suggested your
real problem here is that you'll be hard pressed to handle the amount of
seeking that goes into a larger database with only a single drive.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

Re: Optimisation help

From
dforums
Date:
Thanks, I'm trying these new settings. I gained only 3 seconds (2:40 vs
2:37 min) on a treatment of 1000 lines, which is run every 2 minutes.

For the database version, I'm on PostgreSQL 8.1.11, x64.

As the work is done inside a procedure, it seems that EXPLAIN ANALYZE
doesn't give details.

I suppose I have to break the procedure apart to see exactly where I'm
wasting so much time.
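
For example (a sketch; the table and query here are made up, the real ones
live in the procedure), I could pull each statement out of the function,
plug literal values into its parameters, and run EXPLAIN ANALYZE on it
directly from psql:

   EXPLAIN ANALYZE
   SELECT id FROM tracking WHERE code = 'ABC123DEF456GHI';

   EXPLAIN ANALYZE
   UPDATE tracking SET counter = counter + 1 WHERE code = 'ABC123DEF456GHI';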

regards

david

Greg Smith wrote:
> On Tue, 4 Mar 2008, dforums wrote:
>
>> max_connections = 256
>> shared_buffers = 1500                   # min 16 or max_connections*2,
>> 8KB each
>> work_mem = 22000                        # min 64, size in KB
>> effective_cache_size = 2048             # typically 8KB each
>
> Well, you're giving the main database server a whopping 1500*8K=12MB of
> space to work with.  Meanwhile you're giving each of the 256 clients up
> to 22MB of work_mem, which means they can use 5.6GB total.  This is
> quite backwards.
>
> Increase shared_buffers to something like 250000 (2GB), decrease
> work_mem to at most 10000 and probably lower, and raise
> effective_cache_size to something like 5GB=625000.  Whatever data you've
> collected about performance with your current settings is pretty much
> meaningless with only giving 12MB of memory to shared_buffers and having
> a tiny setting for effective_cache_size.
>
> Oh, and make sure you ANALYZE your tables regularly.
>
>> random_page_cost = 3
>
> And you shouldn't be playing with that until you've got the memory usage
> to something sane.
>
> Also, you didn't mention what version of PostgreSQL you're using.
> You'll need 8.1 or later to have any hope of using 8GB of RAM
> effectively on a 4-core system.
>
>> But my main fear is that for now the database is only 10 GB, and I will
>> have to grow it tenfold over the next six months. I'm afraid these
>> problems will get worse.
>
> It's very unlikely you will be able to get good performance on a 100GB
> database with a single SATA drive.  You should be able to get great
> performance with the current size though.
>
>> Regarding updates: I have around 10000 updates over a span of 10
>> minutes.  Is there a setting to optimise updates?
>
> 10000 updates / 600 seconds = 17 updates/second.  That's trivial; even a
> single boring drive can get 100/second.  As someone already suggested
> your real problem here is that you'll be hard pressed to handle the
> amount of seeking that goes into a larger database with only a single
> drive.
>
> --
> * Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
>

Re: Optimisation help

From
dforums
Date:
Hello,

After checking the settings, I saw that shared_buffers is configured at
1024 (the default), even though in my postgresql.conf I set it to 250000.
Is this due to the shared memory settings? Should I increase shmmax?

regards

david

Greg Smith wrote:
> On Tue, 4 Mar 2008, dforums wrote:
>
>> max_connections = 256
>> shared_buffers = 1500                   # min 16 or max_connections*2,
>> 8KB each
>> work_mem = 22000                        # min 64, size in KB
>> effective_cache_size = 2048             # typically 8KB each
>
> Well, you're giving the main database server a whopping 1500*8K=12MB of
> space to work with.  Meanwhile you're giving each of the 256 clients up
> to 22MB of work_mem, which means they can use 5.6GB total.  This is
> quite backwards.
>
> Increase shared_buffers to something like 250000 (2GB), decrease
> work_mem to at most 10000 and probably lower, and raise
> effective_cache_size to something like 5GB=625000.  Whatever data you've
> collected about performance with your current settings is pretty much
> meaningless with only giving 12MB of memory to shared_buffers and having
> a tiny setting for effective_cache_size.
>
> Oh, and make sure you ANALYZE your tables regularly.
>
>> random_page_cost = 3
>
> And you shouldn't be playing with that until you've got the memory usage
> to something sane.
>
> Also, you didn't mention what version of PostgreSQL you're using.
> You'll need 8.1 or later to have any hope of using 8GB of RAM
> effectively on a 4-core system.
>
>> But my main fear is that for now the database is only 10 GB, and I will
>> have to grow it tenfold over the next six months. I'm afraid these
>> problems will get worse.
>
> It's very unlikely you will be able to get good performance on a 100GB
> database with a single SATA drive.  You should be able to get great
> performance with the current size though.
>
>> Regarding updates: I have around 10000 updates over a span of 10
>> minutes.  Is there a setting to optimise updates?
>
> 10000 updates / 600 seconds = 17 updates/second.  That's trivial; even a
> single boring drive can get 100/second.  As someone already suggested
> your real problem here is that you'll be hard pressed to handle the
> amount of seeking that goes into a larger database with only a single
> drive.
>
> --
> * Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
>

Re: Optimisation help

From
"Steinar H. Gunderson"
Date:
On Wed, Mar 05, 2008 at 12:15:25AM +0000, dforums wrote:
> Regarding updates: I have around 10000 updates over a span of 10 minutes.
>
> Is there a setting to optimise updates?

If you can, batch them into a single transaction.

If you can, upgrade to 8.3. HOT might help you here.
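
By batching into a single transaction I mean something like this (a sketch
with made-up table and column names); the point is to pay the commit/fsync
cost once per batch instead of once per row:

   BEGIN;
   UPDATE tracking SET counter = counter + 1 WHERE code = 'ABC123DEF456GHI';
   UPDATE tracking SET counter = counter + 1 WHERE code = 'JKL789MNO012PQR';
   -- ... the rest of the batch ...
   COMMIT;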

/* Steinar */
--
Homepage: http://www.sesse.net/

Re: Optimisation help

From
Erik Jones
Date:
On Mar 4, 2008, at 6:54 PM, dforums wrote:

> Hello,
>
> After checking the settings, I saw that shared_buffers is configured
> at 1024 (the default), even though in my postgresql.conf I set it to
> 250000. Is this due to the shared memory settings? Should I increase
> shmmax?

Did you do a full restart of the db cluster?  Changes to shared memory
settings require that.

Erik Jones

DBA | Emma®
erik@myemma.com
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com




Re: Optimisation help

From
dforums
Date:
OK, I found the cause: it was a default setting added on server start
(-B 1024). Grrrrrrrrr!!!!!

Now it works much better; the total time has been cut in half.

I suppose I still have to dig into the procedure for further gains; as
somebody suggested, I will try to batch all the updates into one
transaction.

One question: could I optimise the treatment by doing the selects on a
view while updating the main table?


regards

David


Erik Jones wrote:
>
> On Mar 4, 2008, at 6:54 PM, dforums wrote:
>
>> Hello,
>>
>> After checking the settings, I saw that shared_buffers is configured
>> at 1024 (the default), even though in my postgresql.conf I set it to
>> 250000. Is this due to the shared memory settings? Should I increase
>> shmmax?
>
> Did you do a full restart of the db cluster?  Changes to shared memory
> settings require that.
>
> Erik Jones
>
> DBA | Emma®
> erik@myemma.com
> 800.595.4401 or 615.292.5888
> 615.292.0777 (fax)
>
> Emma helps organizations everywhere communicate & market in style.
> Visit us online at http://www.myemma.com
>
>
>
>