Re: Howto Increased performace ? - Mailing list pgsql-performance

From Iain
Subject Re: Howto Increased performace ?
Date
Msg-id 005801c4ec84$426378f0$7201a8c0@mst1x5r347kymb
Whole thread Raw
In response to Howto Increased performace ?  ("Amrit Angsusingh" <amrit@spr.go.th>)
List pgsql-performance
Hi Cosimo,

I had read that before, so you are right. The amount of memory being used
could run much higher than I wrote.

In my case, I know that not all the connections are busy all the time
(this isn't a web application with thousands of users connecting to a pool),
so not all active connections will be doing sorts all the time. As far as I
can tell, sort memory is allocated as needed, so my estimate of 400MB should
still be reasonable, and I have plenty of unaccounted-for memory outside the
effective cache, so it shouldn't be a problem.
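For what it's worth, the arithmetic behind these estimates can be sketched as follows (the function name is mine, and the assumption that every sort node allocates the full sort_mem is the worst case, not what actually happens on a lightly loaded system):

```python
# Rough worst-case estimate of sort memory usage. This is a sketch of
# the back-of-envelope arithmetic, not PostgreSQL's actual accounting:
# it assumes every connection runs a query whose plan contains
# sorts_per_query sort nodes, each allocating the full sort_mem.

def worst_case_sort_memory_mb(connections, sort_mem_kb, sorts_per_query=1):
    """Return the worst-case total sort memory in megabytes."""
    return connections * (sort_mem_kb / 1024) * sorts_per_query

# sort_mem = 4096 (KB) with 100 connections:
print(worst_case_sort_memory_mb(100, 4096))     # one sort per query:  400 MB
print(worst_case_sort_memory_mb(100, 4096, 3))  # three sorts/query:  1200 MB
```

With three sorts per plan this reproduces the 1.2GB worst case; with idle connections and fewer sorts, actual usage sits well below it.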

Presumably, that memory isn't needed after the result set is built.

If I understand correctly, there isn't any way to limit the total amount of
memory allocated for sorting, which means that you can't specify generous
sort_mem values to help out when there is spare capacity (few connections),
because in the worst case it could cause swapping when the system is busy. In
the not-so-bad case, the effective cache size estimate will just be completely
wrong.

Maybe a global sort memory limit would be a good idea, I don't know.

regards
Iain


> Iain wrote:
>
>> sort_mem 4096 (=400MB RAM for 100 connections)
>
> If I understand correctly, memory usage related to `sort_mem'
> is per connection *and* per sort.
> If every client runs a query with 3 sorts in its plan, you are
> going to need (in theory) 100 connections * 4Mb * 3 sorts,
> which is 1.2 Gb.
>
> Please correct me if I'm wrong...
>
> --
> Cosimo


