Thread: PostgreSQL Database performance

PostgreSQL Database performance

From: Pradeep

Dear Team,

 

Could you please help me? After changing the parameters below in the PostgreSQL configuration file, the change is not reflected at the OS level, and database performance is degrading.

 

Example: I am using a Windows 2008 R2 server. For PostgreSQL I have allocated 24GB of RAM out of 32GB.

However, after changing the parameters below, the task bar shows 2.7GB utilization even though my actual utilization is higher.

So kindly suggest whether this will impact the open-source PostgreSQL database or not.

 

max_connections = 100

shared_buffers = 512MB

effective_cache_size = 24GB

work_mem = 110100kB

maintenance_work_mem = 2GB

checkpoint_segments = 64

checkpoint_completion_target = 0.9

wal_buffers = 16MB

default_statistics_target = 100

 

Thanks & Regards

Pradeep Kanth

 

Re: PostgreSQL Database performance

From: Naveed Shaikh
Which version of PostgreSQL are you using on your Windows server?

Increasing work_mem can lead to far less spilling of sorts to disk, and therefore far quicker queries. However, it can cause problems if set too high, and it should be constrained with max_connections in mind. The following calculation is typically recommended for arriving at a reasonable work_mem value:

Total RAM * 0.25 / max_connections
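
For the server described in this thread (32GB of RAM, max_connections = 100), that works out to roughly 32GB * 0.25 / 100 = 8GB / 100 ≈ 80MB per sort operation.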

If there are large reporting queries that run on the database and require more work memory than a typical connection, work_mem can be set for those particular queries. If, for example, there is a reporting user that only runs infrequent but large reports, a specific work_mem setting can be applied to that role.
e.g.

ALTER ROLE reporting SET work_mem = '64MB';
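
A single session can likewise raise it just for one job (the value here is purely illustrative):

SET work_mem = '256MB';
-- run the large reporting query, then put the setting back
RESET work_mem;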


---
Warm Regards,
----------
Naveed Shaikh


Re: PostgreSQL Database performance

From: Ilya Kazakevich

Hi.

 

“shared_buffers” should be set to 30-40% of your system RAM.

This parameter controls how much memory the database may use for its shared buffer cache.

 

Please see https://www.postgresql.org/docs/9.1/static/runtime-config-resource.html

Ilya Kazakevich

 

JetBrains

http://www.jetbrains.com

The Drive to Develop

 


Re: PostgreSQL Database performance

From: Naveed Shaikh
On Windows, large values for shared_buffers aren't as effective. You may find better results keeping the setting relatively low and using the operating system cache more instead. The useful range for shared_buffers on Windows systems is generally from 64MB to 512MB.
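
Putting this together with the work_mem formula above, a plausible sketch of the memory settings for this 32GB Windows box might be (illustrative values, not measurements):

shared_buffers = 512MB          # upper end of the useful range on Windows
effective_cache_size = 24GB     # planner hint only; nothing is allocated
work_mem = 80MB                 # ~ 32GB * 0.25 / max_connections (100)
maintenance_work_mem = 1GB      # used by VACUUM, CREATE INDEX, etc.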

---
Warm Regards,
----------
Naveed Shaikh



Re: PostgreSQL Database performance

From: Scott Marlowe
On Fri, Sep 2, 2016 at 9:38 PM, Pradeep <pgundala@avineonindia.com> wrote:
> Dear Team,
>
> Could you please help me? After changing the parameters below in the
> PostgreSQL configuration file, the change is not reflected at the OS
> level, and database performance is degrading.
>
> Example: I am using a Windows 2008 R2 server. For PostgreSQL I have
> allocated 24GB of RAM out of 32GB.

Actually, effective_cache_size allocates nothing. It tells the pgsql
server roughly how much memory the machine it is running on is using
for OS-level caching. On a 32G machine with 1G or so of shared_buffers,
that number is about right.
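
One way to see that it is only a planner hint: its context in pg_settings is 'user', so any session can change it on the fly, with no memory being allocated or freed:

SELECT name, setting, unit, context
FROM pg_settings
WHERE name = 'effective_cache_size';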

> However, after changing the parameters below, the task bar shows 2.7GB
> utilization even though my actual utilization is higher.

2.7G is OK. PostgreSQL expects the OS to help out with caching, so it
doesn't need to grab all the memory in the machine. In fact, that
would be counterproductive in most situations.

> So kindly suggest whether this will impact the open-source
> PostgreSQL database or not.
>
> max_connections = 100
> shared_buffers = 512MB
> effective_cache_size = 24GB
> work_mem = 110100kB

This is WAY too high for work_mem. Work_mem is how much memory a
single sort can grab at once. Each query may run > 1 sort, and you
could have 100 queries running at once.

This setting is 110GB. That's about 109.9GB too high for safety. When
things go wrong with this too big, they go very wrong, sending the
machine into a swap storm from which it may not return.

> maintenance_work_mem = 2GB
>
> checkpoint_segments = 64
>
> checkpoint_completion_target = 0.9

Too high of a checkpoint completion target may cause buffers to get
written out more often than needed, but it varies based on load etc.

> wal_buffers = 16MB
>
> default_statistics_target = 100

It's far more likely that you've just got poorly written queries. I'd
make a post with explain analyze output etc. Here's a good resource
for reporting slow queries:

https://wiki.postgresql.org/wiki/Slow_Query_Questions
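
A minimal example of the kind of output to post, with a hypothetical table standing in for one of your slow queries:

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE customer_id = 42;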

--
To understand recursion, one must first understand recursion.


Re: PostgreSQL Database performance

From: Steve Atkins
> On Sep 6, 2016, at 12:08 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
>
> On Fri, Sep 2, 2016 at 9:38 PM, Pradeep <pgundala@avineonindia.com> wrote:
>>
>> max_connections = 100
>> shared_buffers = 512MB
>> effective_cache_size = 24GB
>> work_mem = 110100kB
>
> This is WAY too high for work_mem. Work_mem is how much memory a
> single sort can grab at once. Each query may run > 1 sort, and you
> could have 100 queries running at once.
>
> This setting is 110GB. That's about 109.9GB too high for safety. When
> things go wrong with this too big, they go very wrong, sending the
> machine into a swap storm from which it may not return.

It's an oddly spelled 110MB, which doesn't seem unreasonable.
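
(110100kB / 1024 ≈ 107.5MB. Even at that reading, the worst case deserves a glance: 100 connections each running two sorts could in principle use about 107.5MB * 100 * 2 ≈ 21GB on this 32GB box.)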

>
> It's far more likely that you've just got poorly written queries. I'd
> make a post with explain analyze output etc. Here's a good resource
> for reporting slow queries:
>
> https://wiki.postgresql.org/wiki/Slow_Query_Questions

+1

Cheers,
  Steve


Re: PostgreSQL Database performance

From: Jeff Janes
On Fri, Sep 2, 2016 at 8:38 PM, Pradeep <pgundala@avineonindia.com> wrote:

> Dear Team,
>
> Could you please help me? After changing the parameters below in the
> PostgreSQL configuration file, the change is not reflected at the OS
> level, and database performance is degrading.

What were they before you changed them?  

Do you mean that the performance is degrading after the change, or that the performance is degrading as the database grows, and your changes have not helped?

 

> Example: I am using a Windows 2008 R2 server. For PostgreSQL I have
> allocated 24GB of RAM out of 32GB.
>
> However, after changing the parameters below, the task bar shows 2.7GB
> utilization even though my actual utilization is higher.
>
> So kindly suggest whether this will impact the open-source PostgreSQL
> database or not.
>
> max_connections = 100
> shared_buffers = 512MB
> effective_cache_size = 24GB
> work_mem = 110100kB
> maintenance_work_mem = 2GB
> checkpoint_segments = 64
> checkpoint_completion_target = 0.9
> wal_buffers = 16MB
> default_statistics_target = 100


None of these seem unreasonable.  In order to recommend changes, we would have to know more about your database workload.  Or better yet, specific queries which seem too slow.

Cheers,

Jeff

Re: PostgreSQL Database performance

From: Scott Marlowe
On Tue, Sep 6, 2016 at 1:18 PM, Steve Atkins <steve@blighty.com> wrote:
>
>> On Sep 6, 2016, at 12:08 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
>>
>> On Fri, Sep 2, 2016 at 9:38 PM, Pradeep <pgundala@avineonindia.com> wrote:
>>>
>>> max_connections = 100
>>> shared_buffers = 512MB
>>> effective_cache_size = 24GB
>>> work_mem = 110100kB
>>
>> This is WAY too high for work_mem. Work_mem is how much memory a
>> single sort can grab at once. Each query may run > 1 sort, and you
>> could have 100 queries running at once.
>>
>> This setting is 110GB. That's about 109.9GB too high for safety. When
>> things go wrong with this too big, they go very wrong, sending the
>> machine into a swap storm from which it may not return.
>
> It's an oddly spelled 110MB, which doesn't seem unreasonable.

Oh yeah. Still kind of biggish, but not as big as I had thought.


Re: PostgreSQL Database performance

From: Pradeep

Dear Naveed,

 

I am using PostgreSQL 9.3 on Windows. After changing these parameters, I have not seen any change in resource utilization.

I observed memory usage before and after changing the parameter values, and the change is not reflected at the memory level. Maximum RAM utilization is only 3GB.

So kindly let me know whether it will impact RAM utilization or not?

 

Thanks & Regards

Pradeep Kanth

Ext : 3026

 


Re: PostgreSQL Database performance

From: Naveed Shaikh
Hello Pradeep,

After making the changes have you restarted the cluster?
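
Some of these settings only take effect after a full restart. A quick way to check which is which on 9.3 is the context column of pg_settings: 'postmaster' means a restart is required, 'sighup' means a reload is enough, and 'user' can be changed per session:

SELECT name, setting, unit, context
FROM pg_settings
WHERE name IN ('shared_buffers', 'effective_cache_size', 'work_mem',
               'maintenance_work_mem', 'wal_buffers', 'checkpoint_segments');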


---
Warm Regards,
----------
Naveed Shaikh




Re: PostgreSQL Database performance

From: Merlin Moncure
On Tue, Sep 6, 2016 at 11:12 PM, Pradeep <pgundala@avineonindia.com> wrote:
> Dear Naveed,
>
> I am using PostgreSQL 9.3 on Windows. After changing these parameters,
> I have not seen any change in resource utilization.
>
> I observed memory usage before and after changing the parameter values,
> and the change is not reflected at the memory level. Maximum RAM
> utilization is only 3GB.
>
> So kindly let me know whether it will impact RAM utilization or not?

Postgres reserves some memory for itself and relies on the operating
system to buffer the rest.  So this is not really unusual or
interesting.  What would be interesting is specific examples of things
that are not running as fast as you think they should be.
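
To see from inside the database how much of shared_buffers has actually been touched, the contrib extension pg_buffercache can report it. A minimal sketch, assuming contrib modules are installed and the default 8kB block size:

CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- count the shared buffers that currently hold a page
SELECT count(*) AS buffers_in_use,
       pg_size_pretty(count(*) * 8192) AS approx_size
FROM pg_buffercache
WHERE relfilenode IS NOT NULL;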

merlin


Re: PostgreSQL Database performance

From: John R Pierce
On 9/2/2016 8:38 PM, Pradeep wrote:
> ...  the task bar shows 2.7GB utilization ...

Odd, the task bar doesn't show any sort of memory utilization on any of
my Windows systems. Are you referring instead to the Task Manager?

Note that the Windows Task Manager doesn't show shared memory resources
by default either; you'd have to go to the 'details...' view, then add a
column for 'Memory (shared working set)' to see this.

Also note, the 'working set' is the memory that's actually being used,
not what has merely been allocated. So even if you've told PG it can
have 10GB of shared buffers, if your database accesses since restarting
the database server have only touched 2GB of actual data, that's all
that can be in the 'working set'.


--
john r pierce, recycling bits in santa cruz



Re: PostgreSQL Database performance

From: Jim Nasby
On 9/6/16 2:08 PM, Scott Marlowe wrote:
>> checkpoint_completion_target = 0.9
> Too high of a checkpoint completion target may cause buffers to get
> written out more often than needed. but it varies based on load etc.

The odds of that don't seem terribly high. Even if it is a common
occurrence, if it's enough to make a difference then you're already
close to the limits of your I/O, and if that's true then you definitely
want to spread the checkpoint out over a longer interval.
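
For concreteness: with the default checkpoint_timeout of 5min and checkpoint_completion_target = 0.9, the server paces each checkpoint's writes over roughly 0.9 * 5min = 4.5 minutes rather than issuing them in a burst at the end.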
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461