Re: Postgre Performance - Mailing list pgsql-general

From Raghavendra
Subject Re: Postgre Performance
Msg-id CA+h6Ahhv4eA611CDZSN5-hONwg-wDVQa_4+Mwa6Z1x+2-aW1+Q@mail.gmail.com
In response to Postgre Performance  ("Deshpande, Yogesh Sadashiv (STSD-Openview)" <yogesh-sadashiv.deshpande@hp.com>)
Responses Re: Postgre Performance  ("Deshpande, Yogesh Sadashiv (STSD-Openview)" <yogesh-sadashiv.deshpande@hp.com>)
List pgsql-general
Dear Yogesh,

To get the best answers from community members, you need to provide complete information, such as the PostgreSQL version, server/hardware details, and relevant configuration, so that members can assist you properly.


---
Regards,
Raghavendra
EnterpriseDB Corporation



On Tue, Oct 18, 2011 at 7:27 PM, Deshpande, Yogesh Sadashiv (STSD-Openview) <yogesh-sadashiv.deshpande@hp.com> wrote:

Hello ,


We have a setup in which around 100 processes run in parallel every 5 minutes, and each one opens a connection to the database. We are observing that for each connection, PostgreSQL also creates one sub-process. We have set max_connections to 100, so the number of sub-processes in the system is close to 200 every 5 minutes, and because of this we are seeing very high CPU usage. We need the following information:
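The process-per-connection behaviour described above can be confirmed from the server side; as a sketch (exact output columns vary by PostgreSQL version), something like the following shows the backend count against the configured ceiling:

```sql
-- Count the backend processes currently serving client connections.
SELECT count(*) FROM pg_stat_activity;

-- Show the configured connection ceiling for comparison.
SHOW max_connections;
```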


1. Is there any configuration that would pool connection requests rather than failing with a "connection limit exceeded" error?

2. Is there any configuration that would limit the number of sub-processes to some value, say 50, with any further connection requests being queued?


Basically, we want to limit the number of server processes so that the client code does not have to retry when a connection or sub-process is unavailable; can PostgreSQL take care of the queuing?
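Both questions point toward an external connection pooler, since PostgreSQL itself does not queue connection attempts beyond max_connections. As a minimal sketch (the file paths, port numbers, and the database name `mydb` are assumptions), a PgBouncer configuration that caps server-side backends at 50 and queues further client requests might look like:

```ini
; pgbouncer.ini -- minimal sketch; names and paths are assumptions
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction     ; reuse server connections between transactions
default_pool_size = 50      ; at most 50 backend processes per database/user
max_client_conn = 200       ; clients beyond the pool size wait in a queue
```

Clients would then connect to port 6432 instead of 5432; when all 50 pooled server connections are busy, additional clients wait in PgBouncer's queue rather than receiving a "too many connections" error.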


Thanks

Yogesh

