Re: pgbench unable to scale beyond 100 concurrent connections - Mailing list pgsql-hackers

From Sachin Kotwal
Subject Re: pgbench unable to scale beyond 100 concurrent connections
Date
Msg-id CA+N_YAendkKenq+qFKsh1JiDv+manmqwKhVTmfTocd7kFWsy9A@mail.gmail.com
In response to Re: pgbench unable to scale beyond 100 concurrent connections  (Craig Ringer <craig@2ndquadrant.com>)
Responses Re: pgbench unable to scale beyond 100 concurrent connections
List pgsql-hackers
Hi,


On Wed, Jun 29, 2016 at 6:29 PM, Craig Ringer <craig@2ndquadrant.com> wrote:
On 29 June 2016 at 18:47, Sachin Kotwal <kotsachin@gmail.com> wrote:
 
I am testing pgbench with more than 100 connections.
I have also set max_connections in postgresql.conf to more than 100.

Initially pgbench scales up to nearly 150 connections, but it later comes down to 100 connections and stays there.

Is this a limitation of pgbench, or a bug, or am I doing something the wrong way?
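A minimal sketch of the kind of run under discussion (the database name, scale factor, thread count, and duration here are assumptions, not taken from this thread):

```shell
# postgresql.conf must allow enough backends first, e.g.:
#   max_connections = 200    (requires a server restart)

pgbench -i -s 100 bench            # initialize a scale-100 dataset
pgbench -c 150 -j 4 -T 60 bench    # 150 client connections for 60 seconds
```

Without -C, each of the 150 clients opens one connection at startup and keeps it for the whole run.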

What makes you think this is a pgbench limitation?
 
As I mentioned, when I tried the same thing with sysbench, it gave me 200+ concurrent connections with the same method on the same machine.

 
It sounds like you're benchmarking the client and server on the same system. Couldn't this be a limitation of the backend PostgreSQL server?

I think having the client and server on the same machine should not be a problem.
Since I can do this with a different benchmarking tool, it should not be a limitation of the backend PostgreSQL server.

 
It also sounds like your method of counting concurrent connections is probably flawed. You're not allowing for setup and teardown time; if you want over 200 connections really running at very high rates of connection and disconnection you'll probably need to raise max_connections a bit to allow for the ones that're starting up or tearing down at any given time.

Maybe. Please let me know how I can count concurrent connections in this case.
There should not be any connecting and disconnecting, because I am not using the -C option of pgbench, which causes a connect and disconnect for each transaction.
If I set max_connections in postgresql.conf to 200 and test with -c 150, this should work fine, but it does not.
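One way to count concurrent connections is from the server side, via pg_stat_activity. A sketch (the database name 'bench' is an assumption):

```shell
# Count backends connected to the test database while pgbench is running.
psql -Atc "SELECT count(*) FROM pg_stat_activity WHERE datname = 'bench';"

# Break backends down by state to see how many are actively executing
# versus idle, starting up, or tearing down at that instant.
psql -Atc "SELECT state, count(*) FROM pg_stat_activity GROUP BY state;"
```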


 
Really, though, why would you want to do this? I can measure my car's speed falling off a cliff, but that's not a very interesting benchmark for a car. I can't imagine any sane use of the database this way, with incredibly rapid setup and teardown of lots of connections. Look into connection pooling, either client side or in a proxy like pgbouncer.


I am testing a scenario with multiple coordinators, using postgres_fdw to increase the connection capacity of Postgres without any connection pooling.
The setup might be difficult to explain here, but I will explain it if required.
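To illustrate the kind of setup being described, here is a hedged sketch of one coordinator forwarding to a backend node via postgres_fdw (all host names, database names, and credentials are hypothetical, not details from this thread):

```shell
# On a coordinator node, point a foreign server at the shared data node
# and import its tables, so clients can connect to any coordinator.
psql -d coord1 <<'SQL'
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER data_node FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'data-host', port '5432', dbname 'bench');

CREATE USER MAPPING FOR CURRENT_USER SERVER data_node
    OPTIONS (user 'bench', password 'secret');

IMPORT FOREIGN SCHEMA public FROM SERVER data_node INTO public;
SQL
```

Note that each coordinator still holds one backend connection on the data node per active foreign query, so this spreads the client-facing connection load rather than eliminating it.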

Can you simply test a scale-100 database with pgbench, running pgbench with 200+ connections on a small virtual box, to see the same observation?

Please let me know if I can help you reproduce this problem.



 
--
 Craig Ringer                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



--

Thanks and Regards,
Sachin Kotwal
