Thread: more than 1000 connections

more than 1000 connections

From: "Jorge Medina"
Hi guys,

I know this list is about SQL, but if somebody has a pgsql engine
with 1000 or more concurrent connections, please show me the
postgresql.conf, or tell me whether pgpool works as a solution to this problem.

Thanks.

--
Jorge Andrés Medina Oliva.
Evolve or die!


Re: more than 1000 connections

From: "Richard Broersma"
On Tue, Aug 5, 2008 at 8:14 AM, Jorge Medina <jorge@bsdchile.cl> wrote:

> I know this list is about SQL, but if somebody has a pgsql engine
> with 1000 or more concurrent connections, please show me the
> postgresql.conf, or tell me whether pgpool works as a solution to this problem.

The PG performance list would probably be the best mailing list for
this question.  But from previous discussions, there is a test
conducted by Sun using PostgreSQL.  They were able to produce 843 JOPS
(which I think means 843 concurrent Java operations per second) using
mid-grade server hardware.  All of the postgresql.conf options are
shown, as well as the compiler options used to build PostgreSQL.

http://www.spec.org/jAppServer2004/results/res2007q3/jAppServer2004-20070703-00073.html

I hope this helps.

-- 
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug


Re: more than 1000 connections

From: "Richard Broersma"
On Tue, Aug 5, 2008 at 12:11 PM, Jorge Medina <jorge@bsdchile.cl> wrote:

> ok, so I think I can not increase max_connections to 1000
> because my main memory is 2G. Some test with max connections
> allowed?

Sorry, I don't understand your question.  Also, don't forget to
reply-all so that everyone on the list can participate.


-- 
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug


Re: more than 1000 connections

From: Craig Ringer
Jorge Medina wrote:
> hi guys
> 
> I know this list is about SQL, but if somebody has a pgsql engine
> with 1000 or more concurrent connections, please show me the
> postgresql.conf, or tell me whether pgpool works as a solution to this problem.

Out of interest - why 1000 connections?

Do you really expect to have 1000 jobs concurrently active and doing
work? If you don't, then you'll be wasting resources and slowing things
down for no reason. There is a connection overhead in PostgreSQL - IIRC
mostly related to database-wide locking and synchronization, but also
some memory for each backend - that means you probably shouldn't run
vastly more backends than you intend to have actively working.

If you described your problem, perhaps someone could give you a useful
answer. Your mention of pgpool suggests that you're probably using a web
app and running into connection count limits, but I shouldn't have to
guess that.
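
As a quick sanity check (a sketch, assuming you can run queries in psql),
you can compare how many backends actually exist against the configured
ceiling:

    -- how many backends are connected right now
    SELECT count(*) FROM pg_stat_activity;
    -- the configured limit
    SHOW max_connections;

If the first number stays far below the second even under load, raising
max_connections won't buy you anything.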

--
Craig Ringer


Re: more than 1000 connections

From: Mark Roberts
On Wed, 2008-08-06 at 08:06 +0800, Craig Ringer wrote:
> Out of interest - why 1000 connections?
> 
> Do you really expect to have 1000 jobs concurrently active and doing
> work? If you don't, then you'll be wasting resources and slowing
> things
> down for no reason. There is a connection overhead in PostgreSQL -
> IIRC
> mostly related to database-wide locking and synchronization, but also
> some memory for each backend - that means you probably shouldn't run
> vastly more backends than you intend to have actively working.
> 
> If you described your problem, perhaps someone could give you a useful
> answer. Your mention of pgpool suggests that you're probably using a
> web
> app and running into connection count limits, but I shouldn't have to
> guess that.
> 
> --
> Craig Ringer

This is actually a fantastic point.  Have you considered using more than
one box to field the connections and using some sort of replication or
worker process to move them to a master database of some sort?  I don't
know about the feasibility of it, but it might work out depending on
what kind of application you're trying to write.

Disclaimer: I work in data warehousing, and we only have 45 concurrent
connections right now.  OLTP and/or large connection counts aren't really
what I spend my days thinking about. ;-)

-Mark



Re: more than 1000 connections

From: "Jorge Medina"
On Wed, Aug 6, 2008 at 11:54 AM, Mark Roberts
<mailing_lists@pandapocket.com> wrote:
>
> On Wed, 2008-08-06 at 08:06 +0800, Craig Ringer wrote:
>> Out of interest - why 1000 connections?
>>
>> Do you really expect to have 1000 jobs concurrently active and doing
>> work? If you don't, then you'll be wasting resources and slowing
>> things
>> down for no reason. There is a connection overhead in PostgreSQL -
>> IIRC
>> mostly related to database-wide locking and synchronization, but also
>> some memory for each backend - that means you probably shouldn't run
>> vastly more backends than you intend to have actively working.
>>
>> If you described your problem, perhaps someone could give you a useful
>> answer. Your mention of pgpool suggests that you're probably using a
>> web
>> app and running into connection count limits, but I shouldn't have to
>> guess that.
>>
>> --
>> Craig Ringer
>
> This is actually a fantastic point.  Have you considered using more than
> one box to field the connections and using some sort of replication or
> worker process to move them to a master database of some sort?  I don't
> know about the feasibility of it, but it might work out depending on
> what kind of application you're trying to write.
>
> Disclaimer: I work in data warehousing, and we only have 45 concurrent
> connections right now.  OLTP and/or large connection counts aren't really
> what I spend my days thinking about. ;-)
>
> -Mark
>
>
I have many troubles with the server: my webmail (Roundcube) works
against the db, and the machine only has 2GB of RAM but collapses with 60
concurrent connections. I tried persistent connections and had the same
problem; I need to configure a connection pool or something.
My config:
max_connections = 100
shared_buffers = 32MB
I increased to 460 connections and 128MB of shared_buffers, but it's the same.

--
Jorge Andrés Medina Oliva.
Evolve or die!


Re: more than 1000 connections

From: "Scott Marlowe"
On Wed, Aug 6, 2008 at 10:42 AM, Jorge Medina <jorge@bsdchile.cl> wrote:
> I have many troubles with the server: my webmail (Roundcube) works
> against the db, and the machine only has 2GB of RAM but collapses with 60
> concurrent connections. I tried persistent connections and had the same
> problem; I need to configure a connection pool or something.
> My config:
> max_connections = 100
> shared_buffers = 32MB
> I increased to 460 connections and 128MB of shared_buffers, but it's the same.

What, exactly, are the symptoms of a collapse?  What do the logs
(pgsql, system, your application) have to say?


Re: more than 1000 connections

From: "Jorge Medina"
On Wed, Aug 6, 2008 at 1:38 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> On Wed, Aug 6, 2008 at 11:29 AM, Jorge Medina <jorge@bsdchile.cl> wrote:
>> On Wed, Aug 6, 2008 at 12:47 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
>>> On Wed, Aug 6, 2008 at 10:42 AM, Jorge Medina <jorge@bsdchile.cl> wrote:
>>>> I have many troubles with the server: my webmail (Roundcube) works
>>>> against the db, and the machine only has 2GB of RAM but collapses with 60
>>>> concurrent connections. I tried persistent connections and had the same
>>>> problem; I need to configure a connection pool or something.
>>>> My config:
>>>> max_connections = 100
>>>> shared_buffers = 32MB
>>>> I increased to 460 connections and 128MB of shared_buffers, but it's the same.
>>>
>>> What, exactly, are the symptoms of a collapse?  What do the logs
>>> (pgsql, system, your application) have to say?
>>>
>> it directly affects the performance
>
> I'm trying to help you here, but that answer helps no one.
>
I know, sorry, but the logs don't show anything. When many people try to
log in from the webmail, connections to PostgreSQL begin to grow and the
whole system turns too slow.


--
Jorge Andrés Medina Oliva.
Evolve or die!


Re: more than 1000 connections

From: "Scott Marlowe"
On Wed, Aug 6, 2008 at 1:34 PM, Jorge Medina <jorge@bsdchile.cl> wrote:
> On Wed, Aug 6, 2008 at 1:38 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
>> On Wed, Aug 6, 2008 at 11:29 AM, Jorge Medina <jorge@bsdchile.cl> wrote:
>>> On Wed, Aug 6, 2008 at 12:47 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
>>>> On Wed, Aug 6, 2008 at 10:42 AM, Jorge Medina <jorge@bsdchile.cl> wrote:
>>>>> I have many troubles with the server: my webmail (Roundcube) works
>>>>> against the db, and the machine only has 2GB of RAM but collapses with 60
>>>>> concurrent connections. I tried persistent connections and had the same
>>>>> problem; I need to configure a connection pool or something.
>>>>> My config:
>>>>> max_connections = 100
>>>>> shared_buffers = 32MB
>>>>> I increased to 460 connections and 128MB of shared_buffers, but it's the same.
>>>>
>>>> What, exactly, are the symptoms of a collapse?  What do the logs
>>>> (pgsql, system, your application) have to say?
>>>>
>>> it directly affects the performance
>>
>> I'm trying to help you here, but that answer helps no one.
>>
> I know, sorry, but the logs don't show anything. When many people try to
> log in from the webmail, connections to PostgreSQL begin to grow and the
> whole system turns too slow.

I'm not trying to be difficult, but there's a huge difference between
the system slowing down a bit, slowing down a lot, slowing down to a
crawl, and actually collapsing (failing to respond).

Are you running out of available connections?  Do you have a lot that
are idle? Do you run the machine out of memory?  What are the symptoms
of your failure?
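
For instance (a sketch; on the 8.2/8.3-era servers common today, idle
backends show up in pg_stat_activity with current_query set to '<IDLE>'):

    SELECT current_query = '<IDLE>' AS idle, count(*)
    FROM pg_stat_activity
    GROUP BY 1;

A pile of idle backends with only a few active ones points at the
application holding connections open, not at the database running out of
capacity.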


Re: more than 1000 connections

From: Craig Ringer
Jorge Medina wrote:
> I have many troubles with the server: my webmail (Roundcube) works
> against the db, and the machine only has 2GB of RAM

2GB of RAM can go a long way. It looks like this webmail app is sensible
enough to use IMAP for mail store access, so you don't have to worry
about messages in the database. That should give you quite a bit of
working memory for other things.

It would help to know how large your database is - both in physical
on-disk size, and how large an SQL dump (with pg_dump) of the database is.
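
For the on-disk part, something like this should do (a sketch;
pg_database_size and pg_size_pretty exist in 8.1 and later):

    -- physical size of the database you are connected to
    SELECT pg_size_pretty(pg_database_size(current_database()));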

> but collapses with 60
> concurrent connections. I tried persistent connections and had the same
> problem

First: PHP's persistent database connections are a *bad* answer. They
handle variable, spiky load very poorly, and they're wasteful of database
server resources.

Look into using a connection pooler like pgpool.
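
A minimal pgpool-II sketch, if you go that way (the numbers here are
assumptions to tune, not recommendations):

    # pgpool.conf (fragment)
    listen_addresses = '*'
    port = 9999                # clients connect here instead of 5432
    backend_hostname0 = 'localhost'
    backend_port0 = 5432
    num_init_children = 32     # concurrent client slots
    max_pool = 4               # cached backend connections per child
    connection_cache = on      # reuse backend connections

Keep num_init_children * max_pool below max_connections, point Roundcube
at port 9999, and the pool multiplexes many web clients onto a few dozen
real backends.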

As for "collapse"... that's completely uninformative.

- Describe the symptoms of a collapse from the user's perspective. What
stops working, or slows down? How? When? Does it recover?

- Describe the symptoms of a collapse from the admin's perspective. How
does the server behave? What is in the error logs? What do the usual
performance monitoring tools tell you?

Start with:

- Does the server run low on memory? Does it have too many processes
struggling for time on too few CPUs? Is it stalling due to disk I/O
bottlenecks?

- In `free -m' do you see increasing swap usage?

- If you follow `vmstat 1' output, do you see increasing swap activity as
load increases? Does disk throughput increase or decrease when the server
"collapses"? What is CPU activity like when the server "collapses"?

- In `top', what processes are keeping the server's CPUs busy? Are any
particular processes hogging CPU time when the server "collapses"?

- In `top', what are the states of the apache and postgres processes of
interest? If you see lots of processes in the D state then you might well
have disk I/O bandwidth or scheduling issues.

- If you use `tail -f' to follow BOTH the apache and postgresql logs (in
different terminals), do you see anything change, any warnings/errors, etc
as the server "collapses"?

- Does the server recover from "collapse" when load is reduced?

- Can you still ping the server when it has "collapsed"?

- Can you still ssh into the server when it has "collapsed"? Use `ssh -v'
so you can see the progress of the connection.
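
Most of those checks can be captured in one pass with something like this
(a sketch, assuming Linux; the log paths are guesses you will have to
adjust for your distro and PostgreSQL version):

    #!/bin/sh
    # snapshot system state during a "collapse"
    date
    free -m                      # memory and swap usage
    vmstat 1 5                   # swap, disk and CPU activity over 5 seconds
    top -b -n 1 | head -n 20     # busiest processes and their states
    tail -n 50 /var/log/apache2/error.log          # guessed path
    tail -n 50 /var/log/postgresql/postgresql.log  # guessed path

Run it once while the system is healthy and once during a slowdown, then
compare the two.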

Also, please provide a spec for your server hardware, including:

- CPU type and number of CPUs/cores

- Operating system/distro and version

- PostgreSQL version

- Where you got PostgreSQL from/how you installed it

- Model of RAID controller you use and its configuration, e.g. RAID 5, RAID 10, etc.

- Disk info:
-- Number of disks
-- Interface of disk(s), e.g. SATA, PATA, SAS, SCSI
-- Capacity of disk(s)
-- Spindle speed of disk(s), e.g. 7200RPM, 10kRPM, 15kRPM

Finally:

- We know the physical RAM is 2GB, but how much RAM is free when the server is idle? Use `free -m' and read the value
for free memory on the -/+ buffers/cache line.

[I'm going to put together a "so, you think we're psychic" email
template on the wiki soon, as the number of people who don't provide the
basic information needed to usefully answer most questions seems to be
continually increasing.]

> I need to configure a connection pool or something.

Yes, I'd say so. What is preventing you from trying that? What is your
question?
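
While you set pooling up, a rough postgresql.conf starting point for a 2GB
box might look like this (a sketch; the values are common rules of thumb,
not tuned recommendations):

    # postgresql.conf (fragment)
    max_connections = 100         # let the pooler absorb the clients
    shared_buffers = 256MB        # a modest slice of the 2GB of RAM
    work_mem = 4MB                # per sort/hash, per backend; keep it small
    effective_cache_size = 1GB    # planner hint: OS cache plus shared_buffers

But pooling is the real fix; no amount of tuning makes 460 direct
connections comfortable in 2GB.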

--
Craig Ringer