Hi Mark,
My DBServer module already serves as a broker. At the moment it opens
a new connection for every incoming Agent connection. I did it this
way because I wanted to leave synchronisation to PGSQL. I might have
to modify it a bit and use a shared, single connection for all agents.
I guess that is not a bad option; I just have to ensure that the code
is not below par :).
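
To make that concrete, here is a rough sketch of the shared-connection
variant I have in mind (Python-style; psycopg2 and the run_for_agent
name are just for illustration, not our actual code):

    import threading
    import psycopg2

    class DBServer:
        """Broker: every Agent request goes through one shared connection."""

        def __init__(self, dsn):
            # One connection shared by all agents, so PGSQL sees a
            # single client instead of one connection per agent.
            self._conn = psycopg2.connect(dsn)
            # PGSQL no longer synchronises the agents for us, so we
            # have to serialise access to the connection ourselves.
            self._lock = threading.Lock()

        def run_for_agent(self, sql, params=None):
            # Each agent request runs as one short transaction.
            with self._lock:
                with self._conn.cursor() as cur:
                    cur.execute(sql, params)
                    self._conn.commit()
                    return cur.fetchall() if cur.description else None
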
Also, thanks for the postgresql.conf hint; that limit was pretty low
on our server, so this might help a bit.
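
For anyone else hitting this, the setting in question looks something
like the following in postgresql.conf (400 is only an example value,
and the server needs a restart for the change to take effect):

    # postgresql.conf
    max_connections = 400    # default is 100; one slot per concurrent client
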
Regards,
Slavisa
On 4/14/05, Mark Lewis <mark.lewis@mir3.com> wrote:
> If there are potentially hundreds of clients at a time, then you may be
> running into the maximum connection limit.
>
> In postgresql.conf, there is a max_connections setting which IIRC
> defaults to 100. If you try to open more concurrent connections to the
> backend than that, you will get a "connection refused" error.
>
> If your DB is fairly gnarly and your performance needs are minimal, it
> should be safe to increase max_connections. An alternative approach
> would be to add some kind of database broker program. Instead of each
> agent connecting directly to the database, they could pass their data to
> a broker, which could then implement connection pooling.
>
> -- Mark Lewis
>
> On Tue, 2005-04-12 at 22:09, Slavisa Garic wrote:
> > This is a serious problem for me as there are multiple users using our
> > software on our server and I would want to avoid having connections
> > open for a long time. In the scenario mentioned below I haven't
> > explained the magnitude of the communications happening between Agents
> > and DBServer. There could possibly be 100 or more Agents per
> > experiment, per user running on remote machines at the same time,
> > hence we need short transactions/pgsql connections. Agents need a
> > reliable connection because failure to connect could mean a loss of
> > computation results that were gathered over long periods of time.
>
>