Thread: limit number of concurrent callers to a stored proc?

limit number of concurrent callers to a stored proc?

From: Alan Stange
Date:
Hello all,

is there a simple way to limit the number of concurrent callers to a
stored proc?

The problem we have is that about 50 clients come in and perform the same
operation at nearly the same time.  Typically, this query takes a few
seconds to run, but in the case of this thundering herd the query time
climbs to 70 seconds or much more.  The query can return up to 15MB of data.

The machine is a dual opteron, 8 GB memory, lots of fiber channel disk,
Linux 2.6, etc.

So, I'm thinking that a semaphore that blocks more than N clients
from being in the core of the function at one time would be a good thing.

Thanks!

-- Alan

Re: limit number of concurrent callers to a stored proc?

From: Gavin Sherry
Date:
Hi Alan,

On Wed, 17 Aug 2005, Alan Stange wrote:

> Hello all,
>
> is there a simple way to limit the number of concurrent callers to a
> stored proc?
>
> The problem we have is that about 50 clients come in and perform the same
> operation at nearly the same time.  Typically, this query takes a few
> seconds to run, but in the case of this thundering herd the query time
> climbs to 70 seconds or much more.  The query can return up to 15MB of data.
>
> The machine is a dual opteron, 8 GB memory, lots of fiber channel disk,
> Linux 2.6, etc.
>
> So, I'm thinking that a semaphore that blocks more than N clients
> from being in the core of the function at one time would be a good thing.

There is no PostgreSQL feature which will do this for you. It should be
possible to implement this yourself, without too much pain. If you're
using PL/pgSQL, write a pair of helper functions in C or one of the other
more sophisticated PLs to implement the counting logic for you. At the
beginning of your function, call the one that increments the count; at
the end, call the one that decrements it.

If you're writing the function in C or one of those more sophisticated
PLs, it's even easier.

As an aside, using semaphores might be a little painful. I'd just grab
some shared memory and keep a counter in it. If the counter is greater
than your desired number of concurrent executions, you sleep and try again
soon.
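
To illustrate the shape of the bracketing, here is a minimal PL/pgSQL
sketch, assuming hypothetical gate_enter()/gate_leave() helpers that
maintain the shared-memory counter (these names are placeholders for the
C functions described above, not anything built in):

-- hypothetical wrapper around the expensive query
CREATE OR REPLACE FUNCTION guarded_report() RETURNS void AS $$
BEGIN
    PERFORM gate_enter(5);      -- block until fewer than 5 callers are inside
    BEGIN
        -- ... the expensive query goes here ...
        NULL;
    EXCEPTION WHEN OTHERS THEN
        PERFORM gate_leave();   -- don't leak the counter slot on error
        RAISE;
    END;
    PERFORM gate_leave();       -- release the slot on the normal path
END;
$$ LANGUAGE plpgsql;

The EXCEPTION block matters: if the query errors out and the decrement is
skipped, the counter stays inflated until something resets it.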

That being said, did you want to give us a look at your function and data
and see if we can improve the performance at all?

Thanks,

Gavin

Re: limit number of concurrent callers to a stored proc?

From: Christopher Kings-Lynne
Date:
You could perhaps use a one-column, one-row table with some sort of
locking mechanism.
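
For example (the table and column names here are made up), each caller
could grab the single row FOR UPDATE at the top of its transaction; the
row lock serializes the callers one at a time and is released
automatically at commit:

CREATE TABLE proc_mutex (id int PRIMARY KEY);
INSERT INTO proc_mutex VALUES (1);

-- at the start of the stored proc (use PERFORM instead of SELECT in PL/pgSQL);
-- this blocks until the previous caller's transaction finishes
SELECT id FROM proc_mutex WHERE id = 1 FOR UPDATE;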

Also, check out contrib/userlock

Chris

Alan Stange wrote:
> Hello all,
>
> is there a simple way to limit the number of concurrent callers to a
> stored proc?
>
> The problem we have is that about 50 clients come in and perform the same
> operation at nearly the same time.  Typically, this query takes a few
> seconds to run, but in the case of this thundering herd the query time
> climbs to 70 seconds or much more.  The query can return up to 15MB of data.
>
> The machine is a dual opteron, 8 GB memory, lots of fiber channel disk,
> Linux 2.6, etc.
>
> So, I'm thinking that a semaphore that blocks more than N clients
> from being in the core of the function at one time would be a good thing.
> Thanks!
>
> -- Alan
>


Re: limit number of concurrent callers to a stored proc?

From: Ron
Date:
At 09:40 PM 8/17/2005, Alan Stange wrote:

>is there a simple way to limit the number of concurrent callers to a
>stored proc?
>
>The problem we have is that about 50 clients come in and perform the
>same operation at nearly the same time.  Typically, this query takes a
>few seconds to run, but in the case of this thundering herd the
>query time climbs to 70 seconds or much more.  The query can return
>up to 15MB of data.

I'm assuming there is some significant write activity going on at
some point as a result of the query, since MVCC should not care about
concurrent read activity?

Is that "a few seconds each query" or "a few seconds total if we run
50 queries sequentially but 70+ seconds per query if we try to run 50
queries concurrently"?

A) If the former, "a few seconds" * 50 can easily be 70+ seconds, and
things are what you should expect.  Getting higher performance in
that situation means reducing per query times, which may or may not
be easy.  Looking at the stored procedure code with an eye towards
optimization would be a good place to start.

B) If the latter, then table access contention is driving performance
into the ground, and there are a few things you can try:
1= lock the table(s) under these circumstances so only one query of
the 50 can be acting on it at a time (see the sketch after this list).
If the table(s) is/are small enough to be made RAM resident, this may
be a particularly low-cost, low-effort, reasonable solution.

2= put a queue into place and only let some small number n of queries
run against the table(s) concurrently.  Adjust n until you get best
performance.  There are a few ways this could be done.

3= Buy a SSD and put the table(s) in question on it.  IIRC, 3.5"
format SSDs that can "drop in" replace HDs are available in up to
147GB capacities.
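
A sketch of option 1 in plain SQL (the table name is made up): EXCLUSIVE
mode conflicts with itself, so only one of the 50 callers holds it at a
time, while ordinary readers taking ACCESS SHARE locks still get through:

BEGIN;
-- held until COMMIT/ROLLBACK; the other callers queue up here
LOCK TABLE the_big_table IN EXCLUSIVE MODE;
-- ... run the expensive query ...
COMMIT;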


>The machine is a dual opteron, 8 GB memory, lots of fiber channel
>disk, Linux 2.6, etc.
>
>So, I'm thinking that a semaphore that blocks more than N
>clients from being in the core of the function at one time would be
>a good thing.

This will only help in case "B" above.  If you go the "hard" route of
using systems programming, there are a lot of details that must be
handled correctly or Bad Things (tm) will happen.  Putting the
semaphore in place is just the tip of the iceberg.


Hope this helps,
Ron Peacetree




Re: limit number of concurrent callers to a stored proc?

From: "Merlin Moncure"
Date:
Christopher Kings-Lynne wrote:
> You could perhaps use a one-column, one-row table with some sort of
> locking mechanism.
>
> Also, check out contrib/userlock

userlock is definitely the way to go for this type of problem.

They are really the only way to provide locking facilities that live
outside transactions.

You are provided with 48 bits of lock space in the form of an offset/block
pair in a 32-bit field plus a 16-bit field.  The 16-bit field could be the
pid of the locker and the 32-bit field the oid of the function.

Unfortunately, userlocks are not really easy to query via the pg_locks
view.  However, this has been addressed for 8.1.  In 8.1 it will be
trivial to create a function which checks the number of lockers on the
function's oid and acquires a lock only if they are below a certain number.
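
Since contrib/userlock's exact function names vary by version, here is
the same slot-counting idea sketched against the advisory-lock functions
that later replaced it in core (pg_try_advisory_lock / pg_advisory_unlock,
8.2 onward); the function and key layout below are only illustrative:

-- claim one of max_slots advisory "slots" keyed on the function's oid;
-- loops with a short sleep until a slot is free and returns its number
CREATE OR REPLACE FUNCTION grab_slot(func_key int, max_slots int) RETURNS int AS $$
DECLARE
    s int;
BEGIN
    LOOP
        FOR s IN 1 .. max_slots LOOP
            IF pg_try_advisory_lock(func_key, s) THEN
                RETURN s;             -- got a slot
            END IF;
        END LOOP;
        PERFORM pg_sleep(0.1);        -- all slots busy; wait and retry
    END LOOP;
END;
$$ LANGUAGE plpgsql;

The caller releases its slot with pg_advisory_unlock(func_key, s) when the
work is done; advisory locks are also dropped automatically when the
session disconnects, so a crashed client can't wedge the gate permanently.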

Merlin