Thread: Setting up a database for 10000 concurrent users
I'm trying to set up a database for 10000 concurrent users for a test.
I have a system with 1GB of RAM, of which I will use 512MB for PostgreSQL.
It is running SuSE 9.3.

I have changed SHMMAX & SHMALL:

echo "536870912" >/proc/sys/kernel/shmmax
echo "536870912" >/proc/sys/kernel/shmall

and set max_connections = 10000 in postgresql.conf.

When trying to start the database server, it leaves this in the log:

FATAL:  could not create semaphores: No space left on device
DETAIL:  Failed system call was semget(5432129, 17, 03600).
HINT:  This error does *not* mean that you have run out of disk space.
It occurs when either the system limit for the maximum number of
semaphore sets (SEMMNI), or the system wide maximum number of semaphores
(SEMMNS), would be exceeded.  You need to raise the respective kernel
parameter.  Alternatively, reduce PostgreSQL's consumption of semaphores
by reducing its max_connections parameter (currently 5000).

Calculated, the values should be:

SEMMNI = 10000 / 16
SEMMNS = (10000 / 16) * 17  "plus room for other applications"

How much should that be? And where can I change those values on a
2.6 kernel?

Poul
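(Working that formula through - a quick sketch, using the 16 connections
per semaphore set and 17 semaphores per set figures from the HINT above:)

# SEMMNI: one semaphore set per 16 connections
echo $(( 10000 / 16 ))           # 625 sets
# SEMMNS: 17 semaphores per set (16 plus one extra per set)
echo $(( (10000 / 16) * 17 ))    # 10625 semaphores, plus headroom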
> Calculated, the values should be:
> SEMMNI = 10000 / 16
> SEMMNS = (10000 / 16) * 17  "plus room for other applications"
> How much should that be?
> And where can I change those values on a 2.6 kernel?

I will try to answer myself with another question. Can it be that it
should be changed in /usr/src/linux/include/linux/sem.h?

#define SEMMNI  128             /* <= IPCMNI  max # of semaphore identifiers */
#define SEMMSL  250             /* <= 8 000   max num of semaphores per id */
#define SEMMNS  (SEMMNI*SEMMSL) /* <= INT_MAX max # of semaphores in system */

Poul
Poul Møller Hansen wrote:
> I'm trying to set up a database for 10000 concurrent users for a test.
> I have a system with 1GB of RAM, of which I will use 512MB for PostgreSQL.
> It is running SuSE 9.3

I think you're being horribly optimistic if you actually want 10000
concurrent connections, with users all doing things. Even if you only
allow 1MB for each connection, that's 10GB of RAM you'd want, plus a big
chunk more to actually cache your database files and do work in. Then,
if you had 10,000 concurrent queries, you'd probably want a mainframe to
handle all the concurrency, or perhaps a 64-CPU box would suffice...

You probably want to investigate connection pooling, but if you say what
you want to achieve, then people will be able to suggest the best approach.

--
  Richard Huxton
  Archonet Ltd
> I think you're being horribly optimistic if you actually want 10000
> concurrent connections, with users all doing things. Even if you only
> allow 1MB for each connection, that's 10GB of RAM you'd want, plus a big
> chunk more to actually cache your database files and do work in. Then,
> if you had 10,000 concurrent queries, you'd probably want a mainframe to
> handle all the concurrency, or perhaps a 64-CPU box would suffice...
>
> You probably want to investigate connection pooling, but if you say what
> you want to achieve, then people will be able to suggest the best approach.

I know I'm on thin ice :)  Actually, that was a maximum limit; I want to
test how far I can tweak the server. The clients are doing almost nothing
most of the time, maybe one insert every 2 minutes each. Of course, that
is still more than 80 inserts per second in total.

I'm connecting to the database via JDBC, where connection pooling is
possible and has also been considered. I haven't been able to find out
how much memory I can expect each connection to consume, so I thought
testing was more accurate than calculating. Is 1MB of RAM per connection
really necessary?

Poul
On Mon, 5 Sep 2005, Poul Møller Hansen wrote:

> I'm trying to set up a database for 10000 concurrent users for a test.
> I have a system with 1GB of RAM, of which I will use 512MB for PostgreSQL.
> It is running SuSE 9.3
>
> I have changed SHMMAX & SHMALL
> echo "536870912" >/proc/sys/kernel/shmmax
> echo "536870912" >/proc/sys/kernel/shmall
>
> and max_connections = 10000 in postgresql.conf
>
> When trying to start the database server it leaves this in the log.
> ...
> Calculated, the values should be
> SEMMNI = 10000 / 16
> SEMMNS = (10000 / 16) * 17 "plus room for other applications"
> How much should that be?
> And where can I change those values on a 2.6 kernel?

The file /proc/sys/kernel/sem contains 4 numbers:

SEMMSL - the maximum number of semaphores per semaphore set
SEMMNS - a system-wide limit on the number of semaphores in all
         semaphore sets
SEMOPM - the maximum number of operations that may be specified in a
         semop(2) call
SEMMNI - a system-wide limit on the maximum number of semaphore
         identifiers

See "man proc". So just do something like

echo "250 32000 32 16000" > /proc/sys/kernel/sem

(compute the exact numbers by yourself). You can also set the semaphore
numbers using sysctl:

sysctl -w kernel.sem="250 32000 32 16000"

But I really doubt that it is possible/reasonable to have 10000
simultaneous connections.

Regards,
Sergey

*****************************************************
Sergey E. Koposov
Max-Planck Institut fuer Astronomie
Web: http://lnfm1.sai.msu.ru/~math
E-mail: math@sai.msu.ru
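(To make those settings survive a reboot, the usual approach on a 2.6
kernel is to put the same values in /etc/sysctl.conf. The numbers below
simply reuse Sergey's example and would still need sizing for 10000
connections plus other applications:)

echo 'kernel.sem = 250 32000 32 16000' >> /etc/sysctl.conf
sysctl -p    # reload settings from /etc/sysctl.conf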
"Sergey E. Koposov" <math@sai.msu.ru> writes: > But I really doubt that it it possible/reasonable to have 10000 > simultaneous connections. You're going to need a heck of a beefy machine to do it, anyway. I would expect that after fixing the semaphore configuration problem, the next thing that'll be an issue is the size of the kernel's open files table. Do you have the kernel configured to support several hundred thousand open files? Also, as already noted, you need to figure at least a megabyte or two of working storage per connection, so even if it runs it'll probably swap like mad. Put a connection pooler in front, instead... regards, tom lane
At 09:45 PM 9/5/2005 +0100, Richard Huxton wrote:
>Poul Møller Hansen wrote:
>>I'm trying to set up a database for 10000 concurrent users for a test.
>>I have a system with 1GB of RAM, of which I will use 512MB for PostgreSQL.
>>It is running SuSE 9.3
>
>I think you're being horribly optimistic if you actually want 10000
>concurrent connections, with users all doing things. Even if you only
>allow 1MB for each connection, that's 10GB of RAM you'd want, plus a big
>chunk more to actually cache your database files and do work in. Then, if
>you had 10,000 concurrent queries, you'd probably want a mainframe to
>handle all the concurrency, or perhaps a 64-CPU box would suffice...

10GB of RAM isn't that farfetched nowadays.

However, I/O might be a problem. A single drive can typically write/read
about 10MB a second (64KB chunks, random access - not sure if you'd want
to bet on getting sequential throughput ;) ).

Anyway, it'll be something interesting to see ;).

Link.
> 10GB of RAM isn't that farfetched nowadays.
>
> However, I/O might be a problem. A single drive can typically write/read
> about 10MB a second (64KB chunks, random access - not sure if you'd want
> to bet on getting sequential throughput ;) ).
>
> Anyway, it'll be something interesting to see ;).
>
> Link.

The database server is started now with max_connections = 10000, and
100MB of RAM is used.

/dev/sda:
 Timing buffered disk reads:  162 MB in  3.04 seconds = 53.32 MB/sec

It is not that bad :)  Yes, I know there should be more disk arms.
Now I'm curious to see how the many threads will be handled in Java ...

Poul
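(That read figure looks like the output of hdparm's buffered-read test -
an assumption, since the post doesn't name the tool - which is a quick
way to sanity-check sequential throughput:)

hdparm -t /dev/sda    # buffered (sequential) read timing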
Hello,

Please let me know how the database server is started with
max_connections = 10000 ???

I have the same issue, but I have a SAN storage where PostgreSQL is
installed.

Sanjeet
On 05/04/2015 02:02 AM, sanjeetkamble wrote:
> Hello,
>
> Please let me know how the database server is started with
> max_connections = 10000 ???
>
> I have the same issue, but I have a SAN storage where PostgreSQL is
> installed.
>
> Sanjeet

No doubt that would be a problem. It's a bad idea. Set max_connections
to core count * 2, then put pg_pool in front, and set pg_pool's max
count to 10000.

-Andy
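(A rough sketch of that sizing rule; the 8-core figure is just an
example:)

nproc    # number of cores; say it prints 8
# then in postgresql.conf:
#   max_connections = 16    # 8 cores * 2
# and let the pooler multiplex the 10000 clients onto those.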
I suggest pg_bouncer as opposed to pg_pool. My testing showed it handled
connections better. Ultimately the choice is yours, but with 10000
connections, you absolutely need a connection manager.
On Mon, May 4, 2015 at 10:08 AM, Andy Colson <andy@squeakycode.net> wrote:
> On 05/04/2015 02:02 AM, sanjeetkamble wrote:
>> Hello,
>>
>> Please let me know how the database server is started with
>> max_connections = 10000 ???
>>
>> I have the same issue, but I have a SAN storage where PostgreSQL is
>> installed.
>>
>> Sanjeet
>
> No doubt that would be a problem. It's a bad idea. Set max_connections
> to core count * 2, then put pg_pool in front, and set pg_pool's max
> count to 10000.
>
> -Andy
--
Melvin Davidson
I reserve the right to fantasize. Whether or not you
wish to share my fantasy is entirely up to you.
On 05/04/2015 09:22 AM, Melvin Davidson wrote:
> I suggest pg_bouncer as opposed to pg_pool. My testing showed it handled
> connections better. Ultimately the choice is yours, but with 10000
> connections, you absolutely need a connection manager.

Oops. I meant pg_bouncer too. (I haven't had caffeine yet.)

-Andy
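(For anyone trying this, a minimal sketch of a pgbouncer configuration
for the scenario in this thread - the database name, paths, and pool
size are illustrative assumptions, not tested values:)

cat > /etc/pgbouncer/pgbouncer.ini <<'EOF'
[databases]
; clients ask for "mydb"; pgbouncer forwards to the real server
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; release the server connection after each transaction
pool_mode = transaction
; clients connect here, not to PostgreSQL directly
max_client_conn = 10000
; actual server connections per user/database pair
default_pool_size = 20
EOF
pgbouncer -d /etc/pgbouncer/pgbouncer.ini

JDBC clients would then connect to port 6432 instead of 5432; pgbouncer
hands each transaction to one of the ~20 real server connections, which
suits clients that mostly sit idle between inserts.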
Hello,

Please explain in detail; I'm not able to understand.

Sanjeet
On 05/04/2015 11:11 PM, sanjeetkamble wrote:
> Hello,
>
> Please explain in detail; I'm not able to understand.

You are going to have to be more specific. Do you want to know why
setting max_connections = 10000 will not work? Do you want to know about
connection pooling? Or do you want to know how to set up a particular
connection pooler?

--
Adrian Klaver
adrian.klaver@aklaver.com
sanjeetkamble <sanjeetkamble@rediffmail.com> wrote:
> Please explain in detail; I'm not able to understand.

https://wiki.postgresql.org/wiki/Number_Of_Database_Connections
http://stackoverflow.com/questions/10419665/how-does-pgbouncer-help-to-speed-up-django/10420469#10420469
https://wiki.postgresql.org/wiki/PgBouncer

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company