Thread: Tuning 8.3
I need to run about 1000 PostgreSQL connections on a server where I can use about 4 GB of the 16 GB of total RAM. It seems that each session creates a process that uses about 15 MB of RAM just for connecting, so I'm running out of RAM rather quickly.

I have these non-default settings:

    shared_buffers = 30MB
    max_connections = 1000

I tried decreasing work_mem, but then the db wouldn't start.

I'm running version 8.3 on Windows 2003 Server.

Any tips for reducing the memory footprint per session? There is pgBouncer, but is there anything I can do in the configuration before I go with a connection pooler?

Jon
Roberts, Jon wrote:
>
> I need to run about 1000 PostgreSQL connections on a server that I can
> use about 4 GB of the total 16 GB of total RAM. It seems that each
> session creates a process that uses about 15 MB of RAM just for
> connecting so I'm running out of RAM rather quickly.
>
> I have these non-default settings:
>
> shared_buffers = 30MB
>
> max_connections = 1000
>
> I tried decreasing the work_mem but the db wouldn't start then.
>
> I'm running version 8.3 on Windows 2003 Server.
>
> Any tips for reducing the memory footprint per session? There is
> pgBouncer but is there anything I can do in the configuration before I
> go with a connection pooler?
>

Please ask usage questions on the appropriate list (in this case one of: pgsql-general, pgsql-performance or pgsql-admin). pgsql-hackers is for discussion of development of features, not for usage issues.

cheers

andrew
"Roberts, Jon" <Jon.Roberts@asurion.com> writes: > I need to run about 1000 PostgreSQL connections on a server that I can > use about 4 GB of the total 16 GB of total RAM. It seems that each > session creates a process that uses about 15 MB of RAM just for > connecting so I'm running out of RAM rather quickly. I think you're being bitten by a different problem than it appears. Windows has a fixed size per-session shared memory pool which runs out rather quickly. You can raise that parameter though. (The 125 mentioned there is raised to about 300 with Pg 8.3.) See: http://www.postgresql.org/docs/faqs.FAQ_windows.html#4.4 > Any tips for reducing the memory footprint per session? There is > pgBouncer but is there anything I can do in the configuration before I > go with a connection pooler? I think at 1,000 you're probably into the domain where pgbouncer (or others like it) is a good idea. Or you could pool or batch at a higher level and have fewer sessions active at all. You don't win any performance by trying to do more things simultaneously if they're just competing for cpu timeslices or i/o bandwidth. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Get trained by Bruce Momjian - ask me about EnterpriseDB'sPostgreSQL training!
> > I need to run about 1000 PostgreSQL connections on a server that I can
> > use about 4 GB of the total 16 GB of total RAM. It seems that each
> > session creates a process that uses about 15 MB of RAM just for
> > connecting so I'm running out of RAM rather quickly.
>
> I think you're being bitten by a different problem than it appears.
> Windows has a fixed size per-session shared memory pool which runs out
> rather quickly. You can raise that parameter though. (The 125 mentioned
> there is raised to about 300 with Pg 8.3.)
>
> See:
>
> http://www.postgresql.org/docs/faqs.FAQ_windows.html#4.4
>

Thanks for the tip and I'll be moving this to the performance forum. Although, with 8.3, it seems that the FAQ is out of date?

Jon
I posted earlier about how to tune my server, and I think the real problem is how many connections pgAgent creates for my job needs. I basically need to run hundreds of jobs daily, all to be executed at 4:00 AM. To keep the jobs from killing the other systems, I am throttling this with a queue table.

With pgAgent, it creates 2 connections (one to the maintenance db and one to the target db), and then my queue throttling makes a third connection every 10 seconds checking the job queue to see if there is an available queue to execute.

A better solution would be to incorporate job throttling in pgAgent. Currently, pgAgent will spawn as many jobs as required, and it creates a minimum of two database connections per job. I think a solution would be for pgAgent to not create the connection and execute my job steps unless the current number of jobs running is less than a result from a function. Sort of like this:

    select count(*) into v_count from queue where status = 'Processing';
    while v_count >= fn_get_max_jobs() loop
        perform pg_sleep(fn_get_sleep_time());
        select count(*) into v_count from queue where status = 'Processing';
    end loop;

I'm doing this now, but inside a function being executed by pgAgent. This means I have two connections open for each job. Plus, I use a function that uses a dblink to lock the queue table and then update the status, so that is a third connection that lasts just for a millisecond. So if 200 jobs are queued to run at 4:00 AM, then I have 400 connections open, and then it will spike a little bit as each queued job checks to see if it can run.

Do you guys think it is a good idea to add job throttling to pgAgent to limit the number of connections? Setting the value to -1 could be the default value, which would allow an unlimited number of jobs to run at a time (like it is now), but a value greater than -1 would be the max number of jobs that can run concurrently.

Jon
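The polling loop above can be sketched outside the database. This is a minimal, hypothetical Python simulation (fn_get_max_jobs and fn_get_sleep_time are replaced by constants, and the queue table by an in-memory counter) showing that the gate keeps the number of "Processing" jobs at or below the cap:

```python
import threading
import time

MAX_JOBS = 3        # stand-in for fn_get_max_jobs()
SLEEP_TIME = 0.005  # stand-in for fn_get_sleep_time(), in seconds

lock = threading.Lock()
processing = 0      # jobs currently in "Processing" status
peak = 0            # highest concurrency observed
done = 0            # jobs finished

def run_job():
    global processing, peak, done
    # Poll for a free slot, mirroring the while-loop in the post.
    while True:
        with lock:
            if processing < MAX_JOBS:
                processing += 1
                peak = max(peak, processing)
                break
        time.sleep(SLEEP_TIME)
    time.sleep(0.01)            # the job's actual work
    with lock:
        processing -= 1
        done += 1

# 20 jobs all "queued at 4:00 AM" at once.
threads = [threading.Thread(target=run_job) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak, done)  # peak stays <= MAX_JOBS; all 20 jobs complete
```

In the real proposal the counter check would instead be the `select count(*) ... where status = 'Processing'` query, but the throttling logic is the same.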
"Get thee to a connection pooler ASAP." We've got systems where we establish ~1K connections, but that's on UNIX, where the handling of large systems is *WAY* more mature than Windows. Any time those kinds of quantities of connections appear necessary, it seems highly preferable to be using connection pooling so as to try to reduce the number of actual connections and to increase the per-connection usage. -- http://linuxfinances.info/info/linuxdistributions.html "The definition of insanity is doing the same thing over and over and expecting different results." -- assortedly attributed to Albert Einstein, Benjamin Franklin, Rita Mae Brown, and Rudyard Kipling