Thread: Would my postgresql 8.4.12 profit from doubling RAM?
Hello,

I run a CentOS 6.3 server with 16 GB RAM and:

  postgresql-8.4.12-1.el6_2.x86_64
  pgbouncer-1.3.4-1.rhel6.x86_64

The modified params in postgresql.conf are:

  max_connections = 100
  shared_buffers = 4096MB

and the pgbouncer runs with:

  pool_mode = session
  server_reset_query = DISCARD ALL;

The main app is a card game with 30-500 simultaneous users, for which I save some playing stats into the db, plus PHP scripts to display those stats again.

I have an option to double the RAM for EUR 180,- but wonder if it will improve any performance, and also what to do on the PostgreSQL side once I've doubled the RAM (like double shared_buffers? but how do I find out if it's needed, maybe they're empty?)

Below is a typical top output; the pref.pl is my game daemon:

top - 13:40:30 up 21 days, 5:11, 1 user, load average: 0.61, 1.14, 1.31
Tasks: 232 total, 1 running, 231 sleeping, 0 stopped, 0 zombie
Cpu0 : 14.6%us, 0.3%sy, 0.0%ni, 84.4%id, 0.3%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu1 :  3.0%us, 0.0%sy, 0.0%ni, 97.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu2 :  7.3%us, 0.0%sy, 0.0%ni, 92.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu3 :  7.3%us, 0.0%sy, 0.0%ni, 92.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu4 : 10.0%us, 0.0%sy, 0.0%ni, 90.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu5 :  2.3%us, 0.0%sy, 0.0%ni, 97.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu6 :  0.0%us, 0.3%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu7 :  1.7%us, 0.0%sy, 0.0%ni, 98.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem:  16243640k total, 14091172k used,  2152468k free,   621072k buffers
Swap:  2096056k total,        0k used,  2096056k free,  8929900k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
19992 postgres  20   0 4378m 782m 775m S 11.0  4.9   2:28.58 postmaster
16184 nobody    20   0  116m  21m 3908 S  9.0  0.1  22:01.32 pref.pl
16187 postgres  20   0 4375m 502m 497m S  7.3  3.2  37:13.48 postmaster
20229 postgres  20   0 4377m 426m 420m S  6.3  2.7   0:07.01 postmaster
20201 postgres  20   0 4378m 512m 505m S  4.7  3.2   0:23.65 postmaster
20135 postgres  20   0 4378m 771m 764m S  2.7  4.9   2:14.57 postmaster
20209 postgres  20   0 4377m 571m 564m S  2.0  3.6   1:14.34 postmaster
20030 postgres  20   0 4376m 890m 883m S  1.7  5.6   3:39.64 postmaster
20171 apache    20   0  370m  30m  16m S  0.7  0.2   0:01.87 httpd
18986 apache    20   0  371m  43m  28m S  0.3  0.3   0:11.47 httpd
19523 apache    20   0  370m  32m  18m S  0.3  0.2   0:07.18 httpd
19892 apache    20   0  380m  37m  19m S  0.3  0.2   0:04.86 httpd
20129 apache    20   0  376m  37m  16m S  0.3  0.2   0:02.39 httpd
20335 root      20   0 15148 1416  996 R  0.3  0.0   0:00.13 top

Thank you
Alex
On 08/30/2012 07:42 PM, Alexander Farber wrote:
> I have an option to double the RAM for EUR 180,-
> but wonder if it will improve any performance and
> also what to do on the PostgreSQL side once
> I've doubled the RAM (like double shared_buffers?
> but how do I find out if it's needed, maybe they're empty?)

What you really want to look at is the iowait%, which you aren't showing, and at vmstat output, to see what kind of disk read rates you're doing.

Looking at `free -m` can be informative too: if your server has more than a few hundred MB of free memory, there's unlikely to be much to gain by adding more RAM, 'cos the OS can't find anything useful to do with what it already has. In practice that situation is rare.
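The `free -m` check suggested above can be scripted. A minimal sketch: the sample line is hypothetical (numbers borrowed from later in this thread); on a live box you would pipe real `free -m` output through the same awk instead.

```shell
# Hypothetical data line from `free -m`; on a real server use:  free -m | awk 'NR==2'
sample='Mem:         15862      13289       2573          0        588       8407'
free_mb=$(echo "$sample" | awk '{print $4}')    # the "free" column
cached_mb=$(echo "$sample" | awk '{print $7}')  # the "cached" column, reclaimable by apps
echo "roughly available: $((free_mb + cached_mb)) MB"   # prints "roughly available: 10980 MB"
```

If that sum is already large, extra RAM would mostly sit idle as more page cache.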
On Thu, Aug 30, 2012 at 6:42 AM, Alexander Farber <alexander.farber@gmail.com> wrote:
> I have an option to double the RAM for EUR 180,-
> but wonder if it will improve any performance and
> also what to do on the PostgreSQL side once
> I've doubled the RAM (like double shared_buffers?
> but how do I find out if it's needed, maybe they're empty?)
>
> Below is a typical top output, the pref.pl is my game daemon:
> [top output in the original post above]

I would say no -- things are fine. That's a top from a perfectly healthy server. Are you experiencing poor performance? What problem are you trying to solve, exactly?

One thing to possibly explore, if you're seeing unpredictable query latency when lots of users are logged on, is pgbouncer transaction mode.

merlin
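For reference, Merlin's transaction-mode suggestion is essentially a one-line change in pgbouncer.ini. This is a sketch, not a drop-in config: the pool size is an illustrative value, and in this mode server connections are reused between transactions rather than sessions, so session-level state (prepared statements, temp tables) no longer carries across queries.

```ini
[pgbouncer]
pool_mode = transaction
; the session-oriented DISCARD ALL reset query is usually dropped in
; transaction mode, since the server connection is recycled at each
; commit/rollback anyway
server_reset_query =
default_pool_size = 20   ; illustrative value, tune to your workload
```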
On Thu, Aug 30, 2012 at 5:42 AM, Alexander Farber <alexander.farber@gmail.com> wrote:
> The modified params in postgresql.conf are:
> max_connections = 100
> shared_buffers = 4096MB

That's probably plenty. Remember that PostgreSQL uses memory in other ways too, so handing it all over to shared buffers is not a good idea anyway. You might consider upping work_mem a bit, depending on your workload.

> Below is a typical top output, the pref.pl is my game daemon:
>
> Mem:  16243640k total, 14091172k used,  2152468k free,   621072k buffers
> Swap:  2096056k total,        0k used,  2096056k free,  8929900k cached

This is the important stuff right here. Note you've got 2G free mem and almost 9G of cache. That's good: there's no memory pressure on your server right now.

Let's say that you average about 100 active users, and currently work_mem is set to 1MB (the default). If you increase that to 16MB, that'd be at most 1.6G of memory, which you have free anyway right now. I've found that small increases of work_mem into the 8 to 16MB zone often increase performance, because they allow bigger queries to fit their results into hash_* operations, which are a pretty fast way of joining larger data sets.

If pgbouncer keeps your actual connections in the range of a few dozen at a time, then you can look at going higher on work_mem, but often with workloads like the one you're talking about you won't see any increase in performance: once the data set used for hash_* operations fits in memory, more work_mem won't help.

Note that work_mem is PER operation, not per connection, per user or per database, so it can add up REAL fast, hence why it starts out so small (1MB). Overdoing it can result in a server that falls to its knees during heavy load.
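Scott's back-of-the-envelope number above (work_mem times concurrent operations) is easy to redo for other values. A quick sketch with the thread's figures; these are the thread's assumptions, not measurements:

```shell
work_mem_mb=16       # proposed work_mem
active_backends=100  # max_connections from the thread
sorts_per_query=1    # Scott's assumption of one sort/hash per query on average
worst_case_mb=$((work_mem_mb * active_backends * sorts_per_query))
echo "worst-case work_mem use: $worst_case_mb MB"   # prints "worst-case work_mem use: 1600 MB"
```

Bumping sorts_per_query to 6 or 12 (complex queries) shows how quickly the worst case outgrows RAM.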
On Thu, Aug 30, 2012 at 8:42 AM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> users, and currently work_mem is set to 1M (the default.) If you
> increase that to 16M, that'd be max 1.6G of memory, which you have
> free anyway right now.

Self-correction here: of course that's assuming one sort on average per query. My experience is that you generally use less than that. Except when you don't. :) If your typical query has 6 or 12 sorts in it, your memory can run out fast. But judging by your current free RAM, I'm guessing it doesn't do that. Only thorough testing can tell you the sweet spot tho.
Hello, thank you for your replies and sorry for the delay in my replying -

On Thu, Aug 30, 2012 at 4:45 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> On Thu, Aug 30, 2012 at 8:42 AM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
>> users, and currently work_mem is set to 1M (the default.) If you
>> increase that to 16M, that'd be max 1.6G of memory, which you have
>> free anyway right now.

I did look at the vmstat output, but can't deduce anything from it:

# free -m
             total       used       free     shared    buffers     cached
Mem:         15862      13289       2573          0        588       8407
-/+ buffers/cache:       4293      11569
Swap:         2046          0       2046

# vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd   free    buff   cache    si   so    bi    bo   in   cs us sy id wa st
 5  0    256 2610788 602424 8605344     0    0     0   112    0    0 10  0 88  1  0

I'll try changing work_mem to 2MB first - once I upgrade the RAM.

Regards
Alex
On Tue, Sep 4, 2012 at 10:59 AM, Alexander Farber <alexander.farber@gmail.com> wrote:
> I'll try changing work_mem to 2MB first - once I upgrade the RAM.

And then I'll increase it day by day, up to the 16MB that Scott proposed.
On Tue, Sep 4, 2012 at 2:59 AM, Alexander Farber <alexander.farber@gmail.com> wrote:
> Hello, thank you for your replies and sorry for the delay in my replying -
>
> I did look at the vmstat output, but can't deduce anything from it:

Looks like you're not real familiar with vmstat. It shows you averages of things like context switches, blocks read and written and so on, over time. The first line is the average since the server booted up, so it isn't all that useful. Run "vmstat 10", let it run for a few minutes during heavy load, then cut and paste. For instance, here's some output from a production server:

vmstat 10
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd    free    buff    cache    si   so    bi    bo    in    cs us sy id wa
 6  0 468388 3756476 1445228 94339352    0    0     3   116     0     0 15  2 83  0
 4  0 468388 3742288 1445244 94328832    0    0    53  2992 18907 18320 12  2 86  0
 5  0 468388 3731272 1445264 94334640    0    0    54  2826 23835 24463 17  3 80  0
 2  0 468388 3722556 1445276 94341456    0    0    77  3638 23207 24297 17  3 80  0

Here's a page with a quick description of each field:

http://www.lazysystemadmin.com/2011/04/understanding-vmstat-output-explained.html

The ones to look for are b (a high b means blocking IO ops), si/so (active swapping), and in/cs (interrupts and context switches per second). On big servers, in/cs numbers into the tens of thousands are fine; hundreds of thousands are getting a bit much. It measures how often your machine is switching tasks; with too much switching it spends all its time switching and not doing anything. us, sy, id and wa are each a % of the CPU(s) doing: user-space work, system work, idling, or waiting.

Note that if you have 4 cores and are at 25% wa(it), you have the equivalent of one core doing nothing but waiting. I.e. the % is of the total across all cores, not of 1 core.

> # free -m
>              total       used       free     shared    buffers     cached
> Mem:         15862      13289       2573          0        588       8407
> -/+ buffers/cache:       4293      11569
> Swap:         2046          0       2046

So you're not using swap, you have 2.5G free memory and 8.4G cached. I'm assuming there's nothing to be gained by adding memory, unless you could use larger work_mem for some of your workloads.

> I'll try changing work_mem to 2MB first - once I upgrade the RAM.

Yep, small steps, measured afterwards for impact, is the way to go.
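The per-core arithmetic above can be checked mechanically against a vmstat data line. The line below is made up for illustration (a real one comes from `vmstat 10`, skipping the first cumulative line):

```shell
# Hypothetical vmstat data line, columns: r b swpd free buff cache si so bi bo in cs us sy id wa
line='1  2      0 2610788 602424 8605344    0    0   480   112 1200 2300 10  2 63 25'
wa=$(echo "$line" | awk '{print $16}')   # the wa column
cores=4
# 25% wa across 4 cores is one core's worth of time stalled on IO
echo "%wa=$wa on $cores cores = $((wa * cores / 100)) core(s) waiting on IO"
```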
Here is a bash script I wrote to print out the mem config from postgresql.conf and the OS (CentOS 5.5 in this case). According to Gregory Smith in "PostgreSQL 9.0 High Performance", shared_buffers should be approx 25-40% of available physical RAM. Also consider that Postgres uses the OS buffer cache as it accesses the physical data and log files, and while doing so has the potential to double-buffer blocks.

WorkMEM is suggested at 5%, but if you do a lot of CLUSTER/ANALYZE/VACUUM you will want to up this; I usually round up to the next power of 2, so if 5% is 328 MB I'll set it to 512.

Most of the conversions are done in the script, and the Shared Memory checks are just that, checks; a modern OS should be way above the kernel settings Postgres requires. Also look at your `ipcs -m`: this will show you the shared memory in use, and whether you have other processes aside from Postgres using shared memory. I also have a 9.0 script if anyone wants it.

#!/bin/bash
#
# Postgresql Memory Configuration and Sizing Script
# By: James Morton
# Last Updated 06/18/2012
#
# Note: this script is meant to be used by the postgres user with a configured .pgpass file.
# It is for Postgres version 8 running on Linux and only tested on CentOS 5.
#
# Reference - http://www.postgresql.org/docs/8.0/static/kernel-resources.html
#
# This script should be run after changing any of the following in postgresql.conf:
#
#   max_connections
#   block_size
#   shared_buffers
#
# or after changing the following OS kernel values:
#
#   SHMMAX
#   SHMALL
#   SHMMNI
#   SEMMNS
#   SEMMNI
#   SEMMSL

#Input Variables
DBNAME=$1
USERNAME=$2

clear
echo
echo "Postgresql Shared Memory Estimates"
echo
echo
echo "Local Postgres Configuration settings"
echo

#Postgresql Version
PSQL="psql "$DBNAME" -U "$USERNAME
PG_VERSION=$($PSQL --version)
echo "PG_VERSION:"$PG_VERSION

#Postgresql Block Size
PG_BLKSIZ=$($PSQL -t -c "show block_size;")
echo "PG_BLKSIZ:"$PG_BLKSIZ

#Maximum Connections
PG_MAXCON=$($PSQL -t -c "show max_connections;")
echo "PG_MAXCON:"$PG_MAXCON

#Shared Buffers
PG_SHABUF=$($PSQL -t -c "show shared_buffers;")
echo "PG_SHABUF:" $PG_SHABUF

#maintenance_work_mem
PG_MNTWKM=$($PSQL -t -c "show maintenance_work_mem;")
echo "PG_MNTWKM:"$PG_MNTWKM

#work_mem
PG_WRKMEM=$($PSQL -t -c "show work_mem;")
echo "PG_WRKMEM:"$PG_WRKMEM

echo
echo
echo "Kernel Shared Memory Settings"
echo

CUR_SHMMAX_IN_B=$(cat /proc/sys/kernel/shmmax)
#echo "CUR_SHMMAX_IN_B:" $CUR_SHMMAX_IN_B
CUR_SHMMAX_IN_MB=$(( (CUR_SHMMAX_IN_B / 1024) / 1024 ))
echo "CUR_SHMMAX_IN_MB:" $CUR_SHMMAX_IN_MB

#Estimate SHMMAX per Postgresql 8.0 table 16-2
SHMMAX_MAXCON=$(( PG_MAXCON * 14541 ))
#echo "SHMMAX_MAXCON:" $SHMMAX_MAXCON
SHMMAX_SHABUF=$(( PG_SHABUF * 9832 ))
#echo "SHMMAX_SHABUF:" $SHMMAX_SHABUF
PG_REC_SHMMAX_TOTAL_B=$(( 256000 + SHMMAX_MAXCON + SHMMAX_SHABUF ))
#echo "PG_REC_SHMMAX_TOTAL_B:" $PG_REC_SHMMAX_TOTAL_B
PG_REC_SHMMAX_TOTAL_MB=$(( (PG_REC_SHMMAX_TOTAL_B / 1024) / 1024 ))
echo "PG_REC_SHMMAX_TOTAL_MB:" $PG_REC_SHMMAX_TOTAL_MB

if [ "$PG_REC_SHMMAX_TOTAL_B" -lt "$CUR_SHMMAX_IN_B" ]; then
    echo "SHMMAX is within Postgresql's needs"
elif [ "$PG_REC_SHMMAX_TOTAL_B" -ge "$CUR_SHMMAX_IN_B" ]; then
    echo "SHMMAX should be set greater than $PG_REC_SHMMAX_TOTAL_B"
else
    echo "SHMMAX setting cannot be determined"
fi
echo

CUR_SHMALL=$(cat /proc/sys/kernel/shmall)
#note: SHMALL on CentOS is in bytes
#echo "CUR_SHMALL:" $CUR_SHMALL
CUR_SHMALL_IN_MB=$(( (CUR_SHMALL / 1024) / 1024 ))
echo "CUR_SHMALL_IN_MB:" $CUR_SHMALL_IN_MB

if [ "$PG_REC_SHMMAX_TOTAL_B" -lt "$CUR_SHMALL" ]; then
    echo "SHMALL is within Postgresql's needs"
elif [ "$PG_REC_SHMMAX_TOTAL_B" -ge "$CUR_SHMALL" ]; then
    echo "SHMALL should be set greater than $PG_REC_SHMMAX_TOTAL_B"
else
    echo "SHMALL setting cannot be determined"
fi
echo

CUR_SHMMNI=$(cat /proc/sys/kernel/shmmni)
echo "CUR_SHMMNI:" $CUR_SHMMNI
if [ "$CUR_SHMMNI" -ge 1 ]; then
    echo "SHMMNI is within Postgresql's needs"
elif [ "$CUR_SHMMNI" -lt 1 ]; then
    echo "SHMMNI should be set greater than 1"
else
    echo "SHMMNI setting cannot be determined"
fi
echo
echo
echo "Kernel Semaphore Settings"
echo

CUR_SEMMNI=$( cat /proc/sys/kernel/sem | awk '{print $4}' )
echo "CUR_SEMMNI:" $CUR_SEMMNI
PG_RECSET_SEMMNI=$(printf "%.0f" $(echo "scale=2;($PG_MAXCON) / 16" | bc))
echo "PG_RECSET_SEMMNI:" $PG_RECSET_SEMMNI
if [ "$CUR_SEMMNI" -ge "$PG_RECSET_SEMMNI" ]; then
    echo "SEMMNI is within Postgresql's needs"
elif [ "$CUR_SEMMNI" -lt "$PG_RECSET_SEMMNI" ]; then
    echo "SEMMNI should be set greater than or equal to $PG_RECSET_SEMMNI"
else
    echo "SEMMNI setting cannot be determined"
fi
echo

CUR_SEMMNS=$( cat /proc/sys/kernel/sem | awk '{print $2}' )
echo "CUR_SEMMNS:" $CUR_SEMMNS
PG_RECSET_SEMMNS=$(printf "%.0f" $(echo "scale=2;(($PG_MAXCON) / 16)*17" | bc))
echo "PG_RECSET_SEMMNS:" $PG_RECSET_SEMMNS
if [ "$CUR_SEMMNS" -ge "$PG_RECSET_SEMMNS" ]; then
    echo "SEMMNS is within Postgresql's needs"
elif [ "$CUR_SEMMNS" -lt "$PG_RECSET_SEMMNS" ]; then
    echo "SEMMNS should be set greater than or equal to $PG_RECSET_SEMMNS"
else
    echo "SEMMNS setting cannot be determined"
fi
echo

CUR_SEMMSL=$( cat /proc/sys/kernel/sem | awk '{print $1}' )
echo "CUR_SEMMSL:" $CUR_SEMMSL
if [ "$CUR_SEMMSL" -ge 17 ]; then
    echo "SEMMSL is within Postgresql's needs"
elif [ "$CUR_SEMMSL" -lt 17 ]; then
    echo "SEMMSL should be set greater than or equal to 17"
else
    echo "SEMMSL setting cannot be determined"
fi
echo
echo
echo
echo "Memory Sizing"
echo

OS_PAGE_SIZE=$(getconf PAGE_SIZE)
echo "OS_PAGE_SIZE:" $OS_PAGE_SIZE
OS_PHYS_PAGES=$(getconf _PHYS_PAGES)
echo "OS_PHYS_PAGES:" $OS_PHYS_PAGES
OS_TOTAL_MEM_IN_MB=$(( ((OS_PAGE_SIZE * OS_PHYS_PAGES) / 1024) / 1024 ))
echo "OS_TOTAL_MEM_IN_MB:" $OS_TOTAL_MEM_IN_MB
echo

CUR_SHABUF_MB=$(( PG_SHABUF * 8192 / 1024 / 1024 ))
echo "CUR_SHABUF_MB:" $CUR_SHABUF_MB
SHABUF_RECSET_IN_MB=$(printf "%.0f" $(echo "scale=2;$OS_TOTAL_MEM_IN_MB*.25" | bc))
echo "SHABUF_RECSET_IN_MB:"$SHABUF_RECSET_IN_MB
SHABUF_RECSET=$(( SHABUF_RECSET_IN_MB * 1024 / 8 ))
echo "SHABUF_RECSET:" $SHABUF_RECSET
echo

PG_MNTWKM_IN_MB=$(( PG_MNTWKM / 1024 ))
echo "PG_MNTWKM_IN_MB:" $PG_MNTWKM_IN_MB
RECSET_MNTWKM_MB=$(printf "%.0f" $(echo "scale=2;$OS_TOTAL_MEM_IN_MB*.05" | bc))
echo "RECSET_MNTWKM_MB:" $RECSET_MNTWKM_MB
RECSET_MNTWKM=$(( RECSET_MNTWKM_MB * 1024 ))
echo "RECSET_MNTWKM:" $RECSET_MNTWKM
echo

IPCS_STRING=$(ipcs -m | grep postgres | awk '{print $5}')
let CUR_IPCS_PG_SHAMEMSEG_MB=0
for token in $IPCS_STRING; do
    CUR_IPCS_PG_SHAMEMSEG_MB=$(( CUR_IPCS_PG_SHAMEMSEG_MB + token ))
done
CUR_IPCS_PG_SHAMEMSEG_MB=$(( $CUR_IPCS_PG_SHAMEMSEG_MB / 1024 / 1024 ))
echo "CUR_IPCS_PG_SHAMEMSEG_MB:" $CUR_IPCS_PG_SHAMEMSEG_MB
POTENTIAL_WORKMEM_MB=$(( PG_WRKMEM * PG_MAXCON / 1024 ))
echo "POTENTIAL_WORKMEM_MB:" $POTENTIAL_WORKMEM_MB
echo

PG_TOTAL_CUR_MEM_MB=$(( CUR_IPCS_PG_SHAMEMSEG_MB + POTENTIAL_WORKMEM_MB + PG_MNTWKM_IN_MB ))
echo "PG_TOTAL_CURRENT_POTENTIAL_MEM_MB:" $PG_TOTAL_CUR_MEM_MB
PG_TOTAL_RECSET_MEM_MB=$(( SHABUF_RECSET_IN_MB + POTENTIAL_WORKMEM_MB + RECSET_MNTWKM_MB ))
echo "PG_TOTAL_RECSET_POTENTIAL_MEM_MB:" $PG_TOTAL_RECSET_MEM_MB

--
View this message in context: http://postgresql.1045698.n5.nabble.com/Would-my-postgresql-8-4-12-profit-from-doubling-RAM-tp5721879p5722887.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.
On Wed, Sep 5, 2012 at 2:16 PM, jam3 <jamorton3@gmail.com> wrote:
> WorkMEM is suggested at 5% but if you do alot of CLUSTER/ANALYZE/VACUUM you
> will want to up this, I usually round off to the highest power of 2 is 5% is
> 328mb i'll set it to 512.

I think you mean maintenance_work_mem here, as regular VACUUM or ANALYZE doesn't use work_mem.
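In postgresql.conf terms the distinction looks like this; the values are illustrative examples, not recommendations from the thread:

```
# per sort or hash operation, per backend -- used by ordinary queries
work_mem = 16MB

# used instead by maintenance commands: VACUUM, CREATE INDEX, CLUSTER,
# ALTER TABLE ADD FOREIGN KEY; only a few of these run at once, so it
# can safely be much larger than work_mem
maintenance_work_mem = 512MB
```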
Here is the 9.0 version, and yes, I meant maintenance_work_mem.

#!/bin/bash
#
# Postgresql Memory Configuration and Sizing Script
# By: James Morton
# Last Updated 05/16/2012
#
# Note: this script is meant to be used by the postgres user with a configured .pgpass file.
# It is for Postgres version 9 running on Linux and only tested on CentOS 5.
#
# Reference - http://www.postgresql.org/docs/9.1/static/kernel-resources.html
#
# This script should be run after changing any of the following in postgresql.conf:
#
#   max_connections
#   block_size
#   wal_block_size
#   wal_buffers
#   max_locks_per_transaction
#   max_prepared_transactions
#   shared_buffers
#
# or after changing the following OS kernel values:
#
#   SHMMAX
#   SHMALL
#   SHMMNI
#   SEMMNS
#   SEMMNI
#   SEMMSL

#Input Variables
DBNAME=$1
USERNAME=$2

clear
echo
echo "Postgresql Shared Memory Estimates"
echo
echo
echo "Local Postgres Configuration settings"
echo

#Postgresql Version
PSQL="psql "$DBNAME" -U "$USERNAME
PG_VERSION=$($PSQL --version)
echo "PG_VERSION:"$PG_VERSION

#Postgresql Block Size
PG_BLKSIZ=$($PSQL -t -c "show block_size;")
echo "PG_BLKSIZ:"$PG_BLKSIZ

#Maximum Connections
PG_MAXCON=$($PSQL -t -c "show max_connections;")
echo "PG_MAXCON:"$PG_MAXCON

#Maximum Locks per Transaction
PG_MAXLPT=$($PSQL -t -c "show max_locks_per_transaction;")
echo "PG_MAXLPT:"$PG_MAXLPT

#Maximum Prepared Transactions (two-phase commit); might not be configured in postgresql.conf
let PG_MAXPRT=0
PG_MAXPRT=$($PSQL -t -c "show max_prepared_transactions;")
echo "PG_MAXPRT:"$PG_MAXPRT

#Shared Buffers
PG_SHABUF=$($PSQL -t -c "show shared_buffers;")
PG_SHABUF=$(echo $PG_SHABUF | sed s/MB//)
echo "PG_SHABUF:" $PG_SHABUF
PG_SHABUF_IN_B=$(( $PG_SHABUF * 1024 * 1024 ))
echo "PG_SHABUF_IN_B:"$PG_SHABUF_IN_B
PG_SHABUF_NUMOFBUF=$(($PG_SHABUF_IN_B / $PG_BLKSIZ))
echo "PG_SHABUF_NUMOFBUF:"$PG_SHABUF_NUMOFBUF

#WAL Block Size
PG_WALBLK=$($PSQL -t -c "show wal_block_size;")
echo "PG_WALBLK:"$PG_WALBLK

#WAL Buffers
PG_WALBUF=$($PSQL -t -c "show wal_buffers;")
PG_WALBUF=$(echo $PG_WALBUF | sed s/MB//)
echo "PG_WALBUF:" $PG_WALBUF
PG_WALBUF_IN_B=$(( $PG_WALBUF * 1024 * 1024 ))
echo "PG_WALBUF_IN_B:"$PG_WALBUF_IN_B
PG_WALBUF_NUMOFBUF=$(($PG_WALBUF_IN_B / $PG_WALBLK))
echo "PG_WALBUF_NUMOFBUF:"$PG_WALBUF_NUMOFBUF

#Autovacuum workers
PG_ATVWRK=$($PSQL -t -c "show autovacuum_max_workers;")
echo "PG_ATVWRK:"$PG_ATVWRK

#maintenance_work_mem
PG_MNTWKM=$($PSQL -t -c "show maintenance_work_mem;")
PG_MNTWKM=$(echo $PG_MNTWKM | sed s/MB//)
echo "PG_MNTWKM:"$PG_MNTWKM

#effective_cache_size
PG_EFCHSZ=$($PSQL -t -c "show effective_cache_size;")
echo "PG_EFCHSZ:"$PG_EFCHSZ

echo
echo "OS Memory settings"
echo
PAGE_SIZE=$(getconf PAGE_SIZE)
echo "PAGE_SIZE:"$PAGE_SIZE
PHYS_PAGES=$(getconf _PHYS_PAGES)
echo "PHYS_PAGES:"$PHYS_PAGES
TOTAL_MEM_IN_MB=$(( ((PAGE_SIZE * PHYS_PAGES) / 1024) / 1024 ))
echo "TOTAL_MEM_IN_MB:"$TOTAL_MEM_IN_MB

echo
echo "Current Kernel Shared Memory Settings"
echo
#get os mem settings into vars
CUR_SHMMAX_IN_B=$(cat /proc/sys/kernel/shmmax)
echo "CUR_SHMMAX_IN_B:"$CUR_SHMMAX_IN_B
CUR_SHMMAX_IN_MB=$(( (CUR_SHMMAX_IN_B / 1024) / 1024 ))
echo "CUR_SHMMAX_IN_MB:"$CUR_SHMMAX_IN_MB
CUR_SHMALL=$(cat /proc/sys/kernel/shmall)
echo "CUR_SHMALL:" $CUR_SHMALL
CUR_SHMALL_IN_MB=$(( (CUR_SHMALL / 1024) / 1024 ))
echo "CUR_SHMALL_IN_MB:"$CUR_SHMALL_IN_MB
CUR_SHMMNI=$(cat /proc/sys/kernel/shmmni)
echo "CUR_SHMMNI:" $CUR_SHMMNI

echo
echo "Current Kernel Semaphore Settings"
echo
CUR_SEMMNI=$( cat /proc/sys/kernel/sem | awk '{print $4}' )
echo "CUR_SEMMNI:"$CUR_SEMMNI
CUR_SEMMNS=$( cat /proc/sys/kernel/sem | awk '{print $2}' )
echo "CUR_SEMMNS:"$CUR_SEMMNS
CUR_SEMMSL=$( cat /proc/sys/kernel/sem | awk '{print $1}' )
echo "CUR_SEMMSL:"$CUR_SEMMSL
PG_RECSET_SEMMNI=$(printf "%.0f" $(echo "scale=2;($PG_MAXCON+$PG_ATVWRK+4) / 16" | bc))
echo "PG_RECSET_SEMMNI:"$PG_RECSET_SEMMNI
PG_RECSET_SEMMNS=$(printf "%.0f" $(echo "scale=2;(($PG_MAXCON+$PG_ATVWRK+4) / 16)*17" | bc))
echo "PG_RECSET_SEMMNS:"$PG_RECSET_SEMMNS

echo
echo "Estimate SHMMAX per Postgresql 9.1 Doc - Table 17-2"
echo
SHMMAX_MAXCON=$(( PG_MAXCON * (1800 + 270 * PG_MAXLPT) ))
echo "SHMMAX_MAXCON:"$SHMMAX_MAXCON
SHMMAX_ATVWRK=$(( PG_ATVWRK * (1800 + 270 * PG_MAXLPT) ))
echo "SHMMAX_ATVWRK:"$SHMMAX_ATVWRK
SHMMAX_MAXPRT=$(( PG_MAXPRT * (770 + 270 * PG_MAXLPT) ))
echo "SHMMAX_MAXPRT:"$SHMMAX_MAXPRT
SHMMAX_SHABUF=$(( PG_SHABUF_NUMOFBUF * (PG_BLKSIZ + 208) ))
echo "SHMMAX_SHABUF:"$SHMMAX_SHABUF
SHMMAX_WALBUF=$(( PG_WALBUF_NUMOFBUF * (PG_WALBLK + 8) ))
echo "SHMMAX_WALBUF:"$SHMMAX_WALBUF
PG_REC_SHMMAX_TOTAL_B=$(( 788480 + SHMMAX_MAXCON + SHMMAX_ATVWRK + SHMMAX_MAXPRT + SHMMAX_SHABUF + SHMMAX_WALBUF ))
echo "PG_REC_SHMMAX_TOTAL_B:"$PG_REC_SHMMAX_TOTAL_B
PG_REC_SHMMAX_TOTAL_MB=$(( (PG_REC_SHMMAX_TOTAL_B / 1024) / 1024 ))
echo "PG_REC_SHMMAX_TOTAL_MB:"$PG_REC_SHMMAX_TOTAL_MB

echo
echo "-checking ipcs -m, postgres should be running"
CUR_IPCS_PG_SHAMEMSEG=$(ipcs -m | grep postgres | awk '{print $5}')
CUR_IPCS_PG_SHAMEMSEG_MB=$(( $CUR_IPCS_PG_SHAMEMSEG / 1024 / 1024 ))
echo "CUR_IPCS_PG_SHAMEMSEG_MB:" $CUR_IPCS_PG_SHAMEMSEG_MB "Should be equal or very close to the recommended SHMMAX"

echo
echo "Shared Memory Kernel Checks"
echo
#SHMMAX
if [ "$PG_REC_SHMMAX_TOTAL_B" -lt "$CUR_SHMMAX_IN_B" ]; then
    echo "SHMMAX is within Postgresql's needs"
elif [ "$PG_REC_SHMMAX_TOTAL_B" -ge "$CUR_SHMMAX_IN_B" ]; then
    echo "SHMMAX should be set greater than $PG_REC_SHMMAX_TOTAL_B"
else
    echo "SHMMAX setting cannot be determined"
fi

#SHMALL - note: SHMALL on CentOS is in bytes
if [ "$PG_REC_SHMMAX_TOTAL_B" -lt "$CUR_SHMALL" ]; then
    echo "SHMALL is within Postgresql's needs"
elif [ "$PG_REC_SHMMAX_TOTAL_B" -ge "$CUR_SHMALL" ]; then
    echo "SHMALL should be set greater than $PG_REC_SHMMAX_TOTAL_B"
else
    echo "SHMALL setting cannot be determined"
fi

#SHMMNI
if [ "$CUR_SHMMNI" -ge 1 ]; then
    echo "SHMMNI is within Postgresql's needs"
elif [ "$CUR_SHMMNI" -lt 1 ]; then
    echo "SHMMNI should be set greater than 1"
else
    echo "SHMMNI setting cannot be determined"
fi

#SEMMNI
if [ "$CUR_SEMMNI" -ge "$PG_RECSET_SEMMNI" ]; then
    echo "SEMMNI is within Postgresql's needs"
elif [ "$CUR_SEMMNI" -lt "$PG_RECSET_SEMMNI" ]; then
    echo "SEMMNI should be set greater than or equal to $PG_RECSET_SEMMNI"
else
    echo "SEMMNI setting cannot be determined"
fi

#SEMMNS
if [ "$CUR_SEMMNS" -ge "$PG_RECSET_SEMMNS" ]; then
    echo "SEMMNS is within Postgresql's needs"
elif [ "$CUR_SEMMNS" -lt "$PG_RECSET_SEMMNS" ]; then
    echo "SEMMNS should be set greater than or equal to $PG_RECSET_SEMMNS"
else
    echo "SEMMNS setting cannot be determined"
fi

#SEMMSL
if [ "$CUR_SEMMSL" -ge 17 ]; then
    echo "SEMMSL is within Postgresql's needs"
elif [ "$CUR_SEMMSL" -lt 17 ]; then
    echo "SEMMSL should be set greater than or equal to 17"
else
    echo "SEMMSL setting cannot be determined"
fi

echo
echo "Estimate Total Memory Sizing"
echo
echo "Note: Postgres should be running for these numbers to be meaningful"
echo
echo $TOTAL_MEM_IN_MB "MB Total Physical System RAM"
echo
CUR_IPCS_SHAMEM=$(ipcs -m | grep postgres | awk '{print $5}')
CUR_IPCS_SHAMEM_IN_MB=$(( $CUR_IPCS_SHAMEM / 1024 / 1024 ))
#echo "CUR_IPCS_SHAMEM_IN_MB:"$CUR_IPCS_SHAMEM_IN_MB
SHABUF_RECSET_IN_MB=$(printf "%.0f" $(echo "scale=2;$TOTAL_MEM_IN_MB*.25" | bc))
#echo "SHABUF_RECSET_IN_MB:"$SHABUF_RECSET_IN_MB
SHABUF_RECSET_IN_MB_MAX=$(printf "%.0f" $(echo "scale=2;$TOTAL_MEM_IN_MB*.4" | bc))
#echo "SHABUF_RECSET_IN_MB_MAX:"$SHABUF_RECSET_IN_MB_MAX
echo shared_buffers: $PG_SHABUF" MB current postgresql.conf setting"
echo shared_buffers: $SHABUF_RECSET_IN_MB" MB Normal recommended setting (25% of physical RAM)"
echo shared_buffers: $SHABUF_RECSET_IN_MB_MAX" MB Aggressive setting (40% of physical RAM)"
echo
MNTWKM_RECSET_IN_MB=$(printf "%.0f" $(echo "scale=2;$TOTAL_MEM_IN_MB*.05" | bc))
#echo "MNTWKM_RECSET_IN_MB:"$MNTWKM_RECSET_IN_MB
echo maintenance_work_mem: $PG_MNTWKM "MB current postgresql.conf setting"
echo maintenance_work_mem: $MNTWKM_RECSET_IN_MB "MB Normal recommended setting (5% of physical RAM)"
#OS_MEM_FREE=$(cat /proc/meminfo | grep MemFree | awk '{print $2}')
#echo "OS_MEM_FREE:"$OS_MEM_FREE
#OS_MEM_CACHED=$(cat /proc/meminfo | grep Cached | awk '{print $2}' | head -n1)
#echo "OS_MEM_CACHED:"$OS_MEM_CACHED
#EFCHSZ_RECSET=$(( (OS_MEM_FREE + OS_MEM_CACHED) / 1024 + PG_SHABUF ))
#echo effective_cache_size:

--
View this message in context: http://postgresql.1045698.n5.nabble.com/Would-my-postgresql-8-4-12-profit-from-doubling-RAM-tp5721879p5724078.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.
Hello,

I've finally doubled the RAM to 32 GB on my quad-core CentOS 6.3 server and have changed postgresql.conf to:

  max_connections = 100
  shared_buffers = 4096MB
  work_mem = 16M

But I don't see any speed improvement, and 27 GB of memory aren't being used... Please see more info + vmstat at

http://serverfault.com/questions/433281/doubled-up-ram-to-32-gb-now-how-to-speed-up-a-lapp-server

Thank you for any suggestions
Alex
On Sat, Sep 29, 2012 at 10:27 AM, Alexander Farber <alexander.farber@gmail.com> wrote:
> I've finally doubled up RAM to 32 GB for my Quad core
> CentOS 6.3 server and have changed postgresql.conf to
>
> max_connections = 100
> shared_buffers = 4096MB
> work_mem = 16M
>
> But don't see any speed improvement and also 27 GB of
> memory aren't used.... Please see more info + vmstat at

Is there a specific problem you are trying to solve, or is this more a curiosity-driven exercise? Your server was not suffering from a shortage of RAM to start out, so adding more would not be expected to improve performance.

> http://serverfault.com/questions/433281/doubled-up-ram-to-32-gb-now-how-to-speed-up-a-lapp-server

If you expand the "COMMAND" field of the "top" display (by hitting "c" on my Linux), you can probably see what the top "postmaster" process is doing.

Anyway, it looks like your server is mostly bored. If there is a specific performance problem, it might be on the client side, or network latency.

Cheers,

Jeff
On 09/29/12 11:43 AM, Jeff Janes wrote:
>> http://serverfault.com/questions/433281/doubled-up-ram-to-32-gb-now-how-to-speed-up-a-lapp-server
>
> Anyway, it looks like your server is mostly bored. If there is a
> specific performance problem, it might be in the client side, or
> network latency.

Assuming the PP stands for PHP + Postgres, another likely candidate for performance issues is PHP opening (and closing) a new connection for each page. Use pgbouncer as a connection pool and watch things speed up considerably.

--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast
On Sat, Sep 29, 2012 at 11:27 AM, Alexander Farber <alexander.farber@gmail.com> wrote:
> But don't see any speed improvement and also 27 GB of
> memory aren't used.... Please see more info + vmstat at
>
> http://serverfault.com/questions/433281/doubled-up-ram-to-32-gb-now-how-to-speed-up-a-lapp-server

I'd suggest turning on persistent connections, because you DO use pgbouncer. It'll reduce connection time and give slightly better performance. But from reading that page, I don't think you've given us (or yourself, really) enough data to tell you how to improve performance.

The first thing to do is some simple performance profiling in your PHP script. Just add error_log() or whatever it's called in PHP, with some timing info, to see where your time is being spent. If it's mostly on the db side, we head there; if it's mostly in the PHP, we look there. At first just put a couple of statements throughout your script (include things like the pid etc. so you can trawl your logs for this later) to get an idea where in general you're spending your time. Once we get a handle on where most of it is going, we'll go from there.
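If the profiling points at the database side, one low-effort way to capture the timing data Scott asks for is PostgreSQL's own slow-statement logging. A sketch for postgresql.conf; the 200 ms threshold is an arbitrary starting point, not a recommendation from the thread:

```
log_min_duration_statement = 200   # log every statement that runs longer than 200 ms
log_line_prefix = '%t [%p] '       # timestamp and backend pid, to correlate with app logs
```

Setting the threshold to 0 logs every statement with its duration, which is useful for a short sampling window but too noisy to leave on.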
Hello Scott and others,

On Sat, Sep 29, 2012 at 9:38 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> On Sat, Sep 29, 2012 at 11:27 AM, Alexander Farber
> <alexander.farber@gmail.com> wrote:
>> I've finally doubled up RAM to 32 GB for my Quad core
>> CentOS 6.3 server and have changed postgresql.conf to
>>
>> max_connections = 100
>> shared_buffers = 4096MB
>> work_mem = 16M
>>
>> http://serverfault.com/questions/433281/doubled-up-ram-to-32-gb-now-how-to-speed-up-a-lapp-server
>
> I'd suggest turning on persistent connections because you DO use
> pgbouncer. It'll reduce connection time and give slightly better
> performance. But from reading that page, I don't think you've given
> us (or yourself really) enough data to tell you how to improve
> performance.
>
> The first thing to do is some simple performance profiling in your php
> script. Just add error_log() or whatever it's called in php, with
> some timing info in them to see where your time is being spent.

Actually, that's what I tried yesterday, right after the server was
upgraded.

I've set in postgresql.conf

max_connections = 600

(to match the 500 MaxClients in httpd.conf) and then in /etc/php.ini

pgsql.allow_persistent = On

and added ..., array(PDO::ATTR_PERSISTENT => true) to my PHP scripts -
and suddenly my scripts stopped fetching any data from the database
and were only returning empty values... I have to retry this with db
logs on...

About not giving enough information: how much information do you want?
If I list all my databases plus the source code of the scripts, I
doubt anyone will read my mail.
I still hope that someone will mention a cool way to get a picture of
the bottlenecks of my PostgreSQL database - like "select * from pg_smth"
and then see what it is waiting for...

Thanks
Alex
On Sat, Sep 29, 2012 at 2:21 PM, Alexander Farber
<alexander.farber@gmail.com> wrote:
> Hello Scott and others,
>
> On Sat, Sep 29, 2012 at 9:38 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
>> On Sat, Sep 29, 2012 at 11:27 AM, Alexander Farber
>> <alexander.farber@gmail.com> wrote:
>>> I've finally doubled up RAM to 32 GB for my Quad core
>>> CentOS 6.3 server and have changed postgresql.conf to
>>>
>>> max_connections = 100
>>> shared_buffers = 4096MB
>>> work_mem = 16M
>>>
>>> http://serverfault.com/questions/433281/doubled-up-ram-to-32-gb-now-how-to-speed-up-a-lapp-server
>>
>> I'd suggest turning on persistent connections because you DO use
>> pgbouncer. It'll reduce connection time and give slightly better
>> performance. But from reading that page, I don't think you've given
>> us (or yourself really) enough data to tell you how to improve
>> performance.
>>
>> The first thing to do is some simple performance profiling in your php
>> script. Just add error_log() or whatever it's called in php, with
>> some timing info in them to see where your time is being spent.
>
> actually that's what I tried yesterday,
> right after the server was upgraded -
>
> I've set in postgresql.conf
>
> max_connections = 600
>
> (to match the 500 MaxClients in httpd.conf)
> and then in /etc/php.ini

Whoa, aren't you running pgbouncer? If so, then leave pg alone and
adjust pgbouncer instead. Revert that db-side change and examine the
pgbouncer config etc.
> pgsql.allow_persistent = On
>
> and added the ..., array(PDO::ATTR_PERSISTENT => true);
> to my PHP scripts -
>
> and suddenly my scripts stopped fetching any
> data from the database, were only returning empty values...
>
> I have to retry this with db logs on...

After fixing the above-mentioned change.

> About not giving enough information -
> how much information do you want?
> If I list all my databases + source code
> of the scripts, I doubt anyone will read my mail.

Re-read my previous post about profiling.
On Sat, Sep 29, 2012 at 4:21 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> Whoa, aren't you running pgbouncer? If so, then leave pg alone and
> adjust pgbouncer instead. Revert that db-side change and examine the
> pgbouncer config etc.

Let me expand a bit on that point. The reason to use pgbouncer is that
you can have hundreds of connections (or more) from your apache/php
server to pgbouncer, while pgbouncer funnels those connections down to
a dozen or so connections on the other side. Since pgbouncer can hold
those hundreds of app-side connections open cheaply, you may as well
use persistent PHP connections. Meanwhile, since pgbouncer is holding
the much more expensive pgsql connections open so they don't have to
close / open each time, you get good performance.

So it goes like this:

apache/php -> (500 persistent conns, cheap) -> pgbouncer -> (20
persistent pgsql conns, expensive) -> pgsql

So you shouldn't need to reconfigure anything on the pg side, just on
the apache/php -> pgbouncer side. Note that if your app is doing hinky
things like not resetting connections / transactions, you might need
to investigate that.
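Scott's funnel might translate into pgbouncer.ini roughly like this (a sketch only: the database name, listen port, and pool sizes are illustrative assumptions, not the poster's real settings - only pool_mode = session and server_reset_query = DISCARD ALL appear in the original post):

```ini
; pgbouncer.ini -- hypothetical values illustrating the funnel:
; many cheap client connections in, few real server connections out.
[databases]
pref = host=127.0.0.1 port=5432 dbname=pref

[pgbouncer]
listen_port = 6432
pool_mode = session
server_reset_query = DISCARD ALL
max_client_conn = 500      ; cheap app-side conns (matches Apache MaxClients)
default_pool_size = 20     ; expensive real PostgreSQL connections
```

With this layout, PHP's persistent connections point at port 6432, and PostgreSQL's own max_connections can stay small - the 600 that was set on the db side becomes unnecessary.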
On Sun, Sep 30, 2012 at 2:36 AM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
>> Whoa, aren't you running pgbouncer? If so, then leave pg alone and
>> adjust pgbouncer instead. Revert that db-side change and examine the
>> pgbouncer config etc.
>
> apache/php -> (500 persistent conns, cheap) -> pgbouncer -> (20
> persistent pgsql conns, expensive) -> pgsql

I will try that, thanks
On Sat, Sep 29, 2012 at 1:21 PM, Alexander Farber
<alexander.farber@gmail.com> wrote:
>
> About not giving enough information -
> how much information do you want?
> If I list all my databases + source code
> of the scripts, I doubt anyone will read my mail.

Probably not, if you just copied and pasted it into the body of the
email. But if you provided an easy way for someone to replicate your
entire setup (including a driver to emulate the clients), someone
might look into it.

> I still hope that someone will mention
> a cool way to make a picture of bottlenecks
> of my PostgreSQL database - like select * from pg_smth
> and then see what is it waiting for...

The overhead of collecting that much timing information would probably
remove whatever the bottleneck used to be by transferring it to the
timing collector. I bet it is mostly waiting for the connection on the
other end to give it something to do. If not, pg_stat_statements and
auto_explain could be useful. You could try oprofile to get a more
system-wide snapshot.

Cheers,

Jeff
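For the kind of one-query snapshot Alexander asked about, pg_stat_activity is the closest thing available on 8.4 (a sketch; note that on 8.4 the columns are procpid, waiting, and current_query, unlike the renamed columns in later releases):

```sql
-- Snapshot of what every backend is doing right now (PostgreSQL 8.4).
-- "waiting" is true when a backend is blocked on a lock; backends that
-- are sitting idle show '<IDLE>' as their current_query.
SELECT procpid,
       usename,
       waiting,
       current_timestamp - query_start AS running_for,
       current_query
FROM   pg_stat_activity
ORDER  BY query_start;
```

If most rows show '<IDLE>', that would support Jeff's guess that the server is mostly waiting on its clients rather than on any query.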