Thread: postgresql.conf recommendations
Just out of curiosity, are you using transparent huge pages?
On Feb 5, 2013 5:03 PM, "Johnny Tan" <johnnydtan@gmail.com> wrote:

Server specs:
Dell R610
dual E5645 hex-core 2.4GHz
192GB RAM
RAID 1: 2x400GB SSD (OS + WAL logs)
RAID 10: 4x400GB SSD (/var/lib/pgsql)

/etc/sysctl.conf:
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
vm.overcommit_memory = 0
vm.swappiness = 0
vm.dirty_background_bytes = 536870912
vm.dirty_bytes = 536870912

postgresql.conf:
listen_addresses = '*' # what IP address(es) to listen on;
max_connections = 150 # (change requires restart)
shared_buffers = 48GB # min 128kB
work_mem = 1310MB # min 64kB
maintenance_work_mem = 24GB # min 1MB
wal_level = hot_standby # minimal, archive, or hot_standby
checkpoint_segments = 64 # in logfile segments, min 1, 16MB each
checkpoint_timeout = 30min # range 30s-1h
checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0
max_wal_senders = 5 # max number of walsender processes
wal_keep_segments = 2000 # in logfile segments, 16MB each; 0 disables
hot_standby = on # "on" allows queries during recovery
max_standby_archive_delay = 120s # max delay before canceling queries
max_standby_streaming_delay = 120s # max delay before canceling queries
effective_cache_size = 120GB
constraint_exclusion = partition # on, off, or partition
log_destination = 'syslog' # Valid values are combinations of
logging_collector = on # Enable capturing of stderr and csvlog
log_directory = 'pg_log' # directory where log files are written,
log_filename = 'postgresql-%a.log' # log file name pattern,
log_truncate_on_rotation = on # If on, an existing log file with the
log_rotation_age = 1d # Automatic rotation of logfiles will
log_rotation_size = 0 # Automatic rotation of logfiles will
log_min_duration_statement = 500 # -1 is disabled, 0 logs all statements
log_checkpoints = on
log_line_prefix = 'user=%u db=%d remote=%r ' # special values:
log_lock_waits = on # log lock waits >= deadlock_timeout
autovacuum = on # Enable autovacuum subprocess? 'on'
log_autovacuum_min_duration = 0 # -1 disables, 0 logs all actions and
autovacuum_max_workers = 5 # max number of autovacuum subprocesses
datestyle = 'iso, mdy'
lc_messages = 'en_US.UTF-8' # locale for system error message
lc_monetary = 'en_US.UTF-8' # locale for monetary formatting
lc_numeric = 'en_US.UTF-8' # locale for number formatting
lc_time = 'en_US.UTF-8' # locale for time formatting
default_text_search_config = 'pg_catalog.english'
deadlock_timeout = 300ms

per pgtune:
#------------------------------------------------------------------------------
# pgtune wizard run on 2013-02-05
# Based on 198333224 KB RAM in the server
#------------------------------------------------------------------------------
default_statistics_target = 100
maintenance_work_mem = 1GB
checkpoint_completion_target = 0.9
effective_cache_size = 128GB
work_mem = 1152MB
wal_buffers = 8MB
checkpoint_segments = 16
shared_buffers = 44GB
max_connections = 80

We use pgbouncer (set to 140 connections) in transaction pooling mode in front of our db.

The problem:

For the most part, the server hums along. No other applications run on this server other than postgres. Load averages rarely break 2.0, it never swaps, and %iowait is usually not more than 0.12.

But periodically, there are spikes in our app's db response time. Normally, the app's db response time hovers in the 100ms range for most of the day. During the spike times, it can go up to 1000ms or 1500ms, and the number of pg connections goes to 140 (maxed out to pgbouncer's limit, where normally it's only about 20-40 connections). Also, during these times, which usually last less than 2 minutes, we will see several thousand queries in the pg log (this is with log_min_duration_statement = 500), compared to maybe one or two dozen 500ms+ queries in non-spike times.

In between spikes could be an hour, two hours, sometimes half a day. There doesn't appear to be any pattern that we can see:
* there are no obvious queries that are locking the db
* it doesn't necessarily happen during high-traffic times, though it can
* it doesn't happen during any known system, db, or app regularly-scheduled job, including crons
* in general, there's no discernible regularity to it at all
* it doesn't coincide with checkpoint starts or completions
* it doesn't coincide with autovacuums
* there are no messages in any system logs that might indicate any system or hardware-related issue

Besides spikes in our graphs, the only other visible effect is that %system in sar goes from an average of 0.7 to as high as 10.0 or so (%iowait and all other sar variables remain the same).

And according to our monitoring system, web requests get queued up, and our alerting system sometimes either says there's a timeout or that it had multiple web response times greater than 300ms, and so we suspect (but have no proof) that some users will see either a long hang or possibly a timeout. But since it's almost always less than two minutes, and sometimes less than one, we don't really hear any complaints (guessing that most people hit reload, and things work again, so they continue on), and we haven't been able to see any negative effect ourselves.

But we want to get in front of the problem, in case it is something that will get worse as traffic continues to grow. We've tweaked various configs on the OS side as well as the postgresql.conf side. What's posted above is our current setup, and the problem persists.

Any ideas as to where we could even look?

Also, whether related or unrelated to the spikes, are there any recommendations for our postgresql.conf or sysctl.conf based on our hardware? From pgtune's output, I am lowering maintenance_work_mem from 24GB down to maybe 2GB, but I keep reading conflicting things about other settings, such as checkpoints or max_connections.

johnny
# cat /sys/kernel/mm/redhat_transparent_hugepage/defrag
[always] never

On Tue, Feb 5, 2013 at 5:37 PM, Josh Krupka <jkrupka@gmail.com> wrote:
> Just out of curiosity, are you using transparent huge pages?
On Wed, Feb 6, 2013 at 3:32 AM, Johnny Tan <johnnydtan@gmail.com> wrote:
> maintenance_work_mem = 24GB # min 1MB

I'm quite astonished by this setting. Not that it explains the problem at hand, but I wonder if this is a plain mistake in configuration.

Thanks,
Pavan

--
Pavan Deolasee
http://www.linkedin.com/in/pavandeolasee
"ac@hsk.hk" <ac@hsk.hk> wrote: > Johnny Tan <johnnydtan@gmail.com> wrote: >>shared_buffers = 48GB# min 128kB > From the postgresql.conf, I can see that the shared_buffers is > set to 48GB which is not small, it would be possible that the > large buffer cache could be "dirty", when a checkpoint starts, it > would cause a checkpoint I/O spike. > > > I would like to suggest you about using pgtune to get recommended > conf for postgresql. I have seen symptoms like those described which were the result of too many dirty pages accumulating inside PostgreSQL shared_buffers. It might be something else entirely in this case, but it would at least be worth trying a reduced shared_buffers setting combined with more aggressive bgwriter settings. I might try something like the following changes, as an experiment: shared_buffers = 8GB bgwriter_lru_maxpages = 1000 bgwriter_lru_multiplier = 4 -Kevin
I've been looking into something on our system that sounds similar to what you're seeing. I'm still researching it, but I'm suspecting the memory compaction that runs as part of transparent huge pages when memory is allocated... yet to be proven. The tunable you mentioned controls the compaction process that runs at allocation time so it can try to allocate large pages; there's a separate one that controls whether the compaction is done in khugepaged, and a separate one that controls whether THP is used at all (/sys/kernel/mm/transparent_hugepage/enabled, or perhaps different in your distro).
What's the output of this command?
egrep 'trans|thp|compact_' /proc/vmstat
compact_stall represents the number of processes that were stalled to do a compaction; the other metrics have to do with other parts of THP. If you see compact_stall climbing, from what I can tell those stalls might be causing your spikes. I haven't found a way of telling how long the processes have been stalled. You could probably get a little more insight into the processes with some tracing, assuming you can catch it quickly enough. Running perf top will also show the compaction happening, but that doesn't necessarily mean it's impacting your running processes.
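If it helps, a quick way to line those counters up against the latency graphs is just to sample them on an interval, e.g. (rough sketch):

    while true; do
        printf '%s ' "$(date '+%F %T')"
        egrep 'compact_' /proc/vmstat | tr '\n' ' '
        echo
        sleep 10
    done

That logs a timestamp plus all of the compact_* counters on one line every 10 seconds, so a jump in compact_stall can be matched against a spike window.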
Josh/Johnny,

We've been seeing a similar problem as well, and had also figured THP was involved. We found this in syslog: https://gist.github.com/davewhittaker/4723285, which led us to disable THP 2 days ago. At first the results seemed good. In particular, our issues always seemed interrupt related and the average interrupts/sec immediately dropped from 7k to around 3k after restarting.

The good news is that we didn't see any spike in system CPU time yesterday. The bad news is that we did see a spike in app latency that originated from the DB, but now the spike is in user CPU time and seems to be spread across all of the running postgres processes. Interrupts still blew up to 21k/sec when it happened. We are still diagnosing, but I'd be curious to see if either of you get similar results from turning THP off.
David,

Interesting observations. I had not been tracking the interrupts but perhaps I should take a look. How are you measuring them over a period of time, or are you just getting them real time?
Did you turn off THP altogether or just the THP defrag?
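In case it's useful, the interrupt rate over time can be watched with something as simple as vmstat or sar (sysstat), for example:

    vmstat 5        # the "in" column is interrupts/sec, sampled every 5s
    sar -I SUM 5    # total interrupt rate every 5 seconds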
We disabled THP altogether, with the thought that we might re-enable it without defrag if we got positive results. At this point I don't think THP is the root cause though, so I'm curious to see if anyone else gets positive results from disabling it. We definitely haven't seen any performance hit from turning it off.
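For anyone following along, turning THP off at runtime is just a matter of echoing into the sysfs knobs, roughly (on RHEL 6 kernels the path is the redhat_transparent_hugepage one mentioned earlier):

    echo never > /sys/kernel/mm/transparent_hugepage/enabled
    echo never > /sys/kernel/mm/transparent_hugepage/defrag

To make it stick across reboots it can go in rc.local, or transparent_hugepage=never can be added to the kernel command line.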
On Tue, Feb 5, 2013 at 2:02 PM, Johnny Tan <johnnydtan@gmail.com> wrote:
> checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0

I always set this to 0.9. I don't know why the default is 0.5.

> But periodically, there are spikes in our app's db response time. Normally,
> the app's db response time hovers in the 100ms range for most of the day.
> During the spike times, it can go up to 1000ms or 1500ms, and the number of
> pg connections goes to 140 (maxed out to pgbouncer's limit, where normally
> it's only about 20-40 connections).

What if you lower the pgbouncer limit to 40? It is hard to know if the latency spikes cause the connection build up, or if the connection build up cause the latency spikes, or if they reinforce each other in a vicious circle. But making the connections wait in pgbouncer's queue rather than in the server should do no harm, and very well might help.

> Also, during these times, which usually
> last less than 2 minutes, we will see several thousand queries in the pg log
> (this is with log_min_duration_statement = 500), compared to maybe one or
> two dozen 500ms+ queries in non-spike times.

Is the nature of the queries the same, just the duration that changes? Or are the queries of a different nature?

Cheers,

Jeff
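If you try lowering the limit, it should just be a change to the pool size in pgbouncer.ini, something along these lines (parameter names are the stock pgbouncer ones; values illustrative):

    [pgbouncer]
    pool_mode = transaction
    max_client_conn = 500       ; clients beyond the pool still connect and queue here
    default_pool_size = 40      ; actual server connections capped at 40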
http://www.olivierdoucet.info/blog/2012/05/19/debugging-a-mysql-stall/
http://structureddata.org/2012/06/18/linux-6-transparent-huge-pages-and-hadoop-workloads/
https://gist.github.com/fgbreel/4454559
The kernel docs in Documentation/vm/transhuge.txt have an explanation of the metrics.
Johnny Tan <johnnydtan@gmail.com> wrote:
> Wouldn't this be controlled by our checkpoint settings, though?

Spread checkpoints made the issue less severe, but on servers with a lot of RAM I've had to make the above changes (or even go lower with shared_buffers) to prevent a burst of writes from overwhelming the RAID controller's battery-backed cache. There may be other things which could cause these symptoms, so I'm not certain that this will help; but I have seen this as the cause and seen the suggested changes help.

-Kevin
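A cheap way to see whether a burst of dirty pages is what hits the controller at spike time is to sample /proc/meminfo (a sketch):

    while sleep 5; do
        printf '%s ' "$(date '+%F %T')"
        awk '/^(Dirty|Writeback):/ {printf "%s %s kB  ", $1, $2} END {print ""}' /proc/meminfo
    done

If Dirty/Writeback balloon right before the latency spikes, that points at the write-burst theory; if they stay flat, it is more likely something else (like the THP compaction being discussed).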
shared_buffers = 8GB
checkpoint_completion_target = 0.9
Strahinja Kustudić | System Engineer | Nordeus
I've benchmarked shared_buffers with high and low settings, in a server dedicated to postgres with 48GB; my settings are:

shared_buffers = 37GB
effective_cache_size = 38GB

Having a small number and depending on OS caching is unpredictable; if the server is dedicated to postgres you want to make sure postgres has the memory. A random unrelated process doing a cat /dev/sda1 should not destroy postgres buffers.

I agree your problem is most related to dirty background ratio, where buffers are READ only and have nothing to do with disk writes.
Just as an update from my angle on the THP side... I put together a systemtap script last night and so far it's confirming my theory (at least in our environment). I want to go through some more data and make some changes on our test box to see if we can make it go away before declaring success - it's always possible two problems are intertwined or that the THP thing is only showing up because of the *real* problem... you know how it goes.

Basically the systemtap script does this:
- probes the compaction function
- keeps track of the number of calls to it and aggregate time spent in it by process
- at the end spits out the collected info

So far when I run the script for a short period of time that I know THP compactions are happening, I have been able to match up the compaction duration collected via systemtap with a query in the pg logs that took that amount of time or slightly longer (as expected). A lot of these are only a second or so, so I haven't been able to catch everything, but at least the data I am getting is consistent.

Will be interested to see what you find Johnny.
Hi,

May I know what is your setting for OS cache?

- better to analyze large joins and sequential scans, and tune this parameter, e.g. reduce the size of effective_cache_size in postgresql.conf and change it for big queries.
On Thu, Feb 7, 2013 at 7:41 AM, Charles Gomes <charlesrg@outlook.com> wrote:
> I've benchmarked shared_buffers with high and low settings, in a server
> dedicated to postgres with 48GB my settings are:
> shared_buffers = 37GB
> effective_cache_size = 38GB
>
> Having a small number and depending on OS caching is unpredictable, if the
> server is dedicated to postgres you want make sure postgres has the memory.
> A random unrelated process doing a cat /dev/sda1 should not destroy postgres
> buffers.
> I agree your problem is most related to dirty background ration, where
> buffers are READ only and have nothing to do with disk writes.

You make an assertion here but do not tell us of your benchmarking methods. My testing in the past has show catastrophic performance with very large % of memory as postgresql buffers with heavy write loads, especially transactional ones. Many others on this list have had the same thing happen. Also you supposed PostgreSQL has a better / smarter caching algorithm than the OS kernel, and often times this is NOT the case.

In this particular instance the OP may not be seeing an issue from too large of a pg buffer, my point still stands, large pg_buffer can cause problems with heavy or even moderate write loads.
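For reference, the kind of heavy-write transactional test being described is typically something along the lines of a TPC-B-ish pgbench run; the scale and client counts here are only illustrative and the "bench" database name is made up:

    pgbench -i -s 1000 bench          # initialize roughly a 15GB dataset
    pgbench -c 32 -j 8 -T 600 bench   # 32 clients, 8 threads, 10 minutes of updates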
Sure thing, here's the SystemTap script:
#! /usr/bin/env stap
#
# Aggregate time spent in THP memory compaction, per process.
# pauses[] accumulates microseconds; counts[] counts the stalls.

global pauses, counts

probe begin {
    printf("%s\n", ctime(gettimeofday_s()))
}

# Fires when the compaction routine returns; @entry() grabs the timestamp
# taken at function entry so we can compute the elapsed time for this stall.
probe kernel.function("compaction_alloc@mm/compaction.c").return {
    elapsed_time = gettimeofday_us() - @entry(gettimeofday_us())
    key = sprintf("%d-%s", pid(), execname())
    pauses[key] = pauses[key] + elapsed_time
    counts[key]++
}

probe end {
    printf("%s\n", ctime(gettimeofday_s()))
    # Report total stall time (converted to ms) and stall count per pid-process.
    foreach (pid in pauses) {
        printf("pid %s : %d ms %d pauses\n", pid, pauses[pid]/1000, counts[pid])
    }
}
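To run it, save it as e.g. compaction.stp (the name is arbitrary) and let it probe for a fixed window; it needs root and the kernel debuginfo packages installed:

    stap -v -c 'sleep 600' compaction.stp    # probe for 10 minutes, then the end probe prints per-process totals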
On Sat, Feb 9, 2013 at 6:51 AM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> On Thu, Feb 7, 2013 at 7:41 AM, Charles Gomes <charlesrg@outlook.com> wrote:
>> I've benchmarked shared_buffers with high and low settings, in a server
>> dedicated to postgres with 48GB my settings are:
>> shared_buffers = 37GB
>> effective_cache_size = 38GB
>>
>> Having a small number and depending on OS caching is unpredictable, if the
>> server is dedicated to postgres you want make sure postgres has the memory.
>> A random unrelated process doing a cat /dev/sda1 should not destroy postgres
>> buffers.
>> I agree your problem is most related to dirty background ration, where
>> buffers are READ only and have nothing to do with disk writes.
>
> You make an assertion here but do not tell us of your benchmarking
> methods.

Well, he is not the only one committing that sin.

> My testing in the past has show catastrophic performance
> with very large % of memory as postgresql buffers with heavy write
> loads, especially transactional ones. Many others on this list have
> had the same thing happen.

People also have problems by setting it too low. For example, doing bulk loads into indexed tables becomes catastrophically bad when the size of the index exceeds shared_buffers by too much (where "too much" depends on kernel, IO subsystem, and settings of vm.dirty*), and increasing shared_buffers up to 80% of RAM fixes that (if 80% of RAM is large enough to hold the indexes being updated).

Of course when doing bulk loads into truncated tables, you should drop the indexes. But if bulk loading into live tables, that is often a cure worse than the disease.

> Also you supposed PostgreSQL has a better
> / smarter caching algorithm than the OS kernel, and often times this
> is NOT the case.

Even if it is not smarter as an algorithm, it might still be better to use it. For example, "heap_blks_read", "heap_blks_hit", and friends become completely useless if most block "reads" are not actually coming from disk. Also, vacuum_cost_page_miss is impossible to tune if some unknown but potentially large fraction of those misses are not really misses, and that fraction changes from table to table, and from wrap-around scan to vm scan on the same table.

> In this particular instance the OP may not be seeing an issue from too
> large of a pg buffer, my point still stands, large pg_buffer can cause
> problems with heavy or even moderate write loads.

Sure, but that can go the other way as well. What additional instrumentation is needed so that people can actually know which is the case for them?

Cheers,

Jeff
On Sat, Feb 9, 2013 at 1:16 PM, Jeff Janes <jeff.janes@gmail.com> wrote:
> On Sat, Feb 9, 2013 at 6:51 AM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
>> On Thu, Feb 7, 2013 at 7:41 AM, Charles Gomes <charlesrg@outlook.com> wrote:
>>> I've benchmarked shared_buffers with high and low settings, in a server
>>> dedicated to postgres with 48GB my settings are:
>>> shared_buffers = 37GB
>>> effective_cache_size = 38GB
>>>
>>> Having a small number and depending on OS caching is unpredictable, if the
>>> server is dedicated to postgres you want make sure postgres has the memory.
>>> A random unrelated process doing a cat /dev/sda1 should not destroy postgres
>>> buffers.
>>> I agree your problem is most related to dirty background ration, where
>>> buffers are READ only and have nothing to do with disk writes.
>>
>> You make an assertion here but do not tell us of your benchmarking
>> methods.
>
> Well, he is not the only one committing that sin.
I'm not asking for a complete low-level view, but it would be nice to
know if he's benchmarking heavy read or write loads, lots of users, a
few users, something. All we get is "I've benchmarked a lot" followed
by "don't let the OS do the caching." At least with my testing I was
using a large transactional system (heavy write) and there I KNOW from
testing that large shared_buffers do nothing but get in the way.

All the rest of the stuff you mention is why we have effective_cache_size,
which tells postgresql about how much of the data CAN be cached. In
short, postgresql is designed to use and / or rely on OS cache.
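To make the position Scott is arguing concrete, the shape of such a setup in
postgresql.conf looks roughly like the lines below. The numbers are only
illustrative assumptions for a 192GB box dedicated to postgres, not tested
recommendations:

    shared_buffers = 8GB           # a modest slice; leave the rest to the kernel
    effective_cache_size = 160GB   # planner hint: most of RAM ends up as OS page cache

Note that effective_cache_size only influences planning; it does not allocate
any memory itself.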
> Subject: Re: [PERFORM] postgresql.conf recommendations
> From: scott.marlowe@gmail.com
> To: jeff.janes@gmail.com
> CC: charlesrg@outlook.com; strahinjak@nordeus.com; kgrittn@ymail.com; johnnydtan@gmail.com; ac@hsk.hk; jkrupka@gmail.com; alex@paperlesspost.com; pgsql-performance@postgresql.org
>
> On Sat, Feb 9, 2013 at 1:16 PM, Jeff Janes <jeff.janes@gmail.com> wrote:
> > On Sat, Feb 9, 2013 at 6:51 AM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> >> On Thu, Feb 7, 2013 at 7:41 AM, Charles Gomes <charlesrg@outlook.com> wrote:
> >>> I've benchmarked shared_buffers with high and low settings, in a server
> >>> dedicated to postgres with 48GB my settings are:
> >>> shared_buffers = 37GB
> >>> effective_cache_size = 38GB
> >>>
> >>> Having a small number and depending on OS caching is unpredictable, if the
> >>> server is dedicated to postgres you want make sure postgres has the memory.
> >>> A random unrelated process doing a cat /dev/sda1 should not destroy postgres
> >>> buffers.
> >>> I agree your problem is most related to dirty background ration, where
> >>> buffers are READ only and have nothing to do with disk writes.
> >>
> >> You make an assertion here but do not tell us of your benchmarking
> >> methods.
> >
> > Well, he is not the only one committing that sin.
>
> I'm not asking for a complete low level view. but it would be nice to
> know if he's benchmarking heavy read or write loads, lots of users, a
> few users, something. All we get is "I've benchmarked a lot" followed
> by "don't let the OS do the caching." At least with my testing I was
> using a large transactional system (heavy write) and there I KNOW from
> testing that large shared_buffers do nothing but get in the way.
>
> all the rest of the stuff you mention is why we have effective cache
> size which tells postgresql about how much of the data CAN be cached.
> In short, postgresql is designed to use and / or rely on OS cache.
>
Hello Scott
I've tested using 8 bulk writers on an 8-core machine (16 threads).
I've loaded a database with 17 partitions, 900 million rows in total, and later executed single queries against it.
In my case the main point of having postgres manage the memory is that postgres is the only, and the most important, application running on the server.
If Linux managed the cache, it would not know what is important and what should be discarded; it would simply discard the oldest, least-accessed entry.
Let's say a DBA logs in to the server and copies a 20GB file. If you leave it to Linux to decide, it will decide that the 20GB file is more important than the old, not-so-heavily-accessed postgres entries.
This may have to be looked at case by case; in my case I need PostgreSQL to perform fast, and I also don't want cron jobs (locate, logrotate, prelink, makewhatis, for example) pushing my cache out.
If postgres were unable to manage 40GB of RAM we would be in major trouble, because nowadays it's normal to buy 64GB servers, and many of us have dealt with 512GB RAM servers.
By the way, I've tested this same scenario with Postgres, MySQL and Oracle, and Postgres has given the best results overall, especially with symmetric replication turned on.
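For what it's worth, when debating whether postgres or the kernel should own
the cache, it can help to look at what is actually sitting in shared_buffers.
A sketch using the contrib pg_buffercache extension (assuming a release new
enough for CREATE EXTENSION, i.e. 9.1+, and the default 8kB block size):

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    SELECT c.relname,
           count(*) AS buffers,
           pg_size_pretty(count(*) * 8192) AS buffered
    FROM pg_buffercache b
    JOIN pg_class c
      ON b.relfilenode = pg_relation_filenode(c.oid)
    WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                                WHERE datname = current_database()))
    GROUP BY c.relname
    ORDER BY buffers DESC
    LIMIT 20;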
Johnny,
Sure thing, here's the SystemTap script:
- I think you already started looking at this, but the linux dirty memory settings may have to be tuned as well (see Greg's post http://notemagnet.blogspot.com/2008/08/linux-write-cache-mystery.html). Ours haven't been changed from the defaults, but that's another thing to test for next week. Have you had any luck tuning these yet?
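A minimal sketch of something to leave running while waiting for a spike, to
see whether the dirty-page backlog described in Greg's post is involved (the
two-second interval and the fields shown are just a starting point):

    while true; do
        date
        grep -E '^(Dirty|Writeback):' /proc/meminfo
        sysctl vm.dirty_background_bytes vm.dirty_bytes
        sleep 2
    done

If the Dirty figure climbs toward the background threshold and then collapses
at the same moment %system spikes, that would point at the write-cache
flushing behavior discussed in that post.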
On Mon, Feb 11, 2013 at 7:57 AM, Charles Gomes <charlesrg@outlook.com> wrote:
> Hello Scott
>
> I've tested using 8 bulk writers on an 8-core machine (16 threads).
>
> I've loaded a database with 17 partitions, 900 million rows in total, and
> later executed single queries against it.
>
> In my case the main point of having postgres manage the memory is that
> postgres is the only, and the most important, application running on the
> server.
>
> If Linux managed the cache, it would not know what is important and what
> should be discarded; it would simply discard the oldest, least-accessed
> entry.

Point taken, however:

> Let's say a DBA logs in to the server and copies a 20GB file. If you leave
> it to Linux to decide, it will decide that the 20GB file is more important
> than the old, not-so-heavily-accessed postgres entries.

The Linux kernel (and most other Unix kernels) don't cache that way. They're
usually quite smart about caching. While some older things might get pushed
out, it doesn't generally make room for larger files that have been accessed
just once. But on a mixed-load server this may not be the case.

> If postgres were unable to manage 40GB of RAM we would be in major trouble,
> because nowadays it's normal to buy 64GB servers, and many of us have dealt
> with 512GB RAM servers.

It's not that postgres can't handle a large cache, it's that quite often the
kernel is simply better at it.

> By the way, I've tested this same scenario with Postgres, MySQL and Oracle,
> and Postgres has given the best results overall, especially with symmetric
> replication turned on.

Good to know. In the past PostgreSQL has had some performance issues with
large shared_buffers values, and this is still apparently the case when run
on Windows. With dedicated Linux servers running just postgres, letting the
kernel handle the cache has yielded very good results. Most of the negative
effects of large buffers I've seen have been with heavy-write / highly
transactional databases.
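If anyone wants to see where the write pressure actually lands on a box like
this, the cumulative counters in pg_stat_bgwriter give a rough picture; a
sketch (buffers_backend_fsync needs 9.1 or later):

    SELECT checkpoints_timed,
           checkpoints_req,
           buffers_checkpoint,     -- buffers written at checkpoint time
           buffers_clean,          -- written by the background writer
           buffers_backend,        -- written directly by backends
           buffers_backend_fsync,  -- backends forced to do their own fsyncs
           buffers_alloc
    FROM pg_stat_bgwriter;

A buffers_backend figure growing much faster than the other two write columns
is one sign that backends are having to do their own buffer cleaning, which
is the kind of write-path pressure being discussed here.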
We will probably tweak this knob some more -- i.e., what is the sweet spot between 1 and 100? Would it be higher than 50 but less than 100? Or is it somewhere lower than 50?
On Mon, Feb 11, 2013 at 4:29 PM, Will Platnick <wplatnick@gmail.com> wrote:
> We will probably tweak this knob some more -- i.e., what is the sweet spot
> between 1 and 100? Would it be higher than 50 but less than 100? Or is it
> somewhere lower than 50?
>
> I would love to know the answer to this as well. We have a similar
> situation, pgbouncer with transaction pooling with 140 connections.
> What is the right value to size pgbouncer connections to? Is there a
> formula that takes the # of cores into account?

If you can come up with a synthetic benchmark that's similar to what your
real load is (size, mix, etc.), then you can test it and see at what number
your throughput peaks while the server still behaves well.

On a server I built a few years back with 48 AMD cores, 24 spinners in a
RAID-10 for data, and 4 drives in a RAID-10 for pg_xlog (no RAID controller
in this one, as the chassis cooked them), my throughput peaked at ~60
connections.

What you'll wind up with is a graph where the throughput keeps climbing as
you add clients, and at some point it will usually drop off quickly once you
pass the peak. The sharper the drop, the more dangerous it is to run your
server in such an overloaded situation.

--
To understand recursion, one must first understand recursion.
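A rough sketch of the kind of sweep Scott describes, using pgbench against a
scratch database; the database name, scale factor, run length, thread count,
and client counts below are placeholder assumptions to adapt to the real
workload mix:

    createdb pgbench_test                 # throwaway database
    pgbench -i -s 300 pgbench_test        # scale 300 is roughly 4-5GB of data
    for c in 8 16 32 48 64 96 128 140; do
        echo "== $c clients =="
        pgbench -c "$c" -j 4 -T 120 pgbench_test | grep tps
    done

Plot tps against client count and look for the knee in the curve; the idea is
then to size the pgbouncer pool near or just below that peak.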