Re: Dumping a database that is not accepting commands? - Mailing list pgsql-admin
From: Natalie Wenz
Subject: Re: Dumping a database that is not accepting commands?
Date:
Msg-id: E80F8320-A041-48E5-BD94-5693D75606DB@ebureau.com
In response to: Re: Dumping a database that is not accepting commands? (Natalie Wenz <nataliewenz@ebureau.com>)
List: pgsql-admin
It occurs to me that in asking for feedback on the tuning, I am really asking about two separate things:
Was there anything in the tuning below that contributed to the database getting into trouble? And is there anything I should change in that tuning to make the single-user vacuum as fast as possible, for the best recovery time?
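For clarity, by "single-user vacuum" I mean running a standalone backend against the data directory and issuing a database-wide VACUUM there, roughly like this (the data directory path and database name here are just placeholders, not our actual setup):

    $ postgres --single -D /path/to/data mydatabase

    backend> VACUUM;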
name | current_setting
---------------------------------+---------------------------------------------------------------------------------------------------------------
version | PostgreSQL 9.1.9 on x86_64-unknown-freebsd9.1, compiled by gcc (GCC) 4.2.1 20070831 patched [FreeBSD], 64-bit
autovacuum | on
autovacuum_analyze_scale_factor | 0.1
autovacuum_freeze_max_age | 800000000
autovacuum_max_workers | 3
autovacuum_vacuum_cost_delay | 0
autovacuum_vacuum_scale_factor | 0.1
checkpoint_segments | 128
effective_cache_size | 12GB
listen_addresses | *
log_autovacuum_min_duration | 10s
log_destination | stderr
log_filename | logfile-%A.log
log_line_prefix | %t:%u:%r:[%p]:
log_rotation_age | 1d
log_rotation_size | 1GB
log_truncate_on_rotation | on
logging_collector | on
maintenance_work_mem | 10GB
max_connections | 500
max_stack_depth | 2MB
random_page_cost | 1
seq_page_cost | 1
shared_buffers | 128MB
synchronous_commit | off
temp_buffers | 128MB
TimeZone | US/Central
vacuum_cost_limit | 500
wal_buffers | 32MB
work_mem | 256MB

This is the tuning of the original database (everything that was changed from the default settings). The machine it was running on had 48GB of memory. The database was 36TB, with two tables taking up the bulk of that (about 14TB each), and about 10 other tables and a few large indexes making up the rest. Our typical usage pattern is mostly inserts, with some hourly summaries (which take maybe 5 minutes), some daily summaries (which take about 20-40 minutes), and a couple of end-of-month queries that take several hours. We have the same setup and tuning in production, which is about the same size, with an additional end-of-month query that runs off one of the 14TB tables, which can take 4-7 days.
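For anyone else trying to stay ahead of this: how close each database is to autovacuum_freeze_max_age can be watched with a query along these lines (standard catalogs, nothing specific to our setup):

    SELECT datname, age(datfrozenxid) AS xid_age
    FROM pg_database
    ORDER BY xid_age DESC;

Once age(datfrozenxid) passes autovacuum_freeze_max_age, autovacuum starts forced anti-wraparound vacuums, and at roughly two billion the database stops accepting commands, which is where we got stuck.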
As far as ideal tuning for the new database, running on 9.3, which will eventually hold all the data from the sad, recovering original database and see the usage patterns described above: how is this for a starting point? I tried to follow the basic guidelines in the High Performance book, but sometimes I feel like I'm largely guessing.
name | current_setting
---------------------------------+---------------------------------------------------------------------------------------------------------------
version | PostgreSQL 9.3.0 on x86_64-unknown-freebsd9.1, compiled by gcc (GCC) 4.2.1 20070831 patched [FreeBSD], 64-bit
autovacuum | on
autovacuum_analyze_scale_factor | 0.1
autovacuum_freeze_max_age | 800000000
autovacuum_max_workers | 3
autovacuum_vacuum_cost_delay | 0
autovacuum_vacuum_scale_factor | 0.1
checkpoint_segments | 128
effective_cache_size | 12GB
lc_collate | C
lc_ctype | C
listen_addresses | *
log_autovacuum_min_duration | 1min
log_destination | stderr
log_filename | logfile-%A.log
log_line_prefix | %t:%u:%r:[%p]:
log_min_duration_statement | 1min
log_rotation_age | 1d
log_rotation_size | 1GB
log_truncate_on_rotation | on
logging_collector | on
maintenance_work_mem | 4GB
max_connections | 500
max_stack_depth | 2MB
random_page_cost | 1
seq_page_cost | 1
shared_buffers | 12GB
synchronous_commit | off
temp_buffers | 128MB
TimeZone | US/Central
vacuum_cost_limit | 500
wal_buffers | 16MB
work_mem | 256MB
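For reference, a list of non-default settings like the ones above can be pulled with a query along these lines (this is roughly how these tables were produced, though the exact filter is a guess):

    SELECT name, current_setting(name)
    FROM pg_settings
    WHERE source <> 'default';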