Thread: sloooow query
Hello all,

I am experiencing slow db performance. I have vacuumed, analyzed, reindexed
using the force option and performance remains the same - dog-slow :( If I
drop and recreate the database, performance is normal, so this suggests a
problem with the indexes? I also took a look at the postgresql.conf and all
appears fine. There are many instances of the same database running on
different servers and not all servers are experiencing the problem.

Thanks in advance for any suggestions.

Marie
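For reference, the maintenance steps described above correspond roughly to the commands below. This is a sketch only; the table name is borrowed from the query posted later in the thread, and REINDEX ... FORCE is the "force option" being referred to.

    VACUUM ANALYZE;                         -- reclaim dead tuples and refresh planner statistics
    REINDEX TABLE student_class_rlt FORCE;  -- rebuild a table's indexes (FORCE is accepted syntax in 7.2-era releases)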
Marie,

> I am experiencing slow db performance. I have vacuumed, analyzed, reindexed
> using the force option and performance remains the same - dog-slow :( If I
> drop and recreate the database, performance is normal, so this suggests a
> problem with the indexes? I also took a look at the postgresql.conf and all
> appears fine. There are many instances of the same database running on
> different servers and not all servers are experiencing the problem.

Please post the following:
1) A copy of the relevant portions of your database schema.
2) The query that is running slowly.
3) The results of running EXPLAIN on that query.
4) Your PostgreSQL version and operating system.
5) Any other relevant information about your databases, such as the quantity
   of inserts and deletes on the relevant tables.

--
-Josh Berkus
Aglio Database Solutions
San Francisco
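For item 3), EXPLAIN ANALYZE (rather than plain EXPLAIN) also executes the query and reports actual row counts and timings, which is the form that appears later in this thread. A minimal sketch; substitute the real slow query for the sample:

    SELECT version();                               -- item 4: server version and build platform
    EXPLAIN ANALYZE SELECT count(*) FROM pg_class;  -- item 3: replace with the slow query
    -- In psql the query can also be kept in a file and run with:  \i bad.qry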
Josh Berkus wrote:
> Please post the following:
> 1) A copy of the relevant portions of your database schema.
> 2) The query that is running slowly.
> 3) The results of running EXPLAIN on that query.
> 4) Your PostgreSQL version and operating system.
> 5) Any other relevant information about your databases, such as the quantity
>    of inserts and deletes on the relevant tables.

6) And the sort_mem, shared_buffers, vacuum_mem, wal_buffers, and wal_files
settings from your postgresql.conf file, if possible.

:-)

Regards and best wishes,

Justin Clift

--
"My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there."
- Indira Gandhi
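Those settings can be read back from a running 7.2 server without opening postgresql.conf; SHOW reports each value as a NOTICE, and SHOW ALL (which is what Marie posts further down) dumps everything:

    SHOW sort_mem;
    SHOW shared_buffers;
    SHOW vacuum_mem;
    SHOW wal_buffers;
    SHOW wal_files;
    SHOW ALL;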
Josh,

Thanks for the reply. I pg_dumped the first database having performance
problems and reloaded it into a new database on the same server. The query
ran normally when I reloaded it. There is no difference in hardware, schema
or anything else.

project=# select version();
                           version
-------------------------------------------------------------
 PostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC 2.96
(1 row)

[mtuite@area52 mtuite]$ uname -a
Linux area52.spacedock.com 2.4.7-10 #1 Thu Sep 6 17:27:27 EDT 2001 i686 unknown

Below is the explain for the reload.

bm221=# \i bad.qry
psql:bad.qry:78: NOTICE:  QUERY PLAN:

Sort  (cost=273.71..273.71 rows=1 width=237) (actual time=143.82..143.96 rows=181 loops=1)
  ->  Group  (cost=273.53..273.70 rows=1 width=237) (actual time=136.98..140.78 rows=181 loops=1)
    ->  Sort  (cost=273.53..273.53 rows=7 width=237) (actual time=136.95..137.11 rows=181 loops=1)
      ->  Merge Join  (cost=273.37..273.43 rows=7 width=237) (actual time=124.41..129.72 rows=181 loops=1)
        ->  Sort  (cost=162.24..162.24 rows=7 width=216) (actual time=51.83..52.00 rows=181 loops=1)
          ->  Subquery Scan student_set  (cost=161.09..162.14 rows=7 width=216) (actual time=48.12..50.49 rows=181 loops=1)
            ->  Unique  (cost=161.09..162.14 rows=7 width=216) (actual time=48.10..49.45 rows=181 loops=1)
              ->  Sort  (cost=161.09..161.09 rows=70 width=216) (actual time=48.09..48.26 rows=181 loops=1)
                ->  Hash Join  (cost=130.58..158.96 rows=70 width=216) (actual time=43.26..47.11 rows=181 loops=1)
                  ->  Seq Scan on classes c  (cost=0.00..20.00 rows=1000 width=72) (actual time=0.12..1.78 rows=332 loops=1)
                  ->  Hash  (cost=130.55..130.55 rows=14 width=144) (actual time=43.02..43.02 rows=0 loops=1)
                    ->  Hash Join  (cost=105.38..130.55 rows=14 width=144) (actual time=31.13..42.44 rows=181 loops=1)
                      ->  Seq Scan on user_common uc  (cost=0.00..20.00 rows=1000 width=80) (actual time=0.12..7.07 rows=1045 loops=1)
                      ->  Hash  (cost=105.37..105.37 rows=3 width=64) (actual time=30.91..30.91 rows=0 loops=1)
                        ->  Hash Join  (cost=77.46..105.37 rows=3 width=64) (actual time=4.79..30.46 rows=181 loops=1)
                          ->  Seq Scan on student_class_rlt scr  (cost=0.00..22.50 rows=995 width=24) (actual time=0.25..23.74 rows=527 loops=1)
                          ->  Hash  (cost=77.45..77.45 rows=5 width=40) (actual time=4.02..4.02 rows=0 loops=1)
                            ->  Hash Join  (cost=52.38..77.45 rows=5 width=40) (actual time=3.28..3.96 rows=27 loops=1)
                              ->  Seq Scan on bm_subscriptions_rlt bsr  (cost=0.00..20.00 rows=1000 width=8) (actual time=0.11..0.47 rows=114 loops=1)
                              ->  Hash  (cost=52.38..52.38 rows=1 width=32) (actual time=3.10..3.10 rows=0 loops=1)
                                ->  Hash Join  (cost=4.83..52.38 rows=1 width=32) (actual time=2.23..3.07 rows=11 loops=1)
                                  ->  Seq Scan on bm_publications bp  (cost=0.00..47.50 rows=11 width=12) (actual time=1.49..2.25 rows=11 loops=1)
                                  ->  Hash  (cost=4.82..4.82 rows=1 width=20) (actual time=0.63..0.63 rows=0 loops=1)
                                    ->  Index Scan using bm_publication_events_pkey on bm_publication_events bpe  (cost=0.00..4.82 rows=1 width=20) (actual time=0.60..0.61 rows=1 loops=1)
        ->  Sort  (cost=111.13..111.13 rows=18 width=21) (actual time=72.51..73.15 rows=770 loops=1)
          ->  Subquery Scan participation_set  (cost=22.51..110.75 rows=18 width=21) (actual time=1.32..57.28 rows=809 loops=1)
            ->  Hash Join  (cost=22.51..110.75 rows=18 width=21) (actual time=1.30..52.21 rows=809 loops=1)
              ->  Seq Scan on bm_user_results bur  (cost=0.00..70.01 rows=3601 width=17) (actual time=0.14..18.53 rows=3601 loops=1)
              ->  Hash  (cost=22.50..22.50 rows=5 width=4) (actual time=0.91..0.91 rows=0 loops=1)
                ->  Seq Scan on bm_publications bp  (cost=0.00..22.50 rows=5 width=4) (actual time=0.33..0.71 rows=98 loops=1)
Total runtime: 145.69 msec

EXPLAIN
bm221=#

Here is the explain from the original database:

project=# \i bad.qry
psql:bad.qry:78: NOTICE:  QUERY PLAN:

Sort  (cost=337.23..337.23 rows=1 width=237) (actual time=14903.87..14904.05 rows=181 loops=1)
  ->  Group  (cost=337.19..337.22 rows=1 width=237) (actual time=14895.90..14900.55 rows=181 loops=1)
    ->  Sort  (cost=337.19..337.19 rows=1 width=237) (actual time=14895.87..14896.09 rows=181 loops=1)
      ->  Nested Loop  (cost=214.62..337.18 rows=1 width=237) (actual time=149.50..14886.63 rows=181 loops=1)
        ->  Subquery Scan student_set  (cost=208.82..208.84 rows=1 width=115) (actual time=64.03..69.44 rows=181 loops=1)
          ->  Unique  (cost=208.82..208.84 rows=1 width=115) (actual time=64.02..67.25 rows=181 loops=1)
            ->  Sort  (cost=208.82..208.82 rows=1 width=115) (actual time=64.01..64.36 rows=181 loops=1)
              ->  Nested Loop  (cost=16.54..208.81 rows=1 width=115) (actual time=5.21..62.66 rows=181 loops=1)
                ->  Nested Loop  (cost=16.54..203.55 rows=1 width=88) (actual time=5.11..52.60 rows=181 loops=1)
                  ->  Hash Join  (cost=16.54..197.63 rows=1 width=64) (actual time=4.55..37.75 rows=181 loops=1)
                    ->  Seq Scan on student_class_rlt scr  (cost=0.00..178.16 rows=574 width=24) (actual time=0.02..29.59 rows=527 loops=1)
                    ->  Hash  (cost=16.54..16.54 rows=2 width=40) (actual time=3.84..3.84 rows=0 loops=1)
                      ->  Hash Join  (cost=13.80..16.54 rows=2 width=40) (actual time=2.91..3.77 rows=27 loops=1)
                        ->  Seq Scan on bm_subscriptions_rlt bsr  (cost=0.00..2.14 rows=114 width=8) (actual time=0.01..0.50 rows=114 loops=1)
                        ->  Hash  (cost=13.80..13.80 rows=2 width=32) (actual time=2.81..2.81 rows=0 loops=1)
                          ->  Hash Join  (cost=1.06..13.80 rows=2 width=32) (actual time=1.74..2.78 rows=11 loops=1)
                            ->  Seq Scan on bm_publications bp  (cost=0.00..12.65 rows=11 width=12) (actual time=1.56..2.51 rows=11 loops=1)
                            ->  Hash  (cost=1.06..1.06 rows=1 width=20) (actual time=0.06..0.06 rows=0 loops=1)
                              ->  Seq Scan on bm_publication_events bpe  (cost=0.00..1.06 rows=1 width=20) (actual time=0.04..0.05 rows=1 loops=1)
                  ->  Index Scan using user_common_pkey on user_common uc  (cost=0.00..5.90 rows=1 width=24) (actual time=0.05..0.06 rows=1 loops=181)
                ->  Index Scan using class_pkey on classes c  (cost=0.00..5.25 rows=1 width=27) (actual time=0.03..0.04 rows=1 loops=181)
        ->  Subquery Scan participation_set  (cost=5.79..109.63 rows=1248 width=21) (actual time=1.19..78.18 rows=816 loops=181)
          ->  Hash Join  (cost=5.79..109.63 rows=1248 width=21) (actual time=1.18..71.10 rows=816 loops=181)
            ->  Seq Scan on bm_user_results bur  (cost=0.00..70.16 rows=3616 width=17) (actual time=0.01..20.96 rows=3620 loops=181)
            ->  Hash  (cost=5.55..5.55 rows=98 width=4) (actual time=1.05..1.05 rows=0 loops=181)
              ->  Seq Scan on bm_publications bp  (cost=0.00..5.55 rows=98 width=4) (actual time=0.33..0.82 rows=98 loops=181)
Total runtime: 14905.87 msec

EXPLAIN
project=#

Here is the query:

explain analyze
select student_set.pub_id as pub_id,
       student_set.class_id as class,
       student_set.class_name as class_name,
       student_set.user_id as student,
       student_set.first_name,
       student_set.last_name,
       participation_set.started,
       participation_set.complete,
       day, month
from (
    select distinct scr.user_id, scr.class_id, uc.first_name, uc.last_name,
           bp.bm_publication_id as pub_id, c.class_name
    from student_class_rlt scr,
         user_common uc,
         bm_subscriptions_rlt bsr,
         bm_publications bp CROSS JOIN bm_publication_events bpe,
         classes c
    where bpe.bm_publication_event_id = 4
      and bpe.bm_publication_event_id = bp.bm_publication_event_id
      and bp.bm_series_id = bsr.bm_series_id
      and bsr.class_id = scr.class_id
      and scr.class_id = c.class_id
      and (scr.end_date is null or scr.end_date >= bpe.due_date)
      and scr.start_date <= bpe.publication_date
      and scr.status_id != 2
      and scr.user_id = uc.user_id
      and bp.bm_publication_id in ( 4,25,1,3,26,19, ,11,27,90,20,28 )
) student_set
left join (
    select user_id,
           initial_timestmp as started,
           to_char( initial_timestmp, 'MM/DD' ) as day,
           to_char( initial_timestmp, 'Month YYYY' ) as month,
           complete,
           bur.bm_publication_id as pub_id
    from bm_publications bp, bm_user_results bur
    where bp.bm_publication_event_id = 4
      and bp.bm_publication_id = bur.bm_publication_id
) participation_set
    on ( student_set.user_id = participation_set.user_id
         and student_set.pub_id = participation_set.pub_id )
group by student_set.pub_id, class, class_name, student, last_name,
         first_name, started, complete, day, month
order by student_set.pub_id, class, last_name, month, day;

Thanks.
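In the slow plan above, nearly all of the 14.9 seconds comes from the nested loop that re-runs the participation_set subquery once per student_set row (its Subquery Scan shows loops=181). A quick diagnostic experiment, not something suggested in the thread itself, is to disable nested loops for one session and see whether the planner falls back to a hash/merge plan like the one on the reloaded database; a sketch, assuming the query is still saved in bad.qry:

    SET enable_nestloop TO off;   -- session-local planner toggle (visible in the SHOW ALL output below)
    \i bad.qry                    -- re-run the saved EXPLAIN ANALYZE query and compare timings
    SET enable_nestloop TO on;    -- restore the default afterwards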
Here is a show all:

Thanks,

project-# ;
NOTICE:  enable_seqscan is on
NOTICE:  enable_indexscan is on
NOTICE:  enable_tidscan is on
NOTICE:  enable_sort is on
NOTICE:  enable_nestloop is on
NOTICE:  enable_mergejoin is on
NOTICE:  enable_hashjoin is on
NOTICE:  ksqo is off
NOTICE:  geqo is on
NOTICE:  tcpip_socket is on
NOTICE:  ssl is off
NOTICE:  fsync is on
NOTICE:  silent_mode is off
NOTICE:  log_connections is off
NOTICE:  log_timestamp is off
NOTICE:  log_pid is off
NOTICE:  debug_print_query is off
NOTICE:  debug_print_parse is off
NOTICE:  debug_print_rewritten is off
NOTICE:  debug_print_plan is off
NOTICE:  debug_pretty_print is off
NOTICE:  show_parser_stats is off
NOTICE:  show_planner_stats is off
NOTICE:  show_executor_stats is off
NOTICE:  show_query_stats is off
NOTICE:  stats_start_collector is on
NOTICE:  stats_reset_on_server_start is on
NOTICE:  stats_command_string is off
NOTICE:  stats_row_level is off
NOTICE:  stats_block_level is off
NOTICE:  trace_notify is off
NOTICE:  hostname_lookup is off
NOTICE:  show_source_port is off
NOTICE:  sql_inheritance is on
NOTICE:  australian_timezones is off
NOTICE:  fixbtree is on
NOTICE:  password_encryption is off
NOTICE:  transform_null_equals is off
NOTICE:  geqo_threshold is 11
NOTICE:  geqo_pool_size is 0
NOTICE:  geqo_effort is 1
NOTICE:  geqo_generations is 0
NOTICE:  geqo_random_seed is -1
NOTICE:  deadlock_timeout is 1000
NOTICE:  syslog is 0
NOTICE:  max_connections is 64
NOTICE:  shared_buffers is 128
NOTICE:  port is 5432
NOTICE:  unix_socket_permissions is 511
NOTICE:  sort_mem is 1024
NOTICE:  vacuum_mem is 8192
NOTICE:  max_files_per_process is 1000
NOTICE:  debug_level is 0
NOTICE:  max_expr_depth is 10000
NOTICE:  max_fsm_relations is 100
NOTICE:  max_fsm_pages is 10000
NOTICE:  max_locks_per_transaction is 64
NOTICE:  authentication_timeout is 60
NOTICE:  pre_auth_delay is 0
NOTICE:  checkpoint_segments is 3
NOTICE:  checkpoint_timeout is 300
NOTICE:  wal_buffers is 8
NOTICE:  wal_files is 0
NOTICE:  wal_debug is 0
NOTICE:  commit_delay is 0
NOTICE:  commit_siblings is 5
NOTICE:  effective_cache_size is 1000
NOTICE:  random_page_cost is 4
NOTICE:  cpu_tuple_cost is 0.01
NOTICE:  cpu_index_tuple_cost is 0.001
NOTICE:  cpu_operator_cost is 0.0025
NOTICE:  geqo_selection_bias is 2
NOTICE:  default_transaction_isolation is read committed
NOTICE:  dynamic_library_path is $libdir
NOTICE:  krb_server_keyfile is FILE:/etc/pgsql/krb5.keytab
NOTICE:  syslog_facility is LOCAL0
NOTICE:  syslog_ident is postgres
NOTICE:  unix_socket_group is unset
NOTICE:  unix_socket_directory is unset
NOTICE:  virtual_host is unset
NOTICE:  wal_sync_method is fdatasync
NOTICE:  DateStyle is ISO with US (NonEuropean) conventions
NOTICE:  Time zone is unset
NOTICE:  TRANSACTION ISOLATION LEVEL is READ COMMITTED
NOTICE:  Current client encoding is 'SQL_ASCII'
NOTICE:  Current server encoding is 'SQL_ASCII'
NOTICE:  Seed for random number generator is unavailable
SHOW VARIABLE
project=#
Marie,

> I pg_dumped the first database having performance problems and reloaded it
> into a new database on the same server. The query ran normally when I
> reloaded it. There is no difference in hardware, schema or anything else.

That's a pretty brutal query. From the comparison between the two queries, it
looks like you have a lot of discarded rows cluttering up the original
database, just like I did.

What happens if you run VACUUM FULL VERBOSE on the Bad database? Does it
report lots of rows taken up?

--
-Josh Berkus
______AGLIO DATABASE SOLUTIONS___________________________
Josh Berkus                       Complete information technology
josh@agliodbs.com                 and data management solutions
(415) 565-7293                    for law firms, small businesses
fax 621-2533                      and non-profit organizations.
San Francisco
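For reference, the suggestion amounts to something like the following; VERBOSE prints per-relation page and tuple counts as NOTICEs (the output Marie posts below), and the table name here is just one of the tables from the query:

    VACUUM FULL VERBOSE student_class_rlt;   -- one table at a time
    VACUUM FULL VERBOSE;                     -- or the whole database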
Hi Marie,

Ok, not sure about the SQL side of things (got scared just *looking* at that
query), but if this is at least a mostly-dedicated database server then you
might want to bump up some of those buffer values. They look like defaults
(except max_connections and shared_buffers).

Initial thought is making just sort_mem = 8192 or so as a minimum (it could
go a lot higher, but I'm not sure of your memory configuration), and see if
that makes a difference.

Not sure the wal_files = 0 bit is good either. Haven't seen that set to 0
before.

Might not assist with your present crisis, but am guessing PostgreSQL is
chewing a lot of CPU and being slow in general with the present settings.

:-)

Regards and best wishes,

Justin Clift

--
"My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there."
- Indira Gandhi
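Justin's sort_mem suggestion can be tried per session before touching postgresql.conf; in 7.2 the value is in kilobytes, so 8192 means 8 MB per sort. A sketch, with the figure purely illustrative:

    SET sort_mem = 8192;   -- session-local override of the 1024 (1 MB) default shown above
    \i bad.qry             -- re-run the saved query and compare timings
    -- To make it permanent, set sort_mem = 8192 in postgresql.conf and restart/reload the postmaster.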
> What happens if you run VACUUM FULL VERBOSE on the Bad database? Does it
> report lots of rows taken up?

I ran the vacuum for selected tables. It looks fine, I think, but I am not
always sure what I am reading in the output.

project=# vacuum full verbose classes;
NOTICE:  --Relation classes--
NOTICE:  Pages 5: Changed 0, reaped 2, Empty 0, New 0; Tup 332: Vac 0, Keep/VTL 0/0, UnUsed 33, MinLen 93, MaxLen 117; Re-using: Free/Avail. Space 3020/2832; EndEmpty/Avail. Pages 0/1. CPU 0.00s/0.00u sec elapsed 0.00 sec.
NOTICE:  Index class_pkey: Pages 5; Tuples 332: Deleted 0. CPU 0.00s/0.00u sec elapsed 0.00 sec.
NOTICE:  Rel classes: Pages: 5 --> 5; Tuple(s) moved: 0. CPU 0.00s/0.00u sec elapsed 0.00 sec.
NOTICE:  --Relation pg_toast_595650--
NOTICE:  Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. CPU 0.00s/0.00u sec elapsed 0.00 sec.
NOTICE:  Index pg_toast_595650_idx: Pages 1; Tuples 0. CPU 0.00s/0.00u sec elapsed 0.00 sec.
VACUUM
project=# vacuum full verbose bm_publications;
NOTICE:  --Relation bm_publications--
NOTICE:  Pages 2: Changed 0, reaped 1, Empty 0, New 0; Tup 284: Vac 0, Keep/VTL 0/0, UnUsed 6, MinLen 52, MaxLen 52; Re-using: Free/Avail. Space 416/416; EndEmpty/Avail. Pages 0/2. CPU 0.00s/0.00u sec elapsed 0.00 sec.
NOTICE:  Index bm_publications_pkey: Pages 4; Tuples 284: Deleted 0. CPU 0.00s/0.00u sec elapsed 0.00 sec.
NOTICE:  Rel bm_publications: Pages: 2 --> 2; Tuple(s) moved: 0. CPU 0.00s/0.01u sec elapsed 0.00 sec.
VACUUM
project=# vacuum full verbose user_common;
NOTICE:  --Relation user_common--
NOTICE:  Pages 21: Changed 0, reaped 19, Empty 0, New 0; Tup 1045: Vac 0, Keep/VTL 0/0, UnUsed 103, MinLen 117, MaxLen 221; Re-using: Free/Avail. Space 4080/2968; EndEmpty/Avail. Pages 0/2. CPU 0.00s/0.00u sec elapsed 0.00 sec.
NOTICE:  Index user_common_pkey: Pages 20; Tuples 1045: Deleted 0. CPU 0.01s/0.00u sec elapsed 0.00 sec.
NOTICE:  Rel user_common: Pages: 21 --> 21; Tuple(s) moved: 0. CPU 0.00s/0.00u sec elapsed 0.00 sec.
NOTICE:  --Relation pg_toast_474892--
NOTICE:  Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. CPU 0.00s/0.00u sec elapsed 0.00 sec.
NOTICE:  Index pg_toast_474892_idx: Pages 1; Tuples 0. CPU 0.00s/0.00u sec elapsed 0.00 sec.
VACUUM
project=# vacuum full verbose bm_subscriptions_rlt;
NOTICE:  --Relation bm_subscriptions_rlt--
NOTICE:  Pages 1: Changed 0, reaped 1, Empty 0, New 0; Tup 114: Vac 0, Keep/VTL 0/0, UnUsed 1, MinLen 57, MaxLen 57; Re-using: Free/Avail. Space 872/872; EndEmpty/Avail. Pages 0/1. CPU 0.00s/0.00u sec elapsed 0.00 sec.
NOTICE:  Index bm_subscriptions_rlt_pkey: Pages 2; Tuples 114: Deleted 0. CPU 0.00s/0.00u sec elapsed 0.00 sec.
NOTICE:  Rel bm_subscriptions_rlt: Pages: 1 --> 1; Tuple(s) moved: 0. CPU 0.00s/0.00u sec elapsed 0.00 sec.
VACUUM
project=#
Marie,

> I ran the vacuum for selected tables. It looks fine, I think, but I am not
> always sure what I am reading in the output.

So much for the easy answer.

The reason I wanted to see a VACUUM FULL is that the query on the "bad"
database is taking a long time to return even the first row of many of its
sub-parts. This is usually the result of not running VACUUM FULL after a lot
of deletions. However, your problem apparently is something else.

Is it possible that there is some kind of disk access problem for the bad
database copy? Is there a difference in where its files are physically
located?

--
-Josh Berkus
Aglio Database Solutions
San Francisco
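On the file-location question: in 7.2 a database's files normally live under $PGDATA/base/<oid>/ unless it was created WITH LOCATION somewhere else. A hypothetical way to check from SQL:

    SELECT oid, datname, datpath FROM pg_database;
    -- datpath is empty for databases in the default location ($PGDATA/base/<oid>/);
    -- a non-empty datpath means the database was created WITH LOCATION elsewhere.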
On Mon, 2002-10-07 at 16:15, Justin Clift wrote:
> Hi Marie,
>
> Not sure the wal_files = 0 bit is good either. Haven't seen that set to 0
> before.

This is the default value, and I don't recall anything in the docs that
would suggest changing it. Also IIRC the back end will auto-adjust the
number of wal_files as needed in newer versions. Unless you're seeing
messages like "DEBUG: XLogWrite: new log file created - consider increasing
WAL_FILES", I think you can leave this alone. Can you point me to something
that says different?

Robert Treat
On Mon, 2002-10-07 at 16:34, Marie G. Tuite wrote:
> > What happens if you run VACUUM FULL VERBOSE on the Bad database? Does it
> > report lots of rows taken up?
>
> I ran the vacuum for selected tables. It looks fine, I think, but I am not
> always sure what I am reading in the output.

Is this vacuum being done on a system that is currently running slow, or was
this system recently dropped/reloaded?

Robert Treat
"Marie G. Tuite" <marie.tuite@edisonaffiliates.com> writes: > I pg_dumped the first database having performance problems and reloaded it > into a new database on the same server. The query ran normally when I > reloaded it. There is no difference in hardware, schema or anything else. Have you done an ANALYZE or VACUUM ANALYZE in either database? The statistics the planner is working from seem to be quite different in the two plans. regards, tom lane
Robert Treat wrote:
> This is the default value, and I don't recall anything in the docs that
> would suggest changing it. Also IIRC the back end will auto-adjust the
> number of wal_files as needed in newer versions. Unless you're seeing
> messages like "DEBUG: XLogWrite: new log file created - consider increasing
> WAL_FILES", I think you can leave this alone. Can you point me to something
> that says different?

Ahh... that makes sense.

Have been doing almost nothing else recently except for setting up new
PostgreSQL databases, loading in data, then doing load testing for things.
Have totally become so used to having wal_files be other than 0 that it
didn't even register that this is the default. ;->

Sorry about that, and thanks for the heads up.

:-)

Regards and best wishes,

Justin Clift

--
"My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there."
- Indira Gandhi
I have analyzed, vacuumed and reindexed.

> -----Original Message-----
> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
> Sent: Monday, October 07, 2002 3:58 PM
>
> Have you done an ANALYZE or VACUUM ANALYZE in either database? The
> statistics the planner is working from seem to be quite different in the
> two plans.
> However, your problem apparently is something else. Is it possible that
> there is some kind of disk access problem for the bad database copy? Is
> there a difference in where its files are physically located?

Both are in default storage - /var/lib/pgsql/data.
> Is this vacuum being done on a system that is currently running slow, or
> was this system recently dropped/reloaded?

Currently slow.
> Have you done an ANALYZE or VACUUM ANALYZE in either database? The
> statistics the planner is working from seem to be quite different in the
> two plans.

I have vacuumed, analysed and reindexed.
On Mon, Oct 07, 2002 at 02:22:09PM -0500, Marie G. Tuite wrote:
> Hello all,
>
> I am experiencing slow db performance. I have vacuumed, analyzed, reindexed
> using the force option and performance remains the same - dog-slow :( If I
> drop and recreate the database, performance is normal, so this suggests a
> problem with the indexes? I also took a look at the postgresql.conf and all
> appears fine. There are many instances of the same database running on
> different servers and not all servers are experiencing the problem.

We need more details if you wish to receive useful answers: the query, the
EXPLAIN output, the schema, etc.

--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
> There are 10 kinds of people in the world, those that can do binary
> arithmetic and those that can't.