Thread: Performance problem from migrating between versions!
Hi,

I have the following problem. A week ago we migrated from PostgreSQL 7.2.3 to 7.4.6. There were a lot of things to change in the apps, but we made them. However, one query doesn't want to run. On the old 7.2.3 it completes in about 10 minutes; on the new one it gives:

    DBD::Pg::st execute failed: ERROR: out of memory

The server machine itself was not upgraded or reconfigured, so I suppose the problem is somewhere in the Postgres configuration. Below I post the query and its EXPLAIN output. I can't post EXPLAIN ANALYZE, because :)) ... the query can't execute :) I also post the result of SHOW ALL to give a view of the server configuration. Thanks in advance for all suggestions.

Kaloyan Iliev

SHOW ALL:

 add_missing_from = on
 australian_timezones = off
 authentication_timeout = 60
 check_function_bodies = on
 checkpoint_segments = 16
 checkpoint_timeout = 300
 checkpoint_warning = 30
 client_encoding = SQL_ASCII
 client_min_messages = notice
 commit_delay = 0
 commit_siblings = 5
 cpu_index_tuple_cost = 0.001
 cpu_operator_cost = 0.0025
 cpu_tuple_cost = 0.01
 DateStyle = ISO, DMY
 db_user_namespace = off
 deadlock_timeout = 1000
 debug_pretty_print = off
 debug_print_parse = off
 debug_print_plan = off
 debug_print_rewritten = off
 default_statistics_target = 10
 default_transaction_isolation = read committed
 default_transaction_read_only = off
 dynamic_library_path = $libdir
 effective_cache_size = 13000
 enable_hashagg = on
 enable_hashjoin = on
 enable_indexscan = on
 enable_mergejoin = on
 enable_nestloop = on
 enable_seqscan = on
 enable_sort = on
 enable_tidscan = on
 explain_pretty_print = on
 extra_float_digits = 0
 from_collapse_limit = 8
 fsync = on
 geqo = on
 geqo_effort = 1
 geqo_generations = 0
 geqo_pool_size = 0
 geqo_selection_bias = 2
 geqo_threshold = 11
 join_collapse_limit = 8
 krb_server_keyfile = unset
 lc_collate = C
 lc_ctype = CP1251
 lc_messages = C
 lc_monetary = C
 lc_numeric = C
 lc_time = C
 log_connections = off
 log_duration = off
 log_error_verbosity = default
 log_executor_stats = off
 log_hostname = off
 log_min_duration_statement = -1
 log_min_error_statement = panic
 log_min_messages = notice
 log_parser_stats = off
 log_pid = off
 log_planner_stats = off
 log_source_port = off
 log_statement = off
 log_statement_stats = off
 log_timestamp = on
 max_connections = 256
 max_expr_depth = 10000
 max_files_per_process = 1000
 max_fsm_pages = 20000
 max_fsm_relations = 1000
 max_locks_per_transaction = 64
 password_encryption = on
 port = 5432
 pre_auth_delay = 0
 preload_libraries = unset
 random_page_cost = 4
 regex_flavor = advanced
 rendezvous_name = unset
 search_path = $user,public
 server_encoding = SQL_ASCII
 server_version = 7.4.6
 shared_buffers = 1000
 silent_mode = off
 sort_mem = 1024
 sql_inheritance = off
 ssl = off
 statement_timeout = 0
 stats_block_level = on
 stats_command_string = on
 stats_reset_on_server_start = off
 stats_row_level = on
 stats_start_collector = on
 superuser_reserved_connections = 2
 syslog = 0
 syslog_facility = LOCAL0
 syslog_ident = postgres
 tcpip_socket = on
 TimeZone = unknown
 trace_notify = off
 transaction_isolation = read committed
 transaction_read_only = off
 transform_null_equals = off
 unix_socket_directory = unset
 unix_socket_group = unset
 unix_socket_permissions = 511
 vacuum_mem = 8192
 virtual_host = unset
 wal_buffers = 8
 wal_debug = 0
 wal_sync_method = fsync
 zero_damaged_pages = off
(113 rows)

And now the query:

explain
select UNPAID.ino, I.idate,
       round(UNPAID.saldo - ( select round(coalesce(sum(total),0),5)
                              from invoices I1
                              where I1.iino = I.ino
                                AND I1.istatus = 0
                                AND I1.itype = 2 ), 2) AS saldo,
       C.name AS client_name,
       SC.branch AS client_branch,
       I.total,
       I.nomenclature_no AS nom,
       I.subnom_no AS subnom,
       OF.description AS office,
       coalesce((select 1.2 * sum(AD.bgl_amount)::float / AC.amount
                 from acc_clients AC, config C, acc_debts AD, debts_desc D
                 where C.office = OF.officeid
                   AND not AC.credit
                   AND AC.ino = I.ino
                   AND AC.transact_no = AD.transact_no
                   AND AD.credit
                   AND AD.debtid = D.debtid
                   AND C.confid = D.refid
                   AND C.oid = (select max(oid) from config where confid = D.refid)
                 group by AC.amount), 0) AS perc,
       1
from invoices I,
     offices OF,
     ( select nomenclature_no, subnom_no, ino, sum(saldo) as saldo
       from ( select nomenclature_no, subnom_no, ino,
                     round(sum(saldo_sign(not credit)*amount),5) AS saldo
              from acc_clients
              group by ino, nomenclature_no, subnom_no
              UNION ALL
              select c.nomenclature_no, c.subnom_no, c.ino,
                     round(COALESCE(sum(p.bgl_amount), 0),5) AS saldo
              from acc_clients c, acc_payments p
              where c.transact_no = p.transact_no
                AND p.fisc_status = 4
              group by c.ino, c.nomenclature_no, c.subnom_no
            ) TTUNPAID
       group by ino, nomenclature_no, subnom_no
     ) UNPAID,
     clients C,
     subnom SC
where I.idate >= '01-01-2004'
  AND I.idate <= '01-01-2005'
  AND UNPAID.ino = I.ino
  AND I.istatus = 0
  AND I.itype <> 2
  AND I.nomenclature_no = C.nomenclature_no
  AND I.nomenclature_no = SC.nomenclature_no
  AND I.subnom_no = SC.subnom_no
union all
select UNPAID.ino, I.idate,
       round(UNPAID.saldo - ( select round(coalesce(sum(total),0),5)
                              from invoices I1
                              where I1.iino = I.ino
                                AND I1.istatus = 0
                                AND I1.itype = 2 ), 2) AS saldo,
       C.name AS client_name,
       SC.branch AS client_branch,
       I.total,
       I.nomenclature_no AS nom,
       I.subnom_no AS subnom,
       'Неизвестен' AS office,  -- "Unknown" (Bulgarian, CP1251)
       coalesce((select 1.2 * sum(AD.bgl_amount)::float / AC.amount
                 from acc_clients AC, acc_debts AD, debts_desc D
                 where not AC.credit
                   AND AC.ino = I.ino
                   AND AC.transact_no = AD.transact_no
                   AND AD.credit
                   AND AD.debtid = D.debtid
                   AND D.refid is null
                 group by AC.amount), 0) AS perc,
       1
from invoices I,
     ( select nomenclature_no, subnom_no, ino,
              round(sum(saldo_sign(not credit)*amount),5) AS saldo
       from acc_clients
       group by ino, nomenclature_no, subnom_no
     ) UNPAID,
     clients C,
     subnom SC
where I.idate >= '01-01-2004'
  AND I.idate <= '01-01-2005'
  AND UNPAID.ino = I.ino
  AND I.istatus = 0
  AND I.itype <> 2
  AND I.nomenclature_no = C.nomenclature_no
  AND I.nomenclature_no = SC.nomenclature_no
  AND exists (select 1
              from acc_clients AC, acc_debts AD, debts_desc DD
              where AC.ino = I.ino
                AND AD.transact_no = AC.transact_no
                AND AD.debtid = DD.debtid
                AND DD.refid is null)
  AND I.subnom_no = SC.subnom_no
order by office, ino DESC

QUERY PLAN:

Sort (cost=453579405.72..453585516.16 rows=2444177 width=108)
  Sort Key: office, ino
  -> Append (cost=93725.37..452807307.33 rows=2444177 width=108)
    -> Subquery Scan "*SELECT* 1" (cost=93725.37..447433349.67 rows=2418773 width=108)
      -> Nested Loop (cost=93725.37..447409161.94 rows=2418773 width=108)
        -> Merge Join (cost=93723.86..101789.54 rows=50867 width=94)
          Merge Cond: ("outer".ino = "inner".ino)
          -> Subquery Scan unpaid (cost=82961.98..89647.68 rows=267428 width=36)
            -> GroupAggregate (cost=82961.98..86973.40 rows=267428 width=44)
              -> Sort (cost=82961.98..83630.55 rows=267428 width=44)
                Sort Key: ino, nomenclature_no, subnom_no
                -> Subquery Scan ttunpaid (cost=35143.93..49845.48 rows=267428 width=44)
                  -> Append (cost=35143.93..47171.20 rows=267428 width=21)
                    -> Subquery Scan "*SELECT* 1" (cost=35143.93..44492.88 rows=267113 width=21)
                      -> GroupAggregate (cost=35143.93..41821.75 rows=267113 width=21)
                        -> Sort (cost=35143.93..35811.71 rows=267113 width=21)
                          Sort Key: ino, nomenclature_no, subnom_no
                          -> Seq Scan on acc_clients (cost=0.00..4758.13 rows=267113 width=21)
                    -> Subquery Scan "*SELECT* 2" (cost=2672.80..2678.32 rows=315 width=20)
                      -> HashAggregate (cost=2672.80..2675.17 rows=315 width=20)
                        -> Nested Loop (cost=0.00..2669.65 rows=315 width=20)
                          -> Index Scan using acc_payments_fisc_status_idx on acc_payments p (cost=0.00..892.52 rows=315 width=12)
                            Index Cond: (fisc_status = 4)
                          -> Index Scan using acc_clients_transact_no_uidx on acc_clients c (cost=0.00..5.63 rows=1 width=16)
                            Index Cond: (c.transact_no = "outer".transact_no)
          -> Sort (cost=10761.89..10817.21 rows=22128 width=58)
            Sort Key: i.ino
            -> Hash Join (cost=1774.86..8710.88 rows=22128 width=58)
              Hash Cond: (("outer".nomenclature_no = "inner".nomenclature_no) AND ("outer".subnom_no = "inner".subnom_no))
              -> Seq Scan on invoices i (cost=0.00..5556.52 rows=22292 width=24)
                Filter: ((idate >= '2004-01-01'::date) AND (idate <= '2005-01-01'::date) AND (istatus = 0) AND (itype <> 2))
              -> Hash (cost=1592.90..1592.90 rows=13193 width=46)
                -> Hash Join (cost=577.25..1592.90 rows=13193 width=46)
                  Hash Cond: ("outer".nomenclature_no = "inner".nomenclature_no)
                  -> Seq Scan on subnom sc (cost=0.00..393.93 rows=13193 width=19)
                  -> Hash (cost=463.20..463.20 rows=12820 width=27)
                    -> Seq Scan on clients c (cost=0.00..463.20 rows=12820 width=27)
        -> Materialize (cost=1.51..2.02 rows=51 width=14)
          -> Seq Scan on offices "of" (cost=0.00..1.51 rows=51 width=14)
        SubPlan
          -> HashAggregate (cost=179.30..179.31 rows=1 width=19)
            -> Nested Loop (cost=0.00..179.30 rows=1 width=19)
              Join Filter: ("inner".oid = (subplan))
              -> Nested Loop (cost=0.00..77.57 rows=2 width=23)
                -> Nested Loop (cost=0.00..66.58 rows=2 width=23)
                  -> Index Scan using acc_clients_ino on acc_clients ac (cost=0.00..25.47 rows=4 width=12)
                    Index Cond: (ino = $0)
                    Filter: (NOT credit)
                  -> Index Scan using acc_debts_transact_no_idx on acc_debts ad (cost=0.00..9.71 rows=45 width=19)
                    Index Cond: ("outer".transact_no = ad.transact_no)
                    Filter: credit
                -> Index Scan using debts_desc_pkey on debts_desc d (cost=0.00..5.48 rows=1 width=8)
                  Index Cond: ("outer".debtid = d.debtid)
              -> Index Scan using config_confid_idx on config c (cost=0.00..25.42 rows=1 width=8)
                Index Cond: (c.confid = "outer".refid)
                Filter: (office = $2)
              SubPlan
                -> Aggregate (cost=25.43..25.43 rows=1 width=4)
                  -> Index Scan using config_confid_idx on config (cost=0.00..25.40 rows=9 width=4)
                    Index Cond: (confid = $1)
          -> Aggregate (cost=5.59..5.59 rows=1 width=8)
            -> Index Scan using invoices_iino_idx on invoices i1 (cost=0.00..5.58 rows=1 width=8)
              Index Cond: (iino = $0)
              Filter: ((istatus = 0) AND (itype = 2))
    -> Subquery Scan "*SELECT* 2" (cost=3250111.65..5373957.66 rows=25404 width=94)
      -> Merge Join (cost=3250111.65..5373703.62 rows=25404 width=94)
        Merge Cond: ("outer".ino = "inner".ino)
        -> Subquery Scan unpaid (cost=35143.93..44492.88 rows=267113 width=36)
          -> GroupAggregate (cost=35143.93..41821.75 rows=267113 width=21)
            -> Sort (cost=35143.93..35811.71 rows=267113 width=21)
              Sort Key: ino, nomenclature_no, subnom_no
              -> Seq Scan on acc_clients (cost=0.00..4758.13 rows=267113 width=21)
        -> Sort (cost=3214967.73..3214995.39 rows=11064 width=58)
          Sort Key: i.ino
          -> Hash Join (cost=3212283.98..3214224.58 rows=11064 width=58)
            Hash Cond: ("outer".nomenclature_no = "inner".nomenclature_no)
            -> Merge Join (cost=3211706.73..3213082.65 rows=11867 width=39)
              Merge Cond: ("outer".nomenclature_no = "inner".nomenclature_no)
              Join Filter: ("inner".subnom_no = "outer".subnom_no)
              -> Index Scan using subnom_nom_idx on subnom sc (cost=0.00..1135.01 rows=13193 width=19)
              -> Sort (cost=3211706.73..3211734.59 rows=11146 width=24)
                Sort Key: i.nomenclature_no
                -> Index Scan using invoices_idate_idx on invoices i (cost=0.00..3210957.48 rows=11146 width=24)
                  Index Cond: ((idate >= '2004-01-01'::date) AND (idate <= '2005-01-01'::date))
                  Filter: ((istatus = 0) AND (itype <> 2) AND (subplan))
                  SubPlan
                    -> Nested Loop (cost=0.00..140.00 rows=1 width=0)
                      -> Nested Loop (cost=0.00..101.54 rows=7 width=4)
                        -> Index Scan using acc_clients_ino on acc_clients ac (cost=0.00..25.47 rows=7 width=4)
                          Index Cond: (ino = $0)
                        -> Index Scan using acc_debts_transact_no_idx on acc_debts ad (cost=0.00..9.71 rows=93 width=8)
                          Index Cond: (ad.transact_no = "outer".transact_no)
                      -> Index Scan using debts_desc_pkey on debts_desc dd (cost=0.00..5.48 rows=1 width=4)
                        Index Cond: ("outer".debtid = dd.debtid)
                        Filter: (refid IS NULL)
            -> Hash (cost=463.20..463.20 rows=12820 width=27)
              -> Seq Scan on clients c (cost=0.00..463.20 rows=12820 width=27)
        SubPlan
          -> HashAggregate (cost=77.58..77.59 rows=1 width=19)
            -> Nested Loop (cost=0.00..77.57 rows=1 width=19)
              -> Nested Loop (cost=0.00..66.58 rows=2 width=23)
                -> Index Scan using acc_clients_ino on acc_clients ac (cost=0.00..25.47 rows=4 width=12)
                  Index Cond: (ino = $0)
                  Filter: (NOT credit)
                -> Index Scan using acc_debts_transact_no_idx on acc_debts ad (cost=0.00..9.71 rows=45 width=19)
                  Index Cond: ("outer".transact_no = ad.transact_no)
                  Filter: credit
              -> Index Scan using debts_desc_pkey on debts_desc d (cost=0.00..5.48 rows=1 width=4)
                Index Cond: ("outer".debtid = d.debtid)
                Filter: (refid IS NULL)
          -> Aggregate (cost=5.59..5.59 rows=1 width=8)
            -> Index Scan using invoices_iino_idx on invoices i1 (cost=0.00..5.58 rows=1 width=8)
              Index Cond: (iino = $0)
              Filter: ((istatus = 0) AND (itype = 2))
(114 rows)
Kaloyan Iliev Iliev <news1@faith.digsys.bg> writes:
> I have the following problem. A week ago we migrated from PostgreSQL 7.2.3
> to 7.4.6. There were a lot of things to change in the apps, but we made
> them. However, one query doesn't want to run. On the old 7.2.3 it completes
> in about 10 minutes; on the new one it gives:
> DBD::Pg::st execute failed: ERROR: out of memory

Does setting enable_hashagg to OFF fix it?

			regards, tom lane
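A minimal way to try this suggestion is a session-level setting change, so nothing in postgresql.conf has to be touched. A sketch (the query itself is the big UNION ALL query posted above):

    SET enable_hashagg TO off;   -- affects only the current session
    SHOW enable_hashagg;         -- should now report "off"

    -- ... re-run (or EXPLAIN) the problematic query from the original post here ...

    RESET enable_hashagg;        -- restore the default afterwards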
Thanks, it worked.

I have read in the docs what "enable_hashagg" does, but I couldn't understand it. What does it change? From the docs:

-------
enable_hashagg (boolean)
    Enables or disables the query planner's use of hashed aggregation plan
    types. The default is on. This is used for debugging the query planner.
--------

How is it used to debug the query planner? And why does it lower the memory usage?

Thank you in advance.

Kaloyan Iliev

Tom Lane wrote:
> Kaloyan Iliev Iliev <news1@faith.digsys.bg> writes:
>> I have the following problem. A week ago we migrated from PostgreSQL 7.2.3
>> to 7.4.6. There were a lot of things to change in the apps, but we made
>> them. However, one query doesn't want to run. On the old 7.2.3 it completes
>> in about 10 minutes; on the new one it gives:
>> DBD::Pg::st execute failed: ERROR: out of memory
>
> Does setting enable_hashagg to OFF fix it?
>
>			regards, tom lane
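What the setting changes is easiest to see by comparing plans for the same aggregation with it on and off. A sketch, using one of the GROUP BY subqueries from the query posted earlier (the acc_clients/acc_payments aggregation that the original plan runs as a HashAggregate):

    -- Default behaviour: the planner may pick a HashAggregate, which builds an
    -- in-memory hash table with one entry per group.
    SET enable_hashagg TO on;
    EXPLAIN select c.nomenclature_no, c.subnom_no, c.ino,
                   round(COALESCE(sum(p.bgl_amount), 0),5) AS saldo
            from acc_clients c, acc_payments p
            where c.transact_no = p.transact_no AND p.fisc_status = 4
            group by c.ino, c.nomenclature_no, c.subnom_no;

    -- With hashed aggregation disabled, the planner has to sort the rows and
    -- aggregate adjacent groups instead (Sort + GroupAggregate), which keeps
    -- memory use bounded even when the group-count estimate is far too low.
    SET enable_hashagg TO off;
    EXPLAIN select c.nomenclature_no, c.subnom_no, c.ino,
                   round(COALESCE(sum(p.bgl_amount), 0),5) AS saldo
            from acc_clients c, acc_payments p
            where c.transact_no = p.transact_no AND p.fisc_status = 4
            group by c.ino, c.nomenclature_no, c.subnom_no;

    RESET enable_hashagg;

The "used for debugging the query planner" wording in the docs just means the enable_* settings exist to steer the planner away from a plan type while investigating problems, not for routine tuning.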
Hi,
I am asking the previous question because there is no change in the query plan (as far as I can see), but the memory usage decreases from 258 MB to 16 MB.
Kaloyan Iliev
Tom Lane wrote:
> Kaloyan Iliev Iliev <news1@faith.digsys.bg> writes:
>> I have the following problem. A week ago we migrated from PostgreSQL 7.2.3
>> to 7.4.6. There were a lot of things to change in the apps, but we made
>> them. However, one query doesn't want to run. On the old 7.2.3 it completes
>> in about 10 minutes; on the new one it gives:
>> DBD::Pg::st execute failed: ERROR: out of memory
>
> Does setting enable_hashagg to OFF fix it?
>
>			regards, tom lane
Kaloyan Iliev Iliev <news1@faith.digsys.bg> writes:
> It worked. I have read in the docs what "enable_hashagg" does, but I
> couldn't understand it. What does it change?

Your original 7.4 query plan has several HashAgg steps in it, which are doing aggregate/GROUP BY operations. The planner thinks that they will use only nominal amounts of memory because there are only a few distinct groups in each case. Evidently that is wrong, and at least one of them is dealing with so many groups as to run out of memory.

So the next question is: have you ANALYZEd all of these tables recently?

I wouldn't recommend turning off hashagg as a permanent solution; it was just a quickie to verify my suspicion of where the memory was going.

			regards, tom lane
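A sketch of acting on that advice, assuming the table and column names from the posted query (the statistics target of 100 is only an illustrative value):

    -- Refresh planner statistics for the tables the big query aggregates over.
    ANALYZE acc_clients;
    ANALYZE acc_payments;
    ANALYZE invoices;

    -- Inspect the distinct-value estimates behind the "few distinct groups"
    -- assumption; a negative n_distinct is a fraction of the table's row count.
    SELECT tablename, attname, n_distinct
      FROM pg_stats
     WHERE tablename = 'acc_clients'
       AND attname IN ('ino', 'nomenclature_no', 'subnom_no');

    -- If the estimates remain badly off, a larger per-column statistics target
    -- plus a fresh ANALYZE may help.
    ALTER TABLE acc_clients ALTER COLUMN ino SET STATISTICS 100;
    ANALYZE acc_clients;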
Tom Lane wrote:
> I wouldn't recommend turning off hashagg as a permanent solution; it
> was just a quickie to verify my suspicion of where the memory was going.

Hi,

How should I understand the sentence above? Do you mean I shouldn't turn "hashagg" off permanently for this query, or for the entire database? For now I have turned it off for this query, so it can work. If I shouldn't do that, then what should I do? Will ANALYZE resolve this?

Kaloyan Iliev
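One way to keep the workaround narrow, if it does stay in place for just this report, is to scope the setting to the transaction that runs it; the sketch below assumes the query is executed inside an explicit transaction:

    BEGIN;
    SET LOCAL enable_hashagg TO off;   -- reverted automatically at transaction end
    -- ... run the big report query here ...
    COMMIT;

A plain SET (as used for the earlier test) lasts for the whole session, and changing the value in postgresql.conf would affect every query on the server, which is the kind of permanent change being advised against.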
Kaloyan Iliev Iliev <news1@faith.digsys.bg> writes:
> Will ANALYZE resolve this?

Try it and find out.

			regards, tom lane
Hi,
I tried it, and it doesn't resolve the problem :(
So, now what? Should I just leave it that way for this query, or ...? There must be a permanent solution, because what if other queries start behaving like that?
Kaloyan Iliev
Tom Lane wrote:
> Kaloyan Iliev Iliev <news1@faith.digsys.bg> writes:
>> Will ANALYZE resolve this?
>
> Try it and find out.
>
>			regards, tom lane