Ümit Öztosun <umit@likyabilisim.com> writes:
> We are using PostgreSQL for our business application. Recently, during
> testing of our application with large volumes of data, we faced a weird
> problem. Our query performance dropped *dramatically* after a "VACUUM FULL
> ANALYZE" command.
I think the problem is that the planner is underestimating the cost of
evaluating this complicated filter condition:
> -> Seq Scan on scf_irsaliye irs  (cost=0.00..30.00 rows=1 width=20)
>    (actual time=0.290..0.290 rows=0 loops=10000)
>    Filter: (((karsifirma)::text = 'KENDI'::text) AND
>      (((turu)::text = 'MAI'::text) OR ((turu)::text = 'KGI'::text) OR
>       ((turu)::text = 'PS'::text) OR ((turu)::text = 'TS'::text) OR
>       ((turu)::text = 'KC'::text) OR ((turu)::text = 'KCO'::text)) AND
>      (((_key_sis_depo_dest)::text = '$$$$0000003l$1$$'::text) OR
>       ((_key_sis_depo_dest)::text = '$$$$00000048$1$$'::text) OR
>       ((_key_sis_depo_dest)::text = '$$$$0000004b$1$$'::text) OR
>       ((_key_sis_depo_dest)::text = '$$$$0000004d$1$$'::text)) AND
>      (tarih <= '2005-08-26'::date))
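(Note that the "actual time" figure is per loop: 0.290 msec times 10000 loops
is roughly 2.9 seconds spent in this seq scan alone, against an estimated cost
of only 30 units, so the planner presumably thought it was cheap enough to put
on the inside of a nestloop.)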
While you could attack that by raising the cpu_operator_cost parameter,
it would also be worth inquiring *why* the condition is so expensive to
evaluate. I am suspicious that you are running the database in a locale
in which strcoll() is really slow. Can you run it in C locale instead,
or do you really need locale-aware behavior? Can you switch to a
different database encoding? (A single-byte encoding such as Latin1
might be faster than UTF8, for example.)
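To see what you're currently running with, and to experiment with the cost
setting for a single session, something along these lines would do (the 0.01
figure is just a number to try, not a recommendation):

    SHOW lc_collate;
    SHOW server_encoding;
    -- try a higher per-operator cost for this session; the default is 0.0025
    SET cpu_operator_cost = 0.01;

If a higher value consistently produces better plans, you can set it in
postgresql.conf instead.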
Another possibility is to take a hard look at whether you can't simplify
the filter condition, but that'd require more knowledge of your
application than I have.
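For instance, if those OR chains are really just membership tests on a single
column each, writing them as IN lists is at least easier to read (this is only
a guess at your WHERE clause, and the server may expand IN into the same ORs
internally anyway):

    WHERE karsifirma = 'KENDI'
      AND turu IN ('MAI', 'KGI', 'PS', 'TS', 'KC', 'KCO')
      AND _key_sis_depo_dest IN ('$$$$0000003l$1$$', '$$$$00000048$1$$',
                                 '$$$$0000004b$1$$', '$$$$0000004d$1$$')
      AND tarih <= '2005-08-26'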
Or you could just play with the order of the filter conditions ... for
example, the date condition at the end is probably far cheaper to test
than the text comparisons, so if that's fairly selective it'd be worth
putting it first.
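In SQL terms that would mean writing the date test ahead of the text
comparisons, roughly:

    WHERE tarih <= '2005-08-26'        -- cheap comparison, test it first
      AND karsifirma = 'KENDI'
      AND turu IN ('MAI', 'KGI', 'PS', 'TS', 'KC', 'KCO')
      AND _key_sis_depo_dest IN ('$$$$0000003l$1$$', '$$$$00000048$1$$',
                                 '$$$$0000004b$1$$', '$$$$0000004d$1$$')

(No guarantees, but for a simple AND list the conditions are generally
evaluated in the order you write them.)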
regards, tom lane