Thread: Performance Optimization for Dummies 2 - the SQL

Performance Optimization for Dummies 2 - the SQL

From
"Carlo Stonebanks"
Date:
Some very helpful people had asked that I post the troublesome code that was
generated by my import program.

I installed a SQL log feature in my import program. I have
posted samples of the SQL statements that cause the biggest delays.

Thanks for all of your help.

Carlo

----------
Sample 1:
This one is very expensive on my system.
----------
select
f.facility_id,
provider_practice_id
from
mdx_core.provider_practice as pp
join mdx_core.facility as f
on f.facility_id = pp.facility_id
join mdx_core.facility_address as fa
on fa.facility_id = pp.facility_id
join mdx_core.address as a
on a.address_id = fa.address_id
where
pp.provider_id = 1411311
and f.facility_type_code != 'P'
and (
pp.facility_address_id is not null
and a.state_code = 'NY'
and '10001-2382' = a.postal_code||'%'
and a.city = 'New York'
) or (
f.default_state_code = 'NY'
and '10001-2382' like f.default_postal_code||'%'
and f.default_city = 'New York'
)
limit 1

Limit  (cost=3899.18..32935.21 rows=1 width=8)
  ->  Hash Join  (cost=3899.18..91007.27 rows=3 width=8)
        Hash Cond: ("outer".address_id = "inner".address_id)
        Join Filter: ((("outer".provider_id = 1411311) AND
("outer".facility_type_code <> 'P'::bpchar) AND ("outer".facility_address_id
IS NOT NULL) AND (("inner".state_code)::text = 'NY'::text) AND
('10001-2382'::text = (("inner".postal_code)::text || '%'::text)) AND
(("inner".city)::text = 'New York'::text)) OR (("outer".default_state_code =
'NY'::bpchar) AND ('10001-2382'::text ~~
(("outer".default_postal_code)::text || '%'::text)) AND
(("outer".default_city)::text = 'New York'::text)))
        ->  Merge Join  (cost=0.00..50589.20 rows=695598 width=57)
              Merge Cond: ("outer".facility_id = "inner".facility_id)
              ->  Merge Join  (cost=0.00..16873.90 rows=128268 width=49)
                    Merge Cond: ("outer".facility_id = "inner".facility_id)
                    ->  Index Scan using facility_pkey on facility f
(cost=0.00..13590.18 rows=162525 width=41)
                    ->  Index Scan using facility_address_facility_idx on
facility_address fa  (cost=0.00..4254.46 rows=128268 width=8)
              ->  Index Scan using provider_practice_facility_idx on
provider_practice pp  (cost=0.00..28718.27 rows=452129 width=16)
        ->  Hash  (cost=3650.54..3650.54 rows=99454 width=36)
              ->  Seq Scan on address a  (cost=0.00..3650.54 rows=99454
width=36)

----------
Sample 2:
This one includes a call to a custom function which performs lexical
comparisons
and returns a rating on the likelihood that the company names refer to the
same
facility. Replacing the code:
    mdx_lib.lex_compare('Vhs Acquisition Subsidiary Number 3 Inc', name) as
comp
with
  1 as comp
-- to avoid the function call only shaved a fragment off the execution time,
which leads me to believe my problem is in the SQL structure itself.

----------
select
mdx_lib.lex_compare('Vhs Acquisition Subsidiary Number 3 Inc', name) as
comp,
facil.*
from (
select
f.facility_id,
fa.facility_address_id,
a.address_id,
f.facility_type_code,
f.name,
a.address,
a.city,
a.state_code,
a.postal_code,
a.country_code
from
mdx_core.facility as f
join mdx_core.facility_address as fa
on fa.facility_id = f.facility_id
join mdx_core.address as a
on a.address_id = fa.address_id
where
facility_address_id is not null
and a.country_code = 'US'
and a.state_code = 'IL'
and '60640-5759' like a.postal_code||'%'
union select
f.facility_id,
null as facility_address_id,
null as address_id,
f.facility_type_code,
f.name,
null as address,
f.default_city as city,
f.default_state_code as state_code,
f.default_postal_code as postal_code,
f.default_country_code as country_code
from
mdx_core.facility as f
left outer join mdx_core.facility_address as fa
on fa.facility_id = f.facility_id
where
facility_address_id is null
and f.default_country_code = 'US'
and '60640-5759' like f.default_postal_code||'%'
) as facil
order by comp

Sort  (cost=20595.92..20598.01 rows=834 width=236)
  Sort Key: mdx_lib.lex_compare('Vhs Acquisition Subsidiary Number 3
Inc'::text, (name)::text)
  ->  Subquery Scan facil  (cost=20522.10..20555.46 rows=834 width=236)
        ->  Unique  (cost=20522.10..20545.03 rows=834 width=103)
              ->  Sort  (cost=20522.10..20524.18 rows=834 width=103)
                    Sort Key: facility_id, facility_address_id, address_id,
facility_type_code, name, address, city, state_code, postal_code,
country_code
                    ->  Append  (cost=4645.12..20481.63 rows=834 width=103)
                          ->  Nested Loop  (cost=4645.12..8381.36 rows=21
width=103)
                                ->  Hash Join  (cost=4645.12..8301.35
rows=21 width=72)
                                      Hash Cond: ("outer".address_id =
"inner".address_id)
                                      ->  Seq Scan on facility_address fa
(cost=0.00..3014.68 rows=128268 width=12)
                                            Filter: (facility_address_id IS
NOT NULL)
                                      ->  Hash  (cost=4645.08..4645.08
rows=16 width=64)
                                            ->  Seq Scan on address a
(cost=0.00..4645.08 rows=16 width=64)
                                                  Filter: ((country_code =
'US'::bpchar) AND ((state_code)::text = 'IL'::text) AND ('60640-5759'::text
~~ ((postal_code)::text || '%'::text)))
                                ->  Index Scan using facility_pkey on
facility f  (cost=0.00..3.80 rows=1 width=35)
                                      Index Cond: ("outer".facility_id =
f.facility_id)
                          ->  Subquery Scan "*SELECT* 2"
(cost=0.00..12100.07 rows=813 width=73)
                                ->  Nested Loop Left Join
(cost=0.00..12091.94 rows=813 width=73)
                                      Filter: ("inner".facility_address_id
IS NULL)
                                      ->  Seq Scan on facility f
(cost=0.00..8829.19 rows=813 width=73)
                                            Filter: ((default_country_code =
'US'::bpchar) AND ('60640-5759'::text ~~ ((default_postal_code)::text ||
'%'::text)))
                                      ->  Index Scan using
facility_address_facility_idx on facility_address fa  (cost=0.00..3.99
rows=2 width=8)
                                            Index Cond: (fa.facility_id =
"outer".facility_id)



Re: Performance Optimization for Dummies 2 - the SQL

From
"Merlin Moncure"
Date:
On 10/3/06, Carlo Stonebanks <stonec.register@sympatico.ca> wrote:
> Some very helpful people had asked that I post the troublesome code that was
> generated by my import program.
>
> I installed a SQL log feature in my import program. I have
> posted samples of the SQL statements that cause the biggest delays.

explain analyze is more helpful because it prints the times.

sample 1, couple questions:
what is the purpose of limit 1?
if you break up the 'or' which checks facility and address into two
separate queries, are the two queries' total times more, less, or the same
as the large query?
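
for concreteness, a sketch of what such a split might look like - assuming the
provider and facility-type conditions were meant to apply to both branches, and
reading the '=' on postal_code as the 'like' it appears to have been intended as:

-- address-based half
select f.facility_id, provider_practice_id
from mdx_core.provider_practice as pp
join mdx_core.facility as f on f.facility_id = pp.facility_id
join mdx_core.facility_address as fa on fa.facility_id = pp.facility_id
join mdx_core.address as a on a.address_id = fa.address_id
where pp.provider_id = 1411311
and f.facility_type_code != 'P'
and pp.facility_address_id is not null
and a.state_code = 'NY'
and '10001-2382' like a.postal_code||'%'
and a.city = 'New York'
limit 1;

-- facility-default half; the address tables are not referenced by these
-- conditions, so the joins to them can probably be dropped
select f.facility_id, provider_practice_id
from mdx_core.provider_practice as pp
join mdx_core.facility as f on f.facility_id = pp.facility_id
where pp.provider_id = 1411311
and f.facility_type_code != 'P'
and f.default_state_code = 'NY'
and '10001-2382' like f.default_postal_code||'%'
and f.default_city = 'New York'
limit 1;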

merlin

Re: Performance Optimization for Dummies 2 - the SQL

From
Alex Stapleton
Date:
On 3 Oct 2006, at 16:04, Merlin Moncure wrote:

> On 10/3/06, Carlo Stonebanks <stonec.register@sympatico.ca> wrote:
>> Some very helpful people had asked that I post the troublesome
>> code that was
>> generated by my import program.
>>
>> I installed a SQL log feature in my import program. I have
>> posted samples of the SQL statements that cause the biggest delays.
>
> explain analyze is more helpful because it prints the times.

You can always use the \timing flag in psql ;)

l1_historical=# \timing
Timing is on.
l1_historical=# select 1;
?column?
----------
         1
(1 row)

Time: 4.717 ms





Re: Performance Optimization for Dummies 2 - the SQL

From
"Carlo Stonebanks"
Date:
> explain analyze is more helpful because it prints the times.

Sorry, this runs in-line in my code, and I didn't want to slow the
already-slow program with explain analyze. I have run it outside of the code
in its own query. The new results are below.

> sample 1, couple questions:
> what is the purpose of limit 1?

I don't need the results themselves; I just need to know whether any data
meeting these criteria exists.

> if you break up the 'or' which checks facility and address into two
> separate queries, are the two queries' total times more, less, or the same
> as the large query?

They are much less; I had assumed that SQL would use lazy evaluation in this
case, not bothering to evaluate one half of the OR condition if the other
half had already been satisfied. But the single query is much heavier than the
two separate ones.

Carlo

>
> merlin
>
select
f.facility_id,
provider_practice_id
from
mdx_core.provider_practice as pp
join mdx_core.facility as f
on f.facility_id = pp.facility_id
join mdx_core.facility_address as fa
on fa.facility_id = pp.facility_id
join mdx_core.address as a
on a.address_id = fa.address_id
where
pp.provider_id = 1411311
and f.facility_type_code != 'P'
and (
pp.facility_address_id is not null
and a.state_code = 'NY'
and '10001-2382' = a.postal_code||'%'
and a.city = 'New York'
) or (
f.default_state_code = 'NY'
and '10001-2382' like f.default_postal_code||'%'
and f.default_city = 'New York'
)
limit 1

"Limit  (cost=3899.18..22561.46 rows=1 width=8) (actual
time=9410.970..9410.970 rows=0 loops=1)"
"  ->  Hash Join  (cost=3899.18..97210.58 rows=5 width=8) (actual
time=9410.966..9410.966 rows=0 loops=1)"
"        Hash Cond: ("outer".address_id = "inner".address_id)"
"        Join Filter: ((("outer".provider_id = 1411311) AND
("outer".facility_type_code <> 'P'::bpchar) AND ("outer".facility_address_id
IS NOT NULL) AND (("inner".state_code)::text = 'NY'::text) AND
('10001-2382'::text = (("inner".postal_code)::text || '%' (..)"
"        ->  Merge Join  (cost=0.00..51234.97 rows=801456 width=57) (actual
time=0.314..6690.241 rows=685198 loops=1)"
"              Merge Cond: ("outer".facility_id = "inner".facility_id)"
"              ->  Merge Join  (cost=0.00..15799.46 rows=128268 width=49)
(actual time=0.197..1637.553 rows=128268 loops=1)"
"                    Merge Cond: ("outer".facility_id =
"inner".facility_id)"
"                    ->  Index Scan using facility_pkey on facility f
(cost=0.00..13247.94 rows=176864 width=41) (actual time=0.145..591.219
rows=126624 loops=1)"
"                    ->  Index Scan using facility_address_facility_idx on
facility_address fa  (cost=0.00..4245.12 rows=128268 width=8) (actual
time=0.041..384.632 rows=128268 loops=1)"
"              ->  Index Scan using provider_practice_facility_idx on
provider_practice pp  (cost=0.00..30346.89 rows=489069 width=16) (actual
time=0.111..3031.675 rows=708714 loops=1)"
"        ->  Hash  (cost=3650.54..3650.54 rows=99454 width=36) (actual
time=478.509..478.509 rows=99454 loops=1)"
"              ->  Seq Scan on address a  (cost=0.00..3650.54 rows=99454
width=36) (actual time=0.033..251.203 rows=99454 loops=1)"
"Total runtime: 9412.654 ms"

----------
Sample 2:
This one includes a call to a custom function which performs lexical
comparisons
and returns a rating on the likelihood that the company names refer to the
same
facility. Replacing the code:
    mdx_lib.lex_compare('Vhs Acquisition Subsidiary Number 3 Inc', name) as
comp
with
  1 as comp
-- to avoid the function call only shaved a fragment off the execution time,
which leads me to believe my problem is in the SQL structure itself.
----------
select
mdx_lib.lex_compare('Vhs Acquisition Subsidiary Number 3 Inc', name) as
comp,
facil.*
from (
select
f.facility_id,
fa.facility_address_id,
a.address_id,
f.facility_type_code,
f.name,
a.address,
a.city,
a.state_code,
a.postal_code,
a.country_code
from
mdx_core.facility as f
join mdx_core.facility_address as fa
on fa.facility_id = f.facility_id
join mdx_core.address as a
on a.address_id = fa.address_id
where
facility_address_id is not null
and a.country_code = 'US'
and a.state_code = 'IL'
and '60640-5759' like a.postal_code||'%'
union select
f.facility_id,
null as facility_address_id,
null as address_id,
f.facility_type_code,
f.name,
null as address,
f.default_city as city,
f.default_state_code as state_code,
f.default_postal_code as postal_code,
f.default_country_code as country_code
from
mdx_core.facility as f
left outer join mdx_core.facility_address as fa
on fa.facility_id = f.facility_id
where
facility_address_id is null
and f.default_country_code = 'US'
and '60640-5759' like f.default_postal_code||'%'
) as facil
order by comp

"Sort  (cost=21565.50..21567.78 rows=909 width=236) (actual
time=1622.448..1622.456 rows=12 loops=1)"
"  Sort Key: mdx_lib.lex_compare('Vhs Acquisition Subsidiary Number 3
Inc'::text, (name)::text)"
"  ->  Subquery Scan facil  (cost=21484.47..21520.83 rows=909 width=236)
(actual time=1173.103..1622.134 rows=12 loops=1)"
"        ->  Unique  (cost=21484.47..21509.47 rows=909 width=103) (actual
time=829.747..829.840 rows=12 loops=1)"
"              ->  Sort  (cost=21484.47..21486.75 rows=909 width=103)
(actual time=829.744..829.761 rows=12 loops=1)"
"                    Sort Key: facility_id, facility_address_id, address_id,
facility_type_code, name, address, city, state_code, postal_code,
country_code"
"                    ->  Append  (cost=4645.12..21439.81 rows=909 width=103)
(actual time=146.952..829.517 rows=12 loops=1)"
"                          ->  Nested Loop  (cost=4645.12..8380.19 rows=22
width=103) (actual time=146.949..510.824 rows=12 loops=1)"
"                                ->  Hash Join  (cost=4645.12..8301.36
rows=22 width=72) (actual time=146.912..510.430 rows=12 loops=1)"
"                                      Hash Cond: ("outer".address_id =
"inner".address_id)"
"                                      ->  Seq Scan on facility_address fa
(cost=0.00..3014.68 rows=128268 width=12) (actual time=0.007..238.228
rows=128268 loops=1)"
"                                            Filter: (facility_address_id IS
NOT NULL)"
"                                      ->  Hash  (cost=4645.08..4645.08
rows=17 width=64) (actual time=131.827..131.827 rows=3 loops=1)"
"                                            ->  Seq Scan on address a
(cost=0.00..4645.08 rows=17 width=64) (actual time=3.555..131.797 rows=3
loops=1)"
"                                                  Filter: ((country_code =
'US'::bpchar) AND ((state_code)::text = 'IL'::text) AND ('60640-5759'::text
~~ ((postal_code)::text || '%'::text)))"
"                                ->  Index Scan using facility_pkey on
facility f  (cost=0.00..3.57 rows=1 width=35) (actual time=0.021..0.023
rows=1 loops=12)"
"                                      Index Cond: ("outer".facility_id =
f.facility_id)"
"                          ->  Subquery Scan "*SELECT* 2"
(cost=0.00..13059.40 rows=887 width=73) (actual time=318.669..318.669 rows=0
loops=1)"
"                                ->  Nested Loop Left Join
(cost=0.00..13050.53 rows=887 width=73) (actual time=318.664..318.664 rows=0
loops=1)"
"                                      Filter: ("inner".facility_address_id
IS NULL)"
"                                      ->  Seq Scan on facility f
(cost=0.00..9438.13 rows=887 width=73) (actual time=4.468..318.364 rows=10
loops=1)"
"                                            Filter: ((default_country_code
= 'US'::bpchar) AND ('60640-5759'::text ~~ ((default_postal_code)::text ||
'%'::text)))"
"                                      ->  Index Scan using
facility_address_facility_idx on facility_address fa  (cost=0.00..4.05
rows=2 width=8) (actual time=0.018..0.022 rows=1 loops=10)"
"                                            Index Cond: (fa.facility_id =
"outer".facility_id)"



Re: Performance Optimization for Dummies 2 - the SQL

From
"Carlo Stonebanks"
Date:
Please ignore sample 1 - now that I have the logging feature, I can see that
my query generator algorithm made an error.

The SQL of concern is now sample 2.



Re: Performance Optimization for Dummies 2 - the SQL

From
"Merlin Moncure"
Date:
On 10/3/06, Carlo Stonebanks <stonec.register@sympatico.ca> wrote:
> Please ignore sample 1 - now that I have the logging feature, I can see that
> my query generator algorithm made an error.

can you do explain analyze on the two select queries on either side of
the union separately?  the subquery is correctly written and unlikely
to be a problem (in fact, good style imo).  so let's have a look at
both sides of the facil query and see where the problem is.

merlin

Re: Performance Optimization for Dummies 2 - the SQL

From
Markus Schaber
Date:
Hi, Alex,

Alex Stapleton wrote:

>> explain analyze is more helpful because it prints the times.
>
> You can always use the \timing flag in psql ;)

Have you ever tried EXPLAIN ANALYZE?

\timing gives you one total timing, but EXPLAIN ANALYZE gives you
timings for sub-plans, including real row counts etc.

HTH,
Markus

--
Markus Schaber | Logical Tracking&Tracing International AG
Dipl. Inf.     | Software Development GIS

Fight against software patents in Europe! www.ffii.org
www.nosoftwarepatents.org

Re: Performance Optimization for Dummies 2 - the SQL

From
"Carlo Stonebanks"
Date:
> can you do explain analyze on the two select queries on either side of
> the union separately?  the subquery is correctly written and unlikely
> to be a problem (in fact, good style imo).  so let's have a look at
> both sides of the facil query and see where the problem is.

Sorry for the delay; the server was down yesterday and I couldn't get
anything done.

I have modified the sub-queries a little, trying to get the index scans to
fire - all the tables involved here are large enough to benefit from index
scans over sequential scans. I am mystified as to why PART 1 is giving me:

 "Seq Scan on facility_address fa  (cost=0.00..3014.68 rows=128268 width=12)
(actual time=0.007..99.033 rows=128268 loops=1)"

which I assume is for the:

"join mdx_core.facility_address as fa on fa.facility_id = f.facility_id"

Then again, I am not sure how to read the EXPLAIN ANALYSE performance
numbers.

I have also modified the other part of the UNION (PART 2), and I think it's
working nicely. Let me know if I'm mistaken in thinking that!

The one remaining problem is that the UNION of these two sub-queries has a
column which is a call to a custom TCL function that does a lexical analysis
on the results, ranking the result names by their proximity to the imported
name. This definitely eats up the performance, and I hope that my decision to
call this function on the results of the union (assuming union deletes
redundant rows) is the correct one.

Thanks!

Carlo


/* PART 1.
  The redundant expression "facility_address_id is NULL" was removed because
  only an OUTER join would have made this meaningful. We use only INNER joins
  in this sub-query.
  Both facility_address and address have seq scans, even though there is an
  index on facility_address(facility_id) and an index on
  address(country_code, postal_code, address).
  The "like" operator appears to be making things expensive. It is used because
  we have to take into account that the import row may be using the 5-digit US
  ZIP rather than the 9-digit ZIP+4 standard (although this is not the case in
  this sample).
*/
explain analyse select
      f.facility_id,
      fa.facility_address_id,
      a.address_id,
      f.facility_type_code,
      f.name,
      a.address,
      a.city,
      a.state_code,
      a.postal_code,
      a.country_code
   from
      mdx_core.facility as f
   join mdx_core.facility_address as fa
      on fa.facility_id = f.facility_id
   join mdx_core.address as a
      on a.address_id = fa.address_id
   where
      a.country_code = 'US'
      and a.state_code = 'IL'
      and a.postal_code like '60640-5759'||'%'
   order by facility_id

"Sort  (cost=6392.50..6392.50 rows=1 width=103) (actual
time=189.133..189.139 rows=12 loops=1)"
"  Sort Key: f.facility_id"
"  ->  Nested Loop  (cost=2732.88..6392.49 rows=1 width=103) (actual
time=14.006..188.967 rows=12 loops=1)"
"        ->  Hash Join  (cost=2732.88..6388.91 rows=1 width=72) (actual
time=13.979..188.748 rows=12 loops=1)"
"              Hash Cond: ("outer".address_id = "inner".address_id)"
"              ->  Seq Scan on facility_address fa  (cost=0.00..3014.68
rows=128268 width=12) (actual time=0.004..98.867 rows=128268 loops=1)"
"              ->  Hash  (cost=2732.88..2732.88 rows=1 width=64) (actual
time=6.430..6.430 rows=3 loops=1)"
"                    ->  Bitmap Heap Scan on address a  (cost=62.07..2732.88
rows=1 width=64) (actual time=2.459..6.417 rows=3 loops=1)"
"                          Recheck Cond: ((country_code = 'US'::bpchar) AND
((state_code)::text = 'IL'::text))"
"                          Filter: ((postal_code)::text ~~
'60640-5759%'::text)"
"                          ->  Bitmap Index Scan on
address_country_state_postal_code_address_idx  (cost=0.00..62.07 rows=3846
width=0) (actual time=1.813..1.813 rows=3554 loops=1)"
"                                Index Cond: ((country_code = 'US'::bpchar)
AND ((state_code)::text = 'IL'::text))"
"        ->  Index Scan using facility_pkey on facility f  (cost=0.00..3.56
rows=1 width=35) (actual time=0.012..0.013 rows=1 loops=12)"
"              Index Cond: ("outer".facility_id = f.facility_id)"
"Total runtime: 189.362 ms"

/* PART 2 - can you see anything that could work faster? */

explain analyse select
      f.facility_id,
      null as facility_address_id,
      null as address_id,
      f.facility_type_code,
      f.name,
      null as address,
      f.default_city as city,
      f.default_state_code as state_code,
      f.default_postal_code as postal_code,
      f.default_country_code as country_code
   from
      mdx_core.facility as f
   left outer join mdx_core.facility_address as fa
      on fa.facility_id = f.facility_id
   where
      fa.facility_address_id is null
      and f.default_country_code = 'US'
      and f.default_state_code = 'IL'
      and '60640-5759' like f.default_postal_code||'%'

"Nested Loop Left Join  (cost=0.00..6042.41 rows=32 width=73) (actual
time=14.923..14.923 rows=0 loops=1)"
"  Filter: ("inner".facility_address_id IS NULL)"
"  ->  Index Scan using facility_country_state_postal_code_idx on facility f
(cost=0.00..5914.69 rows=32 width=73) (actual time=10.118..14.773 rows=10
loops=1)"
"        Index Cond: ((default_country_code = 'US'::bpchar) AND
(default_state_code = 'IL'::bpchar))"
"        Filter: ('60640-5759'::text ~~ ((default_postal_code)::text ||
'%'::text))"
"  ->  Index Scan using facility_address_facility_idx on facility_address fa
(cost=0.00..3.97 rows=2 width=8) (actual time=0.009..0.011 rows=1 loops=10)"
"        Index Cond: (fa.facility_id = "outer".facility_id)"
"Total runtime: 15.034 ms"



Re: Performance Optimization for Dummies 2 - the SQL

From
"Merlin Moncure"
Date:
On 10/4/06, Carlo Stonebanks <stonec.register@sympatico.ca> wrote:
> > can you do explain analyze on the two select queries on either side of
> > the union separately?  the subquery is correctly written and unlikely
> > to be a problem (in fact, good style imo).  so let's have a look at
> > both sides of the facil query and see where the problem is.
>
> Sorry for the delay, the server was down yesterday and couldn't get
> anything.
>
> I have modified the sub-queries a little, trying to get the index scans to
> fire - all the tables involved here are large enough to benefit from index
> scans over sequential scans. I am mystified as to why PART 1 is giving me:
>

>  "Seq Scan on facility_address fa  (cost=0.00..3014.68 rows=128268 width=12)
> (actual time=0.007..99.033 rows=128268 loops=1)"

not sure on this, lets go back to that.

>    into account that perhaps the import row is using the 5-digit US ZIP,
> not the 9-digit ZIP+4


>    where
>       a.country_code = 'US'
>       and a.state_code = 'IL'
>       and a.postal_code like '60640-5759'||'%'
>    order by facility_id

1. create a small function, sql preferred, which truncates the zip code
to 5 digits or reduces it to so-called 'fuzzy' matching criteria.  let's
call it zip_trunc(text) and mark it immutable, which it is. write this
in sql, not tcl, if possible (trust me).
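
a minimal sketch of such a function, assuming plain truncation to the first
five characters is the 'fuzzy' rule wanted (schema placement is up to you):

create or replace function zip_trunc(text) returns text as
$$ select substr($1, 1, 5); $$
language sql immutable;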

create index address_idx on address(country_code, state_code,
zip_trunc(postal_code));

rewrite above where clause as

where (a.country_code, a.state_code, zip_trunc(postal_code)) = ('US',
'IL', zip_trunc('60640-5759'));

try it out, then let's see how it goes, and then we can take a look at
any seqscan issues.

merlin

Re: Performance Optimization for Dummies 2 - the SQL

From
"Carlo Stonebanks"
Date:
Hi Merlin,

Here are the results. The query returned more rows (65 vs 12) because of the
vague postal_code.

In reality, we would have to modify the postal_code logic to take advantage
of full zip codes when they are available, rather than unconditionally
truncating them.

Carlo

explain analyze select
      f.facility_id,
      fa.facility_address_id,
      a.address_id,
      f.facility_type_code,
      f.name,
      a.address,
      a.city,
      a.state_code,
      a.postal_code,
      a.country_code
   from
      mdx_core.facility as f
   join mdx_core.facility_address as fa
      on fa.facility_id = f.facility_id
   join mdx_core.address as a
      on a.address_id = fa.address_id
   where
      (a.country_code, a.state_code, mdx_core.zip_trunc(a.postal_code)) =
('US', 'IL', mdx_core.zip_trunc('60640-5759'))
   order by facility_id

"Sort  (cost=6474.78..6474.84 rows=25 width=103) (actual
time=217.279..217.311 rows=65 loops=1)"
"  Sort Key: f.facility_id"
"  ->  Nested Loop  (cost=2728.54..6474.20 rows=25 width=103) (actual
time=35.828..217.059 rows=65 loops=1)"
"        ->  Hash Join  (cost=2728.54..6384.81 rows=25 width=72) (actual
time=35.801..216.117 rows=65 loops=1)"
"              Hash Cond: ("outer".address_id = "inner".address_id)"
"              ->  Seq Scan on facility_address fa  (cost=0.00..3014.68
rows=128268 width=12) (actual time=0.007..99.072 rows=128268 loops=1)"
"              ->  Hash  (cost=2728.50..2728.50 rows=19 width=64) (actual
time=33.618..33.618 rows=39 loops=1)"
"                    ->  Bitmap Heap Scan on address a  (cost=48.07..2728.50
rows=19 width=64) (actual time=2.569..33.491 rows=39 loops=1)"
"                          Recheck Cond: ((country_code = 'US'::bpchar) AND
((state_code)::text = 'IL'::text))"
"                          Filter: (mdx_core.zip_trunc(postal_code) =
'60640'::text)"
"                          ->  Bitmap Index Scan on
address_country_state_zip_trunc_idx  (cost=0.00..48.07 rows=3846 width=0)
(actual time=1.783..1.783 rows=3554 loops=1)"
"                                Index Cond: ((country_code = 'US'::bpchar)
AND ((state_code)::text = 'IL'::text))"
"        ->  Index Scan using facility_pkey on facility f  (cost=0.00..3.56
rows=1 width=35) (actual time=0.009..0.010 rows=1 loops=65)"
"              Index Cond: ("outer".facility_id = f.facility_id)"
"Total runtime: 217.520 ms"



""Merlin Moncure"" <mmoncure@gmail.com> wrote in message
news:b42b73150610041407y3554f311u1329c4f3bdc53999@mail.gmail.com...
> On 10/4/06, Carlo Stonebanks <stonec.register@sympatico.ca> wrote:
>> > can you do explain analyze on the two select queries on either side of
>> > the union separatly?  the subquery is correctly written and unlikely
>> > to be a problem (in fact, good style imo).  so lets have a look at
>> > both sides of facil query and see where the problem is.
>>
>> Sorry for the delay, the server was down yesterday and couldn't get
>> anything.
>>
>> I have modified the sub-queries a little, trying to get the index scans
>> to
>> fire - all the tables involved here are large enough to benefit from
>> index
>> scans over sequential scans. I am mystified as to why PART 1 is giving
>> me:
>>
>
>>  "Seq Scan on facility_address fa  (cost=0.00..3014.68 rows=128268
>> width=12)
>> (actual time=0.007..99.033 rows=128268 loops=1)"
>
> not sure on this, lets go back to that.
>
>>    into account that perhaps the import row is using the 5-number US ZIP,
>> not the 9-number USZIP+4
>
>
>>    where
>>       a.country_code = 'US'
>>       and a.state_code = 'IL'
>>       and a.postal_code like '60640-5759'||'%'
>>    order by facility_id
>
> 1. create a small function, sql preferred which truncates the zip code
> to 5 digits or reduces to so called 'fuzzy' matching criteria.  lets
> call it zip_trunc(text) and make it immutable which it is. write this
> in sql, not tcl if possible (trust me).
>
> create index address_idx on address(country_code, state_code,
> zip_trunc(postal_code));
>
> rewrite above where clause as
>
> where (a.country_code, a.state_code, zip_trunc(postal_code)) = ('US',
> 'IL', zip_trunc('60640-5759'));
>
> try it out, then lets see how it goes and then we can take a look at
> any seqscan issues.
>
> merlin
>
> ---------------------------(end of broadcast)---------------------------
> TIP 6: explain analyze is your friend
>



Re: Performance Optimization for Dummies 2 - the SQL

From
"Merlin Moncure"
Date:
On 10/5/06, Carlo Stonebanks <stonec.register@sympatico.ca> wrote:
> Hi Merlin,
>
> Here are the results. The query returned more rows (65 vs 12) because of the
> vague postal_code.

right.  interestingly, the index didn't work properly anyways.
regardless, this is easily solvable, but it looks like we might be
looking in the wrong place.  do we have a multi-column index on
facility_address(facility_id, address_id)?  did you run analyze?


Re: Performance Optimization for Dummies 2 - the SQL

From
"Carlo Stonebanks"
Date:
> do we have a multi-column index on
> facility_address(facility_id, address_id)?  did you run analyze?

There is an index on facility_address on facility_id.

I didn't create an index on facility_address.address_id because I expected
joins to go in the other direction (from facility_address to address).
Nor did I create a multi-column index on facility_id, address_id because I
had yet to come up with a query that required that.

However, I still have a lot to learn about how SQL chooses its indexes, how
multi-column indexes are used, and when to use them (other than the
obvious - i.e. sort orders or relational expressions which request those
columns in one search expression)

Analyse is actually run every time a page of imported data loads into the
client program. This is currently set at 500 rows.

Carlo



Re: Performance Optimization for Dummies 2 - the SQL

From
Tom Lane
Date:
"Carlo Stonebanks" <stonec.register@sympatico.ca> writes:
> I didn't create an index on facility_address.address_id because I expected
> joins to go in the other direction (from facility_address to address).

Well, that's your problem right there ...
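
Concretely, the missing index would be something along these lines (the
index name is illustrative only):

create index facility_address_address_idx
    on mdx_core.facility_address (address_id);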

            regards, tom lane

Re: Performance Optimization for Dummies 2 - the SQL

From
"Merlin Moncure"
Date:
On 10/5/06, Carlo Stonebanks <stonec.register@sympatico.ca> wrote:
> > do we have a multi-column index on
> > facility_address(facility_id, address_id)?  did you run analyze?
>
> There is an index on facility_address on facility_id.
>
> I didn't create an index on facility_address.address_id because I expected
> joins to go in the other direction (from facility_address to address).
> Nor did I create a multi-column index on facility_id, address_id because I
> had yet to come up with a query that required that.

right. well, since you are filtering on address, I would consider
adding an index on address_id or a multi-column index on address_id,
facility_id (in addition to facility_id).  also, I'd consider removing
all the explicit joins, like this:

explain analyze select
     f.facility_id,
     fa.facility_address_id,
     a.address_id,
     f.facility_type_code,
     f.name,
     a.address,
     a.city,
     a.state_code,
     a.postal_code,
     a.country_code
  from
    mdx_core.facility f,
    mdx_core.facility_address fa,
    mdx_core.address a
  where
    fa.facility_id = f.facility_id and
    a.address_id = fa.address_id and
    a.country_code = 'US' and
    a.state_code = 'IL' and
    a.postal_code like '60640-5759'||'%'
    order by facility_id;

yet another way to write that where clause is:

    (fa.address_id, fa.facility_id) = (a.address_id, f.facility_id)  and
    a.country_code = 'US' and
    a.state_code = 'IL' and
    a.postal_code like '60640-5759'||'%'
    order by facility_id;

I personally only use explicit joins when doing outer joins, and even
then I push them out as far as possible.

I like the row constructor style better because it shows the key
relationships more clearly.  I don't think it makes a difference in
execution (go ahead and try it).  If you do make a multi-column key on
facility_address, though, make sure to put the key fields in left-to-right
order in the row constructor.  Try adding a multi-column key on
address_id and facility_id and run it this way.  In a proper design
you would have a primary key on these fields, but with imported data
you obviously have to make compromises :).
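
a sketch of that multi-column index (the name is just a placeholder):

create index facility_address_address_facility_idx
    on mdx_core.facility_address (address_id, facility_id);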

> However, I still have a lot to learn about how SQL chooses its indexes, how
> multi-column indexes are used, and when to use them (other than the
> obvious - i.e. sort orders or relational expressions which request those
> columns in one search expression)

well, it's kind of black magic but if the database is properly laid
out the function usually follows form pretty well.

> Analyse is actually run every time a page of imported data loads into the
> client program. This is currently set at 500 rows.

ok.

merlin

Re: Performance Optimization for Dummies 2 - the SQL

From
"Carlo Stonebanks"
Date:
Just to clarify: if I expect to join two tables that I expect to benefit from
indexed scans, I should create indexes on the joined columns on BOTH sides?

Carlo


"Tom Lane" <tgl@sss.pgh.pa.us> wrote in message
news:12239.1160054675@sss.pgh.pa.us...
> "Carlo Stonebanks" <stonec.register@sympatico.ca> writes:
>> I didn't create an index on facility_address.address_id because I
>> expected
>> joins to go in the other direction (from facility_address to address).
>
> Well, that's your problem right there ...
>
> regards, tom lane
>
> ---------------------------(end of broadcast)---------------------------
> TIP 3: Have you checked our extensive FAQ?
>
>               http://www.postgresql.org/docs/faq
>



Re: Performance Optimization for Dummies 2 - the SQL

From
Tom Lane
Date:
"Carlo Stonebanks" <stonec.register@sympatico.ca> writes:
> Just to clarify: if I expect to join two tables that I expect to benefit from
> indexed scans, I should create indexes on the joined columns on BOTH sides?

Well, it all depends on the queries you plan to issue ... but for the
particular query shown here, the lack of that index is the performance
bottleneck.

            regards, tom lane

Re: Performance Optimization for Dummies 2 - the SQL

From
"Carlo Stonebanks"
Date:
Oh, you hate explicit joins too? I started in Oracle and was dismayed to find
out what the SQL standard was. I especially miss the simplicity of (+) outer
joins.

I'll try adding the address_id index to facility_address and see what I get!

Carlo


""Merlin Moncure"" <mmoncure@gmail.com> wrote in message
news:b42b73150610050630t76e9eedeh45a382729d35f2f6@mail.gmail.com...
> On 10/5/06, Carlo Stonebanks <stonec.register@sympatico.ca> wrote:
>> > do we have an multi-column index on
>> > facility_address(facility_id, address_id)?  did you run analyze?
>>
>> There is an index on facility_address on facility_id.
>>
>> I didn't create an index on facility_address.address_id because I
>> expected
>> joins to go in the other direction (from facility_address to address).
>> Nor did I create a multi-column index on facility_id, address_id because
>> I
>> had yet to come up with a query that required that.
>
> right. well, since you are filtering on address, I would consider
> added an index on address_id or a multi column on address_id,
> facility_id (in addition to facility_id).  also, I'd consider removing
> all the explicit joins like this:
>
> explain analyze select
>     f.facility_id,
>     fa.facility_address_id,
>     a.address_id,
>     f.facility_type_code,
>     f.name,
>     a.address,
>     a.city,
>     a.state_code,
>     a.postal_code,
>     a.country_code
>  from
>    mdx_core.facility f,
>    mdx_core.facility_address fa,
>    mdx_core.address a
>  where
>    fa.facility_id = f.facility_id and
>    a.address_id = fa.address_id and
>    a.country_code = 'US' and
>    a.state_code = 'IL' and
>    a.postal_code like '60640-5759'||'%'
>    order by facility_id;
>
> yet another way to write that where clause is:
>
>    (fa_address_id, fa.facility_id) = (a.address_id, f.facility_id)  and
>    a.country_code = 'US' and
>    a.state_code = 'IL' and
>    a.postal_code like '60640-5759'||'%'
>    order by facility_id;
>
> I personally only use explicit joins when doing outer joins and even
> them push them out as far as possible.
>
> I like the row constructor style better because it shows the key
> relationships more clearly.  I don't think it makes a difference in
> execution (go ahead and try it).  If you do make a multi column key on
> facility_address, though, make sure to put they key fields in left to
> right order in the row constructor.   Try adding a multi key on
> address_id and facility_id and run it this way.  In a proper design
> you would have a primary key on these fields but with imported data
> you obviously have to make compromises :).
>
>> However, I still have a lot to learn about how SQL chooses its indexes,
>> how
>> multi-column indexes are used, and when to use them (other than the
>> obvious - i.e. sort orders or relational expressions which request those
>> columns in one search expression)
>
> well, it's kind of black magic but if the database is properly laid
> out the function usually follows form pretty well.
>
>> Analyse is actually run every time a page of imported data loads into the
>> client program. This is currently set at 500 rows.
>
> ok.
>
> merlin
>
> ---------------------------(end of broadcast)---------------------------
> TIP 2: Don't 'kill -9' the postmaster
>



Re: Performance Optimization for Dummies 2 - the SQL

From
"Carlo Stonebanks"
Date:
This didn't work right away, but DID work after running a VACUUM FULL. In
other words, I was still stuck with a sequential scan until after the
vacuum.

I turned autovacuum off in order to help with the import, but was performing
an ANALYZE with every 500 rows imported.

With autovacuum off for imports, how frequently should I VACUUM?



""Merlin Moncure"" <mmoncure@gmail.com> wrote in message
news:b42b73150610050630t76e9eedeh45a382729d35f2f6@mail.gmail.com...
> On 10/5/06, Carlo Stonebanks <stonec.register@sympatico.ca> wrote:
>> > do we have an multi-column index on
>> > facility_address(facility_id, address_id)?  did you run analyze?
>>
>> There is an index on facility_address on facility_id.
>>
>> I didn't create an index on facility_address.address_id because I
>> expected
>> joins to go in the other direction (from facility_address to address).
>> Nor did I create a multi-column index on facility_id, address_id because
>> I
>> had yet to come up with a query that required that.
>
> right. well, since you are filtering on address, I would consider
> added an index on address_id or a multi column on address_id,
> facility_id (in addition to facility_id).  also, I'd consider removing
> all the explicit joins like this:
>
> explain analyze select
>     f.facility_id,
>     fa.facility_address_id,
>     a.address_id,
>     f.facility_type_code,
>     f.name,
>     a.address,
>     a.city,
>     a.state_code,
>     a.postal_code,
>     a.country_code
>  from
>    mdx_core.facility f,
>    mdx_core.facility_address fa,
>    mdx_core.address a
>  where
>    fa.facility_id = f.facility_id and
>    a.address_id = fa.address_id and
>    a.country_code = 'US' and
>    a.state_code = 'IL' and
>    a.postal_code like '60640-5759'||'%'
>    order by facility_id;
>
> yet another way to write that where clause is:
>
>    (fa_address_id, fa.facility_id) = (a.address_id, f.facility_id)  and
>    a.country_code = 'US' and
>    a.state_code = 'IL' and
>    a.postal_code like '60640-5759'||'%'
>    order by facility_id;
>
> I personally only use explicit joins when doing outer joins and even
> them push them out as far as possible.
>
> I like the row constructor style better because it shows the key
> relationships more clearly.  I don't think it makes a difference in
> execution (go ahead and try it).  If you do make a multi column key on
> facility_address, though, make sure to put they key fields in left to
> right order in the row constructor.   Try adding a multi key on
> address_id and facility_id and run it this way.  In a proper design
> you would have a primary key on these fields but with imported data
> you obviously have to make compromises :).
>
>> However, I still have a lot to learn about how SQL chooses its indexes,
>> how
>> multi-column indexes are used, and when to use them (other than the
>> obvious - i.e. sort orders or relational expressions which request those
>> columns in one search expression)
>
> well, it's kind of black magic but if the database is properly laid
> out the function usually follows form pretty well.
>
>> Analyse is actually run every time a page of imported data loads into the
>> client program. This is currently set at 500 rows.
>
> ok.
>
> merlin
>
> ---------------------------(end of broadcast)---------------------------
> TIP 2: Don't 'kill -9' the postmaster
>



Re: Performance Optimization for Dummies 2 - the SQL

From
Scott Marlowe
Date:
On Fri, 2006-10-06 at 11:44, Carlo Stonebanks wrote:
> This didn't work right away, but DID work after running a VACUUM FULL. In
> other words, I was still stuck with a sequential scan until after the
> vacuum.
>
> I turned autovacuum off in order to help with the import, but was performing
> an ANALYZE with every 500 rows imported.
>
> With autovacuum off for imports, how frequently should I VACUUM?

Basically, once the query planner stops using seq scans you're usually good,
although sometimes there's a bit of a period where it'll be
using nested loops and then switch to merge joins etc...

Every 500 is probably a bit much.  After the first few thousand rows,
run an analyze, and after about 5 to 10 thousand another analyze and you
should be set.

Re: Performance Optimization for Dummies 2 - the SQL

From
"Merlin Moncure"
Date:
On 10/6/06, Scott Marlowe <smarlowe@g2switchworks.com> wrote:
> On Fri, 2006-10-06 at 11:44, Carlo Stonebanks wrote:
> > This didn't work right away, but DID work after running a VACUUM FULL. In
> > other words, I was still stuck with a sequential scan until after the
> > vacuum.
> >
> > I turned autovacuum off in order to help with the import, but was performing
> > an ANALYZE with every 500 rows imported.

how did you determine that it is done every 500 rows? this is the
default autovacuum parameter.  if you followed my earlier
recommendations, you are aware that autovacuum (which also analyzes)
is not running during bulk inserts, right?

imo, best way to do big data import/conversion is to:
1. turn off all extra features, like stats, logs, etc
2. use copy interface to load data into scratch tables with probably
all text fields
3. analyze (just once)
4. use big queries to transform, normalize, etc
5. drop scratch tables
6. set up postgresql.conf for production use, fsync, stats, etc
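
a rough sketch of steps 2-5 (table name, columns, and file path here are all
made up for illustration):

-- 2. load the raw data into a scratch table of text columns
create table scratch_import (col1 text, col2 text, col3 text);
copy scratch_import from '/tmp/import_file.csv' with csv;

-- 3. analyze once so the planner knows roughly how big the table is
analyze scratch_import;

-- 4. transform/normalize with set-based queries into the real tables
insert into mdx_core.facility (name, default_city, default_state_code)
select col1, col2, col3 from scratch_import;

-- 5. drop the scratch table when done
drop table scratch_import;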

important feature of analyze is to tell the planner approx. how big
the tables are.

merlin

Re: Performance Optimization for Dummies 2 - the SQL

From
"Carlo Stonebanks"
Date:
> how did you determine that it is done every 500 rows? this is the

The import program pages the import table - it is currently set at 500 rows
per page. With each page, I run an ANALYZE.

> default autovacuum parameter.  if you followed my earlier
> recommendations, you are aware that autovacuum (which also analyzes)
> is not running during bulk inserts, right?

It's intuitively obvious, but I can't do bulk inserts. It's just not the
nature of what we are doing with the data.

> imo, best way to do big data import/conversion is to:
> 1. turn off all extra features, like stats, logs, etc

done

> 2. use copy interface to load data into scratch tables with probably
> all text fields

done

> 3. analyze (just once)

I think this doesn't apply in our case, because we aren't doing bulk
inserts.

> 4. use big queries to transform, normalize, etc

This is currently being done programmatically. The nature of what we're
doing is suited to imperative, navigational logic rather than declarative,
data-set logic; just the opposite of what SQL likes, I know! If there's some
way to replace thousands of lines of analysis and decision trees with
ultrafast queries - great...

> important feature of analyze is to tell the planner approx. how big
> the tables are.

But the tables grow as the process progresses - would you not want the
server to re-evaluate its strategy periodically?

Carlo

>
> merlin
>



Re: Performance Optimization for Dummies 2 - the SQL

From
"Jim C. Nasby"
Date:
On Thu, Oct 05, 2006 at 09:30:45AM -0400, Merlin Moncure wrote:
> I personally only use explicit joins when doing outer joins, and even
> then I push them out as far as possible.

I used to be like that too, until I actually started using join syntax.
I now find it's *way* easier to identify what the join conditions are,
and to separate them from the rest of the where clause. It also makes it
pretty much impossible to mess up a join clause and get a Cartesian
product.

If you are going to put the join clauses in the WHERE clause, at least
put a space between the join stuff and the rest of the WHERE clause.

In any case, this is nothing but a matter of taste in this case, unless
you set join_collapse_limit to less than 3 (or maybe 4).
--
Jim Nasby                                            jim@nasby.net
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)

Re: Performance Optimization for Dummies 2 - the SQL

From
"Jim C. Nasby"
Date:
On Fri, Oct 06, 2006 at 02:53:35PM -0400, Merlin Moncure wrote:
> On 10/6/06, Scott Marlowe <smarlowe@g2switchworks.com> wrote:
> >On Fri, 2006-10-06 at 11:44, Carlo Stonebanks wrote:
> >> This didn't work right away, but DID work after running a VACUUM FULL. In
> >> other words, I was still stuck with a sequential scan until after the
> >> vacuum.
> >>
> >> I turned autovacuum off in order to help with the import, but was performing
> >> an ANALYZE with every 500 rows imported.
>
> how did you determine that it is done every 500 rows? this is the
> default autovacuum parameter.  if you followed my earlier

Note that that parameter doesn't mean you'll get an analyze every 500
rows.
--
Jim Nasby                                            jim@nasby.net
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)

Re: Performance Optimization for Dummies 2 - the SQL

From
"Merlin Moncure"
Date:
On 10/6/06, Carlo Stonebanks <stonec.register@sympatico.ca> wrote:
> > how did you determine that it is done every 500 rows? this is the
>
> The import program pages the import table - it is currently set at 500 rows
> per page. With each page, I run an ANALYZE.

right, I just wanted to make sure of something (you are doing it
properly).  really, analyze only needs to be run when tables go up an
order of magnitude in size or so, or a little bit less...like when the
table grows 50% or so.

> > default autovacuum parameter.  if you followed my earlier
> > recommendations, you are aware that autovacuum (which also analyzes)
> > is not running during bulk inserts, right?

> It's intuitively obvious, but I can't do bulk inserts. It's just not the
> nature of what we are doing with the data.

right.

> This is currently being done programmatically. The nature of what we're
> doing is suited to imperative, navigational logic rather than declarative,
> data-set logic; just the opposite of what SQL likes, I know! If there's some
> way to replace thousands of lines of analysis and decision trees with
> ultrafast queries - great...
>
> > important feature of analyze is to tell the planner approx. how big
> > the tables are.
>
> But the tables grow as the process progresses - would you not want the
> server to re-evaluate its strategy periodically?

yes, but it makes the most difference when the tables are small so as
to keep the planner from doing seqscans as they grow.

well it looks like you are on the right track, hopefully the process
runs in an acceptable amount of time at this point.

merlin

Re: Performance Optimization for Dummies 2 - the SQL

From
"Merlin Moncure"
Date:
On 10/8/06, Jim C. Nasby <jim@nasby.net> wrote:
> On Thu, Oct 05, 2006 at 09:30:45AM -0400, Merlin Moncure wrote:
> > I personally only use explicit joins when doing outer joins, and even
> > then I push them out as far as possible.
>
> I used to be like that too, until I actually started using join syntax.
> I now find it's *way* easier to identify what the join conditions are,
> and to separate them from the rest of the where clause. It also makes it
> pretty much impossible to mess up a join clause and get a Cartesian
> product.
>
> If you are going to put the join clauses in the WHERE clause, at least
> put a space between the join stuff and the rest of the WHERE clause.

I use the row constructor to define key relationships for non-trivial
queries, i.e.
select f.*, b.* from foo f, bar b
  where (f.a, f.b, f.c) = (b.a, b.b, b.c) -- etc

I am a really big fan of the row constructor, especially since we can
do proper key ordering in 8.2.

by convention I do relating first, filtering second.  for really
complex queries I will inline comment each line of the where clause:

where
  (p.a) = (pd.b) and -- match part to part description
  pd.type != 'A' -- not using archived parts

as to unwanted cartesian products, I test all production queries in the
shell first.  The really complex ones are somewhat of a trial-and-error
process after all these years :)

being something of a mathematical guy, I love sql for its (mostly)
functional nature but hate the grammar.  reminds me a little bit too
much of cobol.  the join syntax is just too much for me, although with
left/right/natural joins there is no other way, and I very much agree
with Carlo wrt oracle's nonstandard join syntax being more elegant.

merlin

Re: Performance Optimization for Dummies 2 - the SQL

From
"Carlo Stonebanks"
Date:
Hi Merlin,

Well, I'm back. First of all, thanks for your dogged determination to help
me out - it is much appreciated. I owe you a beer or twelve.

The import has been running for a week. The import program got faster as I
tuned things. I capture the dynamic SQL statements generated by the app, as
well as an accompanying EXPLAIN, and write them out to an XML file. I turned
off seq scans in the config and ran a trial import. I knew that with seq
scans off, if I saw a seq scan in my log it was because there were no
indexes available to satisfy the query - I adjusted accordingly, and this
worked really well.

When the import runs against an empty or small db, it's blisteringly fast
(considering that it's a heuristically based process). This proved that it
wasn't the app or the SQL connection that was slow. Once again, though, as
the data db grows, it slows down. Now it's crawling again. All of the
queries appear to be fine, taking advantage of the indexes. There is ONE
query, though, that seems to be the troublemaker - the same one I had
brought up before. I believe that it is one sub-query that is causing the
problem, taking what appears to be 500 to 1000+ ms to run every time. (See
below.)

Curiously, it's using index scans, and it really looks like a simple query
to me. I am completely baffled. The two tables in question have about 800K
rows each - not exactly an incredible number. The EXPLAIN is simple, but the
performance is dreadful. All the other queries run much faster than this -
does ANYTHING about this query strike you as odd?

Carlo

/*
Find all facilities that do not have full address information
but do have default location information that indicates
the facility's US zip code.
NULL values cast as columns are placeholders to allow
this sub-query to be unioned with another subquery
that contains full address data
*/
select
    f.facility_id,
    null as facility_address_id,
    null as address_id,
    f.facility_type_code,
    f.name,
    null as address,
    f.default_city as city,
    f.default_state_code as state_code,
    f.default_postal_code as postal_code,
    f.default_country_code as country_code,
    null as parsed_unit
from
    mdx_core.facility as f
left outer join mdx_core.facility_address as fa
    on fa.facility_id = f.facility_id
where
     facility_address_id is null
     and f.default_country_code = 'US'
     and (f.default_postal_code = '14224-1945' or f.default_postal_code =
'14224')

"Nested Loop Left Join  (cost=22966.70..23594.84 rows=93 width=71) (actual
time=662.075..662.075 rows=0 loops=1)"
"  Filter: ("inner".facility_address_id IS NULL)"
"  ->  Bitmap Heap Scan on facility f  (cost=22966.70..23231.79 rows=93
width=71) (actual time=661.907..661.929 rows=7 loops=1)"
"        Recheck Cond: (((default_country_code = 'US'::bpchar) AND
((default_postal_code)::text = '14224-1945'::text)) OR
((default_country_code = 'US'::bpchar) AND ((default_postal_code)::text =
'14224'::text)))"
"        ->  BitmapOr  (cost=22966.70..22966.70 rows=93 width=0) (actual
time=661.891..661.891 rows=0 loops=1)"
"              ->  Bitmap Index Scan on
facility_country_state_postal_code_idx  (cost=0.00..11483.35 rows=47
width=0) (actual time=374.284..374.284 rows=7 loops=1)"
"                    Index Cond: ((default_country_code = 'US'::bpchar) AND
((default_postal_code)::text = '14224-1945'::text))"
"              ->  Bitmap Index Scan on
facility_country_state_postal_code_idx  (cost=0.00..11483.35 rows=47
width=0) (actual time=287.599..287.599 rows=0 loops=1)"
"                    Index Cond: ((default_country_code = 'US'::bpchar) AND
((default_postal_code)::text = '14224'::text))"
"  ->  Index Scan using facility_address_facility_address_address_type_idx
on facility_address fa  (cost=0.00..3.89 rows=1 width=8) (actual
time=0.014..0.016 rows=1 loops=7)"
"        Index Cond: (fa.facility_id = "outer".facility_id)"
"Total runtime: 662.203 ms"



Re: Performance Optimization for Dummies 2 - the SQL

From
Tom Lane
Date:
"Carlo Stonebanks" <stonec.register@sympatico.ca> writes:
> Curiously, it's using index scans, and it really looks like a simple query
> to me. I am completely baffled. The two tables in question have about 800K
> rows each - not exactly an incredible number. The EXPLAIN is simple, but the
> performance is dreadful. All the other queries run much faster than this -
> does ANYTHING about this query strike you as odd?

Lots of dead rows perhaps?  The EXPLAIN estimates look a bit out of line
--- 11483 cost units to fetch 47 index entries is an order or two of
magnitude higher than it ought to be.  The real time also seems to be
concentrated in that index scan.  What are the physical sizes of the
table and index?  (VACUUM VERBOSE output for the facility table might
tell something.)

            regards, tom lane

Re: Performance Optimization for Dummies 2 - the SQL

From
"Carlo Stonebanks"
Date:
Hey Tom, thanks for jumping in. Nothing on TV on a Sunday afternoon? ;-)
Appreciate the input.

Here is vacuum verbose output for both the tables in question.

Carlo


INFO:  vacuuming "mdx_core.facility"
INFO:  index "facility_pkey" now contains 832399 row versions in 3179 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.09s/0.04u sec elapsed 0.21 sec.
INFO:  index "facility_country_state_city_idx" now contains 832444 row
versions in 6630 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.15s/0.07u sec elapsed 43.81 sec.
INFO:  index "facility_country_state_postal_code_idx" now contains 832499
row versions in 6658 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.23s/0.07u sec elapsed 0.37 sec.
INFO:  "facility": found 0 removable, 832398 nonremovable row versions in
15029 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.67s/0.32u sec elapsed 44.71 sec.
INFO:  vacuuming "pg_toast.pg_toast_58570311"
INFO:  index "pg_toast_58570311_index" now contains 0 row versions in 1
pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "pg_toast_58570311": found 0 removable, 0 nonremovable row versions
in 0 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.

Query returned successfully with no result in 44875 ms.

INFO:  vacuuming "mdx_core.facility_address"
INFO:  index "facility_address_pkey" now contains 772770 row versions in
2951 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.10s/0.04u sec elapsed 9.73 sec.
INFO:  index "facility_address_address_idx" now contains 772771 row versions
in 2750 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.04s/0.04u sec elapsed 0.34 sec.
INFO:  index "facility_address_facility_address_address_type_idx" now
contains 772773 row versions in 3154 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.01s/0.04u sec elapsed 0.06 sec.
INFO:  "facility_address": found 0 removable, 772747 nonremovable row
versions in 7969 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.39s/0.18u sec elapsed 10.70 sec.

Query returned successfully with no result in 10765 ms.




"Tom Lane" <tgl@sss.pgh.pa.us> wrote in message
news:2808.1160951238@sss.pgh.pa.us...
> "Carlo Stonebanks" <stonec.register@sympatico.ca> writes:
>> Curiously, it's using index scans, and it really looks like a simple
>> query
>> to me. I am completely baffled. The two tables in question have about
>> 800K
>> rows each - not exactly an incredible number. The EXPLAIN is simple, but
>> the
>> performance is dreadful. All the other queries run much faster than
>> this -
>> does ANYTHING about this query strike you as odd?
>
> Lots of dead rows perhaps?  The EXPLAIN estimates look a bit out of line
> --- 11483 cost units to fetch 47 index entries is an order or two of
> magnitude higher than it ought to be.  The real time also seems to be
> concentrated in that index scan.  What are the physical sizes of the
> table and index?  (VACUUM VERBOSE output for the facility table might
> tell something.)
>
> regards, tom lane
>
> ---------------------------(end of broadcast)---------------------------
> TIP 2: Don't 'kill -9' the postmaster
>



Re: Performance Optimization for Dummies 2 - the SQL

From
"Merlin Moncure"
Date:
On 10/15/06, Carlo Stonebanks <stonec.register@sympatico.ca> wrote:

Can you try temporarily disabling bitmap scans and see what comes up?
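
(i.e. something along these lines, run in the same session as the problem
query - the query below is a simplified stand-in:)

set enable_bitmapscan = off;
explain analyze
select f.facility_id
from mdx_core.facility as f
where f.default_country_code = 'US'
  and (f.default_postal_code = '14224-1945' or f.default_postal_code = '14224');
set enable_bitmapscan = on;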

merlin

Re: Performance Optimization for Dummies 2 - the SQL

From
"Carlo Stonebanks"
Date:
> Can you try temporarily disabling bitmap scans and see what comes up?

Well, that's slowing everything down. I've got a couple of results, below

1) Bitmap scan off, but seq scan enabled.
2) Bitmap scan and seq scan off
3) Bitmap scan back on, seq scan back on, and a new index created
4) VACUUM VERBOSE on the tables involved
5) Original SQL with original EXPLAIN to show the code that started this.

Carlo

1) Bitmap scan off, but seq scan enabled. It created a surprisingly expensive
seq scan.

"Nested Loop Left Join  (cost=0.00..34572.43 rows=109 width=71) (actual
time=1536.827..1536.827 rows=0 loops=1)"
"  Filter: ("inner".facility_address_id IS NULL)"
"  ->  Seq Scan on facility f  (cost=0.00..34146.91 rows=109 width=71)
(actual time=621.100..1536.606 rows=7 loops=1)"
"        Filter: ((default_country_code = 'US'::bpchar) AND
(((default_postal_code)::text = '14224-1945'::text) OR
((default_postal_code)::text = '14224'::text)))"
"  ->  Index Scan using facility_address_facility_address_address_type_idx
on facility_address fa  (cost=0.00..3.89 rows=1 width=8) (actual
time=0.020..0.023 rows=1 loops=7)"
"        Index Cond: (fa.facility_id = "outer".facility_id)"
"Total runtime: 1536.957 ms"

2) So I turned both bitmap scan and seq scan off - now we get index scans,
but the performance is surprisingly horrible:

"Nested Loop Left Join  (cost=0.00..39286.55 rows=109 width=71) (actual
time=3598.462..3598.462 rows=0 loops=1)"
"  Filter: ("inner".facility_address_id IS NULL)"
"  ->  Index Scan using facility_pkey on facility f  (cost=0.00..38861.03
rows=109 width=71) (actual time=1500.690..3598.201 rows=7 loops=1)"
"        Filter: ((default_country_code = 'US'::bpchar) AND
(((default_postal_code)::text = '14224-1945'::text) OR
((default_postal_code)::text = '14224'::text)))"
"  ->  Index Scan using facility_address_facility_address_address_type_idx
on facility_address fa  (cost=0.00..3.89 rows=1 width=8) (actual
time=0.024..0.027 rows=1 loops=7)"
"        Index Cond: (fa.facility_id = "outer".facility_id)"
"Total runtime: 3598.600 ms"

3) So I turned bitmap scan back on, seq scan back on, and created an index
EXPLICITLY to satisfy this condition. Intuitively, I thought that
combinations of other indexes should have satisfied the optimizer, but
figured better overkill than nothing. I thought this would solve it - but
no. Why is using a BRAND NEW INDEX - which is unlikely to be corrupt - so
expensive?

"Nested Loop Left Join  (cost=25300.96..26043.67 rows=110 width=71) (actual
time=1339.216..1339.216 rows=0 loops=1)"
"  Filter: ("inner".facility_address_id IS NULL)"
"  ->  Bitmap Heap Scan on facility f  (cost=25300.96..25614.42 rows=110
width=71) (actual time=1339.043..1339.066 rows=7 loops=1)"
"        Recheck Cond: (((default_country_code = 'US'::bpchar) AND
((default_postal_code)::text = '14224-1945'::text)) OR
((default_country_code = 'US'::bpchar) AND ((default_postal_code)::text =
'14224'::text)))"
"        ->  BitmapOr  (cost=25300.96..25300.96 rows=110 width=0) (actual
time=1339.027..1339.027 rows=0 loops=1)"
"              ->  Bitmap Index Scan on
facility_facility_country_state_postal_code_idx  (cost=0.00..12650.48
rows=55 width=0) (actual time=796.146..796.146 rows=7 loops=1)"
"                    Index Cond: ((default_country_code = 'US'::bpchar) AND
((default_postal_code)::text = '14224-1945'::text))"
"              ->  Bitmap Index Scan on
facility_facility_country_state_postal_code_idx  (cost=0.00..12650.48
rows=55 width=0) (actual time=542.873..542.873 rows=0 loops=1)"
"                    Index Cond: ((default_country_code = 'US'::bpchar) AND
((default_postal_code)::text = '14224'::text))"
"  ->  Index Scan using facility_address_facility_address_address_type_idx
on facility_address fa  (cost=0.00..3.89 rows=1 width=8) (actual
time=0.014..0.016 rows=1 loops=7)"
"        Index Cond: (fa.facility_id = "outer".facility_id)"
"Total runtime: 1339.354 ms"

4) VACUUM VERBOSE on the tables involved. Note how much more painful in
elapsed time it is to vacuum facility vs facility_address, even though the
number of rows is comparable:

INFO:  vacuuming "mdx_core.facility"
INFO:  index "facility_pkey" now contains 964123 row versions in 3682 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.03s/0.03u sec elapsed 0.18 sec.
INFO:  index "facility_country_state_city_idx" now contains 964188 row
versions in 7664 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.25s/0.17u sec elapsed 84.14 sec.
INFO:  index "facility_country_state_postal_code_idx" now contains 964412
row versions in 7689 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.42s/0.10u sec elapsed 137.12 sec.
INFO:  index "facility_facility_country_state_city_idx" now contains 964493
row versions in 6420 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.17s/0.09u sec elapsed 2.23 sec.
INFO:  index "facility_facility_country_state_postal_code_idx" now contains
964494 row versions in 6895 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.01s/0.01u sec elapsed 0.95 sec.
INFO:  "facility": found 0 removable, 964123 nonremovable row versions in
17398 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.90s/0.57u sec elapsed 224.80 sec.
INFO:  vacuuming "pg_toast.pg_toast_58570311"
INFO:  index "pg_toast_58570311_index" now contains 0 row versions in 1
pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.01 sec.
INFO:  "pg_toast_58570311": found 0 removable, 0 nonremovable row versions
in 0 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.01 sec.

Query returned successfully with no result in 224903 ms.

INFO:  vacuuming "mdx_core.facility_address"
INFO:  index "facility_address_pkey" now contains 893157 row versions in
3411 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.17s/0.04u sec elapsed 11.10 sec.
INFO:  index "facility_address_address_idx" now contains 893157 row versions
in 3164 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.07s/0.04u sec elapsed 0.61 sec.
INFO:  index "facility_address_facility_address_address_type_idx" now
contains 893157 row versions in 3797 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.01s/0.00u sec elapsed 0.07 sec.
INFO:  "facility_address": found 0 removable, 893139 nonremovable row
versions in 9210 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.26s/0.15u sec elapsed 12.14 sec.

Query returned successfully with no result in 12297 ms.





5) Here is the original query, plus original explain analyze. Number of rows
have increased since this was run, so the costs are lower, but still
significant:

/*
Find all facilities that do not have full address information
but do have default location information that indicates
the facility's US zip code.
NULL values cast as columns are placeholders to allow
this sub-query to be unioned with another subquery
that contains full address data
*/
select
    f.facility_id,
    null as facility_address_id,
    null as address_id,
    f.facility_type_code,
    f.name,
    null as address,
    f.default_city as city,
    f.default_state_code as state_code,
    f.default_postal_code as postal_code,
    f.default_country_code as country_code,
    null as parsed_unit
from
    mdx_core.facility as f
left outer join mdx_core.facility_address as fa
    on fa.facility_id = f.facility_id
where
     facility_address_id is null
     and f.default_country_code = 'US'
     and (f.default_postal_code = '14224-1945' or f.default_postal_code =
'14224')

"Nested Loop Left Join  (cost=22966.70..23594.84 rows=93 width=71) (actual
time=662.075..662.075 rows=0 loops=1)"
"  Filter: ("inner".facility_address_id IS NULL)"
"  ->  Bitmap Heap Scan on facility f  (cost=22966.70..23231.79 rows=93
width=71) (actual time=661.907..661.929 rows=7 loops=1)"
"        Recheck Cond: (((default_country_code = 'US'::bpchar) AND
((default_postal_code)::text = '14224-1945'::text)) OR
((default_country_code = 'US'::bpchar) AND ((default_postal_code)::text =
'14224'::text)))"
"        ->  BitmapOr  (cost=22966.70..22966.70 rows=93 width=0) (actual
time=661.891..661.891 rows=0 loops=1)"
"              ->  Bitmap Index Scan on
facility_country_state_postal_code_idx  (cost=0.00..11483.35 rows=47
width=0) (actual time=374.284..374.284 rows=7 loops=1)"
"                    Index Cond: ((default_country_code = 'US'::bpchar) AND
((default_postal_code)::text = '14224-1945'::text))"
"              ->  Bitmap Index Scan on
facility_country_state_postal_code_idx  (cost=0.00..11483.35 rows=47
width=0) (actual time=287.599..287.599 rows=0 loops=1)"
"                    Index Cond: ((default_country_code = 'US'::bpchar) AND
((default_postal_code)::text = '14224'::text))"
"  ->  Index Scan using facility_address_facility_address_address_type_idx
on facility_address fa  (cost=0.00..3.89 rows=1 width=8) (actual
time=0.014..0.016 rows=1 loops=7)"
"        Index Cond: (fa.facility_id = "outer".facility_id)"
"Total runtime: 662.203 ms"




Re: Performance Optimization for Dummies 2 - the SQL

From
"Jim C. Nasby"
Date:
I think there's 2 things that would help this case. First, partition on
country. You can either do this on a table level or on an index level
by putting where clauses on the indexes (index method would be the
fastest one to test, since it's just new indexes). That should shrink
the size of that index noticeably.
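
(As a sketch, the index-level variant could look something like this; the
index name is illustrative:)

-- partial index: country_code drops out of the key entirely and only
-- US rows are indexed, so the index is considerably smaller
create index facility_us_postal_code_idx
  on mdx_core.facility (default_postal_code)
  where default_country_code = 'US';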

The other thing is to try and get the planner to not double-scan the
index. If you add the following, I think it will scan the index once for
the LIKE, and then just filter whatever it finds to match the other
conditions.

      and f.default_postal_code LIKE '14224%'


--
Jim Nasby                                            jim@nasby.net
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)

Re: Performance Optimization for Dummies 2 - the SQL

From
"Merlin Moncure"
Date:
On 10/15/06, Carlo Stonebanks <stonec.register@sympatico.ca> wrote:
> that contains full address data
> */
> select
>     f.facility_id,
>     null as facility_address_id,
>     null as address_id,
>     f.facility_type_code,
>     f.name,
>     null as address,
>     f.default_city as city,
>     f.default_state_code as state_code,
>     f.default_postal_code as postal_code,
>     f.default_country_code as country_code,
>     null as parsed_unit
> from
>     mdx_core.facility as f
> left outer join mdx_core.facility_address as fa
>     on fa.facility_id = f.facility_id
> where
>      facility_address_id is null
>      and f.default_country_code = 'US'
>      and (f.default_postal_code = '14224-1945' or f.default_postal_code =
> '14224')

what is the facility_address_id is null all about? remove it since you
hardcode it to true in select.

you have a two-part key on facility (country code, postal code), right?

merlin

Re: Performance Optimization for Dummies 2 - the SQL

From
"Carlo Stonebanks"
Date:
> what is the facility_address_id is null all about? remove it since you
> hardcode it to true in select.

The facility_address_id is null statement is necessary, as this is a
sub-query from a union clause and I want to optimise the query with the
original logic intact. The value is not hard coded to true but rather to
null. Admittedly, it's redundant but I put it there to make sure that I
matched up the columns from the other select in the union clause.

> you have a two part part key on facility(country code, postal code),
> right?

The indexes and constraints are below. If you see redundancy, this was from
vain attempts to please the optimiser gods.

Carlo

ALTER TABLE mdx_core.facility
  ADD CONSTRAINT facility_pkey PRIMARY KEY(facility_id);

CREATE INDEX facility_country_state_city_idx
  ON mdx_core.facility
  USING btree
  (default_country_code, default_state_code, lower(default_city::text));

CREATE INDEX facility_country_state_postal_code_idx
  ON mdx_core.facility
  USING btree
  (default_country_code, default_state_code, default_postal_code);

CREATE INDEX facility_facility_country_state_city_idx
  ON mdx_core.facility
  USING btree
  (facility_id, default_country_code, default_state_code,
lower(default_city::text));

CREATE INDEX facility_facility_country_state_postal_code_idx
  ON mdx_core.facility
  USING btree
  (facility_id, default_country_code, default_state_code,
default_postal_code);


""Merlin Moncure"" <mmoncure@gmail.com> wrote in message
news:b42b73150610161407l55e66854g94b31de6dd511f16@mail.gmail.com...
> On 10/15/06, Carlo Stonebanks <stonec.register@sympatico.ca> wrote:
>> that contains full address data
>> */
>> select
>>     f.facility_id,
>>     null as facility_address_id,
>>     null as address_id,
>>     f.facility_type_code,
>>     f.name,
>>     null as address,
>>     f.default_city as city,
>>     f.default_state_code as state_code,
>>     f.default_postal_code as postal_code,
>>     f.default_country_code as country_code,
>>     null as parsed_unit
>> from
>>     mdx_core.facility as f
>> left outer join mdx_core.facility_address as fa
>>     on fa.facility_id = f.facility_id
>> where
>>      facility_address_id is null
>>      and f.default_country_code = 'US'
>>      and (f.default_postal_code = '14224-1945' or f.default_postal_code =
>> '14224')
>
> what is the facility_address_id is null all about? remove it since you
> hardcode it to true in select.
>
> you have a two part part key on facility(country code, postal code),
> right?
>
> merlin
>
> ---------------------------(end of broadcast)---------------------------
> TIP 6: explain analyze is your friend
>



Re: Performance Optimization for Dummies 2 - the SQL

From
"Carlo Stonebanks"
Date:
>I think there's 2 things that would help this case. First, partition on
> country. You can either do this on a table level or on an index level
> by putting where clauses on the indexes (index method would be the
> fastest one to test, since it's just new indexes). That should shrink
> the size of that index noticably.

I'm afraid I don't quite understand this, or how to 'partition' this at a
table level. Right now, the table consists of ONLY US addresses, so I don't
know if I would expect a performance improvement in changing the table or
the indexes, as the indexes would not reduce anything.

> The other thing is to try and get the planner to not double-scan the
> index. If you add the following, I think it will scan the index once for
> the LIKE, and then just filter whatever it finds to match the other
> conditions.
>
>      and f.default_postal_code LIKE '14224%'

I did try this - nothing significant came from the results (see below).

thanks,

Carlo

explain analyze select
   f.facility_id,
   null as facility_address_id,
   null as address_id,
   f.facility_type_code,
   f.name,
   null as address,
   f.default_city as city,
   f.default_state_code as state_code,
   f.default_postal_code as postal_code,
   f.default_country_code as country_code,
   null as parsed_unit
from
   mdx_core.facility as f
left outer join mdx_core.facility_address as fa
   on fa.facility_id = f.facility_id
where
   facility_address_id is null
   and f.default_country_code = 'US'
   and f.default_postal_code like '14224%'
   and (f.default_postal_code = '14224-1945' or f.default_postal_code =
'14224')

"Nested Loop Left Join  (cost=26155.38..26481.58 rows=1 width=71) (actual
time=554.138..554.138 rows=0 loops=1)"
"  Filter: ("inner".facility_address_id IS NULL)"
"  ->  Bitmap Heap Scan on facility f  (cost=26155.38..26477.68 rows=1
width=71) (actual time=554.005..554.025 rows=7 loops=1)"
"        Recheck Cond: (((default_country_code = 'US'::bpchar) AND
((default_postal_code)::text = '14224-1945'::text)) OR
((default_country_code = 'US'::bpchar) AND ((default_postal_code)::text =
'14224'::text)))"
"        Filter: ((default_postal_code)::text ~~ '14224%'::text)"
"        ->  BitmapOr  (cost=26155.38..26155.38 rows=113 width=0) (actual
time=553.983..553.983 rows=0 loops=1)"
"              ->  Bitmap Index Scan on
facility_facility_country_state_postal_code_idx  (cost=0.00..13077.69
rows=57 width=0) (actual time=313.156..313.156 rows=7 loops=1)"
"                    Index Cond: ((default_country_code = 'US'::bpchar) AND
((default_postal_code)::text = '14224-1945'::text))"
"              ->  Bitmap Index Scan on
facility_facility_country_state_postal_code_idx  (cost=0.00..13077.69
rows=57 width=0) (actual time=240.819..240.819 rows=0 loops=1)"
"                    Index Cond: ((default_country_code = 'US'::bpchar) AND
((default_postal_code)::text = '14224'::text))"
"  ->  Index Scan using facility_address_facility_address_address_type_idx
on facility_address fa  (cost=0.00..3.89 rows=1 width=8) (actual
time=0.010..0.012 rows=1 loops=7)"
"        Index Cond: (fa.facility_id = "outer".facility_id)"
"Total runtime: 554.243 ms"



Re: Performance Optimization for Dummies 2 - the SQL

From
Shaun Thomas
Date:
On Monday 16 October 2006 16:37, Carlo Stonebanks wrote:

> The facility_address_id is null statement is necessary, as this is a
> sub-query from a union clause and I want to optimise the query with
> the original logic intact. The value is not hard coded to true but
> rather to null.

Heh, you neglect to mention that this query is discovering facilities that
do *not* have an address entry, which makes the "is null" a major
necessity.  With that, how did a "not exists (blabla facility_address
blabla)" subquery that gets the same effect treat you?  How about an "IN
(blabla LIMIT 1)" ?

--

Shaun Thomas
Database Administrator

Leapfrog Online
807 Greenwood Street
Evanston, IL 60201
Tel. 847-440-8253
Fax. 847-570-5750
www.leapfrogonline.com

Re: Performance Optimization for Dummies 2 - the SQL

From
"Carlo Stonebanks"
Date:
Sorry, I didn't point it out because an earlier post included the query with
documentation - that post got lost... or at least *I* can't see it.

The other half of the union renders the facilities that DO have addresses.
(The performance problem itself I have finally sorted out by creating
indexes which are more explicit - my oversight, really!)

The original query was a slightly more complex outer join, which I then
decomposed to an explicit union with two halves - one half handling the
explicit "facility_address_id is null" portion, the other half handling the
"is not null" portion (implicitly because of the normal join between
facility and facility_address).

I hadn't considered the "not exists" option - it's obvious when you look at
the sub-query by itself, but didn't strike me before I broke it out of the
union and you mentioned it. I was just under the impression that getting
this sub-query to work would have produced the most clear, straightforward
ANALYZE results.

Carlo

"Shaun Thomas" <sthomas@leapfrogonline.com> wrote in message
news:200610161728.43193.sthomas@leapfrogonline.com...
> On Monday 16 October 2006 16:37, Carlo Stonebanks wrote:
>
>> The facility_address_id is null statement is necessary, as this is a
>> sub-query from a union clause and I want to optimise the query with
>> the original logic intact. The value is not hard coded to true but
>> rather to null.
>
> Heh, you neglect to mention that this query is discovering faculty who
> do *not* have an address entry, which makes the "is null" a major
> necessity.  With that, how did a "not exists (blabla faculty_address
> blabla)" subquery to get the same effect treat you?  How about an "IN
> (blabla LIMIT 1)" ?
>
> --
>
> Shaun Thomas
> Database Administrator
>
> Leapfrog Online
> 807 Greenwood Street
> Evanston, IL 60201
> Tel. 847-440-8253
> Fax. 847-570-5750
> www.leapfrogonline.com
>
> ---------------------------(end of broadcast)---------------------------
> TIP 3: Have you checked our extensive FAQ?
>
>               http://www.postgresql.org/docs/faq
>



Re: Performance Optimization for Dummies 2 - the SQL

From
"Carlo Stonebanks"
Date:
> you have a two part part key on facility(country code, postal code),
> right?

Well, I'm glad you pointed it out, because I THOUGHT I had created it, but
apparently I haven't -- I only noticed that it was missing after I listed
all the other indexes. Looks like this query is one of the victims of a db
structure corruption I suffered when transferring the schema over from
development into production.

(Well, that's my excuse and I'm sticking to it!)

Thanks for all the help - I've reduced the execution time to 1/10 of its
original time.
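
(For the record, the index that was missing is presumably along these lines;
the exact name is a guess:)

create index facility_country_postal_code_idx
  on mdx_core.facility
  using btree
  (default_country_code, default_postal_code);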

Carlo



Re: Performance Optimization for Dummies 2 - the SQL

From
"Jim C. Nasby"
Date:
On Mon, Oct 16, 2006 at 05:56:54PM -0400, Carlo Stonebanks wrote:
> >I think there's 2 things that would help this case. First, partition on
> > country. You can either do this on a table level or on an index level
> > by putting where clauses on the indexes (index method would be the
> > fastest one to test, since it's just new indexes). That should shrink
> > the size of that index noticably.
>
> I'm afraid I don't quite understand this, or how to 'partition' this at a
> table level. Right now, the table consists of ONLY US addresses, so I don't
> know if I would expect a performance improvement in changing the table or
> the indexes as the indexes would not reduce anything.

It will help because you can then drop country_code from the index,
making it smaller.

> > The other thing is to try and get the planner to not double-scan the
> > index. If you add the following, I think it will scan the index once for
> > the LIKE, and then just filter whatever it finds to match the other
> > conditions.
> >
> >      and f.default_postal_code LIKE '14224%'

Hrm... well, first step would be to drop the explicit postal code tests
just to validate that it's faster to do the LIKE than it is to do the
two explicit tests. If that proves to be the case, you can wrap that in
a subquery, and put the final where clause in the outer part of the
query. You'll probably have to use the OFFSET 0 hack, too.
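
(Roughly like this, assuming the LIKE test alone does prove faster; the
OFFSET 0 fences the subquery so the planner won't flatten it:)

select *
from (
    select f.facility_id, f.name, f.default_postal_code
    from mdx_core.facility as f
    where f.default_country_code = 'US'
      and f.default_postal_code like '14224%'
    offset 0
) as sub
where sub.default_postal_code in ('14224-1945', '14224');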


--
Jim Nasby                                            jim@nasby.net
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)

index growth problem

From
Graham Davis
Date:
I have a question about index growth.

The way I understand it, dead tuples in indexes were not reclaimed by
VACUUM commands in the past.  However, I've read in a few forum posts
that this was changed somewhere between 7.4 and 8.0.

I'm having an issue where my GIST indexes are growing quite large, and
running a VACUUM doesn't appear to remove the dead tuples.  For example,
if I check out the size of an index before running any VACUUM:

select pg_relation_size('asset_positions_position_idx');
 pg_relation_size
------------------
         11624448
(1 row)

The size is about 11 MB.  If I run a VACUUM command in verbose mode, I see
this about the index:

INFO:  index "asset_positions_position_idx" now contains 4373 row
versions in 68 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.16 sec.

When I run the same command to find the size after the VACUUM, it hasn't
changed.  However, if I drop and then recreate this index, the size
becomes much smaller (almost half the size):

drop index asset_positions_position_idx;
DROP INDEX

CREATE INDEX asset_positions_position_idx ON asset_positions USING GIST
(position GIST_GEOMETRY_OPS);
CREATE INDEX

select pg_relation_size('asset_positions_position_idx');
 pg_relation_size
------------------
          6225920
(1 row)

Is there something I am missing here, or is the reclaiming of dead
tuples for these indexes just not working when I run a VACUUM?  Is it
supposed to work?

--
Graham Davis
Refractions Research Inc.
gdavis@refractions.net


Re: index growth problem

From
"Jim C. Nasby"
Date:
On Wed, Oct 18, 2006 at 03:20:19PM -0700, Graham Davis wrote:
> I have a question about index growth.
>
> The way I understand it, dead tuples in indexes were not reclaimed by
> VACUUM commands in the past.  However, I've read in a few forum posts
> that this was changed somewhere between 7.4 and 8.0.

There was a change to indexes that made vacuum more effective; I don't
remember the details off-hand.

> I'm having an issue where my GIST indexes are growing quite large, and
> running a VACUUM doesn't appear to remove the dead tuples.  For example,
> if I check out the size an index before running any VACUUM :
>
> select pg_relation_size('asset_positions_position_idx');
> pg_relation_size
> ------------------
>         11624448
> (1 row)
>
> The size is about 11Mb.  If I run a VACUUM command in verbose, I see
> this about the index:
>
> INFO:  index "asset_positions_position_idx" now contains 4373 row
> versions in 68 pages
> DETAIL:  0 index pages have been deleted, 0 are currently reusable.
> CPU 0.00s/0.00u sec elapsed 0.16 sec.
>
> When I run the same command to find the size after the VACUUM, it hasn't
> changed.  However, if I drop and then recreate this index, the size
> becomes much smaller (almost half the size):
>
> drop index asset_positions_position_idx;
> DROP INDEX
>
> CREATE INDEX asset_positions_position_idx ON asset_positions USING GIST
> (position GIST_GEOMETRY_OPS);
> CREATE INDEX
>
> select pg_relation_size('asset_positions_position_idx');
> pg_relation_size
> ------------------
>          6225920
> (1 row)
>
> Is there something I am missing here, or is the reclaiming of dead
> tuples for these indexes just not working when I run a VACUUM?  Is it
> suppose to work?

That's not really a useful test to see if VACUUM is working. VACUUM can
only trim space off the end of a relation (index or table), where by
'end' I mean the end of the last file for that relation on the
filesystem. This means it's pretty rare for VACUUM to actually shrink
files on-disk for tables. This can be even more difficult for indexes (I
think it's virtually impossible to shrink a B-tree index file).
--
Jim Nasby                                            jim@nasby.net
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)

Re: index growth problem

From
Graham Davis
Date:
So I guess any changes that were made to make VACUUM and the FSM include
indexes do not remove the necessity to reindex (as long as we don't want
index sizes to bloat and grow larger than they need to be).
Is that correct?

Graham.




--
Graham Davis
Refractions Research Inc.
gdavis@refractions.net


Re: index growth problem

From
"Jim C. Nasby"
Date:
On Wed, Oct 18, 2006 at 03:39:56PM -0700, Graham Davis wrote:
> So I guess any changes that were made to make VACUUM and FSM include
> indexes
> does not remove the necessity to reindex (as long as we don't want index
> sizes to bloat and grow larger than they need be).
> Is that correct?

Not in recent releases, no. Remember that any index on a field that gets
update activity will naturally have some amount of empty space due to
page splits, but this is normal (and actually desirable). So you can't
just compare index size before and after a REINDEX and assume
something's wrong if REINDEX shrinks the index; that gain is artificial.

So long as you are vacuuming frequently enough and keep the free space
map large enough, there shouldn't be any need to REINDEX.
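
(One way to sanity-check the free space map settings from psql in the 8.x
releases discussed here; a database-wide VACUUM VERBOSE reports at the end
how much FSM space it actually needs:)

show max_fsm_pages;       -- pages with free space the FSM can track
show max_fsm_relations;   -- tables and indexes the FSM can track
vacuum verbose;           -- the last lines compare actual FSM usage to the limits
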
--
Jim Nasby                                            jim@nasby.net
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)

Re: index growth problem

From
Tom Lane
Date:
"Jim C. Nasby" <jim@nasby.net> writes:
> On Wed, Oct 18, 2006 at 03:20:19PM -0700, Graham Davis wrote:
>> When I run the same command to find the size after the VACUUM, it hasn't
>> changed.

> That's not really a useful test to see if VACUUM is working. VACUUM can
> only trim space off the end of a relation (index or table), where by
> 'end' I mean the end of the last file for that relation on the
> filesystem. This means it's pretty rare for VACUUM to actually shrink
> files on-disk for tables. This can be even more difficult for indexes (I
> think it's virtually impossible to shrink a B-tree index file).

Right; IIRC, a plain VACUUM doesn't even try to shorten the physical
index file, because of locking considerations.  The important question
is whether space gets recycled properly for re-use within the index.
If the index continues to grow over time, then you might have a problem
with insufficient FSM space (or not vacuuming often enough).

It might be worth pointing out that VACUUM isn't intended to try to
reduce the disk file to the shortest possible length --- the assumption
is that you are doing vacuuming on a regular basis and so the file
length should converge to a "steady state", wherein the internal free
space runs out about the time you do another VACUUM and reclaim some
more space for re-use.  There's not really any point in being more
aggressive than that; we'd just create additional disk I/O when the
filesystem releases and later reassigns space to the file.

Of course, this argument fails in the scenario where you make a large
and permanent reduction in the amount of data in a table.  There are
various hacks you can use to clean up in that case --- use TRUNCATE not
DELETE if you can, or consider using CLUSTER (not VACUUM FULL).  Some
variants of ALTER TABLE will get rid of internal free space, too.
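
(Sketches of those cleanup options, using the table from this thread; the
index name in the CLUSTER example is hypothetical:)

-- if every row in the table can be discarded, TRUNCATE reclaims the space
-- immediately
truncate table asset_positions;

-- otherwise, rewrite the table (and rebuild its indexes) in index order;
-- pre-8.3 syntax is CLUSTER indexname ON tablename
cluster asset_positions_pkey on asset_positions;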

            regards, tom lane