Hi everyone,
I have a function that executes the following command in a while loop, iterating through a list of tables (aTable below stands for the current table):
FOR rec in EXECUTE 'SELECT count(a.allele_1) as c from aTable a INNER JOIN map_table b on(a.snp_id=upper(b.snp_id))
WHERE NOT a.allele_1=a.allele_2 and b.gene=something'
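For context, the whole function looks roughly like the sketch below; the function name, the parameter and the table list are simplified placeholders here, only the inner query is the real one:

CREATE OR REPLACE FUNCTION count_het_snps(p_gene varchar) RETURNS bigint AS $$
DECLARE
    -- placeholder list standing in for the 15 per-chromosome tables
    tables varchar[] := ARRAY['snp_allel_chr_11pt1'];
    i integer := 1;
    rec record;
    total bigint := 0;
BEGIN
    WHILE i <= array_upper(tables, 1) LOOP
        -- the dynamic query from above, with the current table name substituted
        FOR rec IN EXECUTE 'SELECT count(a.allele_1) AS c FROM ' || tables[i] || ' a'
            || ' INNER JOIN map_table b ON (a.snp_id = upper(b.snp_id))'
            || ' WHERE NOT a.allele_1 = a.allele_2 AND b.gene = ' || quote_literal(p_gene)
        LOOP
            total := total + rec.c;
        END LOOP;
        i := i + 1;
    END LOOP;
    RETURN total;
END;
$$ LANGUAGE plpgsql;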
Unfortunately this command takes about 4 minutes to execute over 15 tables, each of which has around 3 million tuples. I have a partial index on the condition NOT allele_1=allele_2 and one on snp_id.
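For reference, the indexes are defined roughly like this (the partial index definition and its name are from memory, so take its column list as illustrative; idx_snpid_pt1 is the one that shows up in the plan below):

-- plain index on snp_id, used by the plan below
CREATE INDEX idx_snpid_pt1 ON snp_allel_chr_11pt1 (snp_id);
-- partial index covering only the rows where allele_1 <> allele_2 (name and column list illustrative)
CREATE INDEX idx_het_pt1 ON snp_allel_chr_11pt1 (snp_id) WHERE NOT allele_1 = allele_2;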
Here is the EXPLAIN output I get for the above-mentioned command:
Aggregate  (cost=229621.08..229621.09 rows=1 width=16)
  ->  Merge Join  (cost=496.29..229361.10 rows=103991 width=16)
        Merge Cond: (("outer".snp_id)::text = "inner"."?column2?")
        ->  Index Scan using idx_snpid_pt1 on snp_allel_chr_11pt1 a  (cost=0.00..212667.07 rows=2875580 width=29)
              Filter: ((allele_1)::text <> (allele_2)::text)
        ->  Sort  (cost=496.29..496.63 rows=138 width=13)
              Sort Key: upper((b.snp_id)::text)
              ->  Bitmap Heap Scan on snps_map b  (cost=2.48..491.38 rows=138 width=13)
                    Recheck Cond: ((gene)::text = 'FN5'::text)
                    ->  Bitmap Index Scan on idx_snps_map_gene  (cost=0.00..2.48 rows=138 width=0)
                          Index Cond: ((gene)::text = 'FN5'::text)
As you can see, the partial index is not used. Does anyone have a suggestion on how to improve the performance of these queries?
Thanks
Chris
Christian Rengstl M.A.
Klinik und Poliklinik für Innere Medizin II
Kardiologie - Forschung
Universitätsklinikum Regensburg
B3 1.388
Franz-Josef-Strauss-Allee 11
93053 Regensburg
Tel.: +49-941-944-7230