Re: Slow running query with views...how to increase efficiency? with index? - Mailing list pgsql-general
| From | Alban Hertroys |
|---|---|
| Subject | Re: Slow running query with views...how to increase efficiency? with index? |
| Date | |
| Msg-id | 2F7E7306-292A-4AAE-BC39-764274C2C957@solfertje.student.utwente.nl |
| In response to | Re: Slow running query with views...how to increase efficiency? with index? (fox7 <ale_shark7@yahoo.it>) |
| List | pgsql-general |
On 28 Oct 2009, at 14:51, fox7 wrote:
>
> I copy the results derived by istruction "EXPLAIN ANALYZE" for the two
> query...

For a next time, if you attach that output as text files they won't get wrapped by e-mail clients, making them a bit easier to read. Also, this looks like output from pgAdmin? Command-line psql doesn't wrap the lines in quotes (those give problems in tools like http://explain-analyze.info/).

> ----------------------Query with views-------------------------
> "->  Sort  (cost=40863.02..40865.50 rows=994 width=436) (actual time=5142.974..5143.026 rows=40 loops=1)"
> "      Sort Key: TC.term1, v2TO.term2"
> "      Sort Method:  quicksort  Memory: 23kB"
> "      ->  Hash Join  (cost=38857.33..40813.53 rows=994 width=436) (actual time=3547.557..5142.853 rows=40 loops=1)"
> "            Hash Cond: ((TC.term2)::text = (v2TO.term2)::text)"
> "            ->  Unique  (cost=38837.21..40099.83 rows=49719 width=111) (actual time=3546.697..4869.647 rows=168340 loops=1)"
> "                  ->  Sort  (cost=38837.21..39258.08 rows=168350 width=111) (actual time=3546.691..4363.092 rows=168350 loops=1)"
> "                        Sort Key: TC.term1, TC.term2"
> "                        Sort Method:  external merge  Disk: 21032kB"

Here's your problem. The time taken jumps from a few hundred milliseconds to 3.5 seconds here. Postgres is told to sort a largish data set and it doesn't fit in work_mem, so it has to push it to disk. This may well be one of the unnecessary orderings or DISTINCT specifiers you put in your views; I'd try removing some of those and see what happens. Alternatively, you can increase the amount of work_mem that's available per connection.
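Since the sort spilled about 21MB to disk, a per-session work_mem comfortably above that should keep it in memory. A minimal sketch of the second suggestion (the 64MB figure is an illustrative value, not something from this thread):

```sql
-- Raise work_mem for the current session only; 64MB is an
-- illustrative value chosen to exceed the ~21MB spilled to disk.
SET work_mem = '64MB';

-- Re-run the problem query under EXPLAIN ANALYZE; if the setting is
-- high enough, the plan should show "Sort Method: quicksort" instead
-- of "Sort Method: external merge Disk: ...".
```

A session-level SET avoids raising work_mem globally, which matters because every sort or hash in every concurrent connection can claim up to that amount.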
> "                        ->  Seq Scan on TC  (cost=0.00..4658.50 rows=168350 width=111) (actual time=0.010..294.459 rows=168350 loops=1)"
> "            ->  Hash  (cost=20.07..20.07 rows=4 width=218) (actual time=0.219..0.219 rows=2 loops=1)"
> "                  ->  Subquery Scan v2TO  (cost=20.00..20.07 rows=4 width=218) (actual time=0.192..0.207 rows=2 loops=1)"
> "                        ->  Unique  (cost=20.00..20.03 rows=4 width=108) (actual time=0.186..0.195 rows=2 loops=1)"
> "                              ->  Sort  (cost=20.00..20.01 rows=4 width=108) (actual time=0.182..0.185 rows=2 loops=1)"
> "                                    Sort Key: TO.term1, TO.term2"
> "                                    Sort Method:  quicksort  Memory: 17kB"
> "                                    ->  Append  (cost=15.17..19.96 rows=4 width=108) (actual time=0.094..0.169 rows=2 loops=1)"
> "                                          ->  Unique  (cost=15.17..15.19 rows=3 width=108) (actual time=0.090..0.100 rows=2 loops=1)"
> "                                                ->  Sort  (cost=15.17..15.18 rows=3 width=108) (actual time=0.086..0.088 rows=2 loops=1)"
> "                                                      Sort Key: TO.term2"
> "                                                      Sort Method:  quicksort  Memory: 17kB"
> "                                                      ->  Bitmap Heap Scan on TO  (cost=4.28..15.15 rows=3 width=108) (actual time=0.064..0.067 rows=2 loops=1)"
> "                                                            Recheck Cond: ((term1)::text = 'c'::text)"
> "                                                            ->  Bitmap Index Scan on TO_index1  (cost=0.00..4.28 rows=3 width=0) (actual time=0.052..0.052 rows=2 loops=1)"
> "                                                                  Index Cond: ((term1)::text = 'c'::text)"
> "                                          ->  Seq Scan on TB  (cost=0.00..4.72 rows=1 width=104) (actual time=0.056..0.056 rows=0 loops=1)"
> "                                                Filter: ((term2)::text = 'c'::text)"
> "Total runtime: 5147.410 ms"

Alban Hertroys

--
Screwing up is the best way to attach something to the ceiling.