Re: joining two tables slow due to sequential scan - Mailing list pgsql-performance

From Scott Marlowe
Subject Re: joining two tables slow due to sequential scan
Date
Msg-id 1139611159.22740.149.camel@state.g2switchworks.com
In response to Re: joining two tables slow due to sequential scan  ("Tim Jones" <TJones@optio.com>)
List pgsql-performance
On Fri, 2006-02-10 at 16:37, Tim Jones wrote:
> for first query
>
> QUERY PLAN
> 'Limit  (cost=4.69..88.47 rows=10 width=1350) (actual
> time=32.195..32.338 rows=10 loops=1)'
> '  ->  Nested Loop  (cost=4.69..4043.09 rows=482 width=1350) (actual
> time=32.190..32.316 rows=10 loops=1)'
> '        ->  Bitmap Heap Scan on documentversions  (cost=4.69..1139.40
> rows=482 width=996) (actual time=32.161..32.171 rows=10 loops=1)'
> '              Recheck Cond: (documentstatus = ''AC''::bpchar)'
> '              ->  Bitmap Index Scan on ix_docstatus  (cost=0.00..4.69
> rows=482 width=0) (actual time=31.467..31.467 rows=96368 loops=1)'
> '                    Index Cond: (documentstatus = ''AC''::bpchar)'
> '        ->  Index Scan using ix_cdocdid on clinicaldocuments
> (cost=0.00..6.01 rows=1 width=354) (actual time=0.006..0.007 rows=1
> loops=10)'
> '              Index Cond: ("outer".documentidentifier =
> clinicaldocuments.dssdocumentidentifier)'
>
>
>  for second query
>
> QUERY PLAN
> 'Hash Join  (cost=899.83..4384.17 rows=482 width=1350)'
> '  Hash Cond: ("outer".documentidentifier =
> "inner".dssdocumentidentifier)'
> '  ->  Seq Scan on documentversions  (cost=0.00..2997.68 rows=96368
> width=996)'
> '  ->  Hash  (cost=898.62..898.62 rows=482 width=354)'
> '        ->  Bitmap Heap Scan on clinicaldocuments  (cost=4.69..898.62
> rows=482 width=354)'
> '              Recheck Cond: (patientidentifier = 123)'
> '              ->  Bitmap Index Scan on ix_cdocpid  (cost=0.00..4.69
> rows=482 width=0)'
> '                    Index Cond: (patientidentifier = 123)'

OK, the first one is EXPLAIN ANALYZE, but the second one is just plain
EXPLAIN, so it only shows the planner's cost and row estimates, with no
actual timings to compare against the first plan...
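For reference, a rough sketch of how the second query could be re-run with
EXPLAIN ANALYZE so that actual row counts and timings show up next to the
estimates. The exact statement is an assumption reconstructed from the
quoted plan (join on documentidentifier = dssdocumentidentifier, filter on
patientidentifier = 123); adjust the SELECT list and any extra WHERE
conditions to match the real query:

    -- EXPLAIN ANALYZE actually executes the statement, so each plan node
    -- also reports (actual time=... rows=... loops=...), which is what
    -- makes the two plans comparable.
    EXPLAIN ANALYZE
    SELECT *
      FROM documentversions dv
      JOIN clinicaldocuments cd
        ON dv.documentidentifier = cd.dssdocumentidentifier
     WHERE cd.patientidentifier = 123;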
