Does postgres cache the entire result set before it begins returning
data to the client?
I have a table with ~8 million rows, and I am executing a query that
should return ~800,000 of them. The problem is that as soon as I execute
the query it absolutely kills my machine, swapping for 5 or 6 minutes
before it starts returning results. Is postgres trying to load the whole
result set into memory before returning anything?
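
For what it's worth, the kind of incremental fetch I was hoping for is
something like this (just a sketch; I haven't verified it avoids the
swapping):

begin;
-- server-side cursor, so rows can come back in chunks
declare hist_cur cursor for
    select * from island_history
    where date='2004-09-07' and stock='QQQ';
-- pull a chunk at a time; repeat until no rows come back
fetch forward 10000 from hist_cur;
close hist_cur;
commit;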
Also, why would it choose not to use the index? It estimates the number
of rows returned accurately. If I set enable_seqscan to off it is just
as slow (the exact statements I used are below the plan).
Running postgres 8.0 beta2 dev2
explain select * from island_history where date='2004-09-07' and stock='QQQ';
                                QUERY PLAN
---------------------------------------------------------------------------
 Seq Scan on island_history  (cost=0.00..266711.23 rows=896150 width=83)
   Filter: ((date = '2004-09-07'::date) AND ((stock)::text = 'QQQ'::text))
(2 rows)
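
For reference, this is roughly how I tested with the seqscan disabled
(output omitted; as I said, it was just as slow):

set enable_seqscan = off;
explain analyze select * from island_history
    where date='2004-09-07' and stock='QQQ';
reset enable_seqscan;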
Any help would be appreciated.
--Stephen
Table "public.island_history"
Column | Type | Modifiers
------------------+------------------------+-----------
date | date | not null
stock | character varying(6) |
time | time without time zone | not null
reference_number | numeric(9,0) | not null
message_type | character(1) | not null
buy_sell_ind | character(1) |
shares | numeric(6,0) |
remaining_shares | numeric(6,0) |
price | numeric(10,4) |
display | character(1) |
match_number | numeric(9,0) | not null
Indexes:
"island_history_pkey" PRIMARY KEY, btree (date, reference_number,
message_type, "time", match_number)
"island_history_date_stock_time" btree (date, stock, "time")
"island_history_oid" btree (oid)