I'm running PostgreSQL 7.1.2 on a FreeBSD machine -- a Celeron 500 with 128
megs of RAM (256 megs of swap). Not the best box for real gut-wrenching work,
but it was what was around to get a feel for what was wanted.
A question was asked, which I threw at the database to see how it was able
to handle it, and it failed . . . after 50 minutes of processing it flopped
to the ground, killed: out of swap space.
Granted, the query was a large one (explanation below), but a few
questions:
Is there a way to predict the resources a system would need to handle a
query of a given size / complexity? (and how?)
Is there a way to pull this type of query off on this system? (is there a
solution other than throwing more RAM / swap at it?) (one easy way would be
to handle it in chunks, as sketched below, but other suggestions are
welcome)
What would this type of query need in order to execute? How about to
execute well?
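By "handle it in chunks" I mean running the join given further down for one
slice of the id range at a time. A rough sketch, assuming history.id is an
integer and picking an arbitrary slice size of 50000:

  -- one pass; step the BETWEEN bounds and repeat until the ids run out
  SELECT history.id, sub
  FROM history, ordered, insub
  WHERE history.orderid = ordered.orderid
    AND ordered.items = insub.sku
    AND history.id BETWEEN 1 AND 50000
  ORDER BY id;

Each pass joins and sorts a much smaller result set, so it has a better
chance of fitting in RAM instead of spilling into swap.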
Table and query explanations follow...
The query joins three tables, which I know is not quite a good idea, but I
didn't see much of another way. The question posed was to find all the
subcategories all customers have ordered from a company.
The history table (history of orders) contains the id, date, cost, and
orderid, and has 838500 records.
The ordered table (line items of orders) contains the orderid and a sku
(the items column in the query) and has 2670000 records.
The subcategories table (insub in the query) has the sku and subcategory,
and has 20000 records.
Each customer can have many orders, which can have many items, which can
have many subcategories.
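For reference, the tables look roughly like this (the column types here are
my guesses, not the actual definitions):

  CREATE TABLE history (        -- 838500 rows
      id      integer,          -- customer id
      date    date,
      cost    numeric(10,2),
      orderid integer
  );

  CREATE TABLE ordered (        -- 2670000 rows
      orderid integer,
      items   text              -- holds the sku
  );

  CREATE TABLE insub (          -- 20000 rows (the subcategories table)
      sku     text,
      sub     text              -- subcategory
  );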
The query was posed as:
SELECT history.id, sub
FROM history, ordered, insub
WHERE history.orderid = ordered.orderid
  AND ordered.items = insub.sku
ORDER BY id;
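In case it helps, here is what I plan to try next: indexes on the join
columns, then EXPLAIN to look at the plan (the index names are just made
up):

  CREATE INDEX ordered_orderid_idx ON ordered (orderid);
  CREATE INDEX ordered_items_idx   ON ordered (items);
  CREATE INDEX insub_sku_idx       ON insub (sku);
  VACUUM ANALYZE;  -- refresh the planner's statistics

  EXPLAIN
  SELECT history.id, sub
  FROM history, ordered, insub
  WHERE history.orderid = ordered.orderid
    AND ordered.items = insub.sku
  ORDER BY id;

Also, since the question only needs the distinct subcategories per customer,
SELECT DISTINCT should shrink the sorted output considerably.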
Any help would be greatly appreciated.
Thanks in advance.
.jtp