I am not familiar enough with how Postgres optimizes queries, but won't this end up with:
total queries run on the DB = 1 query + 1 query per row returned by the first query?
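(A minimal way to check, as far as I know: EXPLAIN the statement and look for a SubPlan node, which means Postgres is re-running the correlated subquery for each outer row. This is a sketch only; the table and column names are taken from Richard's query quoted below, and the LIKE filter is just a placeholder.)

-- If the plan shows "SubPlan 1" beneath the scan on sales_orders,
-- the SUM() subquery is executed once per returned row.
EXPLAIN ANALYZE
SELECT trans_no,
       (SELECT SUM(sell_price)
          FROM soh_product
         WHERE soh_product.soh_num = sales_orders.trans_no) AS transact_sum
  FROM sales_orders
 WHERE trans_no LIKE '8%';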
What would be more efficient on a large database: a query like Richard submitted (a correlated subquery in the SELECT list) or one like I submitted (join the two tables and then do a GROUP BY)? My guess is that it depends on the percentage of records returned from sales_orders: the smaller the percentage, the better Richard's query would perform; the higher the percentage, the better the join would run. (See the sketch below for the join form I mean.)
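For reference, this is roughly the shape of the join version I have in mind. It is a sketch only, reusing the table and column names from Richard's query below; I am assuming soh_product.soh_num is the key back to sales_orders.trans_no, and I have left out the other selected columns, which would have to be repeated in the GROUP BY or wrapped in MIN()/MAX().

-- Join form of the same aggregate; LEFT JOIN keeps orders that have no
-- soh_product rows (their transact_sum comes back NULL).
SELECT so.trans_no,
       SUM(sp.sell_price) AS transact_sum
  FROM sales_orders so
  LEFT JOIN soh_product sp ON sp.soh_num = so.trans_no
 WHERE so.warehouse = 'M'
 GROUP BY so.trans_no
 ORDER BY so.trans_no DESC;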
The databases I am working with aren't big enough yet to warrant spending a lot of time researching this, but if someone with more experience knows what is best, I would love to hear about it.
Thanks,
Aaron Bono
On 6/29/06, Richard Broersma Jr <rabroersma@yahoo.com> wrote:
> SELECT trans_no,
> customer,
> date_placed,
> date_complete,
> date_printed,
> ord_type,
> ord_status,
>        (SELECT SUM(sell_price)
>           FROM soh_product
>          WHERE sales_orders.trans_no = soh_product.soh_num
>        ) AS transact_sum,
> customer_reference,
> salesman,
> parent_order,
> child_order,
> order_number
> FROM sales_orders
> WHERE ((trans_no LIKE '8%' AND order_number LIKE '8%')
>     OR (trans_no LIKE '9%' AND order_number LIKE '8%')
>     OR (trans_no LIKE '8%' AND order_number LIKE '9%')
>     OR (trans_no LIKE '9%' AND order_number LIKE '9%'))
>   AND (warehouse='M')
>   AND (date_placed > (current_date + ('12 months ago'::interval)))
> ORDER BY trans_no DESC