Re: [SQL] optimizing 2-table join w/millions of rows - Mailing list pgsql-sql

From Michael Olivier
Subject Re: [SQL] optimizing 2-table join w/millions of rows
Date
Msg-id 19981123030514.15338.rocketmail@send106.yahoomail.com
Responses Re: [SQL] optimizing 2-table join w/millions of rows
List pgsql-sql
---Herouth Maoz <herouth@oumail.openu.ac.il> wrote:
>
> At 4:26 +0200 on 20/11/98, Michael Olivier wrote:
>
>
> >
> > select U.acctname from usertest U, bgndtest B where
> >     B.part_needed=3 and B.loc_needed=5 and
> >     B.acctname=U.acctname and U.acctname in
> >         (select acctname from usertest where part=2 and loc=3)
>
> Can you explain *verbally* what you meant to do here? It seems as if
> the subselect is redundant.

It's not redundant, but an attempt to optimize the overall query
performance by using a subquery. What this does is compare two users'
parameters against each other: "find all users who need part 3 and
need loc 5, and who themselves have part 2 and loc 3".

> How about:
>
> SELECT U.acctname
> FROM   usertest U, bgndtest B
> WHERE  B.acctname = U.acctname
>   AND  B.part_needed=3 AND B.loc_needed=5
>   AND  U.part=2 AND U.loc=3;

Yes, that looks equivalent. My problem is that the operation is still
too slow in the benchmarks I'm running right now, and if I add
less-than or greater-than comparisons, performance drops _way_ off
from there. How can I get the best performance out of this kind of
operation?
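
(For reference, the standard first step for a join like this would be
something along these lines -- an untested sketch, with made-up index
names, reusing the column names from the query above; whether the
planner actually uses the indexes depends on the data:)

    CREATE INDEX usertest_acct_idx ON usertest (acctname, part, loc);
    CREATE INDEX bgndtest_need_idx ON bgndtest (part_needed, loc_needed);
    VACUUM ANALYZE;

    -- then check which plan the optimizer picks:
    EXPLAIN SELECT U.acctname
    FROM   usertest U, bgndtest B
    WHERE  B.acctname = U.acctname
      AND  B.part_needed = 3 AND B.loc_needed = 5
      AND  U.part = 2 AND U.loc = 3;

Running VACUUM ANALYZE after creating the indexes matters, since the
optimizer's choice between sequential and index scans is driven by the
statistics it has on hand.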

Is there any way to force postgres to hold certain tables in memory
all the time? As I said, cost of memory isn't an issue, but
performance is.
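
(There is no way to pin a specific table, but enlarging the shared
buffer cache makes it likely that a hot table stays resident. A
sketch, assuming a stock installation -- -B sets the number of shared
disk buffers, each one disk block, so size it against available RAM:)

    postmaster -B 4096 -D /usr/local/pgsql/data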

thanks,
Michael

