On Fri, 10 Sep 2004, Ennio-Sr wrote:
> * Stephan Szabo <sszabo@megazone.bigpanda.com> [100904, 09:05]:
> >
> > On Fri, 10 Sep 2004, Ennio-Sr wrote:
> >
> > > * Stephan Szabo <sszabo@megazone.bigpanda.com> [100904, 07:10]:
> > > > On Fri, 10 Sep 2004, Ennio-Sr wrote:
> > > > [ big cut ]
> >
> > Note however, that this may very well perform poorly compared to other
> > solutions because as foo and foo_d get large, you're going to be
> > evaluating the CASE clause a lot. In addition, this gives an extra NULL
> > row AFAICS (see below, where you get a "blank" row and the rowcount is 1
> > higher than the meaningful number of rows).
> >
> Stephan,
> I just tested my query on the main tables (bibl_lt and bidbt) and it
> seems to work reasonably quickly (my tables are not all that large:
> only around 10,000 rows!). But if it is possible to get a better
> result, why not?
> So, when you say '..compared to other solutions..', are you thinking
> of 'COALESCE' (which I have not studied yet) or some type of
> instruction other than psql's?
Well, I'd expect that for large tables the outer join type solution would
tend to be faster than joining every row to every other row and then using
a unique step (probably after a sort) to basically remove the ones you
don't want. If you try different solutions, you can use EXPLAIN ANALYZE to
compare query plans.
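
To make the comparison concrete, here is a small sketch of the outer-join
approach, using the table names foo and foo_d from earlier in the thread.
The columns, sample data, and the '(none)' placeholder are my own
assumptions for illustration; the example uses Python's sqlite3 just so it
is self-contained, but the same SELECT works in PostgreSQL, where you
could prefix it with EXPLAIN ANALYZE to compare plans.

```python
import sqlite3

# In-memory database; foo/foo_d are the table names from the thread,
# the columns here are hypothetical.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE foo   (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE foo_d (foo_id INTEGER REFERENCES foo(id), detail TEXT);
    INSERT INTO foo   VALUES (1, 'first'), (2, 'second');
    INSERT INTO foo_d VALUES (1, 'has detail');
""")

# A LEFT OUTER JOIN keeps every foo row exactly once; COALESCE supplies
# a placeholder where no matching foo_d row exists.  There is no extra
# all-NULL row, and no sort/unique pass over a row-by-row cross join.
rows = cur.execute("""
    SELECT f.id, f.title, COALESCE(d.detail, '(none)')
    FROM foo f LEFT OUTER JOIN foo_d d ON d.foo_id = f.id
    ORDER BY f.id
""").fetchall()
print(rows)  # [(1, 'first', 'has detail'), (2, 'second', '(none)')]
```

Note the row count equals the number of foo rows (two here), which is the
"meaningful number of rows" — the CASE-over-a-join formulation discussed
above produced one extra blank row on top of that.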