> Hello Tom,
>
> Monday, July 05, 1999 you wrote:
>
> T> If we did have such a concept, the speed penalties for supporting
> T> hard links from one tuple to another would be enormous. Every time
> T> you change a tuple, you'd have to try to figure out what other tuples
> T> reference it, and update them all.
>
> I'm afraid that's mainly because fields in Postgres have variable
> length, so an updated tuple is written at the end of the table. Am I
> right? In that case such referencing could be done only for tables
> with fixed-width rows, whose updates can naturally be done in place,
> without moving the tuple. It is a small sacrifice, but it is worth it.
>
> T> Finally, I'm not convinced that the results would be materially faster
> T> than a standard mergejoin (assuming that you have indexes on both the
> T> fields being joined) or hashjoin (in the case that one table is small
> T> enough to be loaded into memory).
>
> Consider this: no indices, no optimizer work, no index lookups -
> nothing at all! Just the record's sequence number multiplied by the
> record size. Essentially three CPU operations: read, multiply,
> dereference. Can you see the gain now?
>
> Best regards, Leon
>
--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026