"Ross J. Reedstrom" <reedstrm@rice.edu> writes:
> These groups _really are_ ideal for Joe Conway's work on arrays: we need
> ordered vectors, so we'd be sorting all the time, otherwise. They're
> static, and they're read only. The one thing they're not is fixed, known
> size (Sorry Merlin). They work fine for the query as shown: the only
> issue is performance.
Well, just as long as you understand the trade-offs. Denormalizing can be
useful, but you need to know what flexibility you're losing, too.
> > Postgres does have a way to do what you ask, though. It involves GiST
> > indexes and the operators from the contrib/intarray directory from the
> > Postgres source.
>
> Well, yes, that's how it used to be done. I figured the new array
> support should be able to handle it without the addon, however.
I think you can btree-index arrays now, which is new, but it's not useful for
the kind of lookup you're doing. A btree on an array column would only help
for joining on array values or looking up groups by their exact contents,
things like that.
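For the element-membership lookups you describe, the contrib/intarray route
looks roughly like this (a sketch only -- the table and column names are made
up, and the exact operator spelling varies between versions; check the
contrib/intarray README for your release):

    -- hypothetical schema: each module row carries an int[] of author ids
    CREATE INDEX module_authors_gist ON modules
        USING gist (author_ids gist__int_ops);

    -- find all modules whose author set overlaps the given id(s);
    -- && is the intarray "overlap" operator and can use the GiST index
    SELECT * FROM modules WHERE author_ids && '{42}'::int[];

A plain btree on author_ids can't answer that query, since it only supports
whole-array comparisons.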
> > However I warn you in advance that this is fairly esoteric stuff and
> > will take some time to get used to. And at least in my case I found
> > the indexes didn't actually help much for my data sets, probably
> > because they just weren't big enough to benefit.
>
> I know that they should help in this case: we've got lots of content.
> Any particular author or maintainer will be in a small fraction of
> those. i.e.: it's ideal for an index. And the current joined case uses
> an index, when it's available. I'll take a look at the GiST/contrib work,
> anyway.
I would be curious to know how it goes. My own project uses denormalized sets
stored as arrays as well, though in my case they're precalculated from the
fully normalized data. I tried to use GiST indexes but ran into problems
combining the btree-GiST code with array GiST code in a multicolumn index. I
still don't really know why it failed, but after two days building the index I
gave up.
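For what it's worth, the combination that failed for me was along these lines
(schematic; real table, column, and opclass names differ, and btree-gist
opclass names in particular vary by version):

    -- contrib/btree_gist supplies GiST opclasses for scalar types,
    -- which in principle lets a scalar column and an int[] column
    -- share one multicolumn GiST index
    CREATE INDEX foo_multi_gist ON foo
        USING gist (some_int_col, some_array_col gist__int_ops);
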
--
greg