Ron Mayer wrote:
> In my case my biggest/slowest tables are clustered by zip code (which
> does a reasonable job of keeping counties/cities/etc. on the
> same pages too). Data comes in constantly (many records per minute, as
> we ramp up), pretty uniformly across the country, but most queries
> are geographically bounded. The data's pretty much insert-only.
No deletes? If the tables grow over time, you probably would need to run
CLUSTER every now and then to get the best performance, though the patch
would alleviate that quite a lot.
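
To make that concrete (table and index names are made up, and the
fillfactor value is just an example), the setup and periodic
maintenance would look roughly like this:

CREATE TABLE addresses (
    zip      text,
    street   text,
    added_at timestamptz DEFAULT now()
);
CREATE INDEX addresses_zip_idx ON addresses (zip);

-- leave some free space on each page; the patch would then try to
-- place new rows into that space, near their neighbors in zip order
ALTER TABLE addresses SET (fillfactor = 70);

-- initial ordering; re-run every now and then as the table grows
CLUSTER addresses_zip_idx ON addresses;

-- geographically bounded queries then touch relatively few pages
SELECT * FROM addresses WHERE zip BETWEEN '94000' AND '94999';
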
Do you have a development environment where you could test what effect
the patch would have? It would be interesting to have a real-world use
case, since I don't have one myself at the moment.
> If I understand Heikki's patch, it would help for this use case.
Yes, it would.
> > Your best bet might be to partition the table into two subtables, one
> > with "stable" data and one with the fresh data, and transfer rows from
> > one to the other once they get stable. Storage density in the "fresh"
> > part would be poor, but it should be small enough you don't care.
>
> Hmm... that should work well for me too. Not sure the use case
> I mentioned above is still compelling, since this seems like
> it'd give me much of the benefit, and I don't need an excessive
> fillfactor on the stable part of the table.
Umm, if your inserts are uniformly distributed across the country, you
wouldn't have a stable part, right?
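
But in case the two-table scheme is still useful, here's roughly what
I meant. Names are made up, as is the one-day cutoff for when a row
counts as stable:

-- parent table that queries run against; with inheritance the two
-- parts are seen as one table
CREATE TABLE addresses (zip text, street text, added_at timestamptz);
CREATE TABLE addresses_stable () INHERITS (addresses);
CREATE TABLE addresses_fresh  () INHERITS (addresses);

-- inserts go to the fresh part; periodically move rows that have
-- become stable over to the densely packed stable part
BEGIN;
INSERT INTO addresses_stable
    SELECT * FROM addresses_fresh
    WHERE added_at < now() - interval '1 day';
DELETE FROM addresses_fresh
    WHERE added_at < now() - interval '1 day';
COMMIT;

The fresh part stays small, so its poor ordering and density shouldn't
matter much, and the stable part can keep a full fillfactor.
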
- Heikki