Tim,
Commando. I like it. Thanks a ton for that suggestion. I'd still like to hear if anyone has a good way to estimate the performance of these operations, but I'll explore what it would mean to do exactly that.
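For my own notes, here's roughly what I think that looks like in SQL. This is
completely untested; the table and sequence names are made up, and I'm assuming
the current id is a plain serial integer backed by mytable_id_seq:

    -- add the wider column alongside the old one
    ALTER TABLE mytable ADD COLUMN new_id bigint;

    -- backfill; safe to re-run to pick up rows inserted since the last pass
    UPDATE mytable SET new_id = id WHERE new_id IS NULL;

    -- build the unique index without blocking writers
    CREATE UNIQUE INDEX CONCURRENTLY mytable_new_id_idx ON mytable (new_id);

    -- cutover, during a quiet moment: hand the existing sequence to the
    -- new column, then swap the columns over
    ALTER SEQUENCE mytable_id_seq OWNED BY mytable.new_id;
    ALTER TABLE mytable ALTER COLUMN new_id SET DEFAULT nextval('mytable_id_seq');
    ALTER TABLE mytable ALTER COLUMN new_id SET NOT NULL;  -- full table scan here
    ALTER TABLE mytable DROP COLUMN id;
    ALTER TABLE mytable RENAME COLUMN new_id TO id;
    ALTER TABLE mytable ADD CONSTRAINT mytable_pkey
        PRIMARY KEY USING INDEX mytable_new_id_idx;

If the id isn't backed by a sequence we control, I suppose we'd create one and
setval() it to max(id) instead.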
John: thankfully there are no foreign keys involving this table, although it is indexed to hell. I was concerned about the speed of updating the indexes, but with Tim's suggestion we could rebuild everything in the background.
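If any of those secondary indexes include the id column itself, I'm assuming
the same trick applies: build the replacement concurrently against the new
column, then drop the old index at cutover (index and column names below are
invented):

    -- hypothetical secondary index that currently covers (account_id, id)
    CREATE INDEX CONCURRENTLY mytable_account_new_id_idx
        ON mytable (account_id, new_id);
    -- at cutover, once new_id has been renamed to id:
    DROP INDEX mytable_account_id_idx;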
We are also considering sharding the table, so maybe the right thing is simply to fix the column when we do the sharding.
Thanks for the tips, guys,
Carson
P.S. A side question: this table of ours is under a fairly constant insert load, and is read infrequently but rather violently (e.g. a GROUP BY over, say, 1-10k rows). Is that a bad access pattern?
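Roughly (column names changed), a typical read looks like:

    SELECT device_id, count(*), avg(reading)
    FROM mytable
    WHERE recorded_at >= now() - interval '1 hour'
    GROUP BY device_id;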
On Mon, Mar 12, 2012 at 2:18 PM, John R Pierce
<pierce@hogranch.com> wrote:
On 03/12/12 1:25 PM, Tim Uckun wrote:
create a new bigint field.
copy all the IDs to it.
index it in the background
at a frequency of your choosing, sync the id field to the new field to keep it up to date.
at a time of your choosing, set the default for the new field to be
a sequence starting at max(id)
drop the ID field
rename the field to id
if there are other tables that have FK references to this table's ID, that could be problematic.
--
john r pierce N 37, W 122
santa cruz ca mid-left coast