Re: BUG #5946: Long exclusive lock taken by vacuum (not full) - Mailing list pgsql-bugs

From Greg Stark
Subject Re: BUG #5946: Long exclusive lock taken by vacuum (not full)
Msg-id AANLkTik11YkL2Otst7Uf0f-_3+YmTh6O8tFyg8CnQ5o2@mail.gmail.com
In response to Re: BUG #5946: Long exclusive lock taken by vacuum (not full)  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: BUG #5946: Long exclusive lock taken by vacuum (not full)  (Tom Lane <tgl@sss.pgh.pa.us>)
Re: BUG #5946: Long exclusive lock taken by vacuum (not full)  (Christopher Browne <cbbrowne@gmail.com>)
Re: BUG #5946: Long exclusive lock taken by vacuum (not full)  (Dimitri Fontaine <dimitri@2ndQuadrant.fr>)
List pgsql-bugs
On Fri, Mar 25, 2011 at 8:48 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Interesting, but I don't understand/believe your argument as to why this
> is a bad idea or fixed-size extents are better. It sounds to me just
> like the typical Oracle DBA compulsion to have a knob to twiddle. A
> self-adjusting enlargement behavior seems smarter all round.
>

So is it ok for inserting one row to cause my table to grow by 90GB?
Or should there be some maximum size increment at which it stops
growing? What should that maximum be? What if I'm on a big raid system
where that size doesn't even add a block to every stripe element?

Say you start with 64k (8 pg blocks). That means your growth
increments will be 64k, 70k, 77k, 85k, 94k, 103k, 113k, 125k, 137k,
...
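
For illustration, here's a minimal sketch of that arithmetic, assuming a
hypothetical ~10% enlargement factor inferred from the figures above (the
actual factor isn't specified anywhere in this thread):

/*
 * Illustration only, not PostgreSQL source: compounding each growth
 * increment by an assumed 10% from a 64k start reproduces the series
 * quoted above (64k, 70k, 77k, 85k, 94k, 103k, 113k, 125k, 137k),
 * none of which fall on a power-of-two or stripe-friendly boundary.
 */
#include <stdio.h>

int main(void)
{
    double increment = 64.0;        /* first extension, in kilobytes */

    for (int i = 0; i < 9; i++)
    {
        printf("increment %d: %.0fk\n", i + 1, increment);
        increment *= 1.10;          /* hypothetical self-adjusting growth */
    }
    return 0;
}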

I'm having trouble imagining a set of hardware and filesystem where
growing a table by 125k will be optimal. The next allocation will have
to do some or all of: a) go back and edit the previous allocation to
round it up, b) add 128k more, and c) still allocate the remaining 6k
in a new allocation.
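
To make that concrete with a worked example (assuming, purely for
illustration, a filesystem that allocates space in 128k units):

/*
 * Worked arithmetic for the 125k -> 137k step, assuming a hypothetical
 * filesystem that allocates space in 128k units.
 */
#include <stdio.h>

int main(void)
{
    int fs_unit = 128;  /* assumed filesystem allocation unit, in k */
    int prev    = 125;  /* previous growth increment, in k */
    int next    = 137;  /* next growth increment, in k */

    int pad      = fs_unit - prev;      /* a) round up previous extent: 3k */
    int full     = fs_unit;             /* b) one full unit:          128k */
    int leftover = next - pad - full;   /* c) remainder:                6k */

    printf("a) pad previous allocation by %dk\n", pad);
    printf("b) add %dk more\n", full);
    printf("c) still %dk left for a new allocation\n", leftover);
    return 0;
}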

--
greg
