* Craig Ringer (craig@2ndquadrant.com) wrote:
> On 05/17/2013 11:38 AM, Robert Haas wrote:
> > maybe with a bit of modest pre-extension.
> When it comes to pre-extension, is it realistic to get a count of
> backends waiting on the lock and extend the relation by (say) 2x the
> number of waiting backends?
Having the process which has the lock do more work before releasing it,
and having the other processes realize that there is room available
after blocking on the lock (and not trying to extend the relation
themselves..), might help. One concern that came up in Ottawa is
over autovacuum coming along and discovering empty pages at the end of
the relation and deciding to try and truncate it. I'm not convinced
that would happen, given the locks involved, but if we actually extend
the relation by enough that the individual processes can continue
writing for a while before another extension is needed, then perhaps it
could.
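To make the idea above concrete, here is a toy simulation (in Python, not the actual backend C code; `Relation`, `acquire_page`, and the batch size of 8 are all made up for illustration) of one backend extending by a whole batch while it holds the lock, so the backends queued behind it find free pages instead of extending themselves:

```python
import threading

class Relation:
    """Toy model: free_pages counts empty pages available to writers."""
    def __init__(self):
        self.extension_lock = threading.Lock()
        self.free_pages = 0
        self.extensions = 0   # how many times the "file" was grown

    def acquire_page(self, batch=8):
        with self.extension_lock:
            if self.free_pages == 0:
                # We hold the lock, so extend by a whole batch; the
                # backends queued behind us hit the fast path above
                # and never extend the relation themselves.
                self.free_pages += batch
                self.extensions += 1
            self.free_pages -= 1

rel = Relation()
threads = [threading.Thread(target=rel.acquire_page) for _ in range(16)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(rel.extensions)  # 2 extensions for 16 page requests, not 16
```

The point is just the ratio: with batching, 16 page requests cost two extensions under the lock rather than sixteen.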
On the other hand, I do feel like people are worried about
over-extending a relation and wasting disk space, but with the way that
vacuum can clean up pages at the end, that would only be a temporary
situation anyway.
> If it's possible this would avoid the need to attempt any
> recency-of-last-extension based preallocation with the associated
> problem of how to store and access the last-extended time efficiently,
> while still hopefully reducing contention on the relation extension lock
> and without delaying the backend doing the extension too much more.
I do like the idea of sizing the extension based on how many other
backends are trying to write, but I've been thinking a simple algorithm
might also work well, eg:
alloc_size = 1 page
extend_time = 0
while (writing)
    if (blocked and extend_time < 5s)
        alloc_size *= 2
    extend_start_time = now()
    extend(alloc_size)
    extend_time = now() - extend_start_time
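A quick sketch of that doubling heuristic in Python (the function name, the simulated timings, and the 5-second cap's constant name are my own; the real thing would live in the backend's extension path):

```python
# Cap from the sketch above: stop doubling once a single extension
# takes this long, so we don't keep growing the request unboundedly.
MAX_EXTEND_SECONDS = 5.0

def next_alloc_size(alloc_size, blocked, last_extend_seconds):
    """Double the extension request while other backends are blocked
    on us and the previous extension was still cheap; otherwise keep
    the current size."""
    if blocked and last_extend_seconds < MAX_EXTEND_SECONDS:
        return alloc_size * 2
    return alloc_size

# Walk through a run of extensions on a fast filesystem (10ms each),
# with contention on all but the fourth:
alloc = 1
history = []
for contended in [True, True, True, False, True]:
    alloc = next_alloc_size(alloc, contended, last_extend_seconds=0.01)
    history.append(alloc)
print(history)  # [2, 4, 8, 8, 16]
```

So the request doubles only while there is contention, and a slow extension (or an uncontended one) freezes the size where it is.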
Thanks,
Stephen