On 11/07/2014 05:29 PM, Alvaro Herrera wrote:
> Josh Berkus wrote:
>> Of course, this will lead to LOTs of additional vacuuming ...
>
> There's a trade-off here: more vacuuming I/O usage for less disk space
> used. How stressed are your customers, really, about 1 GB of disk space?
These customers, not so much. But the users I've encountered on chat whose
pg_multixact was over 20GB, and larger than their database itself? Lots.
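For anyone wondering whether they're in that situation: compare the on-disk
size of pg_multixact against the cluster as a whole. A minimal sketch (the
data-directory path is a placeholder; substitute your own $PGDATA):

```python
# Compare pg_multixact's on-disk footprint to the whole data directory.
# The pgdata path below is a hypothetical example, not a recommendation.
import os

def dir_bytes(path):
    """Total size in bytes of all regular files under `path`."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

pgdata = "/var/lib/postgresql/9.3/main"  # placeholder -- use your $PGDATA
mxact = dir_bytes(os.path.join(pgdata, "pg_multixact"))
whole = dir_bytes(pgdata)
if whole:
    print(f"pg_multixact: {mxact / 1024**3:.1f} GiB "
          f"({100 * mxact / whole:.0f}% of the cluster)")
else:
    print("path not found -- point pgdata at your real data directory")
```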
On 11/08/2014 03:54 AM, Andres Freund wrote:
> On 2014-11-07 17:20:44 -0800, Josh Berkus wrote:
>> So the basic problem is that multixact files are just huge, with an
>> average of 35 bytes per multixact?
>
> Depends on the concurrency. The number of members is determined by the
> number of xacts concurrently locking a row.
Yeah, although that leads to some extreme inflation for databases where FK
conflicts are common.
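To make the ~35 bytes per multixact concrete, here is a back-of-envelope
estimate. The per-entry and per-member byte counts below are my assumptions
about the offsets and members SLRUs, not figures stated in this thread:

```python
# Back-of-envelope size of pg_multixact. Assumptions (not from this thread):
# roughly 4 bytes per multixact in pg_multixact/offsets, and about 5 bytes
# per member in pg_multixact/members (a 4-byte xid plus ~1 flag byte).

OFFSET_BYTES = 4   # one offsets entry per multixact
MEMBER_BYTES = 5   # per locking xact recorded in the members area

def multixact_bytes(n_multixacts, avg_members):
    """Approximate bytes used across both multixact storage areas."""
    return n_multixacts * (OFFSET_BYTES + MEMBER_BYTES * avg_members)

# ~35 bytes per multixact works out to a bit over six concurrent lockers:
print(multixact_bytes(1, 6))                        # 34 bytes per multixact
# and a billion such multixacts is tens of gigabytes:
print(multixact_bytes(1_000_000_000, 6) / 1024**3)  # ~31.7 GiB
```

Under those assumptions, the size is dominated by the member count, which is
exactly the concurrency-dependence Andres describes.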
On 11/08/2014 03:54 AM, Andres Freund wrote:
> On 2014-11-07 17:20:44 -0800, Josh Berkus wrote:
>> Of course, this will lead to LOTs of additional vacuuming ...
>
> Yes. And that's likely to cause much, much more grief.
>
> Also. Didn't you just *vehemently* oppose making these values tunable at
> all?
Yes, I opposed adding a *user* tunable with zero information on how it
should be tuned or why; I always do, and always will. I also think our
defaults for multixact freezing should be tied to the ones for xid
freezing rather than being completely independent numbers; I'm still not
convinced that a separate multixact threshold makes sense at all, **since
the same amount of vacuuming needs to be done regardless of whether we're
truncating xids or mxids**.
Certainly when I play with tuning this for customers, I'm going to lower
vacuum_freeze_table_age as well.
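Something along these lines, keeping the xid and multixact thresholds in
step (the values are illustrative guesses, not recommendations):

```
# postgresql.conf -- sketch only; appropriate values depend on the workload
vacuum_freeze_table_age = 50000000            # down from the 150000000 default
vacuum_multixact_freeze_table_age = 50000000  # keep in step with the above
vacuum_multixact_freeze_min_age = 1000000     # freeze mxids more aggressively
```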
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com