On Wed, 30 Jun 2021 at 05:11, David Christensen
<david.christensen@crunchydata.com> wrote:
> 1) A basic refactor of the existing code to easily handle expanding the
> units we use into a table-based format. This also includes changing the
> return value of `pg_size_bytes()` from an int64 into a numeric, and
> minor test adjustments to reflect this.

This is not quite what I had imagined when you mentioned adding a
table to make it easier to add new units in the future. I expected a
single table that handles all units, not just the ones above kB, and
not one table per function.

There are actually two pg_size_pretty functions, one for BIGINT and
one for NUMERIC. I see you only changed the NUMERIC version. I'd
expect them both to have the same treatment and use the same table so
there's consistency between the two functions.

The attached is more like what I had in mind. There's a very small net
reduction in lines of code with this, and it also helps keep
pg_size_pretty() and pg_size_pretty_numeric() in sync.
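
To illustrate the shape of the idea, here's a minimal standalone sketch of
the table-driven approach. The struct and field names (size_pretty_unit,
doHalfRound, shift) are hypothetical and chosen for illustration; the
attached patch may differ in its details. One table describes every unit,
including bytes, and a single loop replaces the old nested if/else chain,
which is what lets both functions share the unit definitions:

```c
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Divide by two, rounding to nearest (away from zero on ties). */
#define half_rounded(x) (((x) + ((x) < 0 ? -1 : 1)) / 2)

struct size_pretty_unit
{
    const char *name;        /* unit name to print, e.g. "kB" */
    uint32_t    limit;       /* move to the next unit at/above this value */
    int         doHalfRound; /* half-round before printing this unit? */
    int         shift;       /* bits to shift right to reach the next unit */
};

/* A NULL name terminates the table. */
static const struct size_pretty_unit size_pretty_units[] = {
    {"bytes", 10 * 1024, 0, 9},     /* shift only 9: keep one rounding bit */
    {"kB", 20 * 1024 - 1, 1, 10},
    {"MB", 20 * 1024 - 1, 1, 10},
    {"GB", 20 * 1024 - 1, 1, 10},
    {"TB", 20 * 1024 - 1, 1, 10},
    {NULL, 0, 0, 0}
};

static void
size_pretty(int64_t size, char *buf, size_t buflen)
{
    const struct size_pretty_unit *unit;

    for (unit = size_pretty_units; unit->name != NULL; unit++)
    {
        /* Use this unit if it's the last one or the value fits below it. */
        if (unit[1].name == NULL ||
            (size < 0 ? -size : size) < (int64_t) unit->limit)
        {
            if (unit->doHalfRound)
                size = half_rounded(size);
            snprintf(buf, buflen, "%" PRId64 " %s", size, unit->name);
            return;
        }
        size /= (int64_t) 1 << unit->shift;
    }
}
```

For example, size_pretty(10239, ...) stays in bytes, while size_pretty(10240, ...)
shifts right 9 bits to 20 half-kB and then half-rounds to "10 kB", matching
the nested-if behaviour of the original function for non-negative inputs.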

I don't really like the fact that I had to add the doHalfRound field
to get the same rounding behaviour as the original functions. I'm
wondering if it would be too clever to track how many bits we've
shifted right by in pg_size_pretty* and compare that to the value of
multishift for the current unit, doing the appropriate rounding to get
us to the multishift value. In theory, we could just keep calling the
half_rounded macro until we make it to the multishift value. My
current thinking is that it's unlikely anyone would twiddle with the
size_pretty_units array in a way that would make the extra code worth
it. Maybe someone else feels differently.
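For concreteness, here's a sketch of that "track the shift" alternative,
again with hypothetical names (multishift here meaning the total bits from
bytes to one of this unit). Rather than a doHalfRound flag, the loop counts
how far it has shifted so far, and at print time calls half_rounded until
the count reaches the current unit's multishift:

```c
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Divide by two, rounding to nearest (away from zero on ties). */
#define half_rounded(x) (((x) + ((x) < 0 ? -1 : 1)) / 2)

struct size_pretty_unit
{
    const char *name;       /* unit name to print, e.g. "kB" */
    uint32_t    limit;      /* move to the next unit at/above this value */
    int         shift;      /* bits shifted right when moving past this unit */
    int         multishift; /* total bits from bytes to one of this unit */
};

/* A NULL name terminates the table. */
static const struct size_pretty_unit size_pretty_units[] = {
    {"bytes", 10 * 1024, 9, 0},     /* shift only 9: keep one rounding bit */
    {"kB", 20 * 1024 - 1, 10, 10},
    {"MB", 20 * 1024 - 1, 10, 20},
    {"GB", 20 * 1024 - 1, 10, 30},
    {"TB", 20 * 1024 - 1, 10, 40},
    {NULL, 0, 0, 0}
};

static void
size_pretty(int64_t size, char *buf, size_t buflen)
{
    const struct size_pretty_unit *unit;
    int         shifted = 0;    /* bits shifted right so far */

    for (unit = size_pretty_units; unit->name != NULL; unit++)
    {
        if (unit[1].name == NULL ||
            (size < 0 ? -size : size) < (int64_t) unit->limit)
        {
            /* Half-round once per remaining bit; with the table above the
             * gap is always exactly one bit, so this matches the original
             * rounding behaviour. */
            while (shifted < unit->multishift)
            {
                size = half_rounded(size);
                shifted++;
            }
            snprintf(buf, buflen, "%" PRId64 " %s", size, unit->name);
            return;
        }
        size /= (int64_t) 1 << unit->shift;
        shifted += unit->shift;
    }
}
```

Note the caveat this implies: if someone edited the array so the gap were
more than one bit, the repeated half_rounded calls would round at every
step rather than once at the end, which is the "twiddle with the array"
concern above.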
David