Re: Removing more vacuumlazy.c special cases, relfrozenxid optimizations - Mailing list pgsql-hackers

From: Peter Geoghegan
Subject: Re: Removing more vacuumlazy.c special cases, relfrozenxid optimizations
Msg-id: CAH2-WzkfGgskXX0xsM4TgcWmiyHeHnWzt1mxz89TSp6wHcTMaA@mail.gmail.com
In response to: Re: Removing more vacuumlazy.c special cases, relfrozenxid optimizations (John Naylor <john.naylor@enterprisedb.com>)
List: pgsql-hackers
On Fri, Feb 4, 2022 at 2:00 PM John Naylor <john.naylor@enterprisedb.com> wrote:
> Without having looked at the latest patches, there was something in
> the back of my mind while following the discussion upthread -- the
> proposed opportunistic freezing made a lot more sense if the
> earlier-proposed open/closed pages concept was already available.

Yeah, sorry about that. The open/closed pages concept is still
something I plan on working on. My prototype (which I never posted to
the list) will be rebased, and I'll try to target Postgres 16.

> > Freezing whole pages
> > ====================
>
> > It's possible that a higher cutoff (for example a cutoff of 80% of
> > BLCKSZ, not 50%) will actually lead to *worse* space utilization, in
> > addition to the downsides from fragmentation -- it's far from a simple
> > trade-off. (Not that you should believe that 50% is special, it's just
> > a starting point for me.)
>
> How was the space utilization with the 50% cutoff in the TPC-C test?

The picture was mixed. To get the raw numbers, compare the
pg-relation-sizes-after-patch-2.out and
pg-relation-sizes-after-master-2.out files from the drive link I
provided (to repeat, get them from
https://drive.google.com/drive/u/1/folders/1A1g0YGLzluaIpv-d_4o4thgmWbVx3LuR).
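
To spell out what the 50% cutoff refers to: conceptually it's just a
page-level check along these lines. This is only an illustrative
sketch -- the name and the constant are made up, and the actual
criterion in the patch is more involved.

    #include "postgres.h"

    /* 50% of BLCKSZ is just a starting point, not a magic number */
    #define FREEZE_WHOLE_PAGE_CUTOFF    (BLCKSZ / 2)

    /*
     * Sketch only: opportunistically freeze every eligible tuple on a
     * heap page once the space those tuples occupy crosses the cutoff,
     * on the theory that we're already dirtying the page anyway.
     */
    static bool
    should_freeze_whole_page(Size eligible_tuple_space)
    {
        return eligible_tuple_space >= FREEZE_WHOLE_PAGE_CUTOFF;
    }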

Highlights: the largest table (the bmsql_order_line table) had a total
size 1.006x that of master, meaning that we did slightly worse there.
However, the index on the same table was slightly smaller instead,
probably because reducing heap fragmentation tends to make the index
deletion stuff work a bit better than before.

Certain small tables (bmsql_district and bmsql_warehouse) were
actually significantly smaller (less than half their size on master),
probably just because the patch can reliably remove LP_DEAD items from
heap pages, even when a cleanup lock isn't available.
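
To show why no cleanup lock is needed for that: LP_DEAD line pointers
have no tuple storage, so marking them LP_UNUSED never moves data that
a concurrent scan could still be pointing into -- an ordinary
exclusive buffer lock is enough. A minimal sketch of the idea (a
hypothetical helper, not the code from the patch; WAL logging,
critical sections, and visibility map handling are all omitted):

    #include "postgres.h"

    #include "storage/bufmgr.h"
    #include "storage/bufpage.h"
    #include "storage/itemid.h"

    static void
    mark_dead_items_unused(Buffer buf)
    {
        Page        page;
        OffsetNumber offnum,
                    maxoff;

        LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);

        page = BufferGetPage(buf);
        maxoff = PageGetMaxOffsetNumber(page);

        for (offnum = FirstOffsetNumber; offnum <= maxoff; offnum++)
        {
            ItemId      itemid = PageGetItemId(page, offnum);

            /* LP_DEAD items have no storage, so this moves nothing */
            if (ItemIdIsDead(itemid))
                ItemIdSetUnused(itemid);
        }

        /* Shrink the line pointer array if it has trailing unused slots */
        PageTruncateLinePointerArray(page);

        MarkBufferDirty(buf);
        LockBuffer(buf, BUFFER_LOCK_UNLOCK);
    }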

The bmsql_new_order table was quite a bit larger, but it's not that
large anyway (1250 MB on master at the very end, versus 1433 MB with
the patch). This is a clear trade-off, since we get much less
fragmentation in that same table (as evidenced by the VACUUM output,
where fewer pages have any LP_DEAD items per VACUUM with the patch).
The workload for that table is characterized by inserting new orders
together, and deleting the same orders as a group later on. So we're
bound to pay some cost in space utilization to lower the
fragmentation.

> > blks_hit                 | 174,551,067,731
> > tup_fetched              | 124,797,772,450
>
> > Here is the same pg_stat_database info for master:
>
> > blks_hit                 | 283,015,966,386
> > tup_fetched              | 237,052,965,901
>
> That's impressive.

Thanks!

It's still possible to get a big improvement like that with something
like TPC-C because there are certain behaviors that are clearly
suboptimal once you look at the details of the workload and compare an
imaginary ideal to the actual behavior of the system. In particular,
there is really only one way that free space management can work
acceptably for the two big tables: the orders have to be stored in the
same place to begin with, and stay in the same place forever (at least
to the extent that that's possible).

-- 
Peter Geoghegan


