Re: [PoC] Improve dead tuple storage for lazy vacuum - Mailing list pgsql-hackers

From: John Naylor
Subject: Re: [PoC] Improve dead tuple storage for lazy vacuum
Date:
Msg-id: CANWCAZZ2LXMHSuY6uWEBMxbHgLr=DmHz-=CWBNnu7eFdssCOxw@mail.gmail.com
In response to: Re: [PoC] Improve dead tuple storage for lazy vacuum (Masahiko Sawada <sawada.mshk@gmail.com>)
Responses: Re: [PoC] Improve dead tuple storage for lazy vacuum
List: pgsql-hackers
On Mon, Jan 29, 2024 at 2:29 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

> > > +/*
> > > + * Calculate the slab blocksize so that we can allocate at least 32 chunks
> > > + * from the block.
> > > + */
> > > +#define RT_SLAB_BLOCK_SIZE(size) \
> > > + Max((SLAB_DEFAULT_BLOCK_SIZE / (size)) * (size), (size) * 32)
> > >
> > > The first parameter seems to be trying to make the block size exact,
> > > but that's not right, because of the chunk header, and maybe
> > > alignment. If the default block size is big enough to waste only a
> > > tiny amount of space, let's just use that as-is.

> If we use SLAB_DEFAULT_BLOCK_SIZE (8kB) for each node class, we waste
> [snip]
> We might want to calculate a better slab block size for node256 at least.

I meant the macro could probably be

Max(SLAB_DEFAULT_BLOCK_SIZE, (size) * N)

(Right now N=32). I also realize I didn't answer your question earlier
about block sizes being powers of two. I was talking about PG in
general -- I was thinking all block sizes were powers of two. If
that's true, I'm not sure if it's because programmers find the macro
calculations easy to reason about, or if there was an implementation
reason for it (e.g. libc behavior). 32*2088 bytes is about 65kB, or
just above a power of two, so if we did round that up it would be
128kB.
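
To be concrete, with N = 32 and the power-of-two rounding I'm
speculating about, the whole thing might look something like this
(just a sketch -- the rounding via pg_nextpower2_32() from
pg_bitutils.h is only there to illustrate, I'm not insisting on it):

/* Allocate at least 32 chunks per block, rounded up to a power of two. */
#define RT_SLAB_BLOCK_SIZE(size) \
	pg_nextpower2_32(Max(SLAB_DEFAULT_BLOCK_SIZE, (size) * 32))

For a 2088-byte node class that gives 128kB blocks, while classes small
enough that 32 chunks fit in the default block stay at 8kB.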

> > > + * TODO: The caller must be certain that no other backend will attempt to
> > > + * access the TidStore before calling this function. Other backend must
> > > + * explicitly call TidStoreDetach() to free up backend-local memory associated
> > > + * with the TidStore. The backend that calls TidStoreDestroy() must not call
> > > + * TidStoreDetach().
> > >
> > > Do we need to do anything now?
> >
> > No, will remove it.
> >
>
> I misunderstood something. I think the above statement is still true
> but we don't need to do anything at this stage. It's a typical usage
> that the leader destroys the shared data after confirming all workers
> are detached. It's not a TODO but probably a NOTE.

Okay.
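
To spell out the lifecycle being described, for the archives (sketch
only -- I'm assuming both functions take just the TidStore pointer, as
in the current patch):

	/* Each worker, when it is done with the shared TidStore: */
	TidStoreDetach(ts);		/* frees backend-local memory only */

	/* The leader, only after confirming all workers have detached: */
	TidStoreDestroy(ts);	/* frees the shared data; the leader must
							 * not also call TidStoreDetach() */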


