Re: [Patch] Optimize dropping of relation buffers using dlist - Mailing list pgsql-hackers

From: Amit Kapila
Subject: Re: [Patch] Optimize dropping of relation buffers using dlist
Date:
Msg-id: CAA4eK1+iTYaRYfXWGPJbFCy9CWH7U6fVoPP=bG-ZcYJNsC995A@mail.gmail.com
In response to: Re: [Patch] Optimize dropping of relation buffers using dlist (Andres Freund <andres@anarazel.de>)
Responses: Re: [Patch] Optimize dropping of relation buffers using dlist (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
On Sat, Aug 1, 2020 at 1:53 AM Andres Freund <andres@anarazel.de> wrote:
>
> Hi,
>
> On 2020-07-31 15:50:04 -0400, Tom Lane wrote:
> > Andres Freund <andres@anarazel.de> writes:
>
> > > Wonder if the temporary fix is just to do explicit hashtable probes for
> > > all pages iff the size of the relation is < s_b / 500 or so. That'll
> > > address the case where small tables are frequently dropped - and
> > > dropping large relations is more expensive from the OS and data loading
> > > perspective, so it's not gonna happen as often.
> >
> > Oooh, interesting idea.  We'd need a reliable idea of how long the
> > relation is (preferably without adding an lseek call), but maybe
> > that's do-able.
>
> IIRC we already do smgrnblocks nearby, when doing the truncation (to
> figure out which segments we need to remove). Perhaps we can arrange to
> combine the two? The layering probably makes that somewhat ugly :(
>
> We could also just use pg_class.relpages. It'll probably mostly be
> accurate enough?
>

Don't we need an accurate 'number of blocks' if we want to invalidate
all the buffers? Basically, I think we need to perform a BufTableLookup
for every block in the relation and then invalidate the buffers that
are found.
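
For illustration, a rough sketch of the per-block probing being discussed
might look something like the following. This is only a sketch: it assumes
the code lives in bufmgr.c (so it can reach InvalidateBuffer() and the
buffer-mapping internals), the helper name DropRelFileNodeBuffersByLookup
is made up, and the NBuffers / 500 cutoff at the call site is just the
heuristic Andres suggested above, not a tested value.

static void
DropRelFileNodeBuffersByLookup(RelFileNode rnode, ForkNumber forkNum,
                               BlockNumber nblocks)
{
    BlockNumber blkno;

    for (blkno = 0; blkno < nblocks; blkno++)
    {
        BufferTag   tag;
        uint32      hash;
        LWLock     *partitionLock;
        int         buf_id;
        BufferDesc *bufHdr;
        uint32      buf_state;

        /* Probe the buffer mapping table for this block only. */
        INIT_BUFFERTAG(tag, rnode, forkNum, blkno);
        hash = BufTableHashCode(&tag);
        partitionLock = BufMappingPartitionLock(hash);

        LWLockAcquire(partitionLock, LW_SHARED);
        buf_id = BufTableLookup(&tag, hash);
        LWLockRelease(partitionLock);

        if (buf_id < 0)
            continue;           /* block is not in shared buffers */

        /*
         * Recheck the tag under the buffer header spinlock, since the
         * buffer could have been evicted and reused for another page
         * after we released the partition lock.  InvalidateBuffer()
         * releases the spinlock for us.
         */
        bufHdr = GetBufferDescriptor(buf_id);
        buf_state = LockBufHdr(bufHdr);
        if (RelFileNodeEquals(bufHdr->tag.rnode, rnode) &&
            bufHdr->tag.forkNum == forkNum &&
            bufHdr->tag.blockNum == blkno)
            InvalidateBuffer(bufHdr);   /* releases spinlock */
        else
            UnlockBufHdr(bufHdr, buf_state);
    }
}

Call site, roughly (again hypothetical, and assuming nblocks is available,
e.g. from smgrnblocks() or perhaps pg_class.relpages, per the discussion
above):

    if (nblocks < (BlockNumber) (NBuffers / 500))
        DropRelFileNodeBuffersByLookup(rnode, forkNum, nblocks);
    else
    {
        /* existing behavior: sequential scan of all of shared_buffers */
    }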

-- 
With Regards,
Amit Kapila.


