Re: shared memory stats: high level design decisions: consistency, dropping - Mailing list pgsql-hackers

From Hannu Krosing
Subject Re: shared memory stats: high level design decisions: consistency, dropping
Msg-id CAMT0RQQbu9UdBkugBXarfhtEmxeMwTUsGTUNybi0j=JG_EbTMg@mail.gmail.com
In response to Re: shared memory stats: high level design decisions: consistency, dropping  (Andres Freund <andres@anarazel.de>)
List pgsql-hackers
On Sat, Mar 20, 2021 at 1:21 AM Andres Freund <andres@anarazel.de> wrote:
>
> Hi,
>
> On 2021-03-20 01:16:31 +0100, Hannu Krosing wrote:
> > > But now we could instead schedule stats to be removed at commit
> > > time. That's not trivial of course, as we'd need to handle cases where
> > > the commit fails after the commit record, but before processing the
> > > dropped stats.
> >
> > We likely can not remove them at commit time, but only after the
> > oldest open snapshot moves past that commit?
>
> I don't see why? A dropped table is dropped, and cannot be accessed
> anymore. Snapshots don't play a role here - the underlying data is gone
> (minus a placeholder file to avoid reusing the oid, until the next
> commit).  If you run a vacuum on some unrelated table in the same
> database, the stats for a dropped table will already be removed long
> before there's no relation that could theoretically open the table.
>
> Note that table level locking would prevent a table from being dropped
> if a long-running transaction has already accessed it.

Yeah, just checked. DROP TABLE waits until the reading transaction finishes.
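
Something like the following shows it (the table name t is just for
illustration):

  -- session 1
  BEGIN;
  SELECT count(*) FROM t;   -- acquires ACCESS SHARE on t

  -- session 2
  DROP TABLE t;             -- needs ACCESS EXCLUSIVE, so it blocks
                            -- until session 1 commits or aborts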

>
> > Would an approach where we keep stats in a structure logically similar
> > to the MVCC we use for normal tables be completely unfeasible?
>
> Yes, pretty unfeasible. Stats should work on standbys too...

I did not mean actually using MVCC and real transaction ids, but rather
a similar approach, where (potentially) different stats rows are kept
for each backend.

This is of course only a win if multiple backends can use the same
stats row; otherwise it is easier to copy the backend's version into
backend-local memory.

But I myself do not see any problem with stats rows changing all the time.

The only worry would be parts of the same row getting out of sync. That
can of course be solved by locking, but for a large number of backends
running tiny transactions the locking itself could become a problem.
Here, alternating between two or more versions of each row could help,
and then it also starts to make sense to keep the copies in shared memory.
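
To make that concrete, here is a minimal single-writer sketch in C of the
alternating-versions (seqlock-like) idea; all names are made up and this
is not PostgreSQL code. The writer fills whichever of the two slots is
currently invisible and publishes it with one atomic counter bump; readers
copy the visible slot into local memory and retry in the rare case the
writer flipped slots mid-copy, so no per-row lock is taken on either side:

#include <stdatomic.h>
#include <stdio.h>

typedef struct StatsRow
{
    long seq_scans;
    long idx_scans;
    long tuples_inserted;
} StatsRow;

typedef struct VersionedStats
{
    StatsRow slot[2];          /* two alternating copies of the row */
    _Atomic unsigned current;  /* low bit selects the visible slot  */
} VersionedStats;

/* Writer: fill the invisible slot, then publish with one atomic bump. */
static void
stats_update(VersionedStats *vs, const StatsRow *newvals)
{
    unsigned next = atomic_load(&vs->current) + 1;

    vs->slot[next & 1] = *newvals;    /* write the currently hidden copy */
    atomic_store(&vs->current, next); /* flip which slot readers see     */
}

/* Reader: copy the visible slot to local memory; retry if it flipped. */
static void
stats_read(VersionedStats *vs, StatsRow *out)
{
    unsigned cur;

    do
    {
        cur = atomic_load(&vs->current);
        *out = vs->slot[cur & 1];     /* local copy, possibly torn */
    } while (atomic_load(&vs->current) != cur); /* flipped? retry */
}

int
main(void)
{
    VersionedStats vs = {0};
    StatsRow in = {3, 7, 42};
    StatsRow out;

    stats_update(&vs, &in);
    stats_read(&vs, &out);
    printf("seq=%ld idx=%ld ins=%ld\n",
           out.seq_scans, out.idx_scans, out.tuples_inserted);
    return 0;
}

With more than one writer per row the bump would have to become a
compare-and-swap (or each row stays owned by a single backend), and a
strict C11 reading would want explicit memory ordering on the slot
copies, but the point is just that readers get a consistent row without
ever blocking the writer.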



> Regards,
>
> Andres


