Re: shared memory stats: high level design decisions: consistency, dropping - Mailing list pgsql-hackers

From: Stephen Frost
Subject: Re: shared memory stats: high level design decisions: consistency, dropping
Date:
Msg-id: 20210321221606.GP20766@tamriel.snowman.net
In response to: Re: shared memory stats: high level design decisions: consistency, dropping (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: shared memory stats: high level design decisions: consistency, dropping (Greg Stark <stark@mit.edu>)
List: pgsql-hackers
Greetings,

* Tom Lane (tgl@sss.pgh.pa.us) wrote:
> If I understand what you are proposing, all stats views would become
> completely volatile, without even within-query consistency.  That really
> is not gonna work.  As an example, you could get not-even-self-consistent
> results from a join to a stats view if the planner decides to implement
> it as a nestloop with the view on the inside.
>
> I also believe that the snapshotting behavior has advantages in terms
> of being able to perform multiple successive queries and get consistent
> results from them.  Only the most trivial sorts of analysis don't need
> that.
>
> In short, what you are proposing sounds absolutely disastrous for
> usability of the stats views, and I for one will not sign off on it
> being acceptable.
>
> I do think we could relax the consistency guarantees a little bit,
> perhaps along the lines of only caching view rows that have already
> been read, rather than grabbing everything up front.  But we can't
> just toss the snapshot concept out the window.  It'd be like deciding
> that nobody needs MVCC, or even any sort of repeatable read.
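
To be concrete about the scenario being described, a hypothetical
query of that shape might be:

SELECT c.relname, s.n_tup_ins, s.n_tup_upd
  FROM pg_class c
  JOIN pg_stat_user_tables s ON s.relid = c.oid
 WHERE c.relkind = 'r';

If the planner happened to put the stats view on the inner side of a
nestloop, the view would be re-evaluated for each outer row, and
without a per-query snapshot those evaluations could each see
different counter values.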

This isn't the same use case as traditional tables or relational
concepts in general: there are no foreign keys on the fields that
would actually be changing across these accesses to the shared memory
stats.  We're talking about gross stats numbers, like the number of
inserts into a table, not an employee_id column.  In short, I don't
agree that this is a fair comparison.
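
(For reference, the kind of values at issue are cumulative counters,
e.g.:

SELECT datname, xact_commit, xact_rollback, tup_inserted
  FROM pg_stat_database;

not key columns that other data has to stay consistent with.)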

Perhaps there's a good argument for trying to cache all this info per
backend, but "other things need MVCC, so this data needs MVCC-like
semantics too" isn't it, and I don't think it's reasonable to claim
that this is a case which justifies requiring repeatable read either.

What specific, reasonable analysis of the values we're actually
talking about, which are already aggregates themselves, is going to
end up being utterly confused?
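
The analysis I'd expect is something like periodically sampling the
counters and looking at deltas, where each row only needs to be a
reasonable point-in-time reading.  A sketch (the sampling table here
is hypothetical, purely for illustration):

CREATE TABLE stats_sample (
    sampled_at timestamptz DEFAULT now(),
    relid      oid,
    n_tup_ins  bigint
);

-- taken periodically, e.g. from cron:
INSERT INTO stats_sample (relid, n_tup_ins)
SELECT relid, n_tup_ins FROM pg_stat_user_tables;

-- inserts per table between the two most recent samples:
SELECT relid, cur.n_tup_ins - prev.n_tup_ins AS recent_inserts
  FROM stats_sample cur
  JOIN stats_sample prev USING (relid)
 WHERE cur.sampled_at  = (SELECT max(sampled_at) FROM stats_sample)
   AND prev.sampled_at = (SELECT max(sampled_at) FROM stats_sample
                           WHERE sampled_at < cur.sampled_at);

Nothing in that kind of calculation depends on every row of a given
sample having been read at exactly the same instant.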

Thanks,

Stephen
