On Wed, 2024-05-29 at 14:36 +0200, Alexander Staubo wrote:
> > On 29 May 2024, at 02:53, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > I'm unpersuaded by the idea that ANALYZE should count dead tuples.
> > Since those are going to go away pretty soon, we would risk
> > estimating on the basis of no-longer-relevant stats and thus
> > creating problems worse than the one we solve.
>
> Mind you, “pretty soon” could actually be “hours” if a pg_dump is running,
> or some other long-running transaction is holding back the xmin. Granted,
> long-running transactions should be avoided, but they happen, and the
> result is operationally surprising.
Don't do these things on a busy transactional database.
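If you suspect that something is holding back the xmin horizon, you can
look at pg_stat_activity.  A minimal sketch (the columns and ordering
are just what I would pick):

    SELECT pid, state, xact_start, age(backend_xmin) AS xmin_age, query
    FROM pg_stat_activity
    WHERE backend_xmin IS NOT NULL
    ORDER BY xact_start;

The session with the oldest xact_start and a non-NULL backend_xmin is
usually the culprit.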
> I have another use case where I used a transaction to lock a resource
> to prevent concurrent access. I.e. the logic did
> “SELECT … FROM … WHERE id = $1 FOR UPDATE” and held that transaction open
> for hours while doing maintenance.
That's a dreadful idea.
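If you need to lock out concurrent access for hours, a session-level
advisory lock is a much better fit, because it does not keep a
transaction open and therefore does not hold back the xmin horizon.
A sketch, with 42 as an arbitrary lock key:

    SELECT pg_try_advisory_lock(42);   -- returns true if we got the lock
    -- ... perform the maintenance, in as many short transactions as needed ...
    SELECT pg_advisory_unlock(42);

The lock is released automatically if the session ends.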
>
> Just to clarify, this is a real use case, though the repro is of course
> artificial since the real production case is inserting and deleting rows
> very quickly.
No doubt.
Still, I think that your main trouble is long-running transactions.
They will always give you trouble on a busy PostgreSQL database.
You should avoid them.
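One way to defend against sessions that sit idle in an open transaction
is idle_in_transaction_session_timeout.  For example (five minutes is
just a suggestion, and note that this won't help against an actively
running pg_dump):

    -- terminate any session idle in a transaction for more than 5 minutes
    ALTER SYSTEM SET idle_in_transaction_session_timeout = '5min';
    SELECT pg_reload_conf();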
Yours,
Laurenz Albe