On Wed, 2025-05-07 at 06:08 +0300, Agis wrote:
> On Wed, May 7, 2025, 00:57 Laurenz Albe <laurenz.albe@cybertec.at> wrote:
> > On Tue, 2025-05-06 at 12:06 +0300, Agis Anastasopoulos wrote:
> > > I'd like to "preflight" a given schema migration (i.e. one or
> > > more DDL statements) before applying it to the production database (e.g.
> > > for use in a CI pipeline). I'm thinking of a strategy and would like to
> > > know about its soundness.
> > >
> > > The general idea is:
> > >
> > > - you have a test database that's a clone of your production one (with
> > > or without data, but with an identical schema)
> > > - given the DDL statements, you open a transaction, grab the backend's
> > > pid, and for each statement:
> > > 1. from a different "observer" connection, you read pg_locks,
> > > filtering locks for that pid. These are the "before" locks
> > > 2. from the first tx, you execute the statement
> > > 3. from the observer, you grab again pg_locks and compute the diff
> > > between this and the "before" view
> > > 4. from the first tx, you rollback the transaction
> >
> > I think that that is a good strategy, as long as you run all DDL statements
> > in a single transaction.
>
> Can you elaborate on that?
>
> I was thinking that we should mirror the way the statements are going to be
> executed in production: if they're all going to be executed inside a single
> tx, then we should do the same. But if not, then we should follow suit and
> execute them in separate txs.
>
> Am I missing something?
No; I was sloppy.
What I wanted to emphasize is that you have to look at "pg_locks" *before*
the transaction ends (commits or rolls back); otherwise you won't see any locks.
It doesn't have to be one single transaction.
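For illustration, here is a minimal sketch of the observer approach using two
psql sessions (the table, the column and the pid are made up for the example):

  -- Session 1: the migration transaction
  BEGIN;
  SELECT pg_backend_pid();   -- suppose it returns 12345
  -- observer takes the "before" snapshot at this point
  ALTER TABLE foo ADD COLUMN bar integer;

  -- Session 2: the observer, while session 1 is still open
  SELECT locktype, relation::regclass, mode, granted
  FROM pg_locks
  WHERE pid = 12345;

  -- Session 1: release the locks again
  ROLLBACK;

The diff between the observer's "before" and "after" results shows the locks
taken by the ALTER TABLE; as soon as session 1 commits or rolls back, those
rows disappear from pg_locks.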
Yours,
Laurenz Albe