Hello! I'd like to "preflight" a given schema migration (i.e. one or
more DDL statements) before applying it to the production database (e.g.
for use in a CI pipeline). I have a strategy in mind and would like to
know whether it's sound.
The general idea is:
- you have a test database that is a clone of your production one (with
or without data, but with an identical schema)
- given the DDL statements, you open a transaction on one connection,
grab its backend pid, and for each statement:
1. from a different "observer" connection, you read pg_locks,
filtered to that pid: this is the "before" set of locks
2. from the first connection, you execute the statement
3. from the observer, you read pg_locks again and compute the diff
between this "after" view and the "before" one
- finally, from the first tx, you roll back, so nothing is ever applied
to the test database (a sketch of this loop is below)
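To make this concrete, here is a rough sketch of what I have in mind, in
Python with psycopg2 (the function names, the DSN handling, and the
choice to keep everything in a single transaction rolled back once at
the end are just illustration, not settled code):

```python
import psycopg2

# The pg_locks snapshot query from below, parameterized on the pid.
LOCK_QUERY = """
    SELECT locktype, database, relation, objid, mode
    FROM pg_catalog.pg_locks
    WHERE pid = %s AND locktype IN ('relation', 'object') AND granted
"""

def snapshot_locks(observer, pid):
    # Read the subject backend's currently granted locks from outside.
    with observer.cursor() as cur:
        cur.execute(LOCK_QUERY, (pid,))
        return set(cur.fetchall())

def preflight_locks(dsn, statements):
    subject = psycopg2.connect(dsn)   # runs the DDL in one transaction
    observer = psycopg2.connect(dsn)  # watches pg_locks from the side
    observer.autocommit = True
    results = []
    try:
        with subject.cursor() as cur:
            cur.execute("SELECT pg_backend_pid()")  # also opens the tx
            pid = cur.fetchone()[0]
            for stmt in statements:
                before = snapshot_locks(observer, pid)
                cur.execute(stmt)
                after = snapshot_locks(observer, pid)
                # Locks are held until transaction end, so the diff is
                # what this particular statement acquired.
                results.append((stmt, after - before))
    finally:
        subject.rollback()  # nothing is ever applied
        subject.close()
        observer.close()
    return results
```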
By diffing the "after" and "before" views of pg_locks, my assumption is
that you learn which locks the DDL statements will acquire (but not for
how long they will be held). The query I have in mind is:
SELECT locktype, database, relation, objid, mode
FROM pg_catalog.pg_locks
WHERE pid = $1
  AND locktype IN ('relation', 'object')
  AND granted;
The statements fed as input would be `ALTER|CREATE TABLE`,
`CREATE|DROP INDEX`, and perhaps DML statements (`UPDATE`, `INSERT`,
`DELETE`).
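Driving the sketch above would then look something like this (the DSN
and the table name are made up):

```python
if __name__ == "__main__":
    # Hypothetical DSN pointing at the cloned test database.
    statements = ["ALTER TABLE accounts ADD COLUMN note text"]
    for stmt, new_locks in preflight_locks("dbname=test_clone", statements):
        print(stmt)
        for lock in sorted(new_locks, key=str):
            print("   ", lock)
```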
Do you think this is a robust way to detect the locks a migration will
acquire? Are there any caveats, drawbacks, or flaws in this strategy?
Thanks in advance