When running database migrations with .sql files on a live database, it is not uncommon to have to run a migration in a loop to avoid taking a big lock on a table. For instance, if one wants to delete some old data from a big table, one would write:
DELETE FROM big_table WHERE id IN (SELECT id FROM big_table WHERE bad = true LIMIT 1000); VACUUM big_table;
Right now, doing this is quite cumbersome: we either have to write a script in another language, or run psql in a shell loop and check after each pass whether the migration is still altering rows.
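For reference, the shell-loop workaround looks roughly like this. This is only a minimal sketch: the database name mydb is illustrative, and the CTE-with-RETURNING trick is just one way of getting the affected row count back into the shell.

    #!/bin/sh
    # Keep re-running the batched DELETE until a pass deletes nothing.
    while : ; do
        rows=$(psql -qAtX -d mydb -c "
            WITH deleted AS (
                DELETE FROM big_table
                WHERE id IN (SELECT id FROM big_table
                             WHERE bad = true LIMIT 1000)
                RETURNING 1
            )
            SELECT count(*) FROM deleted;")
        psql -qX -d mydb -c "VACUUM big_table;"
        [ "$rows" -eq 0 ] && break
    done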
The attached **proof of concept** patch (I insist on that: it is a 15-minute hack, written with no previous knowledge of the psql code) implements an 'until-0' loop in psql. The previous migration could then simply be written as:
\until-0 BEGIN; DELETE FROM big_table WHERE id IN (SELECT id FROM big_table WHERE bad = true LIMIT 1000); COMMIT; VACUUM big_table; \end-until
psql will then execute the block repeatedly until no rows are affected by the inner queries.
I am willing to write a proper patch for this (I hope the tell/seek approach is an acceptable implementation…), but I would like some feedback first.
I like this idea, but it is really a hack :)
In this case, the loop commands should be \repeat ... and \end-repeat-until