Thanks to all.
I'll give the bash loop method a try and let you know how it works out.
Regards to all,
Gus
On Wed, Jan 28, 2026 at 2:32 AM Olivier Gautherot
<ogautherot@gautherot.net> wrote:
>
> Hi Gus!
>
> This reminds me of a costly mistake I made, and one you will want to avoid: it was a mission-critical
> database (think physical safety, real people) and the vacuum froze the DB for 24 hours, until I
> finally took it offline.
>
> If you can take it offline (and you have a couple of hours):
> - disconnect the DB
> - drop indexes (that's the killer)
> - remove unnecessary data
> - vacuum manually (or better, copy the relevant data to a new table and rename it, as sketched
> after this list - this will save the DELETE above and will defragment the table)
> - rebuild indexes
> - connect the DB
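>
> A rough sketch of the copy-and-rename variant, assuming a hypothetical table "events" keyed by a
> "created_at" timestamp (untested, adapt to your schema):
>
>   BEGIN;
>   -- copy only the rows worth keeping into a fresh, unfragmented table
>   CREATE TABLE events_new AS
>     SELECT * FROM events
>     WHERE created_at >= now() - interval '90 days';
>   -- swap the tables; keep the old one around until you are confident
>   ALTER TABLE events RENAME TO events_old;
>   ALTER TABLE events_new RENAME TO events;
>   COMMIT;
>   -- then rebuild indexes on the new table and, later, DROP TABLE events_old;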
>
> The better solution would be partitioning (rough SQL sketch after this list):
> - choose a metric (for instance, a timestamp)
> - create partition tables for the period you want to keep
> - copy the relevant data to the partitions and create partial indexes
> - take the DB offline
> - update the last partition with the latest data (should be a fast update)
> - truncate the original table
> - attach the partitions
> - connect the DB
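>
> For instance, with declarative partitioning (again only a sketch; "events" and "created_at" are
> made-up names):
>
>   CREATE TABLE events_part (
>     id bigint,
>     created_at timestamptz NOT NULL,
>     payload text
>   ) PARTITION BY RANGE (created_at);
>
>   -- one partition per month you want to keep
>   CREATE TABLE events_2026_01 PARTITION OF events_part
>     FOR VALUES FROM ('2026-01-01') TO ('2026-02-01');
>
>   -- retiring a month later on is then just:
>   DROP TABLE events_2026_01;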
>
> In the future, deleting historic data will be a simple DROP TABLE.
>
> Hope it helps
> --
> Olivier Gautherot
> Tel: +33 6 02 71 92 23
>
>
> On Wed, Jan 28, 2026 at 5:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>
>> Ron Johnson <ronljohnsonjr@gmail.com> writes:
>> > Hmm. Must have been START TRANSACTION which I remember causing issues in DO
>> > blocks.
>>
>> Too lazy to test, but I think we might reject that. The normal rule
>> in a procedure is that the next command after a COMMIT automatically
>> starts a new transaction, so you don't need an explicit START.
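>>
>> For illustration, a batched purge in a procedure leans on exactly that rule
>> (untested sketch; the table and column names are invented):
>>
>>   CREATE PROCEDURE purge_old_rows() LANGUAGE plpgsql AS $$
>>   BEGIN
>>     LOOP
>>       DELETE FROM events
>>       WHERE ctid IN (SELECT ctid FROM events
>>                      WHERE created_at < now() - interval '90 days'
>>                      LIMIT 10000);
>>       EXIT WHEN NOT FOUND;
>>       COMMIT;  -- the next iteration starts a new transaction implicitly
>>     END LOOP;
>>   END $$;
>>
>>   CALL purge_old_rows();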
>>
>> regards, tom lane