On 12.11.2021 at 08:41, Laurenz Albe wrote:
> On Thu, 2021-11-11 at 18:39 +0000, sch8el@posteo.de wrote:
>> every few weeks I use Postgres's ability to import huge data sets very
>> fast by means of "unlogged tables". The bulk load (consisting of plenty
>> of "COPY" and DML statements) and the spatial index creation afterwards
>> take about 5 hours on a proper server (pg12.7 with the PostGIS
>> extension). After that, all unlogged tables remain completely unchanged
>> (no DML/DDL statements). Hence none of my huge unlogged, "static"
>> tables ever becomes "unclean", and they should not be truncated after a
>> server crash.
> There is no way to achieve that.
>
> But you could keep the "huge data sets" around and load them again if
> your server happens to crash (which doesn't happen often, I hope).
Thanks, Laurenz, for your reply! Yes, that's what we did after server
crashes (about two per year, at different locations).
But the system is offline for at least 5 hours, plus the time until an
admin manually restarts the bulk loads. On my system, I have 6 databases
configured like this, and I have to redo the bulk loads for all of them.
I had hoped there was a 'switch' for crash recovery that would avoid
truncating the data files of these unlogged tables, which are definitely
in perfect condition.
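For context, the bulk-load pattern described above looks roughly like the
following sketch (table, column, and file names are placeholders I made
up, not the actual schema):

```sql
-- Unlogged tables skip WAL writes, which makes bulk loading much
-- faster, but their data files are truncated during crash recovery.
CREATE UNLOGGED TABLE roads (
    id   bigint,
    geom geometry(LineString, 4326)  -- PostGIS geometry type
);

-- Bulk load via COPY (server-side path shown; \copy works from psql).
COPY roads (id, geom) FROM '/data/roads.csv' WITH (FORMAT csv);

-- Spatial index creation afterwards.
CREATE INDEX roads_geom_idx ON roads USING gist (geom);
```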
Mart
>
> Yours,
> Laurenz Albe