On Fri, Jul 19, 2019 at 4:41 PM Matthew Pounsett <matt@conundrum.com> wrote:
> My current backup plan for this database is on-site replication, and a monthly pg_dump from the standby to be copied
> off-site. Doing per-table backups sounds like a great way to end up with an inconsistent backup, but perhaps I
> misunderstand what you mean.
>
Well, my idea was that, as I was supposing and you confirmed, the
database is also full of historical data that will not be updated in
the future. Therefore you could at least partition the table, back up
the historical data and, after having verified the backup, drop the
historical part. This leaves you with a "partially" online system
(partial because it will hold, say, only the last 10 years instead of
the last 20) and frees the space you need for the upgrade. Once the
upgrade is completed, you can restore the oldest data and come back
online with the full dataset. In this scenario the backup is not
inconsistent, since the old data is supposed to stay frozen. If that
assumption is wrong, my idea is completely wrong too.
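Just to make the idea concrete, here is a rough sketch of the kind of
commands I mean. The database and table names are of course made up,
and it assumes the old rows have already been moved into their own
table (or detached partition) called events_2009:

    # hypothetical names: "hugedb" is the database, "events_2009" an old partition/table
    # 1. dump only the historical table, in custom format
    pg_dump -Fc -t events_2009 -f events_2009.dump hugedb

    # 2. verify the archive can at least be read back (and copy it off-site)
    pg_restore --list events_2009.dump > /dev/null

    # 3. once verified, drop it to reclaim the disk space
    psql -d hugedb -c 'DROP TABLE events_2009;'

    # ... run the upgrade ...

    # 4. after the upgrade, restore the historical table
    pg_restore -d hugedb events_2009.dump

The details obviously depend on how the data is laid out; this is only
to show the overall flow.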
I know this is the desperate-poor-man approach, but quite frankly I
have no other ideas if you are constrained on space, money and time
(because the longer you wait, the harder the upgrade becomes, in my
opinion).
I would also look _now_ into a possible refactoring of the database in
order to gain, if possible, some extra space. I mean, sorry to be
harsh, but a database with one huge table has probably not been
designed efficiently, so chances are some columns can be shrunk
(converted to a more appropriate data type, perhaps?) and that could
give you some extra space.
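For example, something like the following (table and column names are
purely hypothetical), keeping in mind that changing a column's type
usually rewrites the whole table, so it temporarily needs extra disk
space of its own:

    # hypothetical: a column declared bigint that only ever holds small values
    # note: ALTER ... TYPE rewrites the table, so it needs temporary extra space
    psql -d hugedb -c 'ALTER TABLE events ALTER COLUMN status_code TYPE smallint;'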
But without a better understanding of the scenario and the context, I
don't think I can help much more.
Sorry,
Luca