Re: Enhance pg_dump multi-threaded streaming (WAS: Re: filesystem full during vacuum - space recovery issues) - Mailing list pgsql-admin

From: Scott Ribe
Subject: Re: Enhance pg_dump multi-threaded streaming (WAS: Re: filesystem full during vacuum - space recovery issues)
Msg-id: 6F46CFEB-3C3A-4CEC-89DF-57D8225A5863@elevated-dev.com
In response to: Enhance pg_dump multi-threaded streaming (WAS: Re: filesystem full during vacuum - space recovery issues) (Thomas Simpson <ts@talentstack.to>)
List: pgsql-admin
Do you actually have 100G networking between the nodes? Because if not, a single CPU should be able to saturate 10G.

Likewise, the receiving end would need disk capable of keeping up. Which brings up the question: why not write to disk, but directly on the destination rather than writing locally and then copying?
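For instance (a rough sketch; the host names, database name, and paths are placeholders, and it assumes the destination cluster is already initialized and reachable):

    # Option A: run the dump on the destination host, pulling from the
    # source over the network; nothing is written on the source side.
    pg_dump -h source-host -Fd -j 8 -f /fast/disk/dump mydb

    # Option B: stream from the source straight into the destination
    # cluster. Custom format over a pipe forgoes parallel restore,
    # since pg_restore -j needs a seekable input file.
    pg_dump -Fc mydb | ssh dest-host 'pg_restore -d mydb'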

Do you require dump-reload because of suspected corruption? That's a tough one. But if not, if the goal is just to get
up and running on a new server, why not pg_basebackup, streaming replica, promote? That depends on the level of data
modification activity being low enough that the replica can receive WAL as it's generated and apply it faster
than new WAL comes in, but given that your server is currently keeping up with writing that much WAL and flushing that
many changes, it seems likely the replica would keep up as long as the network connection is fast enough. Anyway, in that scenario,
you don't need to care how long pg_basebackup takes.
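Something along these lines (untested sketch; the host names, replication role, and data directory are placeholders):

    # On the new server: take a base backup that comes up as a streaming
    # replica (-R writes the recovery settings automatically).
    pg_basebackup -h old-server -U replicator -D /var/lib/postgresql/data \
        -X stream -R -P

    # Start the replica, let it catch up, then cut over.
    pg_ctl -D /var/lib/postgresql/data start
    # ...watch pg_stat_replication on the old server until lag is ~0...
    pg_ctl -D /var/lib/postgresql/data promote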

If you do need a dump/reload because of suspected corruption, the only thing I can think of is something like doing it
a table at a time; partitioning would help here, if practical.
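Roughly like this (hypothetical sketch; assumes schemas and roles have already been restored, e.g. from a --schema-only dump, and that connection details are filled in):

    # Dump and restore one table at a time, so a corrupt table fails
    # in isolation instead of aborting the whole run.
    psql -h old-server -d mydb -Atc \
      "SELECT format('%I.%I', schemaname, tablename) FROM pg_tables
       WHERE schemaname NOT IN ('pg_catalog', 'information_schema')" |
    while read -r tbl; do
        pg_dump -h old-server -Fc -t "$tbl" mydb |
            pg_restore -h new-server -d mydb ||
            echo "FAILED: $tbl" >> failed_tables.txt
    done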

