On 11/20/19 4:03 PM, stan wrote:
I am working on a fairly small application to use for managing a company's
business.
I have a "production" instance hosted by one of the cloud providers, and 2
other instances. This is fairly new to me. In the past, I have created
applications by keeping a set of scripts that can be used to rebuild the
database, and pg_dump to restore the date. Based on some recommendations I
am using pg_basebackup to backup the production instance nightly. My
background is primarily Oracle. I realize looking at the way pg_basebackup
works that multiple database instances, provided by one server are actually
stored in the same physical OS files.
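Right, and that's worth dwelling on: pg_basebackup takes a physical copy of
the entire cluster, so every database served by that postmaster lands in the
one backup. A minimal nightly invocation might look like this (host, user,
and backup path are placeholders for whatever your cloud provider gives you):

    # compressed tar-format copy of the whole cluster, streaming the
    # WAL needed to make the backup consistent on its own
    pg_basebackup -h prod.example.com -U backup_user \
        -D /backups/pg/$(date +%F) -F t -z -X stream -P

You cannot pull a single database out of that backup; restoring it means
restoring the whole cluster.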
We have traditionally (in the Postgres world) had a sandbox that we used
for upgrades and for testing development methodologies, and this seems to be
supported pretty well by pg_dump.
Now that I know "too much", I am concerned about hosting the sandbox on the
same Postgres instance.
What specifically do you mean by "instance"? (I know what it means in the
SQL Server world; in Postgres, all the databases accessible via a single
$PGDATA are collectively called a cluster.)
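You can see the one-cluster-many-databases layout directly (both commands
assume a default local setup):

    psql -p 5432 -l        # lists every database in the cluster on port 5432
    ls $PGDATA/base        # one subdirectory per database, named by OID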
Recognizing that this is a fairly small application, what do wiser folks
than I recommend?
Should I run the sandbox on a different Postgres server, possibly even on a
different machine? Is pg_dump still a good way to move the production
instance to the sandbox, and perhaps even the other way around?
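For an application this size, pg_dump is still a fine way to refresh the
sandbox from production, and it works per-database rather than per-cluster,
so it sidesteps the shared-files concern. A sketch, with illustrative host
names, port, and database name:

    # dump only the application database, in custom format
    pg_dump -h prod.example.com -U app_user -F c -f app.dump app_db

    # recreate it in the sandbox cluster
    dropdb --if-exists -h sandbox.example.com -p 5433 app_db
    createdb -h sandbox.example.com -p 5433 app_db
    pg_restore -h sandbox.example.com -p 5433 -d app_db app.dump

Going the other way (sandbox back to production) works the same, but I'd be
much more careful about dropping anything on the production side.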
Running CAT, STG, UAT, DEV, etc. on different VMs is certainly one solution, isolating them from each other.
OTOH, you can initdb multiple clusters on the same host, accessing them via different $PGDATA variables and port numbers.
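For example (the data directory path and port here are arbitrary choices):

    # create a second, completely independent cluster for the sandbox
    initdb -D /var/lib/postgresql/sandbox

    # give it its own port and start it alongside the production cluster
    pg_ctl -D /var/lib/postgresql/sandbox -o "-p 5433" -l sandbox.log start

Each cluster then has its own files, its own postmaster, and its own port,
so the sandbox can't step on production even on the same machine.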
--
Angular momentum makes the world go 'round.