Re: Fastest way to duplicate a quite large database - Mailing list pgsql-general

From Adrian Klaver
Subject Re: Fastest way to duplicate a quite large database
Date
Msg-id 570E552C.8090908@aklaver.com
Whole thread Raw
In response to Re: Fastest way to duplicate a quite large database  (Edson Richter <edsonrichter@hotmail.com>)
Responses Re: Fastest way to duplicate a quite large database
List pgsql-general
On 04/13/2016 06:58 AM, Edson Richter wrote:

>
>
> Another trouble I've found: I've used "pg_dump" and "pg_restore" to
> create the new CustomerTest database in my cluster. Immediately,
> replication started to replicate the 60Gb data into slave, causing big
> trouble.
> Does marking it as "template" avoid replication of that "copied" database?
> How can I mark a database to "do not replicate"?

With Postgres's built-in binary replication you can't; it replicates
the entire cluster. There are third-party solutions that offer that choice:

http://www.postgresql.org/docs/9.5/interactive/different-replication-solutions.html

Table 25-1. High Availability, Load Balancing, and Replication Feature Matrix


As has been mentioned before, running a non-production database on the
same cluster as the production database is generally not a good idea.
Per the previous suggestions, I would host your CustomerTest database on
another instance/cluster of Postgres listening on a different port. Then
all your customers have to do is create a connection that points at the
new port.
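A minimal sketch of that setup, assuming the PostgreSQL server and client
binaries are on the PATH; the data directory, dump file name, and port 5433
are illustrative, not taken from the thread:

```shell
# Initialize a second, completely independent cluster for test data
initdb -D /var/lib/pgsql/test

# Start it on a non-default port so it cannot collide with production
pg_ctl -D /var/lib/pgsql/test -o "-p 5433" -l /var/lib/pgsql/test/logfile start

# Restore the dump into the new instance; the production cluster's
# replication never sees this database
createdb -p 5433 CustomerTest
pg_restore -p 5433 -d CustomerTest customertest.dump

# Clients then connect by giving the explicit port
psql -p 5433 -d CustomerTest
```

Because binary replication operates on a whole cluster, keeping the test
database in its own cluster is what keeps it off the slave.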

>
> Thanks,
>
> Edson
>
>


--
Adrian Klaver
adrian.klaver@aklaver.com

