Thread: DRAFT 9.6 release

From:
Josh Berkus
Date:

Folks,

Here is a preliminary draft of a 9.6 release announcement.

Please comment, suggest, edit, make comments on the wiki, whatever.

https://wiki.postgresql.org/wiki/96releasedraft

--
--
Josh Berkus
Red Hat OSAS
(any opinions are my own)


From:
Amit Langote
Date:

On 2016/08/30 8:00, Josh Berkus wrote:
> Folks,
>
> Here is a preliminary draft of a 9.6 release announcement.
>
> Please comment, suggest, edit, make comments on the wiki, whatever.
>
> https://wiki.postgresql.org/wiki/96releasedraft

In the section on scale out, I see quorum commit mentioned but it's not
part of what's offered in 9.6.  The quorum part is still being worked on:
https://commitfest.postgresql.org/10/696/

Thanks,
Amit




From:
Josh Berkus
Date:

On 08/29/2016 06:24 PM, Amit Langote wrote:
> On 2016/08/30 8:00, Josh Berkus wrote:
>> Folks,
>>
>> Here is a preliminary draft of a 9.6 release announcement.
>>
>> Please comment, suggest, edit, make comments on the wiki, whatever.
>>
>> https://wiki.postgresql.org/wiki/96releasedraft
>
> In the section on scale out, I see quorum commit mentioned but it's not
> part of what's offered in 9.6.  The quorum part is still being worked on:
> https://commitfest.postgresql.org/10/696/

Oh, figures the one feature I haven't tested would be the one which
isn't right.  So what DID get added to 9.6?  Is it still a significant
feature?

--
--
Josh Berkus
Red Hat OSAS
(any opinions are my own)


From:
Bruce Momjian
Date:

On Tue, Aug 30, 2016 at 09:41:59AM -0700, Josh Berkus wrote:
> On 08/29/2016 06:24 PM, Amit Langote wrote:
> > On 2016/08/30 8:00, Josh Berkus wrote:
> >> Folks,
> >>
> >> Here is a preliminary draft of a 9.6 release announcement.
> >>
> >> Please comment, suggest, edit, make comments on the wiki, whatever.
> >>
> >> https://wiki.postgresql.org/wiki/96releasedraft
> >
> > In the section on scale out, I see quorum commit mentioned but it's not
> > part of what's offered in 9.6.  The quorum part is still being worked on:
> > https://commitfest.postgresql.org/10/696/
>
> Oh, figures the one feature I haven't tested would be the one which
> isn't right.  So what DID get added to 9.6?  Is it still a significant
> feature?

We did this (from the 9.6 release notes):

        Allow synchronous replication to support multiple simultaneous
        synchronous standby servers, not just one (Masahiko Sawada,
        Beena Emerson, Michael Paquier, Fujii Masao, Kyotaro Horiguchi)

        The number of standby servers that must acknowledge a commit
        before it is considered complete is now configurable as part of
        the synchronous_standby_names parameter.

You can see the details here, e.g. "3 (s1, s2, s3, s4)"

    https://www.postgresql.org/docs/9.6/static/runtime-config-replication.html#GUC-SYNCHRONOUS-STANDBY-NAMES
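
In postgresql.conf terms that would look something like this (a sketch;
s1..s4 stand for the application_name values the standbys set in their
primary_conninfo):

```
# postgresql.conf on the primary (sketch; s1..s4 are illustrative
# standby names, i.e. application_name values)
synchronous_standby_names = '3 (s1, s2, s3, s4)'
```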

--
  Bruce Momjian  <>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+                     Ancient Roman grave inscription +


From:
Josh Berkus
Date:

On 08/30/2016 02:28 PM, Bruce Momjian wrote:
> On Tue, Aug 30, 2016 at 09:41:59AM -0700, Josh Berkus wrote:
>> On 08/29/2016 06:24 PM, Amit Langote wrote:
>>> On 2016/08/30 8:00, Josh Berkus wrote:
>>>> Folks,
>>>>
>>>> Here is a preliminary draft of a 9.6 release announcement.
>>>>
>>>> Please comment, suggest, edit, make comments on the wiki, whatever.
>>>>
>>>> https://wiki.postgresql.org/wiki/96releasedraft
>>>
>>> In the section on scale out, I see quorum commit mentioned but it's not
>>> part of what's offered in 9.6.  The quorum part is still being worked on:
>>> https://commitfest.postgresql.org/10/696/
>>
>> Oh, figures the one feature I haven't tested would be the one which
>> isn't right.  So what DID get added to 9.6?  Is it still a significant
>> feature?
>
> We did this (from the 9.6 release notes):
>
>         Allow synchronous replication to support multiple simultaneous
>         synchronous standby servers, not just one (Masahiko Sawada,
>         Beena Emerson, Michael Paquier, Fujii Masao, Kyotaro Horiguchi)
>
>         The number of standby servers that must acknowledge a commit
>         before it is considered complete is now configurable as part of
>         the synchronous_standby_names parameter.
>
> You can see the details here, e.g. "3 (s1, s2, s3, s4)"
>
>     https://www.postgresql.org/docs/9.6/static/runtime-config-replication.html#GUC-SYNCHRONOUS-STANDBY-NAMES

So that's usually what I mean when I say quorum commit.  But apparently
our feature does something slightly different?

"For example, a setting of 3 (s1, s2, s3, s4) makes transaction commits
wait until their WAL records are received by three higher-priority
standbys chosen from standby servers s1, s2, s3 and s4"

What does that mean exactly?  If I do:

3 ( s1, s2, s3, s4, s5 )

And a commit is ack'd by s2, s3, and s5, what happens?




--
--
Josh Berkus
Red Hat OSAS
(any opinions are my own)


From:
Bruce Momjian
Date:

On Tue, Aug 30, 2016 at 03:22:18PM -0700, Josh Berkus wrote:
> So that's usually what I mean when I say quorum commit.  But apparently
> our feature does something slightly different?
>
> "For example, a setting of 3 (s1, s2, s3, s4) makes transaction commits
> wait until their WAL records are received by three higher-priority
> standbys chosen from standby servers s1, s2, s3 and s4"
>
> What does that mean exactly?  If I do:
>
> 3 ( s1, s2, s3, s4, s5 )
>
> And a commit is ack'd by s2, s3, and s5, what happens?

As I understand it, it can continue with those three servers sending a
confirmation back.

--
  Bruce Momjian  <>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+                     Ancient Roman grave inscription +


From:
Michael Paquier
Date:

On Wed, Aug 31, 2016 at 7:32 AM, Bruce Momjian <> wrote:
> On Tue, Aug 30, 2016 at 03:22:18PM -0700, Josh Berkus wrote:
>> What does that mean exactly?  If I do:
>>
>> 3 ( s1, s2, s3, s4, s5 )
>>
>> And a commit is ack'd by s2, s3, and s5, what happens?
>
> As I understand it, it can continue with those three servers sending a
> confirmation back.

Assuming that all servers are connected at the moment the decision is
made, you need to wait for s1, s2 *and* s3 to acknowledge, depending
on synchronous_commit. By default that means waiting for the LSN to
have been flushed on all of them. And the important point to get is
that which standbys must acknowledge a commit depends on the order of
the items listed. This is not quorum commit, in which case
confirmation from any 3 servers in the set of 5 listed would be fine.

If for example s2 and s4 are not connected at the moment of the
decision, you'd need to wait for acknowledgment from s1, s3 and s5
before moving on.
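
To put that rule in pseudo-code terms, here is a toy model (plain
Python, illustrative only, not what the server actually runs):

```python
# Toy model of 9.6 priority-based synchronous replication as described
# above.  Not PostgreSQL code; names and behavior are illustrative.

def sync_standbys(num_sync, names, connected):
    """Return the standbys a commit must wait for: the first num_sync
    entries of `names` (priority order) that are currently connected."""
    chosen = [n for n in names if n in connected]
    return chosen[:num_sync]

names = ["s1", "s2", "s3", "s4", "s5"]

# All five connected: we wait for s1, s2 and s3 -- acks from s2, s3
# and s5 alone are NOT enough, unlike quorum commit.
print(sync_standbys(3, names, {"s1", "s2", "s3", "s4", "s5"}))
# ['s1', 's2', 's3']

# s2 and s4 disconnected: the next-highest-priority standbys step in,
# so we wait for s1, s3 and s5.
print(sync_standbys(3, names, {"s1", "s3", "s5"}))
# ['s1', 's3', 's5']
```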
--
Michael


From:
Josh Berkus
Date:

On 08/30/2016 05:35 PM, Michael Paquier wrote:
> On Wed, Aug 31, 2016 at 7:32 AM, Bruce Momjian <> wrote:
>> On Tue, Aug 30, 2016 at 03:22:18PM -0700, Josh Berkus wrote:
>>> What does that mean exactly?  If I do:
>>>
>>> 3 ( s1, s2, s3, s4, s5 )
>>>
>>> And a commit is ack'd by s2, s3, and s5, what happens?
>>
>> As I understand it, it can continue with those three servers sending a
>> confirmation back.
>
> Assuming that all servers are connected at the moment the decision is
> made, you need to wait for s1, s2 *and* s3 to acknowledge, depending
> on synchronous_commit. By default that means waiting for the LSN to
> have been flushed on all of them. And the important point to get is
> that which standbys must acknowledge a commit depends on the order of
> the items listed. This is not quorum commit, in which case
> confirmation from any 3 servers in the set of 5 listed would be fine.
>
> If for example s2 and s4 are not connected at the moment of the
> decision, you'd need to wait for acknowledgment from s1, s3 and s5
> before moving on.

OK, so this says to me that we need a bunch of additional documentation
on this feature, because the existing docs read like it's "any 3 out of
the list" instead of "the first 3 which are connected".


--
--
Josh Berkus
Red Hat OSAS
(any opinions are my own)


From:
Michael Paquier
Date:

On Wed, Aug 31, 2016 at 9:40 AM, Josh Berkus <> wrote:
> On 08/30/2016 05:35 PM, Michael Paquier wrote:
>> On Wed, Aug 31, 2016 at 7:32 AM, Bruce Momjian <> wrote:
>>> On Tue, Aug 30, 2016 at 03:22:18PM -0700, Josh Berkus wrote:
>>>> What does that mean exactly?  If I do:
>>>>
>>>> 3 ( s1, s2, s3, s4, s5 )
>>>>
>>>> And a commit is ack'd by s2, s3, and s5, what happens?
>>>
>>> As I understand it, it can continue with those three servers sending a
>>> confirmation back.
>>
>> Assuming that all servers are connected at the moment the decision is
>> made, you need to wait for s1, s2 *and* s3 to acknowledge, depending
>> on synchronous_commit. By default that means waiting for the LSN to
>> have been flushed on all of them. And the important point to get is
>> that which standbys must acknowledge a commit depends on the order of
>> the items listed. This is not quorum commit, in which case
>> confirmation from any 3 servers in the set of 5 listed would be fine.
>>
>> If for example s2 and s4 are not connected at the moment of the
>> decision, you'd need to wait for acknowledgment from s1, s3 and s5
>> before moving on.
>
> OK, so this says to me that we need a bunch of additional documentation
> on this feature, because the existing docs read like it's "any 3 out of
> the list" instead of "the first 3 which are connected".

Really? Here are the doc quotes that I guess matter, and I read that
differently than you do:
If any of the current synchronous standbys disconnects for whatever
reason, it will be replaced immediately with the next-highest-priority
standby.
[...]
For example, a setting of 3 (s1, s2, s3, s4) makes transaction commits
wait until their WAL records are received by *three higher-priority
standbys* chosen from standby servers s1, s2, s3 and s4.

This clearly says that we wait for the servers that have a higher
priority, meaning that we do *not* wait for any k elements in a set of
n listed, but rather that the order of the elements matters.
--
Michael


From:
Amit Langote
Date:

On 2016/08/31 9:40, Josh Berkus wrote:
> On 08/30/2016 05:35 PM, Michael Paquier wrote:
>> Assuming that all servers are connected at the moment the decision is
>> made, you need to wait for s1, s2 *and* s3 to acknowledge, depending
>> on synchronous_commit. By default that means waiting for the LSN to
>> have been flushed on all of them. And the important point to get is
>> that which standbys must acknowledge a commit depends on the order of
>> the items listed. This is not quorum commit, in which case
>> confirmation from any 3 servers in the set of 5 listed would be fine.
>>
>> If for example s2 and s4 are not connected at the moment of the
>> decision, you'd need to wait for acknowledgment from s1, s3 and s5
>> before moving on.
>
> OK, so this says to me that we need a bunch of additional documentation
> on this feature, because the existing docs read like it's "any 3 out of
> the list" instead of "the first 3 which are connected".

IIUC, "any 3 out of the list" would be the quorum logic.  Currently,
the order in which standby names are listed determines their priority
for becoming the next potential synchronous standby if and when we run
short of 3.  So "the first 3 which are connected" is exactly the
feature that's available.  Of course, unless I am missing something.

Thanks,
Amit




From:
Josh Berkus
Date:

On 08/30/2016 06:12 PM, Michael Paquier wrote:

> Really? Here are the doc quotes that I guess matter, and I read that
> differently than you do:
> If any of the current synchronous standbys disconnects for whatever
> reason, it will be replaced immediately with the next-highest-priority
> standby.
> [...]
> For example, a setting of 3 (s1, s2, s3, s4) makes transaction commits
> wait until their WAL records are received by *three higher-priority
> standbys* chosen from standby servers s1, s2, s3 and s4.
>
> This clearly says that we wait for the servers that have a higher
> priority, meaning that we do *not* wait for any k elements in a set of
> n listed, but rather that the order of the elements matters.

Yeah, the problem is that "higher priority" isn't defined, and could
mean a lot of things.  It *is* defined in the actual section on
synchronous standby, though (25.2.8.2.); maybe what we need is less docs
under the GUC and more references to that?

Otherwise, you're going to have lots of people confused that it's
actually quorum commit, as witnessed by the current discussion.  Right
now what's in the GUC doc page appears to be complete but isn't.

--
--
Josh Berkus
Red Hat OSAS
(any opinions are my own)


From:
Josh Berkus
Date:

On 08/30/2016 06:20 PM, Josh Berkus wrote:
> On 08/30/2016 06:12 PM, Michael Paquier wrote:
>
>> Really? Here are the doc quotes that I guess matter, and I read that
>> differently than you do:
>> If any of the current synchronous standbys disconnects for whatever
>> reason, it will be replaced immediately with the next-highest-priority
>> standby.
>> [...]
>> For example, a setting of 3 (s1, s2, s3, s4) makes transaction commits
>> wait until their WAL records are received by *three higher-priority
>> standbys* chosen from standby servers s1, s2, s3 and s4.
>>
>> This clearly says that we wait for the servers that have a higher
>> priority, meaning that we do *not* wait for any k elements in a set of
>> n listed, but rather that the order of the elements matters.
>
> Yeah, the problem is that "higher priority" isn't defined, and could
> mean a lot of things.  It *is* defined in the actual section on
> synchronous standby, though (25.2.8.2.); maybe what we need is less docs
> under the GUC and more references to that?
>
> Otherwise, you're going to have lots of people confused that it's
> actually quorum commit, as witnessed by the current discussion.  Right
> now what's in the GUC doc page appears to be complete but isn't.

Also, if I do this:


2 ( g1, g2, g3 )

... and g1, g2 and g3 are *groups* of three standbys each, what happens?
 Does it wait for one or more responses from g1 and from g2, or does
getting two responses from g1 trigger a commit?

--
--
Josh Berkus
Red Hat OSAS
(any opinions are my own)


From:
Amit Langote
Date:

On 2016/08/31 10:25, Josh Berkus wrote:
> On 08/30/2016 06:20 PM, Josh Berkus wrote:
>> On 08/30/2016 06:12 PM, Michael Paquier wrote:
>>
>>> Really? Here are the doc quotes that I guess matter, and I read that
>>> differently than you do:
>>> If any of the current synchronous standbys disconnects for whatever
>>> reason, it will be replaced immediately with the next-highest-priority
>>> standby.
>>> [...]
>>> For example, a setting of 3 (s1, s2, s3, s4) makes transaction commits
>>> wait until their WAL records are received by *three higher-priority
>>> standbys* chosen from standby servers s1, s2, s3 and s4.
>>>
>>> This clearly says that we wait for the servers that have a higher
>>> priority, meaning that we do *not* wait for any k elements in a set of
>>> n listed, but rather that the order of the elements matters.
>>
>> Yeah, the problem is that "higher priority" isn't defined, and could
>> mean a lot of things.  It *is* defined in the actual section on
>> synchronous standby, though (25.2.8.2.); maybe what we need is less docs
>> under the GUC and more references to that?
>>
>> Otherwise, you're going to have lots of people confused that it's
>> actually quorum commit, as witnessed by the current discussion.  Right
>> now what's in the GUC doc page appears to be complete but isn't.
>
> Also, if I do this:
>
>
> 2 ( g1, g2, g3 )
>
> ... and g1, g2 and g3 are *groups* of three standbys each, what happens?
>  Does it wait for one or more responses from g1 and from g2, or does
> getting two responses from g1 trigger a commit?

We do not support specifying groups either.  Names refer to the actual
standby names.  The groups part of the earlier proposal(s) was taken
out of the patch, IIRC.

Thanks,
Amit




From:
Josh Berkus
Date:

On 08/30/2016 06:32 PM, Amit Langote wrote:
> On 2016/08/31 10:25, Josh Berkus wrote:
>> On 08/30/2016 06:20 PM, Josh Berkus wrote:
>>> On 08/30/2016 06:12 PM, Michael Paquier wrote:
>>>
>>>> Really? Here are the doc quotes that I guess matter, and I read that
>>>> differently than you do:
>>>> If any of the current synchronous standbys disconnects for whatever
>>>> reason, it will be replaced immediately with the next-highest-priority
>>>> standby.
>>>> [...]
>>>> For example, a setting of 3 (s1, s2, s3, s4) makes transaction commits
>>>> wait until their WAL records are received by *three higher-priority
>>>> standbys* chosen from standby servers s1, s2, s3 and s4.
>>>>
>>>> This clearly says that we wait for the servers that have a higher
>>>> priority, meaning that we do *not* wait for any k elements in a set of
>>>> n listed, but rather that the order of the elements matters.
>>>
>>> Yeah, the problem is that "higher priority" isn't defined, and could
>>> mean a lot of things.  It *is* defined in the actual section on
>>> synchronous standby, though (25.2.8.2.); maybe what we need is less docs
>>> under the GUC and more references to that?
>>>
>>> Otherwise, you're going to have lots of people confused that it's
>>> actually quorum commit, as witnessed by the current discussion.  Right
>>> now what's in the GUC doc page appears to be complete but isn't.
>>
>> Also, if I do this:
>>
>>
>> 2 ( g1, g2, g3 )
>>
>> ... and g1, g2 and g3 are *groups* of three standbys each, what happens?
>>  Does it wait for one or more responses from g1 and from g2, or does
>> getting two responses from g1 trigger a commit?
>
> We do not support specifying groups either.  Names refer to the actual
> standby names.  The groups part of the earlier proposal(s) was taken
> out of the patch, IIRC.

??? It's always been possible for me to give multiple standbys the same
name, making a de-facto group.

--
--
Josh Berkus
Red Hat OSAS
(any opinions are my own)


From:
Michael Paquier
Date:

On Wed, Aug 31, 2016 at 10:35 AM, Josh Berkus <> wrote:
> ??? It's always been possible for me to give multiple standbys the same
> name, making a de-facto group.

A "group" grammar, by that I mean an alias referring to a set of
nodes, is not supported. And you can still define multiple entries
with the same name.
--
Michael


From:
Amit Langote
Date:

On 2016/08/31 10:35, Josh Berkus wrote:
> On 08/30/2016 06:32 PM, Amit Langote wrote:
>> On 2016/08/31 10:25, Josh Berkus wrote:
>>> Also, if I do this:
>>>
>>>
>>> 2 ( g1, g2, g3 )
>>>
>>> ... and g1, g2 and g3 are *groups* of three standbys each, what happens?
>>>  Does it wait for one or more responses from g1 and from g2, or does
>>> getting two responses from g1 trigger a commit?
>>
>> We do not support specifying groups either.  Names refer to the actual
>> standby names.  The groups part of the earlier proposal(s) was taken
>> out of the patch, IIRC.
>
> ??? It's always been possible for me to give multiple standbys the same
> name, making a de-facto group.

Oh, I didn't know that.  I thought you were referring to some new feature.
 I remember discussions about various syntaxes for specifying standby
groups (json, etc.) as part of the proposed feature.  Sorry about the noise.

Thanks,
Amit




From:
Josh Berkus
Date:

On 08/30/2016 06:39 PM, Michael Paquier wrote:
> On Wed, Aug 31, 2016 at 10:35 AM, Josh Berkus <> wrote:
>> ??? It's always been possible for me to give multiple standbys the same
>> name, making a de-facto group.
>
> A "group" grammar, by that I mean an alias referring to a set of
> nodes, is not supported. And you can still define multiple entries
> with the same name.
>

Yeah, so what happens in the case I described?  Is the master just
looking for that number of commits, or is it looking for a commit from
g1 and from g2?

--
--
Josh Berkus
Red Hat OSAS
(any opinions are my own)


From:
Michael Paquier
Date:

On Wed, Aug 31, 2016 at 10:45 AM, Josh Berkus <> wrote:
> On 08/30/2016 06:39 PM, Michael Paquier wrote:
>> On Wed, Aug 31, 2016 at 10:35 AM, Josh Berkus <> wrote:
>>> ??? It's always been possible for me to give multiple standbys the same
>>> name, making a de-facto group.
>>
>> A "group" grammar, by that I mean an alias referring to a set of
>> nodes, is not supported. And you can still define multiple entries
>> with the same name.
>>
>
> Yeah, so what happens in the case I described?  Is the master just
> looking for that number of commits, or is it looking for a commit from
> g1 and from g2?

How do you set up synchronous_standby_names in this case? Are multiple
nodes using the same application_name, being either 'g1' or 'g2'?
--
Michael


From:
Josh Berkus
Date:

On 08/30/2016 06:51 PM, Michael Paquier wrote:
> On Wed, Aug 31, 2016 at 10:45 AM, Josh Berkus <> wrote:
>> On 08/30/2016 06:39 PM, Michael Paquier wrote:
>>> On Wed, Aug 31, 2016 at 10:35 AM, Josh Berkus <> wrote:
>>>> ??? It's always been possible for me to give multiple standbys the same
>>>> name, making a de-facto group.
>>>
>>> A "group" grammar, by that I mean an alias referring to a set of
>>> nodes, is not supported. And you can still define multiple entries
>>> with the same name.
>>>
>>
>> Yeah, so what happens in the case I described?  Is the master just
>> looking for that number of commits, or is it looking for a commit from
>> g1 and from g2?
>
> How do you set up synchronous_standby_names in this case? Are multiple
> nodes using the same application_name, being either 'g1' or 'g2'?
>

Correct.

The other question I have is:  presumably if s2 does not respond within
a certain amount of time, it times out and is marked "disconnected", no?
 So the main way this is inferior to true quorum commit is that (a) we
wait for that and (b) if s2 is busy but not unresponsive, we wait
forever.  No?

--
--
Josh Berkus
Red Hat OSAS
(any opinions are my own)


From:
Michael Paquier
Date:

On Wed, Aug 31, 2016 at 10:52 AM, Josh Berkus <> wrote:
> On 08/30/2016 06:51 PM, Michael Paquier wrote:
>> On Wed, Aug 31, 2016 at 10:45 AM, Josh Berkus <> wrote:
>>> On 08/30/2016 06:39 PM, Michael Paquier wrote:
>>>> On Wed, Aug 31, 2016 at 10:35 AM, Josh Berkus <> wrote:
>>>>> ??? It's always been possible for me to give multiple standbys the same
>>>>> name, making a de-facto group.
>>>>
>>>> A "group" grammar, by that I mean an alias referring to a set of
>>>> nodes, is not supported. And you can still define multiple entries
>>>> with the same name.
>>>>
>>>
>>> Yeah, so what happens in the case I described?  Is the master just
>>> looking for that number of commits, or is it looking for a commit from
>>> g1 and from g2?
>>
>> How do you set up synchronous_standby_names in this case? Are multiple
>> nodes using the same application_name, being either 'g1' or 'g2'?
>>
>
> Correct.

If my memory is correct, we'd wait for the nodes that are connected,
match the name, and are marked as 'sync', because connected nodes with
the same name share the same priority rank. If there are other
connected nodes with the same name but a higher priority, the
potential ones are ignored.

> The other question I have is:  presumably if s2 does not respond within
> a certain amount of time, it times out and is marked "disconnected", no?

Yes, though that depends on when the master node becomes aware of it.

>  So the main way this is inferior to true quorum commit is that (a) we
> wait for that and (b) if s2 is busy but not unresponsive, we wait
> forever.  No?

Even on previous versions we'd wait in this case. This is not new, and
this class of problems is expected to be solved to some degree by the
quorum patch that's listed in the first CF of 10.
--
Michael


From:
Josh Berkus
Date:

On 08/30/2016 07:09 PM, Michael Paquier wrote:
> On Wed, Aug 31, 2016 at 10:52 AM, Josh Berkus <> wrote:
>> On 08/30/2016 06:51 PM, Michael Paquier wrote:
>>> On Wed, Aug 31, 2016 at 10:45 AM, Josh Berkus <> wrote:
>>>> On 08/30/2016 06:39 PM, Michael Paquier wrote:
>>>>> On Wed, Aug 31, 2016 at 10:35 AM, Josh Berkus <> wrote:
>>>>>> ??? It's always been possible for me to give multiple standbys the same
>>>>>> name, making a de-facto group.
>>>>>
>>>>> A "group" grammar, by that I mean an alias referring to a set of
>>>>> nodes, is not supported. And you can still define multiple entries
>>>>> with the same name.
>>>>>
>>>>
>>>> Yeah, so what happens in the case I described?  Is the master just
>>>> looking for that number of commits, or is it looking for a commit from
>>>> g1 and from g2?
>>>
>>> How do you set up synchronous_standby_names in this case? Are multiple
>>> nodes using the same application_name, being either 'g1' or 'g2'?
>>>
>>
>> Correct.
>
> If my memory is correct, we'd wait for the nodes that are connected,
> match the name, and are marked as 'sync', because connected nodes with
> the same name share the same priority rank. If there are other
> connected nodes with the same name but a higher priority, the
> potential ones are ignored.

So, if we had:

2 ( g1, g2, g3 )

where each of g1, g2 and g3 were three replicas with the same name

then if two of the replicas from g1 ack'd the commit would proceed, even
though no replica from g2 has?

there's a big difference in utility depending on the answer to this, and
I don't have any good way to set up a test case.

--
--
Josh Berkus
Red Hat OSAS
(any opinions are my own)


From:
Michael Paquier
Date:

On Thu, Sep 1, 2016 at 1:38 AM, Josh Berkus <> wrote:
> 2 ( g1, g2, g3 )
>
> where each of g1, g2 and g3 were three replicas with the same name
>
> then if two of the replicas from g1 ack'd the commit would proceed, even
> though no replica from g2 has?

[Checking]

Yes that's the case. If for example I have a set of slaves like that:
 application_name | replay_delta | sync_priority | sync_state
------------------+--------------+---------------+------------
 node1            |            0 |             1 | sync
 node1            |            0 |             1 | sync
 node1            |            0 |             1 | potential
 node2            |            0 |             2 | potential
 node2            |            0 |             2 | potential
 node2            |            0 |             2 | potential
 node3            |            0 |             0 | async
 node3            |            0 |             0 | async
 node3            |            0 |             0 | async
=# show synchronous_standby_names ;
 synchronous_standby_names
---------------------------
 2(node1, node2)

You'd need to have the confirmation to come from two nodes with node1
as application_name because those have the higher priority in the
list.
--
Michael


From:
Amit Langote
Date:

On 2016/09/01 11:13, Michael Paquier wrote:
> On Thu, Sep 1, 2016 at 1:38 AM, Josh Berkus <> wrote:
>> 2 ( g1, g2, g3 )
>>
>> where each of g1, g2 and g3 were three replicas with the same name
>>
>> then if two of the replicas from g1 ack'd the commit would proceed, even
>> though no replica from g2 has?
>
> [Checking]
>
> Yes that's the case. If for example I have a set of slaves like that:
>  application_name | replay_delta | sync_priority | sync_state
> ------------------+--------------+---------------+------------
>  node1            |            0 |             1 | sync
>  node1            |            0 |             1 | sync
>  node1            |            0 |             1 | potential
>  node2            |            0 |             2 | potential
>  node2            |            0 |             2 | potential
>  node2            |            0 |             2 | potential
>  node3            |            0 |             0 | async
>  node3            |            0 |             0 | async
>  node3            |            0 |             0 | async
> =# show synchronous_standby_names ;
>  synchronous_standby_names
> ---------------------------
>  2(node1, node2)
>
> You'd need to have the confirmation to come from two nodes with node1
> as application_name because those have the higher priority in the
> list.

If my reading of the documentation of the synchronous_standby_names
parameter is correct, the behavior in this case is said to be
indeterminate:

"""
The name of a standby server for this purpose is the application_name
setting of the standby, as set in the primary_conninfo of the standby's
WAL receiver. There is no mechanism to enforce uniqueness. In case of
duplicates one of the matching standbys will be considered as higher
priority, though exactly which one is indeterminate.
"""

Although, after looking at what goes on in the related code, it seems
2 of the 3 replicas named g1 (Josh's example) could exhaust num_sync =
2 and ack the commit (as you also show), whereas I had thought, as the
documentation suggests, that one of the g1's and then one of the g2's
would need to ack.

Do we need a documentation fix, or am I still missing something?

Thanks,
Amit




From:
Josh Berkus
Date:

On 08/31/2016 07:13 PM, Michael Paquier wrote:

> Yes that's the case. If for example I have a set of slaves like that:
>  application_name | replay_delta | sync_priority | sync_state
> ------------------+--------------+---------------+------------
>  node1            |            0 |             1 | sync
>  node1            |            0 |             1 | sync
>  node1            |            0 |             1 | potential
>  node2            |            0 |             2 | potential
>  node2            |            0 |             2 | potential
>  node2            |            0 |             2 | potential
>  node3            |            0 |             0 | async
>  node3            |            0 |             0 | async
>  node3            |            0 |             0 | async
> =# show synchronous_standby_names ;
>  synchronous_standby_names
> ---------------------------
>  2(node1, node2)
>
> You'd need to have the confirmation to come from two nodes with node1
> as application_name because those have the higher priority in the
> list.

So, I have to say, this doesn't *feel* like a major press-worthy feature
yet.  It will be in 10, but is it right now?


--
--
Josh Berkus
Red Hat OSAS
(any opinions are my own)


From:
"Nicholson, Brad (Toronto, ON, CA)"
Date:

> -----Original Message-----
> From:  [mailto:pgsql-advocacy-
> ] On Behalf Of Josh Berkus
> So, I have to say, this doesn't *feel* like a major press-worthy feature yet.  It
> will be in 10, but is it right now?

For me the press-worthy side of this in its current state is that it
allows for a no-data-loss guarantee in the event of a network partition.

Having more than two sync copies of data is pretty major in my opinion as well.

Brad.

From:
Michael Paquier
Date:

On Fri, Sep 2, 2016 at 1:09 AM, Nicholson, Brad (Toronto, ON, CA)
<> wrote:
>> -----Original Message-----
>> From:  [mailto:pgsql-advocacy-
>> ] On Behalf Of Josh Berkus
>> So, I have to say, this doesn't *feel* like a major press-worthy feature yet.  It
>> will be in 10, but is it right now?
>
> For me the press-worthy side of this in its current state is that it
> allows for a no-data-loss guarantee in the event of a network partition.
>
> Having more than two sync copies of data is pretty major in my opinion as well.

Yes, the case described by Josh is rather narrow as most users are not
going to use the same application_name for multiple standbys. Combined
with synchronous_commit = remote_apply what you actually have is the
guarantee that WAL has been applied synchronously to multiple nodes,
allowing for read balancing.
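
As a sketch, the combination would look something like this in
postgresql.conf (standby names are placeholders):

```
# postgresql.conf on the primary -- sketch combining the two features;
# standby1/standby2 are illustrative application_name values
synchronous_standby_names = '2 (standby1, standby2)'
synchronous_commit = remote_apply   # commit returns only once the sync
                                    # standbys have *applied* the WAL,
                                    # so reads there see the new rows
```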
--
Michael


From:
Josh Berkus
Date:

On 09/01/2016 04:56 PM, Michael Paquier wrote:
> Yes, the case described by Josh is rather narrow as most users are not
> going to use the same application_name for multiple standbys. Combined
> with synchronous_commit = remote_apply what you actually have is the
> guarantee that WAL has been applied synchronously to multiple nodes,
> allowing for read balancing.

It's not narrow if you think of it this way:


2 ( north_carolina, oregon, california )


That is, if each pseudo-group is a data center, then that arrangement
makes a lot of sense.  Oh, well, waiting for 10.
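As a sketch of what that arrangement looks like as an actual 9.6 setting (using Josh's illustrative data-center names; `ALTER SYSTEM` is just one way to apply it):

```sql
-- Sketch only: in 9.6 the sync standbys are chosen by priority (list order),
-- and the primary waits for ACKs from the first 2 connected ones.
ALTER SYSTEM SET synchronous_standby_names = '2 (north_carolina, oregon, california)';
SELECT pg_reload_conf();
```

Note this is priority-based rather than a true quorum; ANY-quorum semantics are the part deferred to 10, per the caveat raised earlier in the thread.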


--
--
Josh Berkus
Red Hat OSAS
(any opinions are my own)


From:
Michael Paquier
Date:

On Fri, Sep 2, 2016 at 9:01 AM, Josh Berkus <> wrote:
> On 09/01/2016 04:56 PM, Michael Paquier wrote:
>> Yes, the case described by Josh is rather narrow as most users are not
>> going to use the same application_name for multiple standbys. Combined
>> with synchronous_commit = remote_apply what you actually have is the
>> guarantee that WAL has been applied synchronously to multiple nodes,
>> allowing for read balancing.
>
> It's not narrow if you think of it this way:
> 2 ( north_carolina, oregon, california )

Yes.

> That is, if each pseudo-group is a data center, then that arrangement
> makes a lot of sense.  Oh, well, waiting for 10.

I was referring to the wait behavior where multiple standbys use the
same application_name, which is what you complained about AFAIK.
--
Michael


From:
Robert Haas
Date:

On Thu, Sep 1, 2016 at 11:36 AM, Josh Berkus <> wrote:
> So, I have to say, this doesn't *feel* like a major press-worthy feature
> yet.  It will be in 10, but is it right now?

IMHO, what makes this a big deal is that we also got
synchronous_commit=remote_apply.  That means that, in PostgreSQL 9.6,
for the first time, you can build a reliable read-scaling cluster,
where "reliable" means that a value that you wrote on the master is
guaranteed to be visible in a subsequent read from a standby.  I think
the release notes and the release announcement should both mention
those two things in conjunction with each other, because the
combination is very powerful.

If you have only synchronous_commit=remote_apply, you can have a
single read replica and you can be sure that if you commit a change on
the master and then read it back from the replica, you'll see the
result of the change.  No previous release could guarantee this, and
it's nice, but having only one replica that can do this wouldn't be
very exciting.

If you have only multiple synchronous standbys, you can have a whole
bunch of standbys and wait for WAL to be written or flushed on any
number of them, which I guess is good if your transactions are made of
solid platinum, but most people will find limited application for
this.

But if you have BOTH features, then you can set
synchronous_standby_names to require an ACK from *every* standby, and
you can set synchronous_commit=remote_apply so that you wait for WAL
to be applied, not just fsync'd, and now you are guaranteed that
whenever you make a change on the master and then read it back from
any one of your read-replicas, it will be there!  And that, IMHO, is
pretty cool.
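The combination Robert describes, as a configuration sketch (standby names are illustrative, not from the thread):

```sql
-- Sketch of the 9.6 combination: require an ACK from every listed standby
-- (all 3 of 3) ...
ALTER SYSTEM SET synchronous_standby_names = '3 (replica1, replica2, replica3)';
-- ... and make that ACK mean "WAL applied and visible", not merely flushed.
ALTER SYSTEM SET synchronous_commit = 'remote_apply';
SELECT pg_reload_conf();
```

With both in place, a committed write on the master is guaranteed visible to a subsequent read on any of the three replicas.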

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From:
Bruce Momjian
Date:

On Fri, Sep  2, 2016 at 07:50:58AM +0530, Robert Haas wrote:
> But if you have BOTH features, then you can set
> synchronous_standby_names to require an ACK from *every* standby, and
> you can set synchronous_commit=remote_apply so that you wait for WAL
> to be applied, not just fsync'd, and now you are guaranteed that
> whenever you make a change on the master and then read it back from
> any one of your read-replicas, it will be there!  And that, IMHO, is
> pretty cool.

Are we clear on how useful this will be because of the delay in applying
WAL, particularly for when conflicting read-only queries are running?

--
  Bruce Momjian  <>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+                     Ancient Roman grave inscription +


From:
Robert Haas
Date:

On Sep 3, 2016, at 1:39 AM, Bruce Momjian <> wrote:
>> On Fri, Sep  2, 2016 at 07:50:58AM +0530, Robert Haas wrote:
>> But if you have BOTH features, then you can set
>> synchronous_standby_names to require an ACK from *every* standby, and
>> you can set synchronous_commit=remote_apply so that you wait for WAL
>> to be applied, not just fsync'd, and now you are guaranteed that
>> whenever you make a change on the master and then read it back from
>> any one of your read-replicas, it will be there!  And that, IMHO, is
>> pretty cool.
>
> Are we clear on how useful this will be because of the delay in applying
> WAL, particularly for when conflicting read-only queries are running?

Not entirely, but people are already doing read-scaling with replicas, so having an option to make that reliable seems like a good thing.

...Robert

From:
Josh Berkus
Date:

All,

Updated per discussion.

Please make more improvements.

Also, if anyone can find a user to quote about synch rep, phrase search,
or postgres_fdw, it would help ...

--
--
Josh Berkus
Red Hat OSAS
(any opinions are my own)


From:
Josh Berkus
Date:

All,

Given the lack of additional feedback, we're going to call the current
press release final.  Translation copies to be prepared by Thursday.

--
--
Josh Berkus
Red Hat OSAS
(any opinions are my own)


From:
Simon Riggs
Date:

On 13 September 2016 at 12:55, Josh Berkus <> wrote:
> All,
>
> Given the lack of additional feedback, we're going to call the current
> press release final.  Translation copies to be prepared by Thursday.

What is the thing we are calling final?

This looks still in progress... not just in name
https://wiki.postgresql.org/wiki/96releasedraft

Can we have a clear RC version go out, so we all agree what it actually will be?

Thanks

--
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From:
Josh Berkus
Date:

On 09/13/2016 11:22 AM, Simon Riggs wrote:
> On 13 September 2016 at 12:55, Josh Berkus <> wrote:
>> All,
>>
>> Given the lack of additional feedback, we're going to call the current
>> press release final.  Translation copies to be prepared by Thursday.
>
> What is the thing we are calling final?
>
> This looks still in progress... not just in name
> https://wiki.postgresql.org/wiki/96releasedraft

If you have additional feedback/improvements on the release text, please
give them ASAP.


--
--
Josh Berkus
Red Hat OSAS
(any opinions are my own)


From:
Stefan Kaltenbrunner
Date:

On 09/13/2016 08:39 PM, Josh Berkus wrote:
> On 09/13/2016 11:22 AM, Simon Riggs wrote:
>> On 13 September 2016 at 12:55, Josh Berkus <> wrote:
>>> All,
>>>
>>> Given the lack of additional feedback, we're going to call the current
>>> press release final.  Translation copies to be prepared by Thursday.
>>
>> What is the thing we are calling final?
>>
>> This looks still in progress... not just in name
>> https://wiki.postgresql.org/wiki/96releasedraft
>
> If you have additional feedback/improvements on the release text, please
> give them ASAP.

some comments:


1. I changed all the urls to https because that is what we have for a
while now...

2. the wiki page still has this:

"QUOTE ABOUT SCALING OUT ON POSTGRES HERE "

is this going to be filled?

3. "Phrase search means that PostgreSQL continues to be an alternative
to dedicated text search technologies for web search and data mining"
sound rather boring and not very nice to me - it basically reads as
"yeah we can kinda do what others can do better"


4. is it Btree or B-tree (we seem to use the latter in the docs - and
yeah I do see it is a quote)?

5. 32X -> 32x?


In general the press release seems to be fairly rough to me and a bit of
a strange mix between technical details/wording like "remote apply" and
general marketing speak like "scale up and scale out"



Stefan


From:
Simon Riggs
Date:

On 13 September 2016 at 14:05, Stefan Kaltenbrunner
<> wrote:

> 2. the wiki page still has this:
>
> "QUOTE ABOUT SCALING OUT ON POSTGRES HERE "
>
> is this going to be filled?

Exactly.

I'm sure we're not sending out a Wiki page as the press release, so
clearly there will be some editing before it goes out.

So please can we do the "final editing" now so we can see and
agree/disagree/add to it?

I will give further feedback once we see things in their final-ish form.

Thanks

--
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From:
Josh Berkus
Date:

On 09/13/2016 12:05 PM, Stefan Kaltenbrunner wrote:
> On 09/13/2016 08:39 PM, Josh Berkus wrote:
>> On 09/13/2016 11:22 AM, Simon Riggs wrote:
>>> On 13 September 2016 at 12:55, Josh Berkus <> wrote:
>>>> All,
>>>>
>>>> Given the lack of additional feedback, we're going to call the current
>>>> press release final.  Translation copies to be prepared by Thursday.
>>>
>>> What is the thing we are calling final?
>>>
>>> This looks still in progress... not just in name
>>> https://wiki.postgresql.org/wiki/96releasedraft
>>
>> If you have additional feedback/improvements on the release text, please
>> give them ASAP.
>
> some comments:
>
>
> 1. I changed all the urls to https because that is what we have for a
> while now...

Thank you.

>
> 2. the wiki page still has this:
>
> "QUOTE ABOUT SCALING OUT ON POSTGRES HERE "
>
> is this going to be filled?

Yes, wording is currently being negotiated.

>
> 3. "Phrase search means that PostgreSQL continues to be an alternative
> to dedicated text search technologies for web search and data mining"
> sound rather boring and not very nice to me - it basically reads as
> "yeah we can kinda do what others can do better"

Suggested replacement wording?


> 4. is it Btree or B-tree (we seem to use the latter in the docs - and
> yeah I do see it is a quote)?

Wasn't sure, thanks.  I can fix that in a quote.

> 5. 32X -> 32x?

I see 32X more frequently, but maybe we should go with "32 times" just
to be clear.

>
>
> In general the press release seems to be fairly rough to me and a bit of
> a strange mix between technical details/wording like "remote apply" and
> general marketing speak like "scale up and scale out"

That's our press releases in a nutshell.  We are, after all, a project
and not a product.

--
--
Josh Berkus
Red Hat OSAS
(any opinions are my own)


From:
Simon Riggs
Date:

On 13 September 2016 at 13:22, Simon Riggs <> wrote:
> On 13 September 2016 at 12:55, Josh Berkus <> wrote:
>> All,
>>
>> Given the lack of additional feedback, we're going to call the current
>> press release final.  Translation copies to be prepared by Thursday.
>
> What is the thing we are calling final?
>
> This looks still in progress... not just in name
> https://wiki.postgresql.org/wiki/96releasedraft
>
> Can we have a clear RC version go out, so we all agree what it actually will be?

I think we should mention that performance has been increased for
* two-phase commit
* replication apply
* aggregation (2)
* indexing (various)
* sorting (various)
* PL/pgSQL expressions
* text search

Improved planning

Improved interaction with kernel for performance

And we have removed a few of Postgres' common annoyances...
* vacuum freeze on large tables
* long-lived snapshots holding back vacuum
* idle in transaction timeout

Improved monitoring and security

Overall I'd call all of that fine tuning and attention to detail based
upon user feedback with regard to various negative use cases, leading
me to think that the system is Faster, Smoother and Easier to use.

Not sure which parts of that go in but suggest something like "Faster,
Smoother and Easier to use based upon feedback from our large user
base of high volume production databases." etc

Hope that helps. Thanks for writing the press release.

--
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From:
Josh Berkus
Date:

On 09/13/2016 01:57 PM, Simon Riggs wrote:

> Not sure which parts of that go in but suggest something like "Faster,
> Smoother and Easier to use based upon feedback from our large user
> base of high volume production databases." etc

So, a section at the bottom for this?  Yeah, I can see that.  Lemme try
something tonight.


--
--
Josh Berkus
Red Hat OSAS
(any opinions are my own)


From:
Josh Berkus
Date:

Folks,

One last round of checks/suggestions before I put it into Git.  Thanks!

https://wiki.postgresql.org/wiki/96releasedraft


--
--
Josh Berkus
Red Hat OSAS
(any opinions are my own)


From:
Justin Clift
Date:

On 14 Sep 2016, at 06:36, Josh Berkus <> wrote:
>
> Folks,
>
> One last round of checks/suggestions before I put it into Git.  Thanks!
>
> https://wiki.postgresql.org/wiki/96releasedraft

Directly mentioning those things we’ve removed (as per Simon’s email)
seems like a good idea too:

> And we have removed a few of Postgres' common annoyances…
> * vacuum freeze on large tables
> * long-lived snapshots holding back vacuum
> * idle in transaction timeout

Exact wording though… hmm… how’s this?

  Common annoyances fixed:
    * vacuum freeze on large tables
    * long-lived snapshots holding back vacuum
    * idle in transaction timeout
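Of those three, the idle-in-transaction timeout maps to a concrete new 9.6 GUC; a minimal sketch (the 10-minute value is illustrative):

```sql
-- Illustrative: terminate sessions left idle inside a transaction for more
-- than 10 minutes (idle_in_transaction_session_timeout is new in 9.6).
ALTER SYSTEM SET idle_in_transaction_session_timeout = '10min';
SELECT pg_reload_conf();
```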

Trying to find further info for those 3 items (eg to link to) isn’t pulling
up something (non-mailing list) obvious.  Does such a thing exist? :)

Regards and best wishes,

Justin Clift

--
"My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there."
- Indira Gandhi



From:
Justin Clift
Date:

On 14 Sep 2016, at 13:22, Justin Clift <> wrote:
>  Common annoyances fixed:
>    * vacuum freeze on large tables
>    * long-lived snapshots holding back vacuum
>    * idle in transaction timeout
>
> Trying to find further info for those 3 items (eg to link to) isn’t pulling
> up something (non-mailing list) obvious.  Does such a thing exist? :)

Ahh, they’re in the Release 9.6 page:

  * vacuum freeze on large tables → E.1.3.1.6. VACUUM
  * long-lived snapshots holding back vacuum → E.1.3.1.7. General Performance
  * idle in transaction timeout → E.1.3.1.10. Server Configuration

+ Justin

--
"My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there."
- Indira Gandhi



From:
Josh Berkus
Date:

On 09/14/2016 05:28 AM, Justin Clift wrote:
> On 14 Sep 2016, at 13:22, Justin Clift <> wrote:
>>  Common annoyances fixed:
>>    * vacuum freeze on large tables
>>    * long-lived snapshots holding back vacuum
>>    * idle in transaction timeout
>>
>> Trying to find further info for those 3 items (eg to link to) isn’t pulling
>> up something (non-mailing list) obvious.  Does such a thing exist? :)
>
> Ahh, they’re in the Release 9.6 page:
>
>   * vacuum freeze on large tables → E.1.3.1.6. VACUUM
>   * long-lived snapshots holding back vacuum → E.1.3.1.7. General Performance
>   * idle in transaction timeout → E.1.3.1.10. Server Configuration

So, the press release isn't about listing every single feature in a
release.  It's about picking 3 or 4 things which hang about a common
theme ... in this case, "scale up and scale out".

The "what's new" and the release notes list all of the features.


--
--
Josh Berkus
Red Hat OSAS
(any opinions are my own)


From:
"Gilberto Castillo"
Date:

> On 09/14/2016 05:28 AM, Justin Clift wrote:
>> On 14 Sep 2016, at 13:22, Justin Clift <> wrote:
>>>  Common annoyances fixed:
>>>    * vacuum freeze on large tables
>>>    * long-lived snapshots holding back vacuum
>>>    * idle in transaction timeout
>>>
>>> Trying to find further info for those 3 items (eg to link to) isn’t
>>> pulling
>>> up something (non-mailing list) obvious.  Does such a thing exist? :)
>>
>> Ahh, they’re in the Release 9.6 page:
>>
>>   * vacuum freeze on large tables → E.1.3.1.6. VACUUM
>>   * long-lived snapshots holding back vacuum → E.1.3.1.7. General
>> Performance
>>   * idle in transaction timeout → E.1.3.1.10. Server Configuration
>
> So, the press release isn't about listing every single feature in a
> release.  It's about picking 3 or 4 things which hang about a common
> theme ... in this case, "scale up and scale out".
>

Uhmmm, "scale up" sounds positive and "scale out" sounds negative.


--
Saludos,
Gilberto Castillo
ETECSA, La Habana, Cuba



From:
Mike Toews
Date:

On 14 September 2016 at 17:36, Josh Berkus <> wrote:
> Folks,
>
> One last round of checks/suggestions before I put it into Git.  Thanks!
>
> https://wiki.postgresql.org/wiki/96releasedraft

Assuming the URL formats will be similar to 9.5, I would expect these links:

* Release Notes: https://www.postgresql.org/docs/current/static/release-9-6.html
* What's New in 9.6:
https://wiki.postgresql.org/wiki/What%27s_new_in_PostgreSQL_9.6


From:
Josh Berkus
Date:

On 09/14/2016 05:55 PM, Mike Toews wrote:
> On 14 September 2016 at 17:36, Josh Berkus <> wrote:
>> Folks,
>>
>> One last round of checks/suggestions before I put it into Git.  Thanks!
>>
>> https://wiki.postgresql.org/wiki/96releasedraft
>
> Assuming the URL formats will be similar to 9.5, I would expect these links:
>
> * Release Notes: https://www.postgresql.org/docs/current/static/release-9-6.html

Thanks

> * What's New in 9.6:
> https://wiki.postgresql.org/wiki/What%27s_new_in_PostgreSQL_9.6

I changed that one deliberately.  The %27s part of the URL messes with
Wiki logins.


--
--
Josh Berkus
Red Hat OSAS
(any opinions are my own)


From:
Masahiko Sawada
Date:

On Wed, Sep 14, 2016 at 2:36 PM, Josh Berkus <> wrote:
> Folks,
>
> One last round of checks/suggestions before I put it into Git.  Thanks!
>
> https://wiki.postgresql.org/wiki/96releasedraft
>
>

> "With the capabilities of remote JOIN, UPDATE and DELETE, Foreign Data
> Wrappers are now a complete solution for sharing data between other
> databases and PostgreSQL. For example, PostgreSQL can be used to handle
> data input going to two or more different kinds of databases,"

It is just my opinion, but FDWs will only be a complete solution for
sharing data once foreign update/delete transactions are guaranteed.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


From:
Josh Berkus
Date:

On 09/15/2016 09:47 AM, Masahiko Sawada wrote:
> On Wed, Sep 14, 2016 at 2:36 PM, Josh Berkus <> wrote:
>> Folks,
>>
>> One last round of checks/suggestions before I put it into Git.  Thanks!
>>
>> https://wiki.postgresql.org/wiki/96releasedraft
>>
>>
>
>> "With the capabilities of remote JOIN, UPDATE and DELETE, Foreign Data
>> Wrappers are now a complete solution for sharing data between other
>> databases and PostgreSQL. For example, PostgreSQL can be used to handle
>> data input going to two or more different kinds of databases,"
>
> It is just my opinion, but FDWs will only be a complete solution for
> sharing data once foreign update/delete transactions are guaranteed.

Yah, but this is PR.  Also, it's a quote.

--
--
Josh Berkus
Red Hat OSAS
(any opinions are my own)


From:
Robert Haas
Date:

On Wed, Sep 14, 2016 at 8:22 AM, Justin Clift <> wrote:
>   Common annoyances fixed:
>     * vacuum freeze on large tables
>     * long-lived snapshots holding back vacuum
>     * idle in transaction timeout

Saying we've fixed the second one is stretching the truth to the
breaking point.  old_snapshot_threshold is a good tool for mitigating
that problem for some users, but it's not like it's just "fixed".

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From:
Michael Banck
Date:

Am Mittwoch, den 14.09.2016, 18:30 -0700 schrieb Josh Berkus:
> On 09/14/2016 05:55 PM, Mike Toews wrote:
> > On 14 September 2016 at 17:36, Josh Berkus <> wrote:
> > * What's New in 9.6:
> > https://wiki.postgresql.org/wiki/What%27s_new_in_PostgreSQL_9.6
>
> I changed that one deliberately.  The %27s part of the URL messes with
> Wiki logins.

The current release draft links to wiki/new_in_9.6 at the bottom though,
which does not exist. The correct wiki link seems to be wiki/NewIn96.


Michael

--
Michael Banck
Projektleiter / Senior Berater
Tel.: +49 2166 9901-171
Fax:  +49 2166 9901-100
Email: 

credativ GmbH, HRB Mönchengladbach 12080
USt-ID-Nummer: DE204566209
Trompeterallee 108, 41189 Mönchengladbach
Geschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer




From:
Josh Berkus
Date:

On 09/16/2016 12:11 AM, Michael Banck wrote:
> Am Mittwoch, den 14.09.2016, 18:30 -0700 schrieb Josh Berkus:
>> On 09/14/2016 05:55 PM, Mike Toews wrote:
>>> On 14 September 2016 at 17:36, Josh Berkus <> wrote:
>>> * What's New in 9.6:
>>> https://wiki.postgresql.org/wiki/What%27s_new_in_PostgreSQL_9.6
>>
>> I changed that one deliberately.  The %27s part of the URL messes with
>> Wiki logins.
>
> The current release draft links to wiki/new_in_9.6 at the bottom though,
> which does not exist. The correct wiki link seems to be wiki/NewIn96.

Ah, looks like I fixed that on the Git version but not in Wiki.  Thanks.


--
--
Josh Berkus
Red Hat OSAS
(any opinions are my own)