Re: Replication Node Identifiers and crashsafe Apply Progress - Mailing list pgsql-hackers

From Andres Freund
Subject Re: Replication Node Identifiers and crashsafe Apply Progress
Date
Msg-id 20131121111541.GJ7240@alap2.anarazel.de
In response to Re: Replication Node Identifiers and crashsafe Apply Progress  (Robert Haas <robertmhaas@gmail.com>)
List pgsql-hackers
On 2013-11-20 15:05:17 -0500, Robert Haas wrote:
> > That's what I had suggested to some people originally and the response
> > was, well, somewhat unenthusiastic. It's not that easy to assign them in
> > a meaningful automated manner. How do you automatically assign a pg
> > cluster an id?
> 
> /dev/urandom

Well yes. But then you need a way to store and change that random id for
each cluster.
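
To make that concrete, here is a toy sketch (not the actual patch, and the file name is made up) of what "store and change that random id" amounts to: draw a 64-bit identifier from the OS randomness source once, persist it, and reuse it across restarts; changing the id is just rewriting the file.

```python
# Hypothetical illustration only: a per-cluster random identifier that
# is generated once and then persisted, so it is stable across restarts
# and can be changed by rewriting the file.
import os

def get_node_id(path="node_identifier"):
    """Return the cluster's random 64-bit id, creating it on first use."""
    if os.path.exists(path):
        with open(path, "rb") as f:
            return int.from_bytes(f.read(8), "big")
    # os.urandom() reads from the OS entropy source (/dev/urandom on Linux).
    node_id = int.from_bytes(os.urandom(8), "big")
    with open(path, "wb") as f:
        f.write(node_id.to_bytes(8, "big"))
    return node_id
```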

Anyway, the preference is clear, so I am going to go with that in v2. I
am not yet sure about the type of the public identifier; I'll think a
bit about it.

> > But yes, maybe the answer is to balk on that part, let the users figure
> > out what's best, and then only later implement more policy based on that
> > experience.
> >
> > WRT performance: I agree that fixed-width identifiers are more
> > performant, that's why I went for them, but I am not sure it's that
> > important. The performance sensitive parts should all be done using the
> > internal id the identifier maps to, not the public one.
> 
> But I thought the internal identifier was exactly what we're creating.

Sure. But how often are we a) going to create such an identifier and b)
look it up? Hopefully both will be rather infrequent operations.
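
A minimal sketch of the split being discussed (names are invented for illustration, not from the patch): the variable-width public identifier is resolved once, in the infrequent create/lookup path, and everything performance-sensitive afterwards carries only the small fixed-width internal id it maps to.

```python
# Hypothetical sketch: map a variable-length public identifier to a
# compact fixed-width internal id. Only the rare create/lookup path
# touches the public id; hot paths use the internal integer.
class NodeRegistry:
    def __init__(self):
        self._by_public = {}   # public identifier -> internal id
        self._next_internal = 1

    def intern(self, public_id: str) -> int:
        """Create-or-lookup; infrequent, so a hash lookup is acceptable."""
        internal = self._by_public.get(public_id)
        if internal is None:
            internal = self._next_internal
            self._next_internal += 1
            self._by_public[public_id] = internal
        return internal
```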

Greetings,

Andres Freund

--
 Andres Freund                       http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


