Dave Page wrote:
>
>Why are such actions not performed on the origin node as a matter of
>course (see below before answering that :-) )?
>
Ask Slony's designers :-) Actually, at least storenode() changed its
requirements: in 1.0.5 it must be called on the subscriber, in 1.1.1 on
the provider. I hope I didn't overlook more such changes.
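
For illustration, a minimal slonik sketch (cluster name and conninfos
invented); the admin conninfo lines determine which node a command is
executed against, which is exactly where the version difference bites:

    cluster name = testcluster;
    node 1 admin conninfo = 'dbname=test host=provider';
    node 2 admin conninfo = 'dbname=test host=subscriber';

    # In 1.1.1, this is executed against the provider (the event node);
    # in 1.0.5, the equivalent had to happen on the subscriber.
    store node (id = 2, comment = 'subscriber node', event node = 1);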
>
>>>Perhaps for older versions (well 1.1) we need to just find sets as we
>>>connect to any individual databases, and use your original plan for
>>>1.2, in which we use a separate table as Jan/Chris have suggested?
>>>
>>While I'm not particularly fond of doing things in 1.2+ differently
>>from 1.0-1.1, this is certainly not a real problem.
>>What *is* a problem is scanning all servers and databases to find a
>>suitable connection; this won't work, for several reasons:
>>- We might hit a cluster with the same name which is actually a
>>different one.
>>- It may take a very long time. I have several remote servers
>>registered which aren't accessible until I connect via VPN; it would
>>take some minutes until all the timeouts elapse.
>>
>
>I'm not suggesting a full search at connect, just that we populate the
>trees etc. only when we actually connect to an individual database.
>
??? How can we know whether a non-connected server contains a cluster,
and whether that cluster's name isn't identical to ours just by chance?
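
For reference, the only detection I know of works against a live
connection, e.g. by looking for Slony's underscore-prefixed schemas (a
minimal sketch; schema and table names follow Slony's usual layout):

    SELECT substr(nspname, 2) AS cluster_name
      FROM pg_catalog.pg_namespace n
     WHERE substr(nspname, 1, 1) = '_'
       AND EXISTS (SELECT 1 FROM pg_catalog.pg_class c
                    WHERE c.relnamespace = n.oid
                      AND c.relname = 'sl_node');

And even a match found this way may still be a different cluster that
just happens to carry the same name.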
>
>Which is why you can't use the origin I guess - because you are unlikely
>to be able to access the origin when you need to fail over, but need to
>be sure that pgAdmin knows about the most recent configuration before
>doing anything potentially dangerous. Hmmm... Think I see what you mean
>now (at last)!!
>
No, failover must be executed explicitly on all non-failed nodes. In
the failover case, you can't rely on the replication network to
transfer those commands, so direct connections are needed.
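
A minimal slonik sketch of what I mean (node ids and conninfos
invented); slonik needs admin conninfos, i.e. direct connections, to
all surviving nodes, because the replication paths through the failed
origin are dead:

    cluster name = testcluster;
    node 1 admin conninfo = 'dbname=test host=failed-origin';
    node 2 admin conninfo = 'dbname=test host=backup';
    node 3 admin conninfo = 'dbname=test host=sub3';

    # executed via direct connections to nodes 2 and 3;
    # the unreachable node 1 is abandoned
    failover (id = 1, backup node = 2);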
As I already pointed out, I put quite some stress on my 1.2CVS
installation yesterday. Apparently, the event queue isn't touched by a
new node until a slon process has run on that node for the first time,
which prevents the new node from holding up the queue cleanup process.
This makes perfect sense: until a node has uttered its first cry,
there's no need to feed it, or to complain if it doesn't eat. This
makes the issue of non-active nodes a storm in a teacup; we can simply
go on as things are now.
Having listens generated that will never be listened on remains
unaesthetic (and is easily fixable), but it's not a problem.
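
If anybody wants to check the confirmation behaviour on their own
installation, something like this shows which nodes actually take part
in event confirmation (assuming a cluster named 'testcluster'; a node
that never ran slon simply doesn't appear as con_received, so it can't
hold back the cleanup):

    SELECT con_origin, con_received, max(con_seqno) AS last_confirmed
      FROM _testcluster.sl_confirm
     GROUP BY con_origin, con_received;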
Regards,
Andreas