* Aleksander Alekseev <aleksander@timescale.com> [25/04/15 13:20]:
> > I am considering starting work on implementing a built-in Raft
> > replication for PostgreSQL.
>
> Generally speaking I like the idea. The more important question IMO is
> whether we want to maintain Raft within the PostgreSQL core project.
>
> Building distributed systems on commodity hardware was a popular idea
> back in the 2000s. These days you can rent a server with 2 TB of RAM
> for something like 2000 USD/month (numbers from memory that were
> valid ~5 years ago), which will fit many existing businesses (!)
> in memory. And you can rent another one for a replica, just in order
> not to recover from a backup if something happens to your primary
> server. The common wisdom is if you can avoid building distributed
> systems, don't build one.
>
> Which raises the question of whether we want to maintain something
> like this (which will include logic for cases when a node joins or
> leaves the cluster, a proxy server / service discovery for clients,
> test cases / infrastructure for all of this, plus cluster upgrades,
> docs, ...) for presumably few users whose business doesn't fit on a
> single server *and* who want automatic failover (not manual failover)
> *and* who don't already use Patroni/Stolon/CockroachDB/Neon/...
>
> Although the idea is tempting, personally I'm inclined to think that
> it's better to invest community resources into something else.
My personal takeaway from this as a community member would be
seamless coordinator failover in Greenplum and all of its forks
(CloudBerry, Greengage, synxdata, and so on). I also imagine there
are a number of PostgreSQL derivatives that could benefit from
built-in transparent failover, since it standardizes the solution
space.
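
To give a sense of the scope being debated, here is a minimal sketch
(hypothetical code, all names mine, not PostgreSQL code) of just the
election-timeout state machine a built-in Raft implementation would
have to own -- and this is before log replication, membership changes,
snapshots, and client routing:

```python
# Hypothetical sketch of Raft's follower/candidate/leader election
# logic; names and structure are illustrative, not from any patch.
import random
import time

class RaftNode:
    def __init__(self, node_id, peers):
        self.node_id = node_id
        self.peers = peers          # ids of the other cluster members
        self.term = 0               # latest term this node has seen
        self.state = "follower"     # follower -> candidate -> leader
        self.votes = 0
        self.reset_election_timer()

    def reset_election_timer(self):
        # randomized timeout reduces the chance of split votes
        self.deadline = time.monotonic() + random.uniform(0.15, 0.30)

    def on_heartbeat(self, leader_term):
        # a heartbeat from a current or newer leader keeps us a follower
        if leader_term >= self.term:
            self.term = leader_term
            self.state = "follower"
            self.reset_election_timer()

    def tick(self):
        # election timeout elapsed without hearing from a leader:
        # become candidate, bump the term, and vote for ourselves
        # (RequestVote RPCs to peers would be issued here)
        if self.state != "leader" and time.monotonic() >= self.deadline:
            self.state = "candidate"
            self.term += 1
            self.votes = 1
            self.reset_election_timer()

    def on_vote_granted(self):
        if self.state == "candidate":
            self.votes += 1
            # a majority of the full cluster (self + peers) wins
            if self.votes > (len(self.peers) + 1) // 2:
                self.state = "leader"
```

Every line of this, multiplied across the full protocol, is what the
community would be signing up to maintain and test.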
--
Konstantin Osipov