Re: PG on two nodes with shared disk ocfs2 & drbd - Mailing list pgsql-general

From Jasmin Dizdarevic
Subject Re: PG on two nodes with shared disk ocfs2 & drbd
Date
Msg-id AANLkTimejwO82cNzXeoDuLSR2uFG1DHb8-xbH3ELAhms@mail.gmail.com
Whole thread Raw
In response to Re: PG on two nodes with shared disk ocfs2 & drbd  (Andrew Sullivan <ajs@crankycanuck.ca>)
Responses Re: PG on two nodes with shared disk ocfs2 & drbd  (Andrew Sullivan <ajs@crankycanuck.ca>)
Re: PG on two nodes with shared disk ocfs2 & drbd  (Andrew Sullivan <ajs@crankycanuck.ca>)
List pgsql-general
Thank you for your detailed information about HA and LB. First of all, it's a pity that there is no built-in feature for LB+HA (both of them simultaneously).
In my eyes, the pgpool2/3 solution has too many disadvantages and restrictions.
My idea was the one that John described: DML and DDL are done on the small box, and reporting on the "big mama" with streaming replication and hot standby enabled. The only problem is that we use temp tables for reporting purposes. I hope that the query-duration impact of not using temp tables will be offset by running DML/DDL on the small box.

I think this will be the final configuration:
- drbd with multi-primary (ocfs2) as the archive location for the primary node
- streaming replication and hot standby

This is a good howto for getting real high availability when the primary node goes down, but for now I'm going to deploy the described configuration with manual fail over.
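A minimal sketch of what that configuration might look like on PostgreSQL 9.0 (the paths, hostname, and role name here are assumptions for illustration, not from the thread):

```
# --- primary: postgresql.conf ---
wal_level = hot_standby            # generate WAL usable by a hot standby
max_wal_senders = 2                # allow streaming replication connections
archive_mode = on
# archive to the shared DRBD/OCFS2 mount (path is an assumption)
archive_command = 'cp %p /mnt/drbd/archive/%f'

# --- primary: pg_hba.conf (replication role name is an assumption) ---
# host  replication  replicator  192.168.0.0/24  md5

# --- standby: postgresql.conf ---
hot_standby = on                   # accept read-only queries during recovery

# --- standby: recovery.conf ---
standby_mode = 'on'
primary_conninfo = 'host=primary port=5432 user=replicator'
restore_command = 'cp /mnt/drbd/archive/%f %p'
# manual fail over: touching this file promotes the standby
trigger_file = '/tmp/pg_failover_trigger'
```

With this layout, a manual fail over is just creating the trigger file on the standby; until then it serves read-only reporting queries.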

Regards,
Jasmin

2011/2/27 Andrew Sullivan <ajs@crankycanuck.ca>
On Sun, Feb 27, 2011 at 12:10:36PM -0800, John R Pierce wrote:
> are made to the master server, but reads are done to either.  note you
> do NOT want to use block level replication like drbd for this as the
> drbd slave can not be actively mounted, nor could the slave instance of
> postgres be aware of changes to the underlying storage, rather you would
> use the streaming replication built into postgresql 9.0.

Note that with drbd, you can have a piece of hot standby hardware
sitting there to take over the filesystem in real time, in the event
the original master blows up or something.  My experience with systems
designed like this is that they are a foot-bazooka: the only real
utility I ever saw in them was to increase on-call hours for sysadmins
after they blew off their own foot (and too often, my database) doing
something tricky with the standby server.  If it were me setting it
up, I'd think the streaming replication approach a better bet.  Not
that anything will save you when someone else has root and decides to
play with a production server.

I believe that Greenplum sells a system based on Postgres that is
supposed to do some kind of distributed cluster thing.  I don't
understand the details and it's been a long time since I had any look
at it.  I think it's intended to compete in the scalability rather
than the availability market.  Maybe someone around here knows more.

The only people I'm aware of who really do this sort of thing for
availability are Oracle with RAC, and Oracle with some mostly-works
clustering stuff in MySQL.  I have never met a happy customer of the
former, but I've heard some people tell me it's real impressive
technology when it's working.  (The unhappy people seemed mostly
unhappy because, for that kind of coin, they would like it to work
most of the time.  I know at least one metronet deployment that didn't
work even once for two years.)  In the case of the MySQL stuff, there
are some trade-offs in the design that make my heart sink.  But maybe
for the OP's application it will work.

A


--
Andrew Sullivan
ajs@crankycanuck.ca

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
