Re: Feature Request for 7.5 - Mailing list pgsql-general

From: Chris Travers
Subject: Re: Feature Request for 7.5
Msg-id: 005901c3b98e$93f24a90$b100053d@SAMUEL
In response to: Feature Request for 7.5 ("Chris Travers" <chris@travelamericas.com>)
Responses: Re: Feature Request for 7.5; Re: Feature Request for 7.5
List: pgsql-general
Comments inline.

From: "Jan Wieck" <JanWieck@Yahoo.com>:

> There are many problems with a "proxy" solution. One is that you really
> don't know if a statement does modify the database or not. A SELECT, for
> example, can call a user-defined function somewhere, and that can do
> whatever the programmer likes it to do. So you would have to "replicate"
> all that too. Granted, you can exclude this type of database usage from
> your supported list.

That is why it would be nice to be able to check for altered tuples on a SELECT before deciding whether to replicate. In that case the routine would be: run the query, check for altered tuples, and replicate the query only if tuples were altered.

> Next you don't have control over sequence allocation. Every application
> that uses sequence-allocated IDs is in danger, because sequences are not
> blocking, you cannot force the order of assignments, and they don't roll
> back either.

This is the more serious problem; I will have to think it over. I wonder about having cross-proxy sequence generators.

> And you get into deadlock problems if you don't guarantee that your
> proxy uses the same order to access all databases. And you cannot
> guarantee that if your proxy tries to do it in parallel. So it has to do
> the queries against all databases one by one, which doesn't scale well.

This is true as well, but if the order of the queries is the same on each server, I am having trouble seeing how a deadlock would happen on one server in a case where you wouldn't otherwise have one. Since a deadlock on ONE server would force a restore process (with some performance cost at the beginning of that process), it would not be too bad.

> The last thing (I mention for now) is that I cannot imagine any way that
> such proxy code allows for a new member to join without stopping the
> whole application, creating an identical copy of one member
> (dump+restore) and continuing. So it is impossible to build 24*7 support
> that way.

Not too hard.
Read my comments on restoring from failure for details. The same procedure could be used to add a new member. The only performance drawback is that new transactions would have to be queued up (uncommitted) while the old ones complete. If you have long-hanging transactions, this could be a problem. The process is as follows:

1) Prepare a query queue for storing incoming queries.
2) Request a restore point.
3) From this point on, all new queries are queued in the query queue. No new transactions may be committed.
4) When all transactions that were open at the beginning of step 3 have closed, give permission to start the restore, along with the address of the server to use.
5) Use pg_dump to perform the base copy.
6) New transactions may now be committed.
7) When the restore finishes, start committing transactions from the query log in order of committal.
8) When no closed transactions remain, change status to online.

> No, separate proxy code doesn't strike me as the superior solution.

There are advantages and disadvantages to either. The other option is to use some sort of library to handle the additional clustering protocols. Either way is limited and difficult. Still working on these problems.

> Jan
>
> --
> #======================================================================#
> # It's easier to get forgiveness for being wrong than for being right. #
> # Let's break this rule - forgive me.                                  #
> #================================================== JanWieck@Yahoo.com #
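On the cross-proxy sequence generators mentioned above: one common workaround for sequence divergence in multi-master setups is to interleave the ID space per node, so each node strides past the others and collisions are impossible even though sequences don't block, roll back, or replicate. A minimal sketch (class and parameter names are illustrative, not from this thread; in PostgreSQL itself the equivalent is CREATE SEQUENCE with different START values and INCREMENT BY set to the node count):

```python
# Sketch: interleaved ID allocation across a fixed number of nodes.
# Node 0 produces 1, 3, 5, ...; node 1 produces 2, 4, 6, ...
class NodeSequence:
    def __init__(self, node_id: int, n_nodes: int, start: int = 1):
        assert 0 <= node_id < n_nodes
        self.next_val = start + node_id   # each node gets its own offset
        self.step = n_nodes               # and strides past the others

    def nextval(self) -> int:
        val = self.next_val
        self.next_val += self.step
        return val

a = NodeSequence(node_id=0, n_nodes=2)
b = NodeSequence(node_id=1, n_nodes=2)
print([a.nextval() for _ in range(3)])  # [1, 3, 5]
print([b.nextval() for _ in range(3)])  # [2, 4, 6]
```

The trade-off is that IDs are no longer dense or globally ordered, and the node count must be fixed (or over-provisioned) up front.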
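The eight-step member-join procedure above can be reduced to a small state machine: queue while the base copy runs, then replay the queue in commit order and go online. A single-threaded simulation sketch (all names, such as JoiningMember, are illustrative assumptions, not part of any proposed implementation):

```python
from collections import deque

class JoiningMember:
    def __init__(self):
        self.state = "offline"
        self.queue = deque()          # step 1: query queue for incoming work
        self.applied = []             # what this member has applied so far

    def begin_restore_point(self):
        self.state = "queueing"       # steps 2-3: restore point requested

    def enqueue(self, committed_tx):
        # steps 3 and 6: transactions committed elsewhere are queued here
        self.queue.append(committed_tx)

    def restore_from_dump(self, dump_rows):
        # steps 4-5: base copy (e.g. via pg_dump) from an existing member
        self.state = "restoring"
        self.applied.extend(dump_rows)

    def replay_queue(self):
        # step 7: apply queued transactions in order of committal
        while self.queue:
            self.applied.append(self.queue.popleft())
        self.state = "online"         # step 8: queue drained, go online

m = JoiningMember()
m.begin_restore_point()
m.enqueue("tx-101")                   # committed during the restore window
m.restore_from_dump(["base-1", "base-2"])
m.enqueue("tx-102")
m.replay_queue()
print(m.state, m.applied)  # online ['base-1', 'base-2', 'tx-101', 'tx-102']
```

Note how the queued transactions land strictly after the base copy, which is what makes the hanging-transaction caveat matter: step 4 cannot complete until every transaction open at the restore point has closed.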