Thread: AW: Re: MySQL and BerkeleyDB (fwd)

AW: Re: MySQL and BerkeleyDB (fwd)

From
Zeugswetter Andreas SB
Date:
> Is anyone looking at doing this?  Is this purely a MySQL-ism, or is it
> something that everyone else has except us?

We should not only support access to all db's under one postmaster,
but also remote access to other postmasters' databases.
All biggie db's allow this in one way or another (synonyms,
qualified object names), including 2-phase commit.
Ideally this includes access to other db manufacturers, flat files, bdb ...
Meaning that this is a problem needing a generic approach.
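For concreteness, this is roughly how one of the big commercial systems
exposes it today (Oracle-style SQL, not anything PostgreSQL accepts; the
link, user and table names are made up):

    -- Qualified object names over a database link, plus a synonym to
    -- hide the remoteness behind a local name.
    CREATE DATABASE LINK hr_remote
      CONNECT TO app_user IDENTIFIED BY secret
      USING 'hr_server';                        -- alias of the other instance

    SELECT count(*) FROM employees@hr_remote;   -- qualified object name

    CREATE SYNONYM employees FOR employees@hr_remote;
    SELECT count(*) FROM employees;             -- now looks like a local table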

Andreas

> > > Is there any possibility to get a port for MySQL with BerkeleyDB support?
> > > I really need the transaction support and I'd like to build MySQL from a
> > > port.
> >
> > why not just build PgSQL, and have transaction support *with* subselects
> > and everything else that mySQL doesn't have?
> 
> I'd *love* to use PgSQL, but it doesn't support cross-DB joins (or at
> least I couldn't figure out how to do it.)  MySQL handles this, so
> I'm using MySQL and would also like to have transaction support...


Re: AW: Re: MySQL and BerkeleyDB (fwd)

From
The Hermit Hacker
Date:
sounds like something that should be handled at the application level
though ... at least the concept of 'access to other db manufacturers' ...
no?


On Mon, 22 Jan 2001, Zeugswetter Andreas SB wrote:

>
> > Is anyone looking at doing this?  Is this purely a MySQL-ism, or is it
> > something that everyone else has except us?
>
> We should not only support access to all db's under one postmaster,
> but also remote access to other postmasters' databases.
> All biggie db's allow this in one way or another (synonyms,
> qualified object names), including 2-phase commit.
> Ideally this includes access to other db manufacturers, flat files, bdb ...
> Meaning that this is a problem needing a generic approach.
>
> Andreas
>
> > > > Is there any possibility to get a port for MySQL with BerkeleyDB support?
> > > > I really need the transaction support and I'd like to build MySQL from a
> > > > port.
> > >
> > > why not just build PgSQL, and have transaction support *with* subselects
> > > and everything else that mySQL doesn't have?
> >
> > I'd *love* to use PgSQL, but it doesn't support cross-DB joins (or at
> > least I couldn't figure out how to do it.)  MySQL handles this, so
> > I'm using MySQL and would also like to have transaction support...
>

Marc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org



Re: AW: Re: MySQL and BerkeleyDB (fwd)

From
Joel Burton
Date:
On Mon, 22 Jan 2001, Zeugswetter Andreas SB wrote:

> 
> > Is anyone looking at doing this?  Is this purely a MySQL-ism, or is it
> > something that everyone else has except us?
> 
> We should not only support access to all db's under one postmaster,
> but also remote access to other postmasters' databases.
> All biggie db's allow this in one way or another (synonyms,
> qualified object names), including 2-phase commit.
> Ideally this includes access to other db manufacturers, flat files, bdb ...
> Meaning that this is a problem needing a generic approach.

Of course, a generic, powerful approach would be great.

However, a simple, limited approach would be a solution for (I
suspect) 97% of the cases, which is that one software package creates a
database to store mailing list names, another creates a database to
store web permissions, and you want to write a query that encompasses
both, w/o semi-tedious COPY TO FILEs to temporarily move a table back and
forth. And of course, a simple solution might be completed faster :-)

How could this be handled? 

* a syntax for db-table names, such as mydb.mytable or something like
that. (do we have any unused punctuation? :-) )

* aliases, so that tblFoo in dbA can be called as ToFoo in dbB

* other ways?

The second might be easier from a conversion standpoint: the user wouldn't
have to understand that this was a 'link', but it might prove complicated
when there are many links to keep track of, etc.
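(Purely to make the two options concrete -- hypothetical syntax, nothing
PostgreSQL accepts today, and the database/table names are invented:)

    -- Option 1: qualify the table with the database it lives in
    SELECT s.email, p.can_post
      FROM maillist_db.subscribers s
      JOIN webperm_db.permissions p ON p.email = s.email;

    -- Option 2: an alias/"link" that hides the other database behind
    -- a local name (again, invented syntax)
    CREATE ALIAS subscribers_link FOR maillist_db.subscribers;
    SELECT * FROM subscribers_link;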


-- 
Joel Burton   <jburton@scw.org>
Director of Information Systems, Support Center of Washington



Re: Re: AW: Re: MySQL and BerkeleyDB (fwd)

From
"Ross J. Reedstrom"
Date:
On Mon, Jan 22, 2001 at 12:18:54PM -0500, Joel Burton wrote:
> On Mon, 22 Jan 2001, Zeugswetter Andreas SB wrote:
> 
> > 
> > > Is anyone looking at doing this?  Is this purely a MySQL-ism, or is it
> > > something that everyone else has except us?
> > 
> > We should not only support access to all db's under one postmaster,
> > but also remote access to other postmasters' databases.
> > All biggie db's allow this in one way or another (synonyms,
> > qualified object names), including 2-phase commit.
> > Ideally this includes access to other db manufacturers, flat files, bdb ...
> > Meaning that this is a problem needing a generic approach.
> 
> Of course, a generic, powerful approach would be great.
> 
> However, a simple, limited approach would be a solution for (I
> suspect) 97% of the cases, which is that one software package creates a
> database to store mailing list names, another creates a database to
> store web permissions, and you want to write a query that encompasses
> both, w/o semi-tedious COPY TO FILEs to temporarily move a table back and
> forth. And of course, a simple solution might be completed faster :-)
> 
> How could this be handled? 
> 

And this case can be handled within one database by having multiple
schemas, one for each package. It's not there yet, but it's a simpler
solution than the generic solution. The problem (as others have mentioned)
is that we don't want to open the door to remote access until we have a
two-phase transaction commit mechanism in place. Doing it any other way
is not a 'partial solution', it's a corrupt database waiting to happen.


> * a syntax for db-table names, such as mydb.mytable or something like
> that. (do we have any unused punctuation? :-) )

This is the sort of syntax that the SQL9x standards specify for cross-schema
access. So far, it fits into the parser just fine.

> * aliases, so that tblFoo in dbA can be called as ToFoo in dbB

This can be done with views, once schemas are in place.
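A sketch of how both bullets map onto schemas, once they exist (SQL9x-style
syntax; the schema and table names are just examples):

    -- one database, one schema per package
    CREATE SCHEMA maillist;
    CREATE SCHEMA webperm;

    CREATE TABLE maillist.subscribers (email text PRIMARY KEY);
    CREATE TABLE webperm.permissions  (email text, can_post boolean);

    -- the cross-"package" join, with schema-qualified names
    SELECT s.email, p.can_post
      FROM maillist.subscribers s
      JOIN webperm.permissions  p ON p.email = s.email;

    -- the "alias" idea from the previous mail, done with a view
    CREATE VIEW webperm.subscribers AS
      SELECT * FROM maillist.subscribers;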

Ross
-- 
Open source code is like a natural resource, it's the result of providing
food and sunshine to programmers, and then staying out of their way.
[...] [It] is not going away because it has utility for both the developers 
and users independent of economic motivations.  Jim Flynn, Sunnyvale, Calif.


Re: Re: AW: Re: MySQL and BerkeleyDB (fwd)

From
Joel Burton
Date:
On Mon, 22 Jan 2001, Ross J. Reedstrom wrote:

> And this case can be handled within one database by having multiple
> schema, one for each package. It's not there yet, but it's a simpler
> solution than the generic solution. The problem (as others have mentioned)
> is that we don't want to open the door to remote access until we have a
> two-phase transaction commit mechanism in place. Doing it any other way
> is not a 'partial solution', it's a corrupt database waiting to happen.

What does '2-phase transaction commit mechanism' mean in this case?

-- 
Joel Burton   <jburton@scw.org>
Director of Information Systems, Support Center of Washington



Re: Re: AW: Re: MySQL and BerkeleyDB (fwd)

From
"Ross J. Reedstrom"
Date:
On Mon, Jan 22, 2001 at 12:41:38PM -0500, Joel Burton wrote:
> On Mon, 22 Jan 2001, Ross J. Reedstrom wrote:
> 
> > And this case can be handled within one database by having multiple
> > schema, one for each package. It's not there yet, but it's a simpler
> > solution than the generic solution. The problem (as others have mentioned)
> > is that we don't want to open the door to remote access until we have a
> > two-phase transaction commit mechanism in place. Doing it any other way
> > is not a 'partial solution', it's a corrupt database waiting to happen.
> 
> What does '2-phase transaction commit mechanism' mean in this case?

Same thing it means elsewhere. Typing "two phase commit" into Google gets me
this url:

http://webopedia.internet.com/Computer_Science/Transaction_Processing/two_phase_commit.html

Which says:
  A feature of transaction processing systems that enables databases to be
  returned to the pre-transaction state if some error condition occurs. A
  single transaction can update many different databases. The two-phase
  commit strategy is designed to ensure that either all the databases are
  updated or none of them, so that the databases remain synchronized.

  Database changes required by a transaction are initially stored
  temporarily by each database. The transaction monitor then issues a
  "pre-commit" command to each database which requires an acknowledgment.
  If the monitor receives the appropriate response from each database, the
  monitor issues the "commit" command, which causes all databases to
  simultaneously make the transaction changes permanent.


This 'pre-commit' 'really commit' two-step (get 'yer cowboy hats, right
here) is what's needed, and is currently missing from pgsql. 
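For what it's worth, here is the shape of that two-step as it eventually
surfaced at the SQL level in later PostgreSQL releases (8.1's PREPARE
TRANSACTION / COMMIT PREPARED -- none of this existed when this thread was
written; the table and transaction id are just examples):

    -- needs max_prepared_transactions > 0 on each participating server
    BEGIN;
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;

    -- phase 1: "pre-commit" -- the database makes the transaction durable
    -- and promises it *can* commit, but nothing is visible yet
    PREPARE TRANSACTION 'txn_42';

    -- phase 2: once every participant has acknowledged phase 1, the
    -- coordinator tells each one to really commit (or to back out)
    COMMIT PREPARED 'txn_42';
    -- ROLLBACK PREPARED 'txn_42';   -- if any participant failed phase 1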


Ross


Re: AW: Re: MySQL and BerkeleyDB (fwd)

From
Hannu Krosing
Date:
The Hermit Hacker wrote:
> 
> sounds like something that should be handled at the application level
> though ... at least the concept of 'access to other db manufacturers' ...
> no?

If and when we get functions that can return rowsets (IIRC Oracle's
RETURN AND CONTINUE), the simplest case can be easily implemented by
having a user-defined function that just does the query for the whole
table (or for rows where "field in (x,y,z)").

Putting this in a view and then using it as a table should also
be quite simple (or at least possible ;).

Only after that should we start optimizing ...
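A minimal sketch of that idea with pieces PostgreSQL later grew (a
set-returning function behind a view; dblink, the connection string and
the table layout are assumptions):

    -- needs contrib's dblink module (CREATE EXTENSION dblink;)
    CREATE FUNCTION remote_foo()
    RETURNS TABLE (id integer, name text) AS $$
        SELECT *
          FROM dblink('dbname=otherdb', 'SELECT id, name FROM foo')
               AS t(id integer, name text);
    $$ LANGUAGE sql;

    -- wrap it in a view so callers can treat it as an ordinary table
    CREATE VIEW foo AS SELECT * FROM remote_foo();

    -- works, but the whole remote table is still shipped over; the
    -- "field in (x,y,z)" restriction is applied locally -- hence the
    -- "optimize later" caveat
    SELECT * FROM foo WHERE id IN (1, 2, 3);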

--------------
Hannu


Re: Re: AW: Re: MySQL and BerkeleyDB (fwd)

From
dom@idealx.com
Date:
> This 'pre-commit' 'really commit' two-step (get 'yer cowboy hats, right
> here) is what's needed, and is currently missing from pgsql. 
 Hello,

 I'm very interested in this topic since I am involved in a
distributed, several-PostgreSQLs-backed, open-source,
buzzword-compliant database replication middleware (still in the draft
stage though --- this is not an announcement :-). I had thought that
the pre-commit information could be stored in an auxiliary table by
the middleware program; we would then have to re-implement some sort
of higher-level WAL (I thought of the list of the commands performed
in the current transaction, with a sequence number for each of them
that would guarantee correct ordering between concurrent transactions
in case of a REDO). But I fear I am missing a number of important
issues there; so could you please comment on my idea?

 * what should I try not to forget to record in the higher-level WAL
   if I want consistency?

 * how could one collect consistent ordering information without
   impacting performance too much? Will ordering suffice to guarantee
   correctness of the REDO? (I mean, are there sources of nondeterminism
   in PostgreSQL such as resource exhaustion etc. that I should be
   aware of?)

 * would it be easier or harder to help implement 2-phase commit inside
   PostgreSQL (but I am not quite a PostgreSQL hacker yet!)
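(To make the auxiliary-table idea concrete -- names and layout are of
course provisional:)

    -- one row per write command, with a global sequence number that
    -- fixes the replay order for REDO
    CREATE SEQUENCE hlwal_seq;

    CREATE TABLE hlwal (
        seq       bigint    PRIMARY KEY DEFAULT nextval('hlwal_seq'),
        xact_id   text      NOT NULL,   -- middleware's transaction id
        command   text      NOT NULL,   -- SQL text to replay on REDO
        logged_at timestamp NOT NULL DEFAULT now()
    );

    -- the middleware logs each command in the same transaction as the
    -- command itself, so log entry and change commit or roll back together
    BEGIN;
    INSERT INTO hlwal (xact_id, command)
      VALUES ('T17', 'UPDATE accounts SET balance = balance - 100 WHERE id = 1');
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    COMMIT;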
 
 Many thanks in advance!

-- 
<< Not everything there is perfect, but they certainly honor the gardeners >>
        Dominique Quatravaux <dom@kilimandjaro.dyndns.org>


Re: 2-phase commit

From
dom@idealx.com
Date:
 [ Sorry to repost this, but I didn't receive my mail back... Anything wrong with the mailserver? ]
I am involved in a project of open-source, PostgreSQL-backed,
buzzword-compliant replication/high-availability software that would
act as an SQL "one-to-many" gateway (but still in the design phase
--- this is *not* an announcement :-). Of course, the topic of 2-phase
commit is important to us; we currently plan to record all write
commands issued during the transaction in an auxiliary table (some
sort of higher-level WAL). The first commit phase would then consist
of closing this record and ensuring it can be REDOne in the case of a
crash (UNDO would be done by just rolling back the current
transaction). But this is quite complicated and may require serializing
all accesses (both read and write) to a given database so as to
guarantee that REDO will yield the very same result.
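Concretely, the two phases against such a log might look like this (the
status table and its values are invented for illustration):

    CREATE TABLE hlwal_xact (
        xact_id text PRIMARY KEY,
        status  text NOT NULL
                CHECK (status IN ('active', 'prepared', 'committed'))
    );

    -- phase 1, "closing the record": mark the transaction prepared,
    -- i.e. promise that its logged commands can be REDOne after a crash
    UPDATE hlwal_xact SET status = 'prepared' WHERE xact_id = 'T17';

    -- phase 2: only after every backend has acknowledged phase 1 is it
    -- marked committed; after a crash, anything still 'prepared' is
    -- REDOne from the log, anything 'active' is rolled back
    UPDATE hlwal_xact SET status = 'committed' WHERE xact_id = 'T17';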
I understand it would certainly be better and more profitable for
the community if I could help implement 2-phase commit inside
PostgreSQL. But I am not much of a PostgreSQL hacker yet. What do you
think ?

-- 
<< Not everything there is perfect, but they certainly honor the gardeners >>
        Dominique Quatravaux <dom@kilimandjaro.dyndns.org>