Thread: RE: Announcing PgSQL - a Python DB-API 2.0 compliant interface to PostgreSQL

RE: Announcing PgSQL - a Python DB-API 2.0 compliant interface to PostgreSQL

From
Dnesbitt@encryptix.com
Date:
> The project name on SourceForge is "Python Interface to PostgreSQL".

How about PIPgSQL or piPgSQL?

Regards,
//Dave


RE: Announcing PgSQL - a Python DB-API 2.0 compliant interface to PostgreSQL

From
Christopher Sawtell
Date:
On Wed, 11 Oct 2000, Dnesbitt@encryptix.com wrote:
> > The project name on SourceForge is "Python Interface to PostgreSQL".
> 
> How about PIPgSQL or piPgSQL?

Perhaps Pi2PgSQL, or PySQL_ba

-- 
Sincerely etc.,
NAME       Christopher Sawtell
CELL PHONE 021 257 4451
ICQ UIN    45863470
EMAIL      csawtell @ xtra . co . nz
CNOTES     ftp://ftp.funet.fi/pub/languages/C/tutorials/sawtell_C.tar.gz
-->> Please refrain from using HTML or WORD attachments in e-mails to me <<--



RE: Announcing PgSQL - a Python DB-API 2.0 compliant interface to PostgreSQL

From
The Hermit Hacker
Date:
On Wed, 11 Oct 2000, Christopher Sawtell wrote:

> On Wed, 11 Oct 2000, Dnesbitt@encryptix.com wrote:
> > > The project name on SourceForge is "Python Interface to PostgreSQL".
> > 
> > How about PIPgSQL or piPgSQL?
> 
> Perhaps Pi2PgSQL, or PySQL_ba

PySQL_ba? *grin*




RE: Announcing PgSQL - a Python DB-API 2.0 compliant interface to PostgreSQL

From
Vince Vielhaber
Date:
On Tue, 10 Oct 2000, The Hermit Hacker wrote:

> On Wed, 11 Oct 2000, Christopher Sawtell wrote:
> 
> > On Wed, 11 Oct 2000, Dnesbitt@encryptix.com wrote:
> > > > The project name on SourceForge is "Python Interface to PostgreSQL".
> > > 
> > > How about PIPgSQL or piPgSQL?
> > 
> > Perhaps Pi2PgSQL, or PySQL_ba
> 
> PySQL_ba? *grin*

Pyg ??

Vince.
-- 
==========================================================================
Vince Vielhaber -- KA8CSH    email: vev@michvhf.com    http://www.pop4.net
128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking
Online Campground Directory    http://www.camping-usa.com
Online Giftshop Superstore     http://www.cloudninegifts.com
==========================================================================





Re: Announcing PgSQL - a Python DB-API 2.0 compliant interface to PostgreSQL

From
Lamar Owen
Date:
Christopher Sawtell wrote:
> On Wed, 11 Oct 2000, Dnesbitt@encryptix.com wrote:
> > > The project name on SourceForge is "Python Interface to PostgreSQL".
> > How about PIPgSQL or piPgSQL?
> Perhaps Pi2PgSQL, or PySQL_ba

PyGSQyl.

Anagram fun on "PostgreSQL":
PostgreSQL=
Eg: Ports SQL
Gets Pro SQL
Got reps, SQL?

"Great Bridge"         = Bigger Trade :-)
"Tuple TOASTer"        = Alert! Pest out! (The pest being the 8K limit).
"Outer Joins"          = Join so true.
"My new job"           = Enjoy BMW.
"Open Source"          = Our one spec.
"Linux versus FreeBSD" = Feud reruns vex bliss.
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11


JDBC Large ResultSet problem + BadTimeStamp Patch

From
Michael Stephenson
Date:
Two things.

Firstly, when dealing with a large ResultSet (about 120000 rows), I get a
null pointer exception on the line:

    wasNullFlag = (this_row[columnIndex - 1] == null);

whenever I call getString(). Has anyone else had this? And does anybody
have a solution?

Secondly, I've not seen it mentioned on here, but the jdbc driver will
sometimes throw a bad time stamp exception when you use getTimeStamp() on
times which are accurate to more than a second; this is the patch we
use to fix it.

Hope it is of some use to somebody,

Michael Stephenson

*** src/interfaces/jdbc/org/postgresql/jdbc2/ResultSet.java    Fri May 12 20:54:22 2000
--- src/interfaces/jdbc/org/postgresql/jdbc2/ResultSetPatch.java    Mon Sep 25 15:36:46 2000
***************
*** 439,445 ****
      if(s==null)
      return null;

!     SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd HH:mm:sszzz");

      try {
      return new Timestamp(df.parse(s).getTime());
--- 439,456 ----
      if(s==null)
      return null;

!     SimpleDateFormat df = null;
!     if (s.length()>21 && s.indexOf('.') != -1) {
!         df = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSzzz");
!     } else if (s.length()>19 && s.indexOf('.') == -1) {
!         df = new SimpleDateFormat("yyyy-MM-dd HH:MM:sszzz");
!     } else if (s.length()>19 && s.indexOf('.') != -1) {
!         df = new SimpleDateFormat("yyyy-MM-dd HH:MM:ss.SS");
!     } else if (s.length()>10 && s.length()<=18) {
!         df = new SimpleDateFormat("yyyy-MM-dd HH:MM:ss");
!     } else {
!         df = new SimpleDateFormat("yyyy-MM-dd");
!     }

      try {
      return new Timestamp(df.parse(s).getTime());
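A side note for anyone adapting this patch: in SimpleDateFormat patterns,
lower-case "mm" means minute-of-hour while upper-case "MM" means month, so the
"HH:MM" patterns in the later branches will silently drop the minutes field. A
standalone illustration of the difference (the class and method names here are
made up for the demo, not part of the driver):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Calendar;

public class PatternDemo {
    // Parse s with the given pattern and return the minute-of-hour
    // of the resulting date; -1 if the string does not parse.
    static int parsedMinute(String pattern, String s) {
        try {
            SimpleDateFormat df = new SimpleDateFormat(pattern);
            Calendar cal = Calendar.getInstance();
            cal.setTime(df.parse(s));
            return cal.get(Calendar.MINUTE);
        } catch (ParseException e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        String ts = "2000-10-11 14:30:15";
        // Correct: mm is minute-of-hour.
        System.out.println(parsedMinute("yyyy-MM-dd HH:mm:ss", ts)); // 30
        // Wrong: MM is month, so "30" is consumed as a month value
        // (leniently rolled over) and the minute field stays at zero.
        System.out.println(parsedMinute("yyyy-MM-dd HH:MM:ss", ts)); // 0
    }
}
```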




Re: JDBC Large ResultSet problem + BadTimeStamp Patch

From
Peter Mount
Date:
On Wed, 11 Oct 2000, Michael Stephenson wrote:

> Two things.
> 
> Firstly, when dealing with a large ResultSet (about 120000 rows), I get a
> null pointer exception on the line:
>    wasNullFlag = (this_row[columnIndex - 1] == null);
> Whenever I call getString(), has anyone else had this? And does anybody
> have a solution?

Are you getting any out of memory errors at all?

The problem with the current implementation is that it reads the entire
result into memory, so 120000 rows may be filling up your VM's memory
(defaults to about 16Mb).

Does it occur if you add the -mx argument to java, ie:

        java -mx 64m uk.org.retep.sql.RetepSQL

I'm in the design stage of implementing a version of ResultSet that will
use cursors, to limit how much is loaded in memory at a time.
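The paging pattern being described can be sketched independently of the driver.
In SQL terms each batch would come from a DECLARE CURSOR followed by repeated
"FETCH n FROM mycursor" inside a BEGIN/END block; the sketch below keeps the
batch source generic so the loop can be shown without a live backend (all names
here are invented for illustration, not the driver's actual API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntFunction;

public class CursorSketch {
    // Generic shape of cursor-based fetching: repeatedly ask for the
    // next batch of at most batchSize rows until an empty batch comes
    // back, so only one batch is ever held in memory at a time.
    static <T> List<T> fetchAll(int batchSize, IntFunction<List<T>> fetchBatch) {
        List<T> all = new ArrayList<T>();
        while (true) {
            List<T> batch = fetchBatch.apply(batchSize);
            if (batch.isEmpty())
                break;              // cursor exhausted
            all.addAll(batch);
        }
        return all;
    }

    // Simulate a 10-row result set delivered in batches of 4 (4, 4, 2).
    static int demoCount() {
        final List<Integer> rows = new ArrayList<Integer>();
        for (int i = 0; i < 10; i++) rows.add(i);

        final int[] pos = {0};
        List<Integer> out = fetchAll(4, n -> {
            int end = Math.min(pos[0] + n, rows.size());
            List<Integer> batch = new ArrayList<Integer>(rows.subList(pos[0], end));
            pos[0] = end;
            return batch;
        });
        return out.size();
    }

    public static void main(String[] args) {
        System.out.println(demoCount()); // 10
    }
}
```

In a real ResultSet the consumer would process each batch as it arrives rather
than accumulate it, which is what keeps the memory bound.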

> Secondly, I've not seen it mentioned on here but the jdbc driver will
> sometimes throw a bad time stamp exception when you use getTimeStamp() on
> times which are accurate to more than a second; this is the patch we
> use to fix it.

This was fixed a few weeks ago and should be in the current CVS already.

peter

-- 
Peter T Mount peter@retep.org.uk http://www.retep.org.uk
PostgreSQL JDBC Driver http://www.retep.org.uk/postgres/
Java PDF Generator http://www.retep.org.uk/pdf/




Re: JDBC Large ResultSet problem + BadTimeStamp Patch

From
Joseph Shraibman
Date:
Peter Mount wrote:
> 
> On Wed, 11 Oct 2000, Michael Stephenson wrote:
> 
> > Two things.
> >
> > Firstly, when dealing with a large ResultSet (about 120000 rows), I get a
> > null pointer exception on the line:
> >    wasNullFlag = (this_row[columnIndex - 1] == null);
> > Whenever I call getString(), has anyone else had this? And does anybody
> > have a solution?
> 
> Are you getting any out of memory errors at all?
> 
> The problem with the current implementation is that it reads the entire
> result into memory, so 120000 rows may be filling up your VM's memory
> (defaults to about 16Mb).
> 
> Does it occur if you add the -mx argument to java, ie:
> 
>         java -mx 64m uk.org.retep.sql.RetepSQL
> 
> I'm in the design stage of implementing a version of ResultSet that will
> use cursors, to limit how much is loaded in memory at a time.
> 

The problem with that is the same problem I ran into with locking:
cursors need to be in a BEGIN/END block, and other calls from different
Statement objects using the same db connection will interfere.  Make
sure it is clearly documented that it will not be thread safe (unless
postgres gets named locks by then).


Re: JDBC Large ResultSet problem + BadTimeStamp Patch

From
Peter Mount
Date:
On Wed, 11 Oct 2000, Joseph Shraibman wrote:

> Peter Mount wrote:
> > 
> > On Wed, 11 Oct 2000, Michael Stephenson wrote:
> > 
> > > Two things.
> > >
> > > Firstly, when dealing with a large ResultSet (about 120000 rows), I get a
> > > null pointer exception on the line:
> > >    wasNullFlag = (this_row[columnIndex - 1] == null);
> > > Whenever I call getString(), has anyone else had this? And does anybody
> > > have a solution?
> > 
> > Are you getting any out of memory errors at all?
> > 
> > The problem with the current implementation is that it reads the entire
> > result into memory, so 120000 rows may be filling up your VM's memory
> > (defaults to about 16Mb).
> > 
> > Does it occur if you add the -mx argument to java, ie:
> > 
> >         java -mx 64m uk.org.retep.sql.RetepSQL
> > 
> > I'm in the design stage of implementing a version of ResultSet that will
> > use cursors, to limit how much is loaded in memory at a time.
> > 
> 
> The problem with that is the same problem I ran into with locking.
> cursors need to be in a BEGIN END block, and other calls from different
> Statement objects using the same db connection will interfere.  Make
> sure it is clearly documented that that will not be thread safe (unless
> postgres gets named locks by then).

Yes, there is a problem with multiple statements and transactions on the
same connection when it comes to thread safety.

The problem as I see it is two fold.

1: How to deal with two statements within one transaction. Related to this
is the metadata methods that issue queries, how to deal with them while
within a transaction.

2: Currently the JDBC Specs only have transactions supported at the
Connection level, so I can't see how they thought that Statements could
possibly run within their own transactions, unless they thought that a
workaround of this is the use of batches.

Peter

-- 
Peter T Mount peter@retep.org.uk http://www.retep.org.uk
PostgreSQL JDBC Driver http://www.retep.org.uk/postgres/
Java PDF Generator http://www.retep.org.uk/pdf/




Re: JDBC Large ResultSet problem + BadTimeStamp Patch

From
Steve Wampler
Date:
Peter Mount wrote:
...
> Yes, there is a problem with multiple statements and transactions on the
> same connection when it comes to thread safety.
> 
> The problem as I see it is two fold.
> 
> 1: How to deal with two statements within one transaction. Related to this
> is the metadata methods that issue queries, how to deal with them while
> within a transaction.
> 
> 2: Currently the JDBC Specs only have transactions supported at the
> Connection level, so I can't see how they thought that Statements could
> possibly run within their own transactions, unless they thought that a
> workaround of this is the use of batches.

Ah, that probably explains why I've seen "tuple arrived before metadata"
messages when I've got several apps talking through CORBA to a java app
that connects to postgres.  Do I need to synchronize both inserts and
queries at the java app level to prevent this?  (I was hoping that
the BEGIN/END block in a transaction would be sufficient, but this makes
it sound as though it isn't.)

--
Steve Wampler-  SOLIS Project, National Solar Observatory
swampler@noao.edu


Re: JDBC Large ResultSet problem + BadTimeStamp Patch

From
Joseph Shraibman
Date:
Steve Wampler wrote:
> 
> Peter Mount wrote:
> ...
> > Yes, there is a problem with multiple statements and transactions on the
> > same connection when it comes to thread safety.
> >
> > The problem as I see it is two fold.
> >
> > 1: How to deal with two statements within one transaction. Related to this
> > is the metadata methods that issue queries, how to deal with them while
> > within a transaction.
> >
> > 2: Currently the JDBC Specs only have transactions supported at the
> > Connection level, so I can't see how they thought that Statements could
> > possibly run within their own transactions, unless they thought that a
> > workaround of this is the use of batches.
> 
> Ah, that probably explains why I've seen "tuple arrived before metadata"
> messages when I've got several apps talking through CORBA to a java app
> that connects to postgres.  Do I need to synchronize both inserts and
> queries at the java app level to prevent this?  (I was hoping that
> the BEGIN/END block in a transaction would be sufficient, but this makes
> it sound as though it isn't.)
> 
It isn't.  I wish there were online mail archives.  But anyway, the
reason it isn't is that if two statements use the same connection, when
one of them enters a transaction the other one does too.  So if
you do:
1) BEGIN;
2) SELECT blah FROM tablea FOR UPDATE;
3) UPDATE tablea SET blah = newblah;

since both statements are using the same connection they ARE NOT LOCKED
FROM EACH OTHER, and when one issues an END; or COMMIT; both of them exit
their transactions.  That is why I'm using a connection pool now when I
have to do locking.  And if Peter wants to use cursors behind the scenes
it will be even worse, because the cursors themselves need to be in a
BEGIN; END;, and what happens if the user thinks he is in a transaction
but the cursor ended it for him a while ago?  Named transactions would
help with this, but the real answer would be to be able to have more
than one connection to a backend (maybe tunnelled over one TCP/IP link).
Right now each new connection requires forking off another backend,
which makes it impractical to connect whenever you have a new
transaction to do.
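The failure mode above can be modelled without a backend at all. The toy class
below is invented purely for illustration: it keeps one transaction flag per
connection, so a COMMIT issued through one "statement" silently ends the
transaction another "statement" believes it still holds:

```java
public class SharedConnDemo {
    // Toy stand-in for a backend connection: a single transaction
    // flag shared by everything that talks through it.
    static class ToyConnection {
        boolean inTransaction = false;

        void exec(String sql) {
            if (sql.equalsIgnoreCase("BEGIN"))
                inTransaction = true;
            else if (sql.equalsIgnoreCase("COMMIT") || sql.equalsIgnoreCase("END"))
                inTransaction = false;
            // any other SQL: no transaction-state change
        }
    }

    public static void main(String[] args) {
        ToyConnection conn = new ToyConnection();

        // "Statement A" starts what it believes is its own transaction:
        conn.exec("BEGIN");
        conn.exec("SELECT blah FROM tablea FOR UPDATE");

        // "Statement B", sharing the connection, commits ITS work:
        conn.exec("COMMIT");

        // A's transaction (and its row lock) is gone before A ever
        // ran its UPDATE:
        System.out.println(conn.inTransaction); // false
    }
}
```

A pool that hands each transaction its own connection avoids this, at the cost
of one backend process per pooled connection.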


Re: JDBC Large ResultSet problem + BadTimeStamp Patch

From
Peter Mount
Date:
On Wed, 11 Oct 2000, Steve Wampler wrote:

> Peter Mount wrote:
> ...
> > Yes, there is a problem with multiple statements and transactions on the
> > same connection when it comes to thread safety.
> > 
> > The problem as I see it is two fold.
> > 
> > 1: How to deal with two statements within one transaction. Related to this
> > is the metadata methods that issue queries, how to deal with them while
> > within a transaction.
> > 
> > 2: Currently the JDBC Specs only have transactions supported at the
> > Connection level, so I can't see how they thought that Statements could
> > possibly run within their own transactions, unless they thought that a
> > workaround of this is the use of batches.
> 
> Ah, that probably explains why I've seen "tuple arrived before metadata"
> messages when I've got several apps talking through CORBA to a java app
> that connects to postgres.  Do I need to synchronize both inserts and
> queries at the java app level to prevent this?  (I was hoping that
> the BEGIN/END block in a transaction would be sufficient, but this makes
> it sound as though it isn't.)

I think you may need to, although the existing thread locking in the
driver should prevent this. BEGIN/END is protecting the tables, but the
"tuple arrived before metadata" message is from the network protocol
(someone correct me at any point if I'm wrong).

What happens at the moment is that when a query is issued by JDBC, a lock
is made against the network connection, and then the query is issued. Once
everything has been read, the lock is released. This mechanism should
prevent any one thread using the same network connection as another which
is already using it.

Is your corba app under heavy load when this happens, or can it happen
with say 2-3 apps running?

Peter

-- 
Peter T Mount peter@retep.org.uk http://www.retep.org.uk
PostgreSQL JDBC Driver http://www.retep.org.uk/postgres/
Java PDF Generator http://www.retep.org.uk/pdf/




Re: JDBC Large ResultSet problem + BadTimeStamp Patch

From
Peter Mount
Date:
On Wed, 11 Oct 2000, Joseph Shraibman wrote:

[snip]

> It isn't.  I wish there were online mail archives.  But anyway, the
> reason it isn't is that if two statements use the same connection, when
> one of them enters a transaction the other one does too.  So if
> you do:
> 1) BEGIN;
> 2) SELECT blah FROM tablea FOR UPDATE;
> 3) UPDATE tablea SET blah = newblah;
> 
> since both statements are using the same connection they ARE NOT LOCKED
> FROM EACH OTHER, and when one issues an END; or COMMIT; both of them exit
> their transactions.  That is why I'm using a connection pool now when I
> have to do locking.  And if Peter wants to use cursors behind the scenes
> it will be even worse, because the cursors themselves need to be in a
> BEGIN; END;, and what happens if the user thinks he is in a transaction
> but the cursor ended it for him a while ago?

We already have this problem with the two MetaData interfaces. Some of
those methods issue their own queries, and if they fail while within the
client app's transaction all hell can break loose. This is one of the
reasons why I'm intending that the cursor based ResultSet only gets used
if the client has requested one.

> Named transactions would help with this, but the real answer would be
> to be able to have more than one connection to a backend (maybe
> tunnelled over one TCP/IP link).

Named transactions would help with both of these problems. Not knowing
enough of how the existing transaction mechanism works internally, I'd say
it's easier to do than tunnelling.

> Right now each new connection requires forking off another backend,
> which makes it impractical to connect whenever you have a new
> transaction to do.

Yes, that's the last thing we want, especially while we are trying to
optimise things...

Peter

-- 
Peter T Mount peter@retep.org.uk http://www.retep.org.uk
PostgreSQL JDBC Driver http://www.retep.org.uk/postgres/
Java PDF Generator http://www.retep.org.uk/pdf/




Re: JDBC Large ResultSet problem + BadTimeStamp Patch

From
Steve Wampler
Date:
Peter Mount wrote:
> 
> On Wed, 11 Oct 2000, Steve Wampler wrote:
> 
> > Ah, that probably explains why I've seen "tuple arrived before metadata"
> > messages when I've got several apps talking through CORBA to a java app
> > that connects to postgres.  Do I need to synchronize both inserts and
> > queries at the java app level to prevent this?  (I was hoping that
> > the BEGIN/END block in a transaction would be sufficient, but this makes
> > it sound as though it isn't.)
> 
> I think you may need to, although the existing thread locking in the
> driver should prevent this. BEGIN/END is protecting the tables, but the
> "tuple arrived before metadata" message is from the network protocol
> (someone correct me at any point if I'm wrong).
> 
> What happens at the moment is that when a query is issued by JDBC, a lock
> is made against the network connection, and then the query is issued. Once
> everything has been read, the lock is released. This mechanism should
> prevent any one thread using the same network connection as another which
> is already using it.
> 
> Is your corba app under heavy load when this happens, or can it happen
> with say 2-3 apps running?

I'm not sure how to define heavy load, but I'd say yes - there were about
10 processes (spread across 3 machines) all talking corba to the java app
with the jdbc connection to postgres.  Two apps were doing block inserts
while another 8 were doing queries.  I think there were around 100000
entries added in a 20-25 minute time span, and there would have been
queries accessing most of those during the same period (the DB acts both
as an archive and as a cache between an instrument and the processes
that analyze the instrument's data).


--
Steve Wampler-  SOLIS Project, National Solar Observatory
swampler@noao.edu


Re: JDBC Large ResultSet problem + BadTimeStamp Patch

From
Peter Mount
Date:
On Thu, 12 Oct 2000, Steve Wampler wrote:

> Peter Mount wrote:
> > 
> > On Wed, 11 Oct 2000, Steve Wampler wrote:
> > 
> > > Ah, that probably explains why I've seen "tuple arrived before metadata"
> > > messages when I've got several apps talking through CORBA to a java app
> > > that connects to postgres.  Do I need to synchronize both inserts and
> > > queries at the java app level to prevent this?  (I was hoping that
> > > the BEGIN/END block in a transaction would be sufficient, but this makes
> > > it sound as though it isn't.)
> > 
> > I think you may need to, although the existing thread locking in the
> > driver should prevent this. BEGIN/END is protecting the tables, but the
> > "tuple arrived before metadata" message is from the network protocol
> > (someone correct me at any point if I'm wrong).
> > 
> > What happens at the moment is that when a query is issued by JDBC, a lock
> > is made against the network connection, and then the query is issued. Once
> > everything has been read, the lock is released. This mechanism should
> > prevent any one thread using the same network connection as another which
> > is already using it.
> > 
> > Is your corba app under heavy load when this happens, or can it happen
> > with say 2-3 apps running?
> 
> I'm not sure how to define heavy load, but I'd say yes - there were about
> 10 processes (spread across 3 machines) all talking corba to the java app
> with the jdbc connection to postgres.  Two apps were doing block inserts
> while another 8 were doing queries.  I think there were around 100000
> entries added in a 20-25 minute time span, and there would have been
> queries accessing most of those during the same period (the DB acts both
> as an archive and as a cache between an instrument and the processes
> that analyze the instrument's data).

Hmmm, I think you may want to look at using a connection pool, especially
with 100k entries. I've just looked through my Corba books, and they all
seem to use some form of pool, so perhaps that's the assumed best way to
do it.
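The pool being suggested can be sketched minimally. The sketch below is generic
over the resource type so it can be demonstrated without a live server; with
T = java.sql.Connection and a factory that calls DriverManager.getConnection,
it becomes the shape of JDBC connection pool the thread is talking about (all
names here are illustrative, not an existing library):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Minimal check-out/check-in pool: each transaction borrows a
// dedicated resource, so BEGIN/END blocks never interleave, and
// releasing a resource back avoids forking a new backend per
// transaction.
public class SimplePool<T> {
    private final Deque<T> idle = new ArrayDeque<T>();
    private final Supplier<T> factory;

    public SimplePool(Supplier<T> factory) {
        this.factory = factory;
    }

    // Borrow a resource, creating one only when none are idle.
    public synchronized T borrow() {
        return idle.isEmpty() ? factory.get() : idle.pop();
    }

    // Return the resource for reuse once the work is done.
    public synchronized void release(T resource) {
        idle.push(resource);
    }

    public static void main(String[] args) {
        final int[] created = {0};
        SimplePool<String> pool =
            new SimplePool<String>(() -> "conn" + (++created[0]));

        String a = pool.borrow();   // creates conn1
        pool.release(a);
        String b = pool.borrow();   // reuses conn1; nothing new created
        System.out.println(b + " " + created[0]); // conn1 1
    }
}
```

A production pool would also bound the number of resources and validate them
before reuse, but the borrow/release discipline is the essential part.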

Peter

-- 
Peter T Mount peter@retep.org.uk http://www.retep.org.uk
PostgreSQL JDBC Driver http://www.retep.org.uk/postgres/
Java PDF Generator http://www.retep.org.uk/pdf/




Re: JDBC Large ResultSet problem + BadTimeStamp Patch

From
Joseph Shraibman
Date:
Peter Mount wrote:
> 
> On Wed, 11 Oct 2000, Joseph Shraibman wrote:
> 
> [snip]
> 
> > It isn't.  I wish there were online mail archives.  But anyway, the
> > reason it isn't is that if two statements use the same connection, when
> > one of them enters a transaction the other one does too.  So if
> > you do:
> > 1) BEGIN;
> > 2) SELECT blah FROM tablea FOR UPDATE;
> > 3) UPDATE tablea SET blah = newblah;
> >
> > since both statements are using the same connection they ARE NOT LOCKED
> > FROM EACH OTHER, and when one issues an END; or COMMIT; both of them exit
> > their transactions.  That is why I'm using a connection pool now when I
> > have to do locking.  And if Peter wants to use cursors behind the scenes
> > it will be even worse, because the cursors themselves need to be in a
> > BEGIN; END;, and what happens if the user thinks he is in a transaction
> > but the cursor ended it for him a while ago?
> 
> We already have this problem with the two MetaData interfaces. Some of
> those methods issue their own queries, and if they fail while within the
> client's app's transaction all hell can break loose. This is one of the
> reasons why I'm intending that the cursor based ResultSet only gets used
> if the client has requested one.
> 
> > Named transactions would help with this, but the real answer would be
> > to be able to have more than one connection to a backend (maybe
> > tunnelled over one TCP/IP link).
> 
> Named transactions would help with both of these problems. Not knowing
> enough of how the existing transaction mechanism works internally, I'd say
> it's easer to do than tunnelling.
> 

I don't think tunnelling would be that hard; just add a number to say
which connection this query is for. The real reason it won't be done is
that the coders who wrote the backend don't want to share a backend
with more than one connection, to keep a segfault (or other error) from
one connection taking down all the others.  I see the logic in that, but
there is a compromise solution of allowing one client to have more than
one connection (with locks for each connection being separate) over the
same TCP/IP socket, so only one client is talking with one backend
process.

> > Right now each new connection requires forking off another backend,
> > which makes it impractical to connect whenever you have a new
> > transaction to do.
> 
> Yes, that's the last thing we want, especially while we are trying to
> optimise things...
> 
> Peter
> 
> --
> Peter T Mount peter@retep.org.uk http://www.retep.org.uk
> PostgreSQL JDBC Driver http://www.retep.org.uk/postgres/
> Java PDF Generator http://www.retep.org.uk/pdf/


> > > The project name on SourceForge is "Python Interface to PostgreSQL".
> > 
> > How about PIPgSQL or piPgSQL?
> 
> Perhaps Pi2PgSQL, or PySQL_ba

or PyPgSQL?

Do we get a reward if you choose our name? ;)




Save java objects using JDBC

From
Rob Judd
Date:
I'm sorry to bother the list with this question - I know people are
always asking it.  I thought I understood how to do this, but my code
doesn't work.

I am trying to save a java object in a database using the setBytes()
method of PreparedStatement - I don't want to use the large object manager
because I want this to be somewhat portable.

The code below gives me the error:

    Exception: FastPath protocol error: Z :: FastPath protocol error: Z

as soon as I call setBytes().

I thought this only came when you forget to call "setAutoCommit(false)".
Can anyone please tell me what I'm doing wrong.

Thanks a lot, Rob


-------------------- PSQLTest.java -----------------------

import java.sql.*;
import java.io.*;
import java.util.*;

public class PSQLTest {
    public PSQLTest() { }

    public static void main(String[] args) {
        Vector vec = new Vector();
        for (int i = 0; i < 10; ++i)
            vec.addElement(new Integer(i + 5));

        Connection conn = null;
        try {
            Class.forName("postgresql.Driver");
            conn = DriverManager.getConnection(
                "jdbc:postgresql://127.0.0.1:5432/rob", "rob", "");
            conn.setAutoCommit(false);

            byte[] bytes;
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            ObjectOutputStream objOut = new ObjectOutputStream(out);
            objOut.writeObject(vec);
            objOut.flush();
            bytes = out.toByteArray();
            objOut.close();

            PreparedStatement ps = conn.prepareStatement(
                "insert into vectors (name, id) values ( ?, ?)");
            ps.setString(1, "Vector name");
            ps.setBytes(2, bytes);
            ps.executeUpdate();
            ps.close();

            conn.commit();
        } catch (Exception e) {
            System.out.println("Exception: " + e + " :: " + e.getMessage());
            e.printStackTrace();
        }
    }
}



Re: Announcing PgSQL - a Python DB-API 2.0 compliant interface to PostgreSQL

From
Jan Wieck
Date:
Vince Vielhaber wrote:
> On Tue, 10 Oct 2000, The Hermit Hacker wrote:
> 
> > On Wed, 11 Oct 2000, Christopher Sawtell wrote:
> > 
> > > On Wed, 11 Oct 2000, Dnesbitt@encryptix.com wrote:
> > > > > The project name on SourceForge is "Python Interface to PostgreSQL".
> > > > 
> > > > How about PIPgSQL or piPgSQL?
> > > 
> > > Perhaps Pi2PgSQL, or PySQL_ba
> > 
> > PySQL_ba? *grin*
> 
> Pyg ??
   PyLephant


Jan

-- 

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck@Yahoo.com #