Re: A new JDBC driver... - Mailing list pgsql-jdbc

From Dave Cramer
Subject Re: A new JDBC driver...
Date
Msg-id CADK3HH+ejLmf5zM8QEw+qKPmRS9wMFb0gSuWCNkPO+1EjLShaQ@mail.gmail.com
In response to Re: A new JDBC driver...  (Thor Michael Støre <thormichael@gmail.com>)
Responses Re: A new JDBC driver...  (Kevin Wooten <kdubb@me.com>)
List pgsql-jdbc
Kevin,

While I haven't looked at your driver in detail, it occurs to me that since the FE/BE protocol is broken out in the current driver, would it be possible to use your protocol implementation in the current driver?

Dave Cramer

dave.cramer(at)credativ(dot)ca
http://www.credativ.ca


On Tue, Mar 12, 2013 at 6:41 PM, Thor Michael Støre <thormichael@gmail.com> wrote:
Hello,

From: Kevin Wooten <kdubb@me.com>
Subject: A new JDBC driver...
Date: 12. mars 2013 02:19:11 CET


About 10 days ago I set out to fix the current driver's support for composite and array objects. Currently they are only ever retrieved as encoded strings, because the TEXT format is used for those types, and parsing those encoded strings seemed just short of a nightmare. Ambition got the better of me and before I knew it I had a "working" implementation of the entire FE/BE protocol (save for Copy In/Out).

Hm, "nightmare" might be a stretch, though I see your point. I recently wrote a parser for text representations of arrays and composites of the basic types for my own little pet project, which is a kind of MyBatis-like thing that is highly Postgres-specific. It wasn't much work, but if you're going to support "everything" and do it in the JDBC layer, I guess going the binary route would be the better bet.

The part of my implementation that just deals with parsing the text format isn't much to gawk at, but just for the record I've got it here:

And yes, I'm sure it chokes on lots of things.
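[For context, a minimal sketch of what parsing that text representation involves; the class and method names are my own for illustration and come from neither driver. It handles one-dimensional arrays with quoting, backslash escapes, and NULL, and surely chokes on plenty of other cases:]

```java
import java.util.ArrayList;
import java.util.List;

// Minimal parser for the text representation of a one-dimensional
// Postgres array, e.g. {1,NULL,"a \"quoted\" value"}.
public class PgTextArrayParser {
    public static List<String> parse(String s) {
        if (s.length() < 2 || s.charAt(0) != '{' || s.charAt(s.length() - 1) != '}')
            throw new IllegalArgumentException("not an array literal: " + s);
        List<String> out = new ArrayList<>();
        int i = 1, end = s.length() - 1;
        if (i == end) return out;                  // empty array "{}"
        while (i < end) {
            StringBuilder elem = new StringBuilder();
            boolean quoted = false;
            if (s.charAt(i) == '"') {              // quoted element
                quoted = true;
                i++;
                while (s.charAt(i) != '"') {
                    if (s.charAt(i) == '\\') i++;  // backslash escape
                    elem.append(s.charAt(i++));
                }
                i++;                               // skip closing quote
            } else {                               // unquoted element
                while (i < end && s.charAt(i) != ',')
                    elem.append(s.charAt(i++));
            }
            String v = elem.toString();
            out.add(!quoted && v.equals("NULL") ? null : v);
            if (i < end && s.charAt(i) == ',') i++;
        }
        return out;
    }
}
```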

* Can decode any recognized type to a Java object (this includes any imaginable composite or array type)
* Connection.setTypeMap and ResultSet.get(int idx, Map) are both fully supported
* Requests for composite objects that have no custom mapping are returned as HashMap
* Arrays can be decoded as a List, Map or native array (e.g. Object[], int[])
* As an extension it can decode whole rows into POJOs as well (acting a tiny bit like MyBatis)
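[To illustrate the "composite objects with no custom mapping come back as a HashMap" point, here is a toy decoder of my own; names are hypothetical and the attribute names, which a real driver would pull from pg_attribute, are simply passed in. The naive split only covers the unquoted case:]

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: a composite value with no registered custom mapping becomes a
// Map of attribute name -> text value.
public class CompositeToMap {
    public static Map<String, String> decode(String[] attNames, String literal) {
        if (!literal.startsWith("(") || !literal.endsWith(")"))
            throw new IllegalArgumentException("not a composite literal");
        // Naive split: does not handle quoted commas, just the simple case.
        String[] values = literal.substring(1, literal.length() - 1).split(",", -1);
        Map<String, String> row = new LinkedHashMap<>();
        for (int i = 0; i < attNames.length; i++)
            row.put(attNames[i], values[i].isEmpty() ? null : values[i]);
        return row;
    }
}
```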

Right, that's the sort of thing I've been working at too, but since there wasn't any such support in the PgJDBC driver I went with getting the text representation with getString and parsing it.

* Asynchronous I/O engine provided by Netty
* All connections share a single group of worker threads
* LISTEN/NOTIFY and notifications can come through asynchronously
* Netty has a great system for managing buffers and reading/writing messages that shows increased speed
* Performance wasn't a goal of this project but it's a nice side effect
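[The shared-worker-group idea above can be sketched without Netty; the following is my own plain-JDK illustration, not code from the driver. All connections would register listeners on one hub, and a small shared pool delivers NOTIFY payloads asynchronously:]

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// Sketch of the shared-worker idea: connections register listeners here,
// and one small thread pool delivers NOTIFY payloads asynchronously.
public class NotificationHub {
    private final ExecutorService workers = Executors.newFixedThreadPool(2);
    private final Map<String, List<Consumer<String>>> listeners = new ConcurrentHashMap<>();

    public void listen(String channel, Consumer<String> callback) {
        listeners.computeIfAbsent(channel, c -> new CopyOnWriteArrayList<>()).add(callback);
    }

    // Called from the protocol reader when a NotificationResponse arrives.
    public void notify(String channel, String payload) {
        for (Consumer<String> cb : listeners.getOrDefault(channel, List.of()))
            workers.submit(() -> cb.accept(payload));
    }

    public void shutdown() throws InterruptedException {
        workers.shutdown();
        workers.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```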

That sounds great. When I poked around the driver trying to figure out how to retrieve arrays of complex types efficiently, I did wonder about performance if they got very large.

BINARY SUPPORT
As outlined above I started the project to support decoding of Composite and Array types.  To accomplish this I download "pg_type", "pg_attribute" and "pg_proc" to the client upon connection.  I then create a type registry that holds all the required details of all the types.  Procedures, for both TEXT and BINARY protocols, are looked up and matched by name (e.g. "bool_send", "money_recv", etc) by a list of "Procedure Providers".  When a DataRow message is received it looks up the type in the registry and calls the appropriate TEXT or BINARY decoder to decode the row.  When sending parameter data the type is located and the it's encoder is called to encode the data.

Right, you do all that work at runtime upon initialization, which would be the way to go to have it all done generically in the driver and accessible via the JDBC methods. Before settling down and writing a parser for the text format I quickly looked inside PgJDBC to see what it would take to have it do something like this, but I realized it would be a lot of work long before I saw a solution like the one you've implemented. In the end, for my own project, I did all that work outside of the JDBC driver by generating a static factory method for each type that the application in question actually uses, which fits the "zero reflection and full compile-time metaprogramming" approach my project takes.
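[To make the generated-factory approach concrete, here is a hypothetical example of what such generated output could look like; the class, its fields, and the Object[] row parameter are all invented for illustration (real generated code would more likely read from a ResultSet):]

```java
// Hypothetical output of such a generator: a plain value class plus a
// static factory doing straight-line assignment, with no reflection.
public class Person {
    public final int id;
    public final String name;

    private Person(int id, String name) { this.id = id; this.name = name; }

    // Generated per type: column positions and casts are fixed at
    // code-generation time from the query's known result shape.
    public static Person fromRow(Object[] row) {
        return new Person((Integer) row[0], (String) row[1]);
    }
}
```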

Should I continue on this path of a new driver and see if people join or should I take what I have learned and try to refit it into the current code?

While I can't comment on this directly, I'd like to say that I for one would appreciate either of those outcomes; as long as this doesn't fizzle out I'd be quite content. At the moment it's a non-starter for me to try out on my pet project, because that relies heavily on the JDBC metadata functions (which I'm not expecting will come along any time soon either, don't worry). Still, it's very good to hear that someone is putting effort into the areas of Postgres and JDBC I've been looking at, even if that effort may take a while to bear fruit. I'm not sure how to help either, seeing as I don't have much to test it against, and I seem to have only vague ideas in areas where you seem to have solutions.

Thanks for the effort either way,
Thor Michael
