Thread: Postgres Pain Points 2: Ruby / Node language drivers
A database is only as good as the ability to use it. Using ORMs with Postgres seems dumb, like having a Ferrari but only being allowed to drive 25 km/h. Really, for getting data in and out of Postgres, SQL is pretty simple. The ORM abstractions take as much effort as simply writing direct SQL statements and convenience functions with straight SQL.

Ruby: The "pg" driver seems to be tweaked to work okay with ActiveRecord, and maybe Sequel, but we are having problems with straight Ruby/Sinatra (see comments above re ORMs). Since we changed to Fedora and Ruby 2.3, we are now having breaking connections and JSONB query problems. This could be from lack of clear docs and examples: we can't find any good examples or tutorials with JSONB. It seems with this driver it is Rails or the highway.

Nodejs: The docs and tutorials for "node-postgres" are a mess. The GitHub readme, examples, and wikis are all different. Then there are the "native" and "pooling" and prepared-statement alternatives. Callback hell is bad enough without lots of different approaches on top of it. Async/await should help (whenever it arrives in Node), but meanwhile it's a painful struggle. Whatever examples exist are for such trivial use that they fall short of helping with real-life cases, like sequential results.

Python: The Postgres-team-supported driver psycopg2 seems to work great (but we really don't want to rewrite lots of code into Python).

It would be great if the Ruby and Node drivers could be brought under the Postgres team umbrella and made as reliable and clearly documented as the Python or JDBC drivers.

--
Support Dept
Tiger Nassau, Inc.
www.tigernassau.com
406-624-9310
> It would be great if the Ruby and Node drivers can be brought under the Postgres team umbrella and make them as reliable and clearly documented as the Python or jdbc drivers.
What makes you say that the Python and JDBC drivers are "under the Postgres team umbrella"?
David J.
On 08/11/2016 10:43 AM, David G. Johnston wrote:
> On Thu, Aug 11, 2016 at 1:13 PM, support-tiger <support@tigernassau.com> wrote:
>> It would be great if the Ruby and Node drivers can be brought under the Postgres team umbrella and make them as reliable and clearly documented as the Python or jdbc drivers.
>
> What makes you say that the Python and JDBC drivers are "under the Postgres team umbrella"?

To some degree, the JDBC driver is (as is ODBC), thus jdbc.postgresql.org. However, the Python drivers are not (and in fact there are at least three of those that are pretty widely used).

--
Command Prompt, Inc. http://the.postgres.company/ +1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.
Unless otherwise stated, opinions are my own.
support-tiger wrote:
> It would be great if the Ruby and Node drivers can be brought under the Postgres team umbrella and make them as reliable and clearly documented as the Python or jdbc drivers.

Sadly, the PostgreSQL development group does not have the manpower to maintain or document client drivers. Excepting libpq (the C library) and ecpg (the embedded C preprocessor), every driver is maintained by external groups. Maybe you can contribute to the effort of maintaining those external drivers yourselves, or at least improve their docs.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Thursday, 11 August 2016 at 19:13:08, support-tiger <support@tigernassau.com> wrote:
> Using ORM's with Postgres seems dumb, like having a Ferrari but only allowed to drive 25 kmh. [...]
> It would be great if the Ruby and Node drivers can be brought under the Postgres team umbrella and make them as reliable and clearly documented as the Python or jdbc drivers.
I can't not comment on this. Saying that ORMs seem dumb, and that working with PG using an ORM does not fly, is a very good recipe for not being taken seriously. Of course you can make an ORM hose your DB, but any developer could just as easily write SQL which does the same. Also, remember that PG is based on volunteer effort: if drivers for some languages are more mature and better supported than others, that's because core PG developers, someone in "the inner circle", or someone else has use for them. Just because language X or framework Y pops up doesn't mean the PG project should maintain drivers for it or support it.
So - yes, it would be great if all the drivers for all languages would be made as reliable and clearly documented as the most supported ones.
--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
On Thu, Aug 11, 2016 at 4:20 PM, Andreas Joseph Krogh <andreas@visena.com> wrote:
> Nodejs: The docs and tutorials for "node-postgres" are a mess. [...]
I strongly disagree with your comment above. It sounds as if you are just making an excuse for not reading them. There are also many fine books that have been written
that cover PostgreSQL for novices and advanced users. No one is going to spoon feed you and there is no such thing as a "Vulcan Mind Meld" to give you instant knowledge.
Like any fine tool, you have to spend some time to learn how to use it properly.
--
Melvin Davidson
I reserve the right to fantasize. Whether or not you
wish to share my fantasy is entirely up to you.
On Thu, Aug 11, 2016 at 10:20 PM, Andreas Joseph Krogh <andreas@visena.com> wrote:
> I cannot not comment on this. Saying that ORM seems dumb, and working with PG using ORM does not fly, is a very good recipe for not being taken seriously.
And yet everyone I have talked to understands that ORMs are pretty problematic as a concept. They make some things easy but they have some pretty massive downsides. ORMs, it is true, do solve some problems, but they usually create many more in the process. The reason is that as much as relations look like collections of objects, they are best organized along very different principles. While we break down our tables based on functional dependencies between data values, we break down our object models based on how we can encapsulate state changes behind consistent interfaces. The latter is dependent on use, while the former far less so.
Of course you *can* use them well. I remember talking about this with one author of a major ORM, and he said that one thing he often does is create views with triggers and then use the ORM against those. This solves the problem above very well. But it still leaves the fact that the database and the application have to share an implicit understanding of an object model, and keeping that in sync as the project grows can be troublesome.
But once you have a non-trivial project, the promise that ORMs are often sold on ('you don't have to know SQL') evaporates and you find that you have to know SQL and the ORM well to get half-way decent performance.
> Of course you can make an ORM hose your DB, but any developer could just as easily write SQL which does the same. Also, remember that PG is based on volunteer effort and if drivers for some languages are more mature and better supported than others, that's because core-pg-developers or someone in "the inner circle", or someone else, has use for them. Just because language X or framework Y pops up doesn't mean the PG-project should maintain drivers for them or support them.
Agreed on this.
> So - yes, it would be great if all the drivers for all languages would be made as reliable and clearly documented as the most supported ones.
--
Best Wishes,
Chris Travers
Efficito: Hosted Accounting and ERP. Robust and Flexible. No vendor lock-in.
On Friday, 12 August 2016 at 05:27:42, Chris Travers <chris.travers@gmail.com> wrote:
> And yet everyone I have talked to understands that ORMs are pretty problematic as a concept. They make some things easy but they have some pretty massive downsides. [...]
> Of course you *can* use them well. [...] But it still leaves the fact that the database and the application have to share an implicit understanding of an object model and keeping that in sync as the project grows can be troublesome.
I don't understand why people bashing ORMs seem to think that once you have an ORM in your project you have to use it for everything. Of course the ORM doesn't free you from using SQL directly where appropriate. IMO ORMs are best used for CRUD, but not for reporting or batch-processing. In a large project you have both, so combining them is, IMO, best.
> But once you have a non-trivial project, the promise that ORMs are often sold on ('you don't have to know SQL') evaporates and you find that you have to know SQL and the ORM well to get half-way decent performance.
I don't believe for a second that having an ORM in a project relieves the developers from knowing SQL.
--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
On Fri, Aug 12, 2016 at 9:59 AM, Andreas Joseph Krogh <andreas@visena.com> wrote:
> I don't understand why people bashing ORMs seem to think that once you have an ORM in your project you have to use it for everything. Of course, the ORM doesn't free you from using SQL directly where appropriate. IMO ORMs are best used for CRUD, but not for reporting or batch-processing.
The problems I mention above occur when doing CRUD via an ORM (at least all ORMs I have worked with). The fundamental problem is that ORMs make for bad data management in most cases, because *either* you structure your database tables around your application's object model *or* you add the complexity of a logical layer structured for your application.
But either way, the database has to have intimate familiarity with the internals of the applications using it. And there isn't an easy way around this.
--
Best Wishes,
Chris Travers
Efficito: Hosted Accounting and ERP. Robust and Flexible. No vendor lock-in.
On Friday, 12 August 2016 at 10:33:19, Chris Travers <chris.travers@gmail.com> wrote:
> The fundamental problem is that ORMs make for bad data management in most cases because *either* you structure your database table structures around your object model of your application *or* you add complexity of a logical level structured for your application.
> But either way, the database has to have intimate familiarity with the internals of the applications using it. And there isn't an easy way around this.
If you don't like your domain-model being very close to your physical DB-model, there's nothing preventing you from having a persistence-model, using the ORM, and mapping that to/from your domain-model. However, I don't see any of these challenges getting easier by throwing the ORM out and having the developers handle everything themselves using SQL directly.
--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
On Fri, Aug 12, 2016 at 10:58 AM, Andreas Joseph Krogh <andreas@visena.com> wrote:
> If you don't like your domain-model to be very close to your physical DB-model, there's nothing preventing you from having a persistence-model, using the ORM, and map that to/from your domain-model. However, I don't see any of these challenges getting easier by throwing the ORM out and having the developers handling everything themselves using SQL directly.
My preference is stored procedures plus service locators, to be honest. It enables a degree of loose coupling and even dynamic discovery that ORMs are generally not well suited to.
But I think the difference may be bigger. ORMs make sense when you want a database for your application. They break down badly when you want an application for your database. Usually I tend to want the latter.
Best Wishes,
Chris Travers
Efficito: Hosted Accounting and ERP. Robust and Flexible. No vendor lock-in.
On Friday, 12 August 2016 at 11:07:08, Chris Travers <chris.travers@gmail.com> wrote:
> [snip] My preference is stored procedures plus service locators, to be honest. It enables a degree of loose coupling and even dynamic discovery that ORMs are generally not well suited to.
> But I think the difference may be bigger. ORMs make sense when you want a database for your application. They break down badly when you want an application for your database. Usually I tend to want the latter.
I'm in the first group, and I think many are when evaluating ORMs or forming an opinion about them. Being in the latter, I better understand your concerns.
--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
On Fri, Aug 12, 2016 at 10:58 AM, Andreas Joseph Krogh <andreas@visena.com> wrote:
> If you don't like your domain-model to be very close to your physical DB-model, there's nothing preventing you from having a persistence-model, using the ORM, and map that to/from your domain-model.
Could you elaborate on this method? Or direct me to docs describing it in more detail? I'm quite interested in this approach, but would like to see what amount of overhead (development and maintenance) this might result in.
On Friday, 12 August 2016 at 11:24:16, Daevor The Devoted <dollien@gmail.com> wrote:
> Could you elaborate on this method? Or direct me to docs describing it in more detail? I'm quite interested in this approach, but would like to see what amount of overhead (development and maintenance) this might result in.
I'm not in favor of this method myself as, IMO, it adds an unnecessary mapping-layer which just slows down development. But it might make sense depending on how much control you have on your DB, domain-model, code, developers etc.
I don't have any pointers describing this separation though; try asking Mr. Google, that's what I'd do.
--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
On Friday, 12 August 2016 at 10:33:19, Chris Travers <chris.travers@gmail.com> wrote:
> My preference is stored procedures plus service locators

I know your work on the former with respect to the financial app you are working on. Would you care to elaborate a little on the latter (service locators)?

Thanks,
Karsten
On Fri, Aug 12, 2016 at 11:24 AM, Daevor The Devoted <dollien@gmail.com> wrote:
> Could you elaborate on this method? Or direct me to docs describing it in more detail? I'm quite interested in this approach, but would like to see what amount of overhead (development and maintenance) this might result in.
There is overhead, of course, but sometimes it cannot be avoided. In a large scientific programming project I helped with, we were using Perl and DBIx::Class, and we had to partition a table because of autovacuum and planner considerations (autovacuum not keeping up on one large table, and the partition key dramatically affecting which indexes were useful). You can't get this to "just work" with an ORM. So we added a mapping layer.
What you do is create a view, and then add your insert/update/delete logic to triggers which allow the view to actually proxy to the other tables properly. Then you use your view as the base relation your ORM hits. Matt Trout had pointed me to that direction some time earlier and it came in really handy.
The same basic approach can be used to create a mapping layer generally.
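A minimal sketch of that view-plus-triggers technique, using hypothetical partition tables (all names are made up for illustration; INSTEAD OF triggers on views require PostgreSQL 9.1 or later):

```sql
-- Hypothetical partitions the application should not see directly.
CREATE TABLE person_2015 (id int PRIMARY KEY, first_name text, last_name text);
CREATE TABLE person_2016 (id int PRIMARY KEY, first_name text, last_name text);

-- The view the ORM treats as its base relation.
CREATE VIEW person AS
SELECT id, first_name, last_name, 2015 AS yr FROM person_2015
UNION ALL
SELECT id, first_name, last_name, 2016 AS yr FROM person_2016;

-- Proxy writes through to the right partition.
CREATE FUNCTION person_write() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    IF NEW.yr = 2015 THEN
        INSERT INTO person_2015 (id, first_name, last_name)
        VALUES (NEW.id, NEW.first_name, NEW.last_name);
    ELSE
        INSERT INTO person_2016 (id, first_name, last_name)
        VALUES (NEW.id, NEW.first_name, NEW.last_name);
    END IF;
    RETURN NEW;
END;
$$;

CREATE TRIGGER person_ins INSTEAD OF INSERT ON person
FOR EACH ROW EXECUTE PROCEDURE person_write();
```

UPDATE and DELETE triggers would be added the same way; the ORM then reads and writes `person` without knowing the partitioning exists.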
But again, if you are primarily worried about development time, then that is more important, usually, than information management as a whole.
--
Best Wishes,
Chris Travers
Efficito: Hosted Accounting and ERP. Robust and Flexible. No vendor lock-in.
On Fri, Aug 12, 2016 at 11:32 AM, Karsten Hilbert <Karsten.Hilbert@gmx.net> wrote:
> > My preference is stored procedures plus service locators
>
> I know your work on the former with respect to the financial app you are working on.
> Would you care to elaborate a little on the latter (service locators)?
Sure. What I prefer to do is to allow for a (cacheable) lookup on the basis of some criteria, either:
1. Function name or
2. Function name and first argument type
This assumes that whichever discovery criteria you are using leads to uniquely identifying a function.
Then from the argument list, I know the names and types of the arguments, and the service locator can map them in. This means:
1. You can expose an API which calls arguments by name rather than just position, and
2. You can add arguments of different types without breaking things as long as it is agreed that unknown arguments are passed in as NULL.
In Oracle, of course, one has edition-based redefinition, which allows similar seamless migration; but in both cases you can slowly extend an API around your data in a way that applications can use without having to upgrade all applications using the db at the same time.
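The (cacheable) lookup behind such a locator can be done against the system catalogs; a sketch, assuming a function named person_save like the one that appears later in this thread:

```sql
-- Discover parameter names and the full signature of a candidate
-- function, so the locator can map object fields to arguments by name.
-- The result is cacheable until the schema changes.
SELECT p.proname,
       p.proargnames,                               -- text[] of argument names
       pg_get_function_arguments(p.oid) AS args     -- "in_id integer, ..." form
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE n.nspname = 'public'
  AND p.proname = 'person_save';
```

For the first-argument-type variant, one would additionally filter on the first element of proargtypes matching the composite type of the object being passed.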
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
--
Best Wishes,
Chris Travers
Efficito: Hosted Accounting and ERP. Robust and Flexible. No vendor lock-in.
On Fri, Aug 12, 2016 at 01:32:33PM +0200, Chris Travers wrote:
> Sure. What I prefer to do is to allow for a (cacheable) lookup on the basis of some criteria [...]

Maybe I am a bit dense. Can you please give an example?

Thanks,
Karsten
--
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346
On Sun, Aug 14, 2016 at 12:35 PM, Karsten Hilbert <Karsten.Hilbert@gmx.net> wrote:
> Maybe I am a bit dense. Can you please give an example?
Ok. Here are two ways of doing this, based on different discovery criteria. The first would be:
CREATE FUNCTION person_save(in_id int, in_first_name text, in_last_name text, in_date_of_birth date)
RETURNS person LANGUAGE ... as $$ ... $$;
Then you have a service locator that says "I have a person object and want to call person_save." It then looks up the function argument names and calls it something like this:
SELECT * FROM person_save(?, ?, ?, ?)
with parameters
$object->id, $object->first_name, $object->last_name, $object->date_of_birth
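A minimal sketch of this first approach in Python (the thread's actual locator is the Perl PGObject::Simple library; the function and class names below are illustrative, not part of any published API). The idea is: discover the function's argument names, e.g. from `pg_proc.proargnames` in the system catalogs, then build the positional call from the object's matching properties, passing NULL for anything the object lacks:

```python
# Illustrative catalog query to discover argument names for a function.
# A real locator would also need to disambiguate overloads (by schema,
# argument count, etc.); this sketch assumes a unique name.
ARG_LOOKUP_SQL = """
SELECT proargnames
  FROM pg_proc
 WHERE proname = %s
"""

def build_call(func_name, arg_names, obj):
    """Map object attributes to positional parameters by argument name;
    any argument the object does not have is passed as NULL (None)."""
    placeholders = ", ".join(["%s"] * len(arg_names))
    sql = "SELECT * FROM {}({})".format(func_name, placeholders)
    params = [getattr(obj, name, None) for name in arg_names]
    return sql, params

class Person:
    """Stand-in application object; deliberately missing date_of_birth."""
    def __init__(self):
        self.id = 1
        self.first_name = "John"
        self.last_name = "Doe"

sql, params = build_call(
    "person_save",
    ["id", "first_name", "last_name", "date_of_birth"],
    Person())
# sql    -> "SELECT * FROM person_save(%s, %s, %s, %s)"
# params -> [1, "John", "Doe", None]
```

Because the mapping is by name, adding a new argument to `person_save` does not break existing callers: older objects simply supply NULL for it.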
The second approach is to tie to the first argument type (think 'self' in Python).
In this case, we'd have a function defined like this:
CREATE FUNCTION save(person) RETURNS person LANGUAGE ... AS $$ ...$$;
Then we have a different service locator that maps this to the save function as:
SELECT * FROM save(?::person);
with an argument that is basically:
serialize_to_record_form($object)
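What `serialize_to_record_form` has to produce is a PostgreSQL composite (row) literal. A rough Python sketch, assuming only the simple quoting cases (a real implementation must follow the full composite-value I/O syntax, or let the driver do the serialization):

```python
def serialize_to_record_form(values):
    """Render a sequence of field values as a composite literal
    like (1,John,Doe,) suitable for casting with ?::person."""
    def quote(v):
        if v is None:
            return ""          # an empty field inside a row literal means NULL
        s = str(v)
        # Fields containing delimiters, quotes, backslashes or spaces
        # must be double-quoted, with embedded " and \ escaped.
        if any(c in s for c in '(),"\\ '):
            s = '"' + s.replace('\\', '\\\\').replace('"', '\\"') + '"'
        return s
    return "(" + ",".join(quote(v) for v in values) + ")"

row = serialize_to_record_form([1, "John", "Doe", None])
# row -> '(1,John,Doe,)'  -- trailing empty field is a NULL date_of_birth
```

The locator then binds that single string as the `?::person` parameter, and Postgres parses it back into the composite type.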
Of course that's just the start. To make this really usable you have to add some additional functionality but that should be enough to describe the process.
Thanks,
Karsten
--
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346
--
Best Wishes,
Chris Travers
Efficito: Hosted Accounting and ERP. Robust and Flexible. No vendor lock-in.
Hello Chris,
I am getting closer but ...
> > > Sure. What I prefer to do is to allow for a (cacheable) lookup on the
> > > basis of some criteria, either:
> > > 1. Function name or
> > > 2. Function name and first argument type
> > >
> > > This assumes that whichever discovery criteria you are using leads to
> > > uniquely identifying a function.
> > >
> > > Then from the argument list, I know the names and types of the arguments,
> > > and the service locator can map them in. This means:
> > >
> > > 1. You can expose an API which calls arguments by name rather than just
> > > position, and
> > > 2. You can add arguments of different types without breaking things as
> > > long as it is agreed that unknown arguments are passed in as NULL.
>
> Ok. Two ways of doing this based on different discovery criteria. The
> first would be:
>
> CREATE FUNCTION person_save(in_id int, in_first_name text, in_last_name
> text, in_date_of_birth date)
> RETURNS person LANGUAGE ... as $$ ... $$;
>
> Then you have a service locator
Which is what running where ?
> that says
How ?
Thanks,
Karsten
> "I have a person object and want to call person_save." It then looks up the function argument names and
> calls it something like this:
>
> SELECT * FROM person_save(?, ?, ?, ?)
>
> with parameters
> $object->id, $object->first_name, $object->last_name, $object->date_of_birth
--
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346
On Sun, Aug 14, 2016 at 3:42 PM, Karsten Hilbert <Karsten.Hilbert@gmx.net> wrote:
Hello Chris,
I am getting closer but ...
> > > Sure. What I prefer to do is to allow for a (cacheable) lookup on the
> > > basis of some criteria, either:
> > > 1. Function name or
> > > 2. Function name and first argument type
> > >
> > > This assumes that whichever discovery criteria you are using leads to
> > > uniquely identifying a function.
> > >
> > > Then from the argument list, I know the names and types of the arguments,
> > > and the service locator can map them in. This means:
> > >
> > > 1. You can expose an API which calls arguments by name rather than just
> > > position, and
> > > 2. You can add arguments of different types without breaking things as
> > > long as it is agreed that unknown arguments are passed in as NULL.
>
> Ok. Two ways of doing this based on different discovery criteria.. The
> first would be:
>
> CREATE FUNCTION person_save(in_id int, in_first_name text, in_last_name
> text, in_date_of_birth date)
> RETURNS person LANGUAGE ... as $$ ... $$;
>
> Then you have a service locator
Which is what running where ?
> that says
How ?
One example of such a service locator is
http://search.cpan.org/dist/PGObject-Simple/lib/PGObject/Simple.pm
It runs as a library which helps the program decide how to do the call.
Currently it looks in the system catalogs but you still have the ordinal syntax too.
One minor issue currently is that object properties override named arguments when it should probably be the other way around.
I have a composite type mapper also on CPAN but it is far less mature and the documentation is not really very good yet.
The basic approach is:
1. Look up the argument names
2. Strip any in_ off the beginning (in_ being a convention used to avoid name collision with underlying columns)
3. Map those back in as named arguments or object properties.
4. For any unknown inputs, supply NULL (of unknown type).
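The four steps above can be sketched in a few lines of Python (the function name `map_arguments` is hypothetical; PGObject::Simple does the equivalent in Perl):

```python
def map_arguments(proargnames, obj_props):
    """Given argument names discovered from pg_proc.proargnames and a dict
    of object properties, return the positional parameter list."""
    params = []
    for raw in proargnames:
        # Step 2: strip the in_ naming convention to recover the property name.
        name = raw[3:] if raw.startswith("in_") else raw
        # Steps 3-4: map the property back in, or NULL (None) if unknown.
        params.append(obj_props.get(name))
    return params

params = map_arguments(
    ["in_id", "in_first_name", "in_last_name", "in_date_of_birth"],
    {"id": 1, "first_name": "John", "last_name": "Doe"})
# params -> [1, 'John', 'Doe', None]
```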
Thanks,
Karsten
> "I have a person object and want to call person_save." It then looks up the function argument names and
> calls it something like this:
>
> SELECT * FROM person_save(?, ?, ?, ?)
>
> with parameters
> $object->id, $object->first_name, $object->last_name, $object->date_of_birth
--
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346
--
Best Wishes,
Chris Travers
Efficito: Hosted Accounting and ERP. Robust and Flexible. No vendor lock-in.
On Sun, Aug 14, 2016 at 04:02:19PM +0200, Chris Travers wrote:
> One example of such a service locator is
> http://search.cpan.org/dist/PGObject-Simple/lib/PGObject/Simple.pm
>
> It runs as a library which helps the program decide how to do the call.
> Currently it looks in the system catalogs but you still have the ordinal
> syntax too.
>
> One minor issue currently is that object properties override named
> arguments when it should probably be the other way around.
Aha, now I understand.
Thank you,
Karsten
--
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346