Thread: Rationale for aversion to the central database?
I am a Rails developer at a medium-large size company. I’ve mostly worked at smaller companies. I’ve some exposure to other web development communities.
When it comes to databases, I have universally encountered the attitude that one should treat the database as a dumb data bucket. There is a *very* strong aversion to putting much of any business logic in the database. I encounter substantial aversion to having multiple applications access one database, or even to the reverse: all abstraction should be at the application layer.
My best theory is that these communities developed at a time when Windows was more dominant, and just generally it was *significantly* easier to use MySQL than Postgres for many, particularly new, developers. And it is pretty reasonable to adopt an aversion to sophisticated use of the database in that case.
This attitude has just continued to today, even as many of them have switched to Postgres.
This is only a hypothesis. I am now officially researching the issue. I would be grateful for any wisdom from this community.
Aside: it is rare to find a situation in life or anywhere where one widely adopted thing is worse in *every way* than another thing, but this certainly was and largely still continues to be the case when one compares MySQL and Postgres. So why do folks continue to use MySQL? I find this mystifying.
Let's see... There are two major issues with writing business logic in the database:

1. Portability. Being tied to a single database engine is not always a good idea. When you write business logic in the database, you have to write and maintain your stored procedures for every database engine you want to support. That can be really complicated, and will surely take a lot of time, as the procedural languages of different databases are very different from each other. And it's permanent: every time you change a stored procedure, you must make that change for every supported database.

2. Performance. I still don't have this clear, but, presumably, PL/pgSQL (among others) may not have the same performance as Java, Ruby, Python, C++, or other programming languages. Also, when your application runs outside the database server, having business logic in the database makes your app use more resources on the database server, slowing things down for all other operations. This goes against separating database and application.

However, there are some cases when you may want or need business logic in the database: when you need to fetch large volumes of data to produce a report. This is the case for some accounting reports in complex ERPs. The big advantage stored procedures have is that they run INSIDE the database, so there is no TCP/IP overhead and no network latency when the stored procedure makes a large query. Even running on the same host, fetching large volumes of data will always be faster from a stored procedure.

That considerable advantage aside, I still try to avoid business logic in the database. In the very specific cases where I need to take advantage of it, I keep it as simple as possible, closer to data collection than business logic, so the application receives processed or summarized data and processes it as needed.

Regards,

Alvaro Aguayo
Operations Manager
Open Comb Systems E.I.R.L.
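The "summarized data" point above can be shown without any network in the picture. Here is a minimal sketch, using Python's sqlite3 standing in for a real PostgreSQL server and an invented orders table, of letting the database aggregate instead of shipping every row to the application:

```python
import sqlite3

# In-memory database standing in for a real server; the table is invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders (amount) VALUES (?)",
                 [(a,) for a in (10.0, 25.5, 7.25, 100.0)])

# Application-side aggregation: every row crosses the driver boundary.
rows = conn.execute("SELECT amount FROM orders").fetchall()
app_total = sum(amount for (amount,) in rows)

# Database-side aggregation: a single summary row comes back.
(db_total,) = conn.execute("SELECT sum(amount) FROM orders").fetchone()

assert app_total == db_total == 142.75
```

Over a real client/server connection the second form also avoids transferring the whole table, which is the latency argument above in miniature.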
Office: (+51-1) 3377813 | Mobile: (+51) 995540103 | (+51) 954183248
Web: www.ocs.pe

----- Original Message -----
From: "Guyren Howe" <guyren@gmail.com>
To: "PostgreSql-general" <pgsql-general@postgresql.org>
Sent: Sunday, 8 April, 2018 16:39:49
Subject: Rationale for aversion to the central database?
1. With a microservice-based architecture these days, it is difficult to justify putting all logic in a central database, since a database can only scale up, not out. Business logic in something like Spark can make a claim to being a scale-out solution.

2. Every RDBMS has a non-portable stored procedure language, making migration a pain.
On 04/08/2018 02:39 PM, Guyren Howe wrote:

In general I see it as a case of people working where they are comfortable. Folks who understand and like SQL do the heavy lifting there and use application logic just to work with the output of the database business logic. Folks who are comfortable with a language other than SQL use that language for the business logic and see the database as just the dumb data store you refer to.

The rise of frameworks over databases has also contributed to this, in my opinion. Mostly because they encourage the notion that there is such a thing as universal SQL that operates independently of the underlying database. While that is possible, it usually leads to a very simple SQL model that can work over multiple database engines. So again you end up with the database as a data bucket.

--
Adrian Klaver
adrian.klaver@aklaver.com
On April 8, 2018 02:40:46 pm PDT, "Guyren Howe" <guyren@gmail.com> wrote:
===============
Hi there. This issue is close to my heart and I'm with you. I am however very comfortable with using psql and PL/pgSQL and I am very opinionated.
I feel *very* strongly that a database that actually matters and where RI is critical, i.e., any PG db I handle, should make sense on its own and be *highly* usable on its own. It should not be dependent on some particular external application code to use it or make sense of things. It follows that I think non-intuitive exceptions/gotchas should be made *clear* at the db level, likely using functions to encapsulate that information.
Sure, PL/pgSQL may possibly be slow at some things like doing lots of bigint math, but I would probably use C and ECPG for the appropriate cases.
Few programmers these days know how fast db tasks can be, because they are used to working with relatively slow tools and frameworks. (Yes, typical Python.)
I am also highly mystified by the dumbstore approach and frankly, I think that folks should KNOW their tools better. Not knowing how to use your database effectively typically results in unnecessary and often very convoluted application code, from my experience.
I keep hearing about db portability, but I have yet to see a case where db logic was the issue. To be honest, though, I haven't seen many migrations at all. Why? Because I think they rarely ever happen. If I had to do one, I sure as heck hope the db would be "clean" and understandable without having to review some probably awful application code. Why would anyone migrate *away* from PG anyway? :)
One advantage to using logic and functions in the db is that you can fix things immediately without having to make new application builds. That in itself is a huge advantage, IMO.
Cheers,
-g
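The "fix things without a new application build" point above is easy to demonstrate: the application keeps issuing the same query against a database-side view, and only the view definition changes. A minimal sketch using Python's sqlite3 (the table and view names are invented; in PostgreSQL you would use CREATE OR REPLACE VIEW):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prices (item TEXT, cents INTEGER)")
conn.execute("INSERT INTO prices VALUES ('widget', 250)")

# v1 of the database-side logic: prices reported as-is.
conn.execute("CREATE VIEW item_prices AS SELECT item, cents FROM prices")

def app_query():
    # The "application" only ever knows about the view.
    return conn.execute("SELECT item, cents FROM item_prices").fetchall()

before = app_query()

# Fix deployed in the database alone: apply a 10% discount.
conn.execute("DROP VIEW item_prices")
conn.execute(
    "CREATE VIEW item_prices AS SELECT item, cents * 9 / 10 AS cents FROM prices")

after = app_query()
assert before == [("widget", 250)] and after == [("widget", 225)]
```

The application code (`app_query`) is untouched between the two results; only the database object was redefined.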
> I am however very comfortable with using psql and PL/pgSQL and I am very opinionated.

Nothing wrong with this approach, and it may very well work 90% of the time. Until ... a day comes when you need to migrate out of PG to another RDBMS. Good luck at that time.
Greetings,

* Alvaro Aguayo Garcia-Rada (aaguayo@opensysperu.com) wrote:
> 1. Portability. Being tied to a single database engine is not always a good idea. When you write business logic in the database, you have to write and maintain your stored procedures for every database engine you want to support. That can be really complicated, and will surely take a lot of time, as the procedural languages of different databases are very different from each other. And it's permanent: every time you change a stored procedure, you must make that change for every supported database.

The portability claim tends to be a false one, and often, when realized, results in a solution where you aren't using the database for anything complicated and would be better off with a much simpler data store. You also don't actually offer any justification for the claim that being tied to a single database engine is not always a good idea: why is that? With commercial databases it tends to be because you are at the behest of some very large commercial company, but that isn't an issue with PostgreSQL. The next argument may be that the project might go away and force a move to another database, but PG's track record is hard to beat in this area, given its history and the number of people working to make it better and keep it maintained year after year.

Ultimately, you really should be thinking of the database as the language of your data. You wouldn't write your app in multiple different programming languages, would you? What if Python suddenly goes away as a language, or Java does? Would you write every piece of software twice, so that you can flip over to using Python instead of Java on a whim?

> 2. Performance. I still don't have it clear, but, presumably, PL/pgSQL (among others) may not have the same performance as Java, Ruby, Python, C++, or any other programming language.
> Also, when your application runs outside of the database server, having business logic in the database will make your app use more resources from the database server, making it slow for all other operations. This goes against separating database and application.

No, PL/pgSQL doesn't have the same performance characteristics as Java, Ruby, Python, C++, or many other languages, but that's why it isn't the only language supported in PostgreSQL. You can write back-end functions in another of those languages, plus quite a few others, including JavaScript, R, and Perl.

As for the question of whether it'll actually result in the database server being more taxed: that really depends. Aggregates are a great example. Is it going to be cheaper for the database to run sum() across a data set and give you the result, or to take every row from that table and ship it to a client? There are certainly examples which go the other way too, of course, but it's really something to think about on an individual basis, not to make a broad-stroke decision about. In particular, when you're arguing that you'll get better performance by moving the code away from the data, that tends not to be the case.

> However, there are some cases when you may want or need to use business logic in the database: when you need to fetch large volumes of data to produce some report. This is the case for some accounting reports in complex ERPs. The only advantage stored procedures have is that they run INSIDE the database, so there's no TCP/IP overhead and no network latency when the stored procedure makes a large query. Even running on the same host, fetching large volumes of data will always be faster from a stored procedure.

This is what I'm getting at above, but I would caution that looping over a table in a stored procedure is generally much less performant than finding a way to express what you want in SQL.
I'm afraid the other advantages of doing more in the database aren't really being considered in your arguments above either: things like having constraints all checked in one place, regardless of the application, and the ability to define interfaces which multiple applications can operate against, knowing that they're all going to get back the same, consistent results from the database, since it's the same code underneath. Some of that can be done by sharing code between the applications, of course, but there's a great deal more risk there (what about when one application needs to change something in that shared code, so they essentially fork it? Or the applications have to be upgraded at different times, or a variety of other situations which could lead to that common code diverging, or even when the applications aren't written in the same language...).

Just some food for thought. Thanks!

Stephen
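The "constraints checked in one place" point can be sketched in a few lines: any client that reaches the table hits the same rule, with no shared application code involved. A minimal sketch with Python's sqlite3 standing in for PostgreSQL and an invented accounts table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The rule lives with the data: no application can store a negative balance.
conn.execute("""
    CREATE TABLE accounts (
        id      INTEGER PRIMARY KEY,
        balance INTEGER NOT NULL CHECK (balance >= 0)
    )
""")

conn.execute("INSERT INTO accounts (balance) VALUES (100)")  # accepted

try:
    conn.execute("INSERT INTO accounts (balance) VALUES (-5)")  # rejected
    violated = False
except sqlite3.IntegrityError:
    violated = True

assert violated
```

Whether the client is written in Ruby, Java, or psql itself, the constraint is enforced identically, which is the consistency argument above.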
Greetings,

* Ravi Krishna (sravikrishna3@gmail.com) wrote:
> > I am however very comfortable with using psql and PL/pgSQL and I am very opinionated.
>
> Nothing wrong with this approach and it may very well work 90% of the time. Until ... a day comes when you need to migrate out of PG to another RDBMS. Good luck at that time.

Oh, don't worry, people do that. What I particularly enjoy are the stories (of which there are a number now...) where people moved away from PG for some reason or another, then to another database, to another, and another, and finally back to PG again, much the wiser for it but also rather battle-worn. :)

Thanks!

Stephen
On 9 April 2018 at 07:39, Guyren Howe <guyren@gmail.com> wrote:
It is interesting looking at many of the responses to this thread. I see a lot at each extreme: either put lots of stuff in the database, or use the database as just a 'dumb' store and put everything in the application code.
I think the real solution is somewhere in the middle. I've lost count of the number of applications where the application code jumps through all sorts of hoops to do basic data operations which would be far better handled in the database and can easily be done using just ANSI SQL (so are portable). It drives me crazy when people tell me the database is slow when they are doing 'select * from table' and then filtering and sorting the data in their application. Applications should take advantage of what the database does well. Unfortunately, I see far too many developers who are uncomfortable with SQL and don't know how to structure their queries efficiently (lots of nested subqueries, Cartesian joins, etc.).
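The 'select * from table' complaint above, side by side with letting the database do the work, can be sketched briefly. Both forms produce the same answer, but the second lets the database (and, on a real table, its indexes) do the filtering and sorting. Python's sqlite3 stands in here, and the events table is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (name TEXT, severity INTEGER)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [("boot", 1), ("disk_full", 5), ("login", 2), ("oom", 5)])

# Anti-pattern: drag everything over, then filter and sort in the application.
everything = conn.execute("SELECT name, severity FROM events").fetchall()
app_side = sorted((r for r in everything if r[1] >= 5), key=lambda r: r[0])

# Let the database do it; an index on severity would also avoid a full scan.
db_side = conn.execute(
    "SELECT name, severity FROM events WHERE severity >= 5 ORDER BY name"
).fetchall()

assert app_side == db_side == [("disk_full", 5), ("oom", 5)]
```

On four rows the difference is invisible; on four million rows the first form ships the whole table to the client before discarding most of it.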
At the other extreme are those who tend to put almost everything in the database, including business policy and business 'rules' which are probably better categorised as current business strategy. First, I think it is nearly always a mistake to try to enforce business policy with technology. Policies change too often and should be dealt with via administrative measures. Technology can certainly be used to raise alerts regarding policy breaches, but should not be used to enforce policies. Likewise, some business rules are more akin to strategies than actual static rules and can change with little notice, rhyme or reason. These probably should not be 'hard coded' into the database. Other rules are more stable and unlikely ever to change, and are likely good candidates for being encoded in the database as either functions or constraints.
I do feel that often the big problem is with management who fail to understand the time and effort needed to develop a good data model. Developers are put under pressure to deliver functionality, and as long as it looks correct at the interface level, all is good. Little thought is put into long-term maintenance or performance. From a developer perspective, time spent becoming an expert in React, Angular, Node, Python etc. is probably going to earn them more bonus points than time spent developing skills in defining good data models or understanding the power and functionality of the underlying database engine. Of course, this tends to be short-sighted, as a good data model will make it easier to add to or enhance an application, and understanding your database system will make changes and enhancements less daunting.
For me, the sign of a good developer is one who is able to get the balance right. They understand the strengths and weaknesses of ALL the components involved and are able to select the technology mix which suits the problem domain and are able to get the right balance between business responsiveness to change and long term maintenance/viability. Unfortunately, such developers are rare, so it will usually mean there are a team of people with different skills and what will matter is how well they are able to work together as a team and come up with an architecture which satisfies the business requirements.
--
regards,
Tim
--
Tim Cross

Tim, I'm sorry if I sound like a cheerleader, but boy did you nail this. I would basically say exactly the same thing, just not as well.
On Sun, Apr 8, 2018 at 9:37 PM, Tim Cross <theophilusx@gmail.com> wrote:
On 2018-04-08 14:39:49 -0700, Guyren Howe wrote:
> When it comes to databases, I have universally encountered the attitude that one should treat the database as a dumb data bucket. There is a *very* strong aversion to putting much of any business logic in the database.

Like Tim, I would like to make a case for finding a middle ground. The sweet spot may be different from project to project, and it is certainly different for different teams or companies. I'll try to outline what I like to solve within the database and what I like to put into the application, and why.

Everything which is easy to specify *declaratively* goes into the database: tables (of course), views, constraints, etc. These are basically type declarations for the data which is stored in the database, so they belong there. Simple stored procedures and triggers go into the database, too. They should make using the database easier and should not surprise the user. This is mostly optional.

Anything I would consider "application logic" (when the user does X, the system should do Y) goes into the application. There is often a distinction between a backend (or data access layer) and a frontend. The backend could theoretically be within the database and be called via stored procedures, but I find it much easier to test and debug code outside of the database (this may just be my lack of knowledge about available tools).

In my applications I use SQL heavily. RDBMSs are good at processing queries, so use them for that. If all you want is a key-value store, don't use PostgreSQL. I'm not very fond of ORMs. I know what I want to do and can express it in SQL.
An ORM makes me translate that into a different (and usually inferior) query language, which is then translated back into SQL. That doesn't make things easier for me. > My best theory is that these communities developed at a time when Windows was > more dominant, and just generally it was *significantly* easier to use MySQL > than Postgres for many, particularly new, developers. I come from Oracle, not MySQL, But I have also used MySQL, and I guess the very wide gap in capabilities between Oracle and MySQL made me cautious about putting too much into the database. There is also the expectation that you should be able to use a different database engine (SQL is a standard, right?) just like you should be able to use a different C compiler, but in practice that never works. And of course I wasn't very impressed with PL/SQL. (PostgreSQL gives you a much wider range of languages for stored procedures than Oracle, but PL/PerlU still isn't quite the same as Perl (And I suspect it's the same for Python). hp -- _ | Peter J. Holzer | we build much bigger, better disasters now |_|_) | | because we have much more sophisticated | | | hjp@hjp.at | management tools. __/ | http://www.hjp.at/ | -- Ross Anderson <https://www.edge.org/>
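[Editor's note: the split Holzer describes above, declarative rules (tables, constraints, views) in the database with application logic outside, can be illustrated with a small self-contained sketch. SQLite via Python's standard library stands in for PostgreSQL so the example runs anywhere; all table and column names are invented for the illustration, and in Postgres the same DDL applies with CHECK constraints and views.]

```python
import sqlite3

# Declarative rules live in the database itself: the schema states what
# valid data looks like, and the engine enforces it for every client.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (
        id      INTEGER PRIMARY KEY,
        balance REAL NOT NULL CHECK (balance >= 0)  -- a type-like rule on the data
    );
    -- A view is a declarative, reusable query definition.
    CREATE VIEW positive_accounts AS
        SELECT id, balance FROM accounts WHERE balance > 0;
""")
conn.execute("INSERT INTO accounts (id, balance) VALUES (1, 100.0)")

# The database rejects data that violates the declared rules, no matter
# which application (or bug) tried to write it.
try:
    conn.execute("INSERT INTO accounts (id, balance) VALUES (2, -5.0)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)

rows = conn.execute("SELECT id, balance FROM positive_accounts").fetchall()
print(rows)  # [(1, 100.0)]
```

The "application logic" half (when the user does X, do Y) would then live in ordinary application code that merely reads and writes through this declared schema.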
Maintaining your database logic in version control and versioning the deployed code can be a bit problematic.
Conversely, providing a standardized pgsql module through which data is updated and retrieved can help standardize access logic across multiple languages and libraries.
And I concur that database portability is something people like to discuss, but it rarely occurs.
Portability is important for general ORM tools, less so for corporate projects (there are always exceptions).
Like any tool, it needs to be used wisely.
I've worked on a project that has tens of thousands of lines of business logic in the database, and it makes perfect sense for that environment.
--Ray
On Mon, Apr 9, 2018 at 9:45 AM, Ray Cote <rgacote@appropriatesolutions.com> wrote:
While this discussion is very interesting, it is important to realize that ultimately it is the needs and policy of the company that decide how
the database is used. The primary purpose of a DBA is to install the proper security, protect the integrity of the data, and maintain performance.
Determining whether it is better to place business logic in the database or the application is strictly a case-by-case decision.
--
Melvin Davidson
Maj. Database & Exploration Specialist
Universe Exploration Command – UXC
Employment by invitation only!
On 2018-04-08 19:39:43 -0400, Stephen Frost wrote:
> * Alvaro Aguayo Garcia-Rada (aaguayo@opensysperu.com) wrote:
> > 1. Portability. Being tied to a single database engine is not always
> > a good idea. When you write business logic in database, you have to
> > write and maintain your store procedures for every database engine
> > you want to support. That can be really complicated, and will surely
> > take pretty much time, as programming languages for different
> > databases are very different from each other. And it's permanent:
> > Every time you make a change to a store procedure, you must make
> > that change for every supported database.
>
> The portability claim tends to be both a false one and often, when
> realized, results in a solution where you aren't using the database for
> anything complicated and you'd be better off with a much simpler data
> store. You also don't actually offer any justification for the claim
> that being tied to a single database engine is not always a good idea -
> why is that? With commercial databases it tends to be because you are
> at the behest of some very large commercial company - but that isn't an
> issue with PostgreSQL.

Many companies don't like to support multiple database engines. There is the license issue, true, but having DBAs is even more important. If you are an Oracle shop, you have your Oracle DBAs, and they know how to make backups (and how to restore them), how to monitor the database, how to track down performance issues, etc. If some application needs MS-SQL or PostgreSQL or MySQL, they won't have that expertise, so they will have to be trained, or you need additional DBAs (maybe on a part-time or support-contract basis). So, all other things being equal, companies will prefer applications which work with databases they already use. So if you are an application developer, it makes sense to develop your application to work with several databases. It vastly expands your potential clientele.

At least that was the situation 10 years ago. These days much software is offered as a service. If the customer sees only a REST API and doesn't have to host the database on their own servers, they won't care about the RDBMS underneath.

hp

--
Peter J. Holzer | hjp@hjp.at | http://www.hjp.at/
"we build much bigger, better disasters now because we have much more
sophisticated management tools." -- Ross Anderson
Peter J. Holzer <hjp-pgsql@hjp.at> writes:
> In my applications I use SQL heavily. RDBMs are good at processing
> queries, so use them for that. If all you want is a key-value store,
> don't use PostgreSQL. I'm not very fond of ORMs. I know what I want to
> do and can express it in SQL. An ORM makes me translate that into a
> different (and usually inferior) query language, which is then
> translated back into SQL. That doesn't make things easier for me.

Could not agree more! My experience has been that ORMs just get in the way. Worse yet, when I've investigated performance problems raised by developers, I've often found they are due to the ORM layer, which is unable to map more complex queries efficiently. The only 'layer' I've ever used which I liked was HugSQL. I quite liked this approach, as you write the queries in SQL and these get exposed to the application layer as high-level functions, which gives a nice clean interface.

> I come from Oracle, not MySQL, But I have also used MySQL, and I guess
> the very wide gap in capabilities between Oracle and MySQL made me
> cautious about putting too much into the database. There is also the
> expectation that you should be able to use a different database engine
> (SQL is a standard, right?) just like you should be able to use a
> different C compiler, but in practice that never works.

Again, totally agree. Nice in theory, and it reminds me of the 'write once, run everywhere' dream. Very few of the places I've worked have actually maintained cross-database functionality for long, if at all. The problem is that while SQL may have a standard, how that standard is implemented varies widely. When I have worked at places which tried to be database neutral, they inevitably gave up, as they found that in the end they needed to maintain separate SQL or have separate database maintenance teams anyway. You will only get seamless SQL across different databases if your SQL is very basic, in which case you probably don't need a full-blown RDBMS anyway. Most of the time, your choice of database will be dictated by the dominant platform in the market your application targets.
--
Tim Cross
So far, I have nothing to add, but just one thing. See below:

On 09.04.2018 00:37, g@luxsci.net wrote:
> One advantage to using logic and functions in the db is that you can
> fix things immediately without having to make new application builds.
> That in itself is a huge advantage, IMO.

This is actually not the case. You want to have that logic tested as thoroughly as possible, being so close to your precious data. So, you write migration code that substitutes the old logic, test the whole package and, if successful, deploy (and thus run the migration).

Cheers,
Sven
I have done some big contracts for large financial companies, and for most of them, ANY changes to the DB structure required extensive 3rd party testing and a change control process that sometimes took weeks.
But we did get a waiver for the use of DB 'code' like stored procedures and views, which only had to follow the standard development test / acceptance procedure by separate developer, end user and third party test teams.
For me, the database is more immutable than the application logic, and especially the GUI, so it pays to spend a lot of time up front on DB design. Past experience has also led me to expect that the DBMS will have a much longer shelf life than the application languages / toolsets used against it, or at least, over time the languages / toolsets tend to multiply.
For my part, I like to spend a lot of time on getting an optimal DB design, and also on putting a lot of validation logic into the DB.
I also like making extensive use of stored procedures; my experience is that for a data-intensive multi-tool application they are faster and more secure...
On April 8, 2018 02:40:46 pm PDT, "Guyren Howe" <guyren@gmail.com> wrote:
One advantage to using logic and functions in the db is that you can fix things immediately without having to make new application builds. That in itself is a huge advantage, IMO.
I doubt most of us would consider this any kind of advantage outside of the momentary temptation to do it when an app is completely broken and needs to be up in a hurry. Application changes, whether in the dB or in application logic, need to be tested, and they need to be revision controlled and released in a manner that can be easily rolled back in an automated manner. The fact that putting logic in the database can effectively allow developers to make unreleased changes to production apps is specifically one of the problems that I am trying to avoid when I keep most logic in the app instead of the dB. It’s a whole lot harder to make arbitrary manual changes to code in the app, whether interpreted or compiled, if it is running inside a container that cannot be updated. Even if you go in with a shell and update an interpreted file, the next time that container is launched the change will be lost, which is usually sufficient motivation to keep devs from doing that kind of thing.
I’ll put some things in the db, either for performance or because I want that logic to be built into the data and not be part of the application, but I choose those contexts carefully and I write them in as portable a manner as possible. And for those who say migrations don’t happen, I’ve certainly been through a few, usually as part of an acquisition or the like, but sometimes simply because another dB server better meets our needs after a time. And migrating stored procs can be really difficult. Such code usually has less complete unit and integration tests, which makes validating those changes more difficult, too.
But the biggest reason is that databases often have to scale up rather than out, so keeping as much logic in the application code allows my scaling requirements for the dB server to be as minimal as possible. Sure, there are workloads where pushing raw data across the wire will be more work than processing it in the dB, and in those cases, I may do that, but I consider it premature optimization to just assume that is necessary without hard evidence from production examples to suggest otherwise.
Finally, there’s the consistency argument. I want to find all of the logic in one place. Either entirely in the source code or entirely in the dB. Having to trace things from the code to the dB and back again can make it a whole lot harder to see, at a glance, what is happening in the code. Having logic in the dB also means it can be difficult or impossible to have two releases talking to the same schema version at the same time - so canary builds and rolling deployments can be difficult. Of course, schema changes can cause this problem, regardless of whether there are stored procs, but the more of your logic that lives in the db, the more likely it is that your releases will conflict over the db. So I’m more likely to be able to do a rolling release if I keep the db as a dumb data store and keep logic in the application code.
On Sun, 8 Apr 2018 14:39:49 -0700 Guyren Howe <guyren@gmail.com> wrote:
> I am a Rails developer at a medium-large size company. I’ve mostly
> worked at smaller companies. I’ve some exposure to other web
> development communities.
>
> When it comes to databases, I have universally encountered the
> attitude that one should treat the database as a dumb data bucket.
> There is a *very* strong aversion to putting much of any business
> logic in the database. I encounter substantial aversion to have
> multiple applications access one database, or even the reverse: all
> abstraction should be at the application layer.
>
> My best theory is that these communities developed at a time when
> Windows was more dominant, and just generally it was *significantly*
> easier to use MySQL than Postgres for many, particularly new,
> developers. And it is pretty reasonable to adopt an aversion to
> sophisticated use of the database in that case.

The biggest single problem in most cases is a combination of communication and attitude.

Recall that most programmers are not DBAs and don't have access to create anything in the database. Traditionally the High Priests of Holy Data don't allow mere peons to create or modify content in the databases. The result is that programmers *have* to do most of their logic in code, even if the results put hugely more load on the servers. The DBAs are also not Well Qualified Genius Programmers who obviously know much more than mere DBAs about what the real use of data is supposed to be. They don't need no stinkin' SQL! They have the magic of ORM!

Net result: people talk past one another.

I've nearly been lynched for creating ten-row temporary tables without prior written permission, or even for suggesting that we might add indexes to support more efficient use of views. I've also nearly strung programmers up for running hundreds of "select * from table" statements because they weren't willing to figure out how to write a join or a simple filter, or to give me enough data to do it for them.

A good example is ORM packages: most of the really hideous code required for joins, group by, and coalesce can easily be pushed into views, allowing simple "select X from Y where" or one-line joins. DBAs can craft views that make sense; programmers could use them to simplify their code.

Q: When is the last place you worked where DBAs were pigs (vs. chickens)?
Q: How often was a DBA involved in the analysis and design cycle of reporting?
Q: How many programming projects have you been on where the hackers were willing to write clear, concise data requirements and keep them up to date?

When that happens people will use the databases as tools, ORM code will be [semi-] maintainable, and we'll live happily ever after.

--
Steven Lembark                                     1505 National Ave
Workhorse Computing                                Rockford, IL 61103
lembark@wrkhors.com                                +1 888 359 3508
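[Editor's note: Lembark's view suggestion, push the join/group-by/coalesce complexity into a DBA-crafted view so application code issues one-line selects, looks roughly like this. SQLite via Python's standard library stands in for the real database so the sketch is runnable; table and view names are invented, and in Postgres the CREATE VIEW statement would be essentially identical.]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders    (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    -- The join, grouping, and coalesce logic lives once, in the database:
    CREATE VIEW customer_totals AS
        SELECT c.name AS name, COALESCE(SUM(o.total), 0) AS spent
        FROM customers c
        LEFT JOIN orders o ON o.customer_id = c.id
        GROUP BY c.id, c.name;
""")
conn.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "acme"), (2, "globex")])
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", [(1, 1, 10.0), (2, 1, 15.0)])

# Application code (or an ORM mapping the view as a read-only table)
# stays trivial: a one-line select instead of a generated multi-join.
totals = conn.execute("SELECT name, spent FROM customer_totals ORDER BY name").fetchall()
print(totals)  # [('acme', 25.0), ('globex', 0)]
```

The design benefit is exactly the one described: the DBA can later change indexing or even the underlying tables without touching the application, as long as the view keeps its shape.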
All,
Just chiming in . . . we’ve taken a somewhat different approach and actually encourage our programmers to build out their own DBs. We’re using Postgres as a caching system to aggregate many varied datasources, and then develop against this aggregated data. Yes, we understand that the realtime stuff is a laggy process. The intent here is to give the programmers a background in DB operation and performance expectations, especially since we primarily build for Web use.
bobb
"The true measure of a man is how he treats someone who can do him absolutely no good."
    -- Samuel Johnson
On Apr 27, 2018, at 8:45 , Basques, Bob (CI-StPaul) <bob.basques@ci.stpaul.mn.us> wrote:
I’d love to see a blog post about this.
Me too!! :c) It’s funny you mention this, I’ve actually been experimenting with different Blogging tech for just this purpose. Not sure when something will pop out the other end though. :c)
bobb
"Life isn't about waiting for the storm to pass; it's about learning to dance in the rain."
    -- Vivian Greene
On Sun, Apr 8, 2018 at 4:39 PM, Guyren Howe <guyren@gmail.com> wrote:
> I am a Rails developer at a medium-large size company. I’ve mostly worked at
> smaller companies. I’ve some exposure to other web development communities.
>
> When it comes to databases, I have universally encountered the attitude that
> one should treat the database as a dumb data bucket. There is a *very*
> strong aversion to putting much of any business logic in the database. I
> encounter substantial aversion to have multiple applications access one
> database, or even the reverse: all abstraction should be at the application
> layer.
>
> My best theory is that these communities developed at a time when Windows
> was more dominant, and just generally it was *significantly* easier to use
> MySQL than Postgres for many, particularly new, developers. And it is pretty
> reasonable to adopt an aversion to sophisticated use of the database in that
> case.
>
> This attitude has just continued to today, even as many of them have
> switched to Postgres.
>
> This is only a hypothesis. I am now officially researching the issue. I
> would be grateful for any wisdom from this community.
>
> Aside: it is rare to find a situation in life or anywhere where one widely
> adopted thing is worse in *every way* than another thing, but this certainly
> was and largely still continues to be the case when one compares MySQL and
> Postgres. So why do folks continue to use MySQL? I find this mystifying.

This is a very common attitude in the industry, and a very unfortunate one. It's particularly common in the so-called 'enterprise' stacks: Java, .NET, etc. It's also completely crazy. SQL isn't _that_ difficult to learn, and in the particular case of Postgres, it allows for faster implementation of solutions for many classes of data-driven problems than competing platforms. SQL is an ideal language for business logic IMSHO, particularly the non-procedural forms of it.

There are a number of causal factors for this attitude:

*) outsourcing companies push standard stacks
*) major software vendors (MS/IBM/etc.) push 'lock-in' toolsets with a lack of standardization
*) poor understanding of data structure fundamentals is extremely pervasive
*) salaries for data developers skew higher (causing HR departments to look for lower-cost skillsets, totally oblivious to total development costs)
*) 'next big thing in data' companies constantly market the 'new paradigm' and against proven, low-cost solutions: Hadoop, big data, etc.
*) developer monoculturalism (why do I need to know anything other than Java?)
*) perceived slow performance of databases when the problem is really storage
*) poor math training disinclines learning of set-based logic in school
*) corporate takeover of the C.S. education syllabus: in many overseas schools the educational programs are written by software vendors and are essentially tool training, or worse, positional education for outsourcing grind shops

I think things are getting better. SQL is undergoing a kind of renaissance for various reasons, and PostgreSQL in particular is really on a tear.

merlin
On April 24, 2018 07:27:59 am PDT, "Sam Gendler" <sgendler@ideasculptor.com> wrote:
=======
I could have worded that better but I think that we're coming at it from different directions. You think of your application as the "master" operator. I think of a PG db as the "master", not a slave. I believe that we shouldn't _have_ to use an external application for the database to be useful and coherent. I like to think of external applications as subservient to the db and not the other way around. Yeah, I know, probably not a popular viewpoint.
Sorry, I don't really understand why it would be so hard to migrate, say pl/pgsql functions. You can maybe expect to write some likely convoluted application code, though. :) Reusable functions in the db that are solid also means that developers don't have to reinvent the wheel in whatever language and debugging also becomes simpler.
Perhaps I'm extreme. In my ideal world, developers might not even know table names! I'm kidding, sorta...
Thanks much for your comments,
Gerry
On 04/27/2018 05:52 PM, g@luxsci.net wrote:
On April 24, 2018 07:27:59 am PDT, "Sam Gendler" <sgendler@ideasculptor.com> wrote:On Sun, Apr 8, 2018 at 15:37 g@luxsci.net <g@luxsci.net> wrote:
On April 8, 2018 02:40:46 pm PDT, "Guyren Howe" <guyren@gmail.com> wrote:
One advantage to using logic and functions in the db is that you can fix things immediately without having to make new application builds. That in itself is a huge advantage, IMO.I doubt most of us would consider this any kind of advantage outside of the momentary temptation to do it when an app is completely broken and needs to be up in a hurry. Application changes, whether in the dB or in application logic, need to be tested, and they need to be revision controlled and released in a manner that can be easily rolled back in an automated manner. The fact that putting logic in the database can effectively allow developers to make unreleased changes to production apps is specifically one of the problems that I am trying to avoid when I keep most logic in the app instead of the dB. It’s a whole lot harder to make arbitrary manual changes to code in the app, whether interpreted or compiled, if it is running inside a container that cannot be updated. Even if you go in with a shell and update an interpreted file, the next time that container is launched the change will be lost, which is usually sufficient motivation to keep devs from doing that kind of thing.I’ll put some things in the db, either for performance or because I want that logic to be built into the data and not be part of the application, but I choose those contexts carefully and I write them in as portable a manner as possible. And for those who say migrations don’t happen, I’ve certainly been through a few, usually as part of an acquisition or the like, but sometimes simply because another dB server better meets our needs after a time. And migrating stored procs can be really difficult. 
Such code usually has less complete unit and integration tests, which makes validating those changes more difficult, too.But the biggest reason is that databases often have to scale up rather than out, so keeping as much logic in the application code allows my scaling requirements for the dB server to be as minimal as possible. Sure, there are workloads where pushing raw data across the wire will be more work than processing it in the dB, and in those cases, I may do that, but I consider it premature optimization to just assume that is necessary without hard evidence from production examples to suggest otherwise.Finally, there’s the consistency argument. I want to find all of the logic in one place. Either entirely in the source code or entirely in the dB. Having to trace things from the code to the dB and back again can make it a whole lot harder to see, at a glance, what is happening in the code. Having logic in the dB also means it can be difficult or impossible to have two releases talking to the same schema version at the same time - so canary builds and rolling deployments can be difficult. Of course, schema changes can cause this problem, regardless of whether there are stored procs, but the more of your logic that lives in the db, the more likely it is that your releases will conflict over the db. So I’m more likely to be able to do a rolling release if I keep the db as a dumb data store and keep logic in the application code.
=======
I could have worded that better but I think that we're coming at it from different directions. You think of your application as the "master" operator. I think of a PG db as the "master", not a slave. I believe that we shouldn't _have_ to use an external application for the database to be useful and coherent. I like to think of external applications as subservient to the db and not the other way around. Yeah, I know, probably not a popular viewpoint.
Sorry, I don't really understand why it would be so hard to migrate, say, PL/pgSQL functions. Without them, you can maybe expect to write some likely convoluted application code instead, though. :) Solid, reusable functions in the db also mean that developers don't have to reinvent the wheel in whatever language, and debugging becomes simpler too.
And it's not like the developers don't rewrite the code every time they migrate to the Latest and Greatest Language...
--
Angular momentum makes the world go 'round.
On Sun, 8 Apr 2018 14:39:49 -0700, Guyren Howe <guyren@gmail.com> wrote:
> When it comes to databases, I have universally encountered the
> attitude that one should treat the database as a dumb data bucket.
> There is a *very* strong aversion to putting much of any business
> logic in the database. I encounter substantial aversion to have
> multiple applications access one database, or even the reverse: all
> abstraction should be at the application layer.

That seems un-pragmatic to me. IMHO, if any business logic needs access to lots of data, it's best implemented by code that resides in the database itself, close to the data.

I once had a job where, one night a week, I couldn't go home until a certain program had been run and completed successfully. That meant I wasn't going home until midnight. I realised that the person who wrote it was (at least for the purposes of that program) treating the database like "a dumb data bucket". Millions of records were being selected, transferred over a network to another host, effectively grouped and summarised, then each resulting summary record was inserted into another table, one stored function call at a time (with all the network round trip times that that implies). It took 2-3 hours to complete. I replaced it with a stored function that took two minutes to run, and I was able to start going home hours earlier.

So, as you can imagine, it would take an incredibly good argument to convince me that business logic shouldn't reside in the database. :-)

I've always assumed (perhaps incorrectly) that not wanting business logic in the database (and not using all of the features that databases provide) was just a way to justify programmers not having to learn SQL. But it's just another language and paradigm, and programmers know so many languages and paradigms anyway that I never understood why knowing SQL was a problem. My assumption is probably wrong.
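The anecdote above can be sketched in miniature. The original concerned Postgres and real network round trips; this sketch uses the Python standard library's in-process `sqlite3` purely so it is self-contained, and the table and column names are invented for the example. The point is the same: row-at-a-time processing in the application versus one set-based statement that keeps the grouping and summarising inside the database engine.

```python
import sqlite3

# Invented schema for illustration: raw rows and a summary table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("alice", 10), ("alice", 20), ("bob", 5)])
conn.execute("CREATE TABLE order_totals (customer TEXT, total INTEGER)")

# Row-at-a-time: every raw row crosses the app/db boundary, then each
# summary is written back with its own statement (in a client/server
# database, one network round trip per statement).
totals = {}
for customer, amount in conn.execute("SELECT customer, amount FROM orders"):
    totals[customer] = totals.get(customer, 0) + amount
for customer, total in totals.items():
    conn.execute("INSERT INTO order_totals VALUES (?, ?)", (customer, total))

# Set-based: one statement, and the grouping never leaves the engine.
conn.execute("DELETE FROM order_totals")
conn.execute("""
    INSERT INTO order_totals (customer, total)
    SELECT customer, SUM(amount) FROM orders GROUP BY customer
""")
print(sorted(conn.execute("SELECT customer, total FROM order_totals")))
# -> [('alice', 30), ('bob', 5)]
```

Both versions produce the same summary; the difference raf describes is that, at millions of rows over a network, the first shape costs hours and the second minutes.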
And I guess as long as the resulting software runs as quickly as it needs to, it probably isn't an issue. If it's a widely held view, then it must be workable.

I use stored functions exclusively, partly so that the business logic is close to the data it needs to be close to for efficiency, but also for security reasons. Users have no permissions to select, update, insert or delete anything. All they can do is execute stored functions that have been previously created by the database owner, who does have those permissions. For a system that's accessible to the public, it's a great way to guarantee that SQL injection can't do any harm (even if all the other protections fail to work or are bypassed). For a system that's only accessible to internal staff, it's a great way to defend against their workstations being infected by malware that goes looking for databases to attack.

cheers,
raf
On 2018-04-27 22:52:39 +0000, g@luxsci.net wrote:
> Perhaps I'm extreme. In my ideal world, developers might not even know table
> names! I'm kidding, sorta...

If they don't know the table names, how can they write those stored procedures?

hp

--
Peter J. Holzer | hjp@hjp.at | http://www.hjp.at/
"we build much bigger, better disasters now because we have much more
sophisticated management tools." -- Ross Anderson <https://www.edge.org/>
On Sat, 28 Apr 2018 08:02:21 +0200, "Peter J. Holzer" <hjp-pgsql@hjp.at> wrote:
> On 2018-04-27 22:52:39 +0000, g@luxsci.net wrote:
> > Perhaps I'm extreme. In my ideal world, developers might not even
> > know table names! I'm kidding, sorta...
>
> If they don't know the table names, how can they write those stored
> procedures?

One of the main reasons DBAs need to be pigs. Much spaghetti can be avoided in ORM frameworks with well-designed views.

--
Steven Lembark
Workhorse Computing
1505 National Ave, Rockford, IL 61103
lembark@wrkhors.com
+1 888 359 3508
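Lembark's point about well-designed views can be sketched briefly. Again this uses the stdlib `sqlite3` (the thread is about Postgres, where the same idea applies) and the table, view, and column names are invented for the example: application code queries a stable view, so the underlying table can be renamed or re-normalised without touching application SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Physical layout the DBA is free to change (invented names).
    CREATE TABLE customer_v2 (id INTEGER PRIMARY KEY, given TEXT, family TEXT);
    INSERT INTO customer_v2 VALUES (1, 'Ada', 'Lovelace');

    -- Stable interface the application codes against. If customer_v2 is
    -- later split or renamed, only this view definition changes.
    CREATE VIEW customers AS
        SELECT id, given || ' ' || family AS full_name
        FROM customer_v2;
""")

# Application code never mentions customer_v2, only the view.
row = conn.execute("SELECT full_name FROM customers WHERE id = 1").fetchone()
print(row)  # -> ('Ada Lovelace',)
```

An ORM can map to the view just as it would to a table, which is one way of getting the "developers don't need the table names" effect without hiding everything behind stored procedures.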
> That seems un-pragmatic to me. IMHO if any business logic needs
> access to lots of data, it's best implemented by code that
> resides in the database itself, close to the data.

There is a balance: ETL and reporting code is often iterative and can be more flexible in an external language. Getting the balance right takes some thought and planning -- both of which are usually in short supply on software projects.

Testing database code and structures is easy in PG using TAP:
<https://blog.2ndquadrant.com/using-postgresql-tap-framework-extensions/>

It's also easy to test database contents and embedded logic:
<https://www.slideshare.net/lembark/object-exercise>

Putting all or no logic into the database as a knee-jerk reaction is usually a mistake. Especially with PG, testing is easy using extensions and schemas to segregate the code/data being tested. At some point the data will have to hit code, especially in ETL or reporting logic. Balancing the work in and out of the database just makes sense. Which is probably why it doesn't happen...

--
Steven Lembark
On 2018-04-28 09:54:27 -0500, Steven Lembark wrote:
> On Sat, 28 Apr 2018 08:02:21 +0200
> "Peter J. Holzer" <hjp-pgsql@hjp.at> wrote:
> > On 2018-04-27 22:52:39 +0000, g@luxsci.net wrote:
> > > Perhaps I'm extreme. In my ideal world, developers might not even
> > > know table names! I'm kidding, sorta...
> >
> > If they don't know the table names, how can they write those stored
> > procedures?
>
> One of the main reasons DBA's need to be pigs. Much spaghetti can be
> avoided in ORM frameworks with well-designed views.

Apparently my remark was too short (and perhaps with too little context) to be clear.

As I understood g@luxsci.net, they were arguing for writing lots of stored procedures so that developers wouldn't even have to know the table names (they would just call the procedures). [Although I now see that it was somebody else who proposed that as the only interface.]

But those stored procedures don't write themselves. Somebody has to write them, and a person who writes code is called a developer. So there needs to be at least one developer who knows the table names: the developer of the stored procedures. And that developer had better be good. API design is hard, and if you take away SQL from your fellow developers, you should give them something better (for the task at hand), not something worse.

hp
On April 28, 2018 11:18:02 am PDT, "Peter J. Holzer" <hjp-pgsql@hjp.at> wrote:
> On 2018-04-28 09:54:27 -0500, Steven Lembark wrote:
> > On Sat, 28 Apr 2018 08:02:21 +0200
> > "Peter J. Holzer" <hjp-pgsql@hjp.at> wrote:
> > > On 2018-04-27 22:52:39 +0000, g@luxsci.net wrote:
> > > > Perhaps I'm extreme. In my ideal world, developers might not even
> > > > know table names! I'm kidding, sorta...
> > >
> > > If they don't know the table names, how can they write those stored
> > > procedures?
> >
> > One of the main reasons DBA's need to be pigs. Much spaghetti can be
> > avoided in ORM frameworks with well-designed views.
>
> Apparently my remark was too short (and perhaps with too little context)
> to be clear.
>
> As I understood g@luxsci.net, they were arguing for writing lots of
> stored procedures so that developers wouldn't even have to know the
> table names (they would just call the procedures) [Although I now see
> that it was somebody else who proposed that as the only interface].
>
> But those stored procedures don't write themselves. Somebody has to
> write them and a person who writes code is called a developer. So there
> needs to be at least one developer who knows the table names - the
> developer of the stored procedures. And that developer should better be
> good - API design is hard, and if you take away SQL from your fellow
> developers you should give them something better (for the task at hand),
> not something worse.
===
Yes, sorry, I meant application developers. Well, actually I meant developers who only look at the db through an ORM and who like it that way.

At my current position, the ones called "developers" all use Django, and they practically never look at (let alone interact with) the db using psql. Forget about writing and using pgsql functions. There are no DBAs or database developers. I am trying to change this culture a bit, but I may give up soon. Tragic, really, given the mostly terrible performance that they get with naive Django code.

Some of these developers have even gone so far as to proclaim that PG is too slow (they didn't even configure it!) for their "big" data, which really aren't big, and move on to something like ES, which they really don't need. For example, they have code that could benefit from simple FTS operations, which instead of taking seconds or minutes could run in milliseconds. Or perhaps they could not execute certain queries for user interfaces quickly enough with their Django code, so they think that they need ES for those queries. So now they have to manage and design their ES implementations, activities which frankly should _not_ be taken lightly, and manage their data in the PG db, which they eventually come back to for at least some sane data integrity. Now they have to perform very long and convoluted processes to keep both in sync. Ouch. Know your tools!
But there is no all or nothing, as others have pointed out. Some things are better done and more easily managed outside of the DB, at least for me.
For example, I tried doing lots of bigint math in pgsql and I moved to external C code for that.
That C code still called tried-and-true pgsql convenience functions for getting at the correct data, however, and these were the same functions that my Java code used for other activities.
Gerry