On Fri, 20 Feb 2009 06:50:22 -0800
David Fetter <david@fetter.org> wrote:
> > The reason behind this appears to be moving some of the checks
> > into the database and away from the application.
>
> Since a useful database has *many* applications instead of "the"
> application, I think this is an excellent move. Single Point of
> Truth and all that.
I generally prefer code clarity and security over presumed
performance gains, but I was wondering whether "checks" may have an
impact on performance and whether pg does some optimisation over
them.
e.g. suppose I run:
insert into bla (a,b,c) select a,b,c from bhu;
and bla.a and bhu.a have the same constraint/data type etc...
Is postgresql going to check whether bhu.a fits in bla.a every time?
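Something like this, with made-up tables sharing the same check
(just a sketch):

  create table bhu (a integer check (a > 0), b text, c date);
  create table bla (a integer check (a > 0), b text, c date);
  -- every row coming out of bhu.a already satisfies a > 0, yet
  -- presumably the same check is evaluated again for bla.a
  insert into bla (a, b, c) select a, b, c from bhu;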
I may expect this kind of optimisation to be done for types and
domains, but it would probably be too expensive/smart to do it for
checks, since postgresql would have to understand the equivalence or
inclusion of arbitrary check expressions.
So if a lot of data is moving between tables... I'd use domains and
user defined types rather than checks.
Reasonable?
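e.g. a domain carries its check with it, so the equivalence between
the two columns is explicit rather than something to infer (again
just a sketch):

  create domain positive_int as integer check (value > 0);
  create table bhu (a positive_int, b text, c date);
  create table bla (a positive_int, b text, c date);
  -- both columns now share one named type and one check definition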
If postgresql does this kind of optimisation... pushing "checks"
into the DB gains even more points compared to doing checks at the
application level.
What I find a bit annoying is politely dealing with the error once
it is reported back to the application, *and* the connection and
*bandwidth* costs of moving clearly wrong data back and forth.
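At least the error can be trapped server side, e.g. with a plpgsql
wrapper (a hypothetical sketch, reusing the bla table above):

  create or replace function safe_insert(v integer)
  returns boolean as $$
  begin
    insert into bla (a) values (v);
    return true;
  exception
    when check_violation then
      -- SQLSTATE 23514: the offending row is rejected right here
      return false;
  end;
  $$ language plpgsql;

but the application still has to notice and handle the failure.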
If you have a good mapping between pg types and the application
language/library types it becomes easier to keep those checks in
sync; otherwise it is a really boring job and DB checks become just
one more safety net to maintain.
In some places you REALLY appreciate/need that layer... sometimes
it just gets in the way.
--
Ivan Sergio Borgonovo
http://www.webthatworks.it