Avi,
> I understand why it will not be implemented with overloaded functions.
> Is there a possibility to allow this only for functions that are not
> overloaded?
Unfortunately, no. There's simply no way for the database to tell the
difference between a function call that is relying on defaults and one that
was simply given the wrong number of arguments. SQL Server's approach with
defaults works *because* SQL Server does not support overloaded procedures.
> The SQL function solution is really not going to help in
> my case since the function builds a select statement dynamically based
> on which parameters have a non-null value. The number of parameters is
> something like 12 or 13 and the control on which parameters are set is
> determined by a complex combination of program logic and user
> selections. What I did to solve this problem was to force all
> variables to be initialized to null and then set the non-null ones
> before the call to the function.
This sounds like a good solution to me.
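Just as a rough sketch of that pattern (the orders table, columns, and
function name below are invented for illustration, not taken from your
schema):

CREATE FUNCTION search_orders(integer, date) RETURNS SETOF orders AS '
DECLARE
    cust_id ALIAS FOR $1;
    order_date ALIAS FOR $2;
    query text := ''SELECT * FROM orders WHERE true'';
    rec orders%ROWTYPE;
BEGIN
    -- Append a clause only for the arguments that came in non-null
    IF cust_id IS NOT NULL THEN
        query := query || '' AND customer_id = '' || cust_id;
    END IF;
    IF order_date IS NOT NULL THEN
        query := query || '' AND order_date = '' || quote_literal(order_date);
    END IF;
    FOR rec IN EXECUTE query LOOP
        RETURN NEXT rec;
    END LOOP;
    RETURN;
END;
' LANGUAGE plpgsql;

-- The caller explicitly passes NULL for every parameter it is not using:
SELECT * FROM search_orders(42, NULL);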
BTW, named parameters for PostgreSQL functions are on the to-do list, but I
don't think anyone is currently working on them.
> very large tables (some of our tables are > 5M rows) :-). What makes
> it more impressive is the fact that SS runs on a 4 CPU machine with 2
> GB of memory while PostgreSQL runs on a single CPU machine with 384 MB
> of memory running SuSE 8.2. In the near future I will be moving the
> PostgreSQL database to a similar configuration as SS. It will be
> interesting to compare them then.
That's a very nice testimonial! Thanks.
BTW, you will probably wish to join the PGSQL-Performance mailing list to make
sure that you can tune your PostgreSQL database properly.
--
Josh Berkus
Aglio Database Solutions
San Francisco