Kevin:
Second machine config parameters:
shared_buffers = 8GB
work_mem = 1GB (was 512MB)
maintenance_work_mem = 4GB
#seq_page_cost = 1.0
#cpu_tuple_cost = 0.01
#cpu_index_tuple_cost = 0.005
#cpu_operator_cost = 0.0025
random_page_cost = 2.0
effective_cache_size = 110GB
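It may be worth confirming that the running server is actually using these values; a setting edited in postgresql.conf only takes effect after a reload (or a restart, for shared_buffers). A minimal sketch for checking this from psql:

```sql
-- Verify the values the server is actually running with
-- (source tells you where each setting came from).
SELECT name, setting, unit, source
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem',
               'maintenance_work_mem', 'random_page_cost',
               'effective_cache_size');
```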
I tried changing from_collapse_limit, join_collapse_limit and io_con, without success.
I created a database with these tables only, ran VACUUM ANALYZE on them, and tested with only my connection to PostgreSQL.
Now we have other queries (all with aggregates) that are 15x-20x slower than on Oracle and SQL Server.
All tables have btree indexes on the fields used in the WHERE/ORDER BY/GROUP BY clauses.
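One way to check whether those indexes are actually being chosen for the slow aggregate queries is EXPLAIN (ANALYZE, BUFFERS). A hedged sketch; the table and column names below are hypothetical, not from the original report:

```sql
-- Hypothetical table/columns, for illustration only.
-- ANALYZE runs the query for real; BUFFERS shows how much
-- data came from shared_buffers vs. disk.
EXPLAIN (ANALYZE, BUFFERS)
SELECT customer_id, sum(amount)
FROM orders
WHERE order_date >= DATE '2018-01-01'
GROUP BY customer_id
ORDER BY customer_id;
```

If the plan shows a Seq Scan where an Index Scan was expected, the cost parameters above (random_page_cost, effective_cache_size) are the usual knobs to revisit.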
Maxim:
The developer is moving from a desktop application (ODBC with Use Declare/Fetch, 'single' queries with local summing and aggregation) to a client/server web application (.NET, most queries with aggregates). Unfortunately we can't change these queries, but I will try your solution and see what happens.
Take a look at another big query generated by the development tool. Oracle/SQL Server run the same query (with the same data, but on a slower machine) in about 2 seconds:
Best regards,
Alexandre