Hello Richard,
> Perhaps look into clustering the tables.
Good idea: I will look further into this.
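If I understand correctly, that would mean creating an index on the column the
customers filter on most often and reordering the table along it, roughly like
this (table and column names below are only placeholders, not my real schema):

    -- index the most frequently filtered column, then physically
    -- reorder the table rows along that index
    CREATE INDEX agg_date_idx ON aggregate_table (agg_date);
    CLUSTER aggregate_table USING agg_date_idx;
    ANALYZE aggregate_table;
    -- CLUSTER is a one-shot operation, so it would have to be repeated
    -- (or scheduled) as new rows are loaded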
> > There is no index on the aggregate table since the criteria, their
> > number and their scope are freely chosen by the customers.
>
> Hmm... not convinced this is a good idea.
A long time ago, when my application used Informix, I tried indexing the
aggregate table: it was a nightmare to manage all those indexes (and their
volume) for an uncertain benefit.
> If you don't have any indexes and the table isn't clustered then PG has
> no choice but to scan the entire table for every query. As you note,
> that's going to destroy your cache. You can increase the RAM but sooner
> or later, you'll get the same problem.
I agree with you: your remarks convince me not to rely on the cache.
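If I am not mistaken, that is exactly what EXPLAIN shows on a table like mine:
with no usable index, every plan falls back to a sequential scan. The query
below is only an illustration with made-up names:

    EXPLAIN
    SELECT sum(amount)
    FROM aggregate_table
    WHERE criteria_1 = 'x' AND criteria_2 = 'y';
    -- with no index the plan contains a Seq Scan on aggregate_table,
    -- i.e. the whole table is read whatever the selectivity of the criteria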
> 3. Split your tables into two - common fields, uncommon fields, that way
> filtering on the common fields might take less space.
> 4. Split your tables by date, one table per month or year. Then re-write
> your customers' queries on-the-fly to select from the right table. Will
> only help with queries on date of course.
That would force me to rewrite my query generator, which is already a very
complex program (in fact, the heart of the system).
> 5. Place each database on its own machine or virtual machine so they
> don't interfere with each other.
I'm afraid I don't have the money for that. As Simon and Gustavo suggested,
I will check my SCSI disks first.
Thanks a lot for your advice!
Best regards,
Patrick