Steve wrote:
> Hi,
>
> I've been running postgres on my server for over a year now and the
> tables have become huge. I have 3 tables that have data over 10GB each
> and these tables are read very very frequently. In fact, heavy searches
> on these tables are expected every 2 to 3 minutes. This unfortunately
> gives a very poor response time to the end user and so I'm looking at
> other alternatives now.
This depends on the queries you are running:
Are you performing queries using the LIKE operator? If so, did you define
an index on the column using the right operator class?
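For example (illustrative only, assuming a hypothetical table "customers"
with a text column "name"): in a non-C locale, a plain btree index cannot
be used for anchored LIKE searches, but an index built with the
text_pattern_ops operator class can:

```sql
-- Hypothetical example: text_pattern_ops makes prefix LIKE
-- searches ('Smith%') indexable even outside the C locale.
CREATE INDEX customers_name_like_idx
    ON customers (name text_pattern_ops);

-- The planner can now use the index for this anchored pattern:
SELECT * FROM customers WHERE name LIKE 'Smith%';
```

Note that an index cannot help with patterns that start with a
wildcard, such as LIKE '%Smith'.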
Are you performing queries on a calculated field? If so, you may need
to construct a sort of materialized view.
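When the calculation is a simple expression, a lighter alternative to a
full materialized view is an index on the expression itself. A sketch,
again using the hypothetical "customers" table:

```sql
-- Hypothetical example: a query filtering on lower(name) cannot use
-- a plain index on name. An index on the expression lets the
-- planner use it directly:
CREATE INDEX customers_lower_name_idx ON customers (lower(name));

SELECT * FROM customers WHERE lower(name) = 'smith';
```

For heavier calculations (aggregates over many rows), precomputing the
result into a summary table maintained by triggers is the usual approach.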
If you are on Linux, did you mount your data partition with the
noatime option?
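Something like this in /etc/fstab (the device, mount point and
filesystem below are placeholders, adjust to your setup):

```
# Illustrative fstab entry: noatime avoids rewriting inode access
# times on every read, which matters on a read-heavy database volume.
/dev/sdb1  /var/lib/pgsql  ext3  defaults,noatime  1 2
```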
Please provide us with more information on your queries, your data,
your configuration...
Usually, splitting your tables across multiple disks is the last
optimization step. Are you sure you have already reached the
bottleneck of your system?
Regards
Gaetano Mendola