Christopher Kings-Lynne wrote:
>> Probably by carefully partitioning their data. I can't imagine anything
>> being fast on a single table in the 250,000,000 tuple range. Nor can I
>> really imagine any database that efficiently splits a single table
>> across multiple machines (or even inefficiently, unless some internal
>> partitioning is being done).
>
>
> Ah, what about partial indexes - those might help, as a kind of
> 'semi-partition'.
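For instance, something along these lines would only index the slice of
the table that gets queried most often (the table and column names here
are just made up for illustration):

  -- index only the "hot" recent rows instead of all 250M of them
  CREATE INDEX readings_recent_idx
      ON readings (sensor_id)
      WHERE taken_at >= '2005-01-01';
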
He could also use schemas to partition out the information within the
same database.
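Something roughly like this, say one schema per year (again, the names
are only illustrative):

  CREATE SCHEMA y2004;
  CREATE SCHEMA y2005;

  CREATE TABLE y2004.readings (sensor_id int, taken_at timestamp, value numeric);
  CREATE TABLE y2005.readings (sensor_id int, taken_at timestamp, value numeric);

  -- point queries at the slice you want to work with
  SET search_path TO y2005, public;
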
J
>
> Chris
--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
PostgreSQL support, programming, shared hosting and dedicated hosting.
+1-503-667-4564 - jd@commandprompt.com - http://www.commandprompt.com
PostgreSQL Replicator -- production quality replication for PostgreSQL