* Hervé Piedvache (herve@elma.fr) wrote:
> On Thursday 20 January 2005 15:30, Stephen Frost wrote:
> > * Hervé Piedvache (herve@elma.fr) wrote:
> > > Is there any solution with PostgreSQL matching these needs ... ?
> >
> > You might look into pg_pool. Another possibility would be slony, though
> > I'm not sure it's at the point you need yet; it depends on whether you can
> > handle some delay before an insert makes it to the slave select systems.
>
> I think not ... pgpool and slony are replication solutions ... but as I
> said to Christopher Kings-Lynne, how will I manage the scalability of the
> database? I'll need several servers able to handle a database that keeps
> growing and growing while still getting good performance ...
They're both replication solutions, but they also help distribute the
load. For example:
pg_pool will distribute the select queries among the servers. They'll
all get the inserts, so that hurts, but at least the select queries are
distributed.
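
As a rough illustration (hostnames, ports and table names here are made
up, and this assumes pg_pool is running with replication and load
balancing turned on), the application just connects to pg_pool instead
of to one particular backend:

    import psycopg2

    # Connect to pg_pool rather than a specific PostgreSQL backend;
    # pg_pool sends inserts to every backend and spreads selects around.
    conn = psycopg2.connect(host="pgpool.example.com", port=9999,
                            dbname="app", user="app")
    cur = conn.cursor()

    # This select can be answered by whichever backend pg_pool picks.
    cur.execute("SELECT count(*) FROM big_table")
    print(cur.fetchone())

    # This insert goes to all the backends, which is where the write
    # cost stays no matter how many servers you add.
    cur.execute("INSERT INTO big_table (payload) VALUES (%s)", ("some row",))
    conn.commit()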
slony is similar, but your application does the load distribution of
select statements instead of pg_pool. Your application needs to know to
send insert statements to the 'main' server and to run its selects
against the others.
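
A minimal sketch of that kind of application-level routing (hostnames
and the round-robin choice are just placeholders, not anything slony
gives you):

    import itertools
    import psycopg2

    # One connection to the slony origin, which gets all the writes ...
    master = psycopg2.connect(host="db-master.example.com",
                              dbname="app", user="app")

    # ... and a pool of subscriber connections for the reads.
    replicas = [psycopg2.connect(host=h, dbname="app", user="app")
                for h in ("db-slave1.example.com", "db-slave2.example.com")]
    next_replica = itertools.cycle(replicas)

    def run(sql, params=None):
        # Crude routing: anything starting with SELECT goes to a replica,
        # everything else goes to the master.  Real code would be smarter
        # and would have to tolerate replication lag on the replicas.
        is_read = sql.lstrip().upper().startswith("SELECT")
        conn = next(next_replica) if is_read else master
        cur = conn.cursor()
        cur.execute(sql, params)
        if conn is master:
            conn.commit()
        return cur

    run("INSERT INTO big_table (payload) VALUES (%s)", ("some row",))  # master
    rows = run("SELECT count(*) FROM big_table").fetchall()            # a replica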
> > > Is there any other solution than a Cluster for our problem ?
> >
> > Bigger server, more CPUs/disks in one box. Try to partition your data
> > some way so that it can be spread across multiple machines; then, if you
> > need to combine the data, have it replicated using slony to a big box
> > that has a view joining all the tables, and run your big queries against
> > that.
>
> But I'll hit the limits of a single box quickly, I think ... say 4
> processors with 64 GB of RAM ... and after that?
Go to non-x86 hardware after that if you're going to continue to increase
the size of the server. Personally I think your best bet might be to
figure out a way to partition up your data (isn't that what google
does anyway?).
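
For what it's worth, the "partition and recombine" idea from above might
look roughly like this (table and host names are invented; assume each
machine's slice of the data is replicated with slony onto one big
reporting box):

    import psycopg2

    # On the big reporting box, each machine's slice of the data arrives
    # as its own replicated table; a view glues them back together.
    report = psycopg2.connect(host="db-report.example.com",
                              dbname="app", user="app")
    cur = report.cursor()
    cur.execute("""
        CREATE OR REPLACE VIEW all_events AS
            SELECT * FROM events_part1    -- replicated from machine 1
            UNION ALL
            SELECT * FROM events_part2    -- replicated from machine 2
            UNION ALL
            SELECT * FROM events_part3    -- replicated from machine 3
    """)
    report.commit()

    # The big cross-partition queries run only on this box.
    cur.execute("SELECT count(*) FROM all_events")
    print(cur.fetchone())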
Stephen