> > > > > I am considering splitting the database into
> > > > > tables residing on separate machines, and connecting
> > > > > them on one master node.
> > >
> > > Splitting tables across multiple machines would do
> > > nothing more than make the entire system run at a
> > > snail's pace . . . because you just couldn't move
> > > data between machines quickly enough.
> >
> > In my case, queries typically return ... [a]round 100
> > records at most. [E]ven over 10 Mb ethernet, it would
> > take at most about a second to transfer. This is a
> > much smaller delay than the query time itself, which
> > can take 10 seconds or more.
> >
> > So, splitting the data in such a way that one table is
> > queried first, and the tables joined to it are queried in
> > parallel, would yield a significant speed-up.
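(To put rough numbers on that: even assuming a generous ~1 KB per row,
100 rows is only about 100 KB, which is well under 0.1 s of raw transfer
time on 10 Mbit ethernet. The network delay really is dwarfed by a
10+ second query.)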
>
> Then do exactly that: run separate PostgreSQL databases on
> multiple machines and build a data abstraction layer that
> does the join manually. If there really are only small
> numbers of rows, then it's just as fast to transfer the
> data to your application machine as to transfer it all
> to a common PostgreSQL machine and then to your application.
I was hoping to do precisely that. But I never got an answer to the
question of whether Postgres supports linking to tables external to its
own database. Is it possible to attach a table from another Postgres
database that is on a different server?
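Failing that, I take it the manual approach would look roughly like the
sketch below (Python with the psycopg2 driver; every connection string,
table and column name here is invented purely for illustration):

import psycopg2

# Invented connection strings: the master table lives on one server,
# the table it joins against lives on another.
orders_db = psycopg2.connect("host=db1 dbname=orders user=app")
parts_db = psycopg2.connect("host=db2 dbname=parts user=app")

def order_with_parts(order_id):
    # Step 1: query the master table on the first server.
    cur = orders_db.cursor()
    cur.execute("SELECT id, customer, part_id FROM orders WHERE id = %s",
                (order_id,))
    orders = cur.fetchall()

    # Step 2: for each returned row, look up the joined row on the
    # second server; the "join" happens here, in the application.
    joined = []
    for oid, customer, part_id in orders:
        cur2 = parts_db.cursor()
        cur2.execute("SELECT name, price FROM parts WHERE id = %s",
                     (part_id,))
        joined.append((oid, customer, cur2.fetchone()))
    return joined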
> If you really need application transparency
Which would always be nice...
> then things get a bit uglier;
I know.
> it shouldn't be too hard to build your own middleware layer that
> lets you treat the data storage as a single entity.
Yes, but if plans are formed for something like an "official"
Postgres-based project to do this, then I'd rather spend my time working
on the development of that than on building my own proprietary solution.
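In the meantime, the parallel fan-out I described above might be
sketched like this (again Python with invented server names; each worker
opens its own connection, since one backend connection can only run one
query at a time, so this is what lets the per-server queries genuinely
run concurrently):

import psycopg2
from concurrent.futures import ThreadPoolExecutor

# Invented DSNs: one per machine holding a slice of the joined tables.
PART_DSNS = ["host=db2 dbname=parts user=app",
             "host=db3 dbname=parts user=app"]

def fetch_parts(dsn, part_ids):
    # Each worker gets its own connection to its own server.
    conn = psycopg2.connect(dsn)
    try:
        cur = conn.cursor()
        cur.execute("SELECT id, name, price FROM parts"
                    " WHERE id = ANY(%s)", (list(part_ids),))
        return cur.fetchall()
    finally:
        conn.close()

def parallel_join(part_ids):
    # Query the master table first (not shown), then fan the secondary
    # queries out to all servers at once and merge the results.
    with ThreadPoolExecutor(max_workers=len(PART_DSNS)) as pool:
        futures = [pool.submit(fetch_parts, dsn, part_ids)
                   for dsn in PART_DSNS]
        return [row for f in futures for row in f.result()]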
Regards.
Gordan