We have a legacy application that currently uses an old ISAM-style database
(from db/c). Piece by piece, we are rewriting our application to use
Postgres.
In addition to being a software company, we also offer an ASP service, and
run the application for many customers on a cluster of Solaris boxes.
I've been pondering the benefits of having:
a database per customer (in one cluster)
one big-a** database (one cluster)
a few clusters, each with a few combined databases
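
To make the trade-off concrete, here is roughly how the first two options
look from our JDBC code (the host, table, and column names below are made
up, not our real schema):

import java.sql.*;

public class CustomerAccess {
    // Option 1: a database per customer -- isolation comes from which
    // database the pool connects to; the SQL never mentions customers.
    static Connection perCustomerDatabase(String customerDb) throws SQLException {
        return DriverManager.getConnection(
                "jdbc:postgresql://dbhost/" + customerDb, "app_user", "secret");
    }

    // Option 2: one big database -- every statement has to carry the
    // customer key explicitly.
    static ResultSet ordersFor(Connection sharedDb, int customerId) throws SQLException {
        PreparedStatement ps = sharedDb.prepareStatement(
                "SELECT order_id, total FROM orders WHERE customer_id = ?");
        ps.setInt(1, customerId);
        return ps.executeQuery();
    }
}
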
We perform pg_dumps every evening, one per database. If we need to replace
some rows in a particular table, we have to restore the entire database
somewhere, extract the rows we need, and transfer those to production.
Besides backing up every table separately for every database, is there a
saner way to handle this?
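
For what it's worth, the "extract the rows we need and transfer those to
production" step currently looks roughly like the sketch below (the table
and column names are placeholders, not our real schema), which is part of
why I'm hoping there is a better way:

import java.sql.*;

public class RowTransfer {
    // Copy one customer's rows from a scratch database (restored from last
    // night's pg_dump) back into production. Assumes autocommit is off on
    // the production connection.
    static void copyInvoices(Connection scratch, Connection prod, int customerId)
            throws SQLException {
        PreparedStatement sel = scratch.prepareStatement(
                "SELECT invoice_id, amount FROM invoices WHERE customer_id = ?");
        sel.setInt(1, customerId);
        PreparedStatement ins = prod.prepareStatement(
                "INSERT INTO invoices (invoice_id, customer_id, amount) VALUES (?, ?, ?)");
        ResultSet rs = sel.executeQuery();
        while (rs.next()) {
            ins.setInt(1, rs.getInt("invoice_id"));
            ins.setInt(2, customerId);
            ins.setBigDecimal(3, rs.getBigDecimal("amount"));
            ins.addBatch();
        }
        ins.executeBatch();  // send all the inserts in one batch
        prod.commit();       // make the transfer atomic on the production side
    }
}
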
It is EXTREMELY important that our ASP customers do not have access to each
other's data. Some of the access to the data is by JDBC connection, and some
is by ODBC connection. Other than views, is there some way to secure the
data (that is not a maintenance nightmare)?
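
One direction I've been wondering about (beyond views) is giving each
customer its own Postgres login and having both the JDBC pools and the ODBC
DSNs connect with that login, so that GRANTs on the server decide what each
connection can see rather than WHERE clauses in the application. A rough
sketch of the JDBC side, with made-up login and database names:

import java.sql.*;
import java.util.Properties;

public class CustomerConnections {
    // Connect to the shared database as a customer-specific login
    // (e.g. "cust_acme"). What that login can see is then controlled by
    // GRANTs and views on the server, not by the application code.
    static Connection forCustomer(String customerLogin, String password)
            throws SQLException {
        Properties props = new Properties();
        props.setProperty("user", customerLogin);
        props.setProperty("password", password);
        return DriverManager.getConnection("jdbc:postgresql://dbhost/asp", props);
    }
}
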
Thanks,
Naomi