Re: Big Tables vs. many Tables vs. many Databases - Mailing list pgsql-general

From: Uwe C. Schroeder
Subject: Re: Big Tables vs. many Tables vs. many Databases
Msg-id: 200402182355.49896.uwe@oss4u.com
In response to: Big Tables vs. many Tables vs. many Databases ("Dirk Olbertz" <olbertz.dirk@gmx.de>)
List: pgsql-general

PostgreSQL certainly has a limit somewhere, though I've never hit it.
I would never put them all into one database. If my math is correct, you're
talking about up to 75,000 tables (150 tables x 500 libraries). Even if the
database can handle that many tables, why would you? What happens if that one
database gets corrupted somehow? You also lose a number of ways to scale the
application. You'd need a humongous server, because if you serve 500 libraries
there will normally be at least 500 connections (assuming each library wants
to access the database). Depending on what your application is doing, you
might end up using a lot of memory. So maybe you want to spread that across
several databases on several servers. A monolithic design is nice when it
comes to maintenance, but it's actually very bad when it comes to failures.
Say you have a hardware failure on the server (shit happens, you know...):
with everything in one database on one server, you're dead in the water. If
you spread the load across several machines you're usually better off, unless
maybe you can afford something like a Sun HA E10k or similar, you know,
something with redundancy, automatic replication, etc.
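For instance, one way to keep the table count constant while still being able to spread libraries across several servers is a single shared schema keyed by a library_id column. This is only a hypothetical sketch (the table and column names are made up, not from the original post):

```sql
-- Hypothetical shared schema: one set of tables for all libraries,
-- distinguished by a library_id column instead of a per-library suffix.
CREATE TABLE library (
    library_id  integer PRIMARY KEY,
    name        text NOT NULL
);

CREATE TABLE book (
    book_id     serial PRIMARY KEY,
    library_id  integer NOT NULL REFERENCES library (library_id),
    title       text NOT NULL
);

-- An index on library_id keeps per-library lookups fast even with
-- hundreds of libraries in the same table.
CREATE INDEX book_library_idx ON book (library_id);

-- Every query is then scoped to one library:
SELECT title FROM book WHERE library_id = 42;
```

If you later split the load across servers, the same schema runs unchanged on each machine, with each instance holding its own subset of library_id values.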

My $0.02


On Wednesday 18 February 2004 04:44 pm, Dirk Olbertz wrote:
> Hi there,
>
> I'm currently about to redesign a database which you could compare with a
> database for managing a library. Now this solution will not only manage one
> library, but 100 to 500 of them. Currently, e.g., all the data about the
> inventory (books) is held in one table for all the libraries.
>
> Is it useful to split this into one table per library, e.g. by giving each
> table an id as a suffix?
>
> For one library, we currently need about 150 tables, so that number would
> increase a lot if there were a set of these tables for each library. On
> the other hand, there are only a very few tables (2-5) which are used by
> all libraries. All the rest do not interact with each other - and don't
> think about exchanging books between libs, as the library is only an
> example...
>
> One other solution would be to make one database for each library. What do
> you think of that? Does PostgreSQL have any problems with that many tables?
> Would it be better to spread the data across databases?
>
> Thanks for your opinions,
>   Dirk
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 7: don't forget to increase your free space map settings

--
    UC

--
Open Source Solutions 4U, LLC    2570 Fleetwood Drive
Phone:  +1 650 872 2425        San Bruno, CA 94066
Cell:   +1 650 302 2405        United States
Fax:    +1 650 872 2417

