Thread: reasonable limit to number of schemas in a database?
I'm considering re-architecting a database to make use of many, many schemas.... over time, it would probably grow to be on the order of 3,000 or so, with each schema holding ~100 tables. Is that an absurd amount, or well within postgres' limits? I haven't been able to find any information on what penalties one pays as you increase the schema count.
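For concreteness, the sort of layout I have in mind is roughly the following -- one schema per user, each holding its own identical set of tables (the schema, table, and column names here are just placeholders):

    -- one schema per user, each with its own copy of the ~100 tables
    CREATE SCHEMA user_0001;
    CREATE TABLE user_0001.orders (
        order_id   serial PRIMARY KEY,
        placed_at  timestamptz NOT NULL DEFAULT now()
    );

    CREATE SCHEMA user_0002;
    CREATE TABLE user_0002.orders (
        order_id   serial PRIMARY KEY,
        placed_at  timestamptz NOT NULL DEFAULT now()
    );
    -- ... and so on, eventually for ~3,000 users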
Ben <bench@silentmedia.com> writes:
> I'm considering re-architecting a database to make use of many, many
> schemas.... over time, it would probably grow to be on the order of 3,000
> or so, with each schema holding ~100 tables. Is that an absurd amount, or
> well within postgres' limits? I haven't been able to find any information
> on what penalties one pays as you increase the schema count.

The number of schemas doesn't scare me so much as the number of tables.
Are you using a filesystem that can cope gracefully with 300K files in
one directory? How many of these tables do you anticipate any one
session touching? (That last translates to catalog cache and lock table
space...)

Generally, when someone proposes a scheme like this, they are thinking
that N identical tables are somehow better than one table with an
additional key column. The latter is usually the better design, unless
you have special requirements you didn't mention.

			regards, tom lane
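P.S. By "one table with an additional key column" I mean something roughly like this sketch (column and index names invented for illustration):

    -- a single shared table; an owner_id column replaces per-user schemas
    CREATE TABLE orders (
        owner_id   integer NOT NULL,
        order_id   serial PRIMARY KEY,
        placed_at  timestamptz NOT NULL DEFAULT now()
    );
    CREATE INDEX orders_owner_idx ON orders (owner_id);

    -- each user's queries simply filter on owner_id
    SELECT * FROM orders WHERE owner_id = 42;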
On Wed, 25.04.2007 at 1:14:15 -0400, Tom Lane wrote the following:
> Ben <bench@silentmedia.com> writes:
> > I'm considering re-architecting a database to make use of many, many
> > schemas.... over time, it would probably grow to be on the order of 3,000
> > or so, with each schema holding ~100 tables. Is that an absurd amount, or
> > well within postgres' limits? I haven't been able to find any information
> > on what penalties one pays as you increase the schema count.
>
> The number of schemas doesn't scare me so much as the number of tables.
> Are you using a filesystem that can cope gracefully with 300K files in
> one directory? How many of these tables do you anticipate any one

http://people.planetpostgresql.org/greg/index.php?/archives/37-The-million-table-challenge.html

;-)


Andreas
--
Andreas Kretschmer
Contact: Heynitz: 035242/47150, D1: 0160/7141639 (more: -> header)
GnuPG-ID: 0x3FFF606C, private 0x7F4584DA    http://wwwkeys.de.pgp.net
I'm currently using a conventional single-schema design like the one you suggest, in which every user puts their data into the same schema and keys keep things separate. The problem comes with database upgrades. They're not common, but as I ramp up the number of users, it becomes increasingly infeasible to upgrade everybody at once -- and everybody sharing the same schema has to be on the same schema version.

Each session will probably touch most if not all of the tables eventually, but will only touch a dozen or so from each schema with any regularity.

Is the 300k files/directory issue my only real bottleneck, or should I worry about catalog cache and lock table space too? How would I overcome those last two?

On Apr 24, 2007, at 10:14 PM, Tom Lane wrote:

> The number of schemas doesn't scare me so much as the number of tables.
> Are you using a filesystem that can cope gracefully with 300K files in
> one directory? How many of these tables do you anticipate any one
> session touching? (That last translates to catalog cache and lock table
> space...)
>
> Generally, when someone proposes a scheme like this, they are thinking
> that N identical tables are somehow better than one table with an
> additional key column. The latter is usually the better design, unless
> you have special requirements you didn't mention.
>
> 			regards, tom lane
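P.S. To make the upgrade problem concrete: with one schema per user I could migrate users one at a time, along the lines of the sketch below (the table, column, and version-tracking names are all hypothetical):

    -- upgrade only user_0001 to the new application version;
    -- every other user's schema keeps the old definitions
    BEGIN;
    ALTER TABLE user_0001.orders ADD COLUMN shipped_at timestamptz;
    UPDATE user_0001.schema_version SET version = 2;
    COMMIT;

With the shared-schema design, that kind of ALTER hits everybody's data at once, which is exactly what I'm trying to avoid.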