Thomas Lockhart <lockhart@fourpalms.org> writes:
>> The web site lists a lot of the limitations for postgresql, but not the
>> max number of tables that are allowed in a given db. Anyone care to come
>> up with a number?
> It is limited by your filesystem's ability to handle large directories.
> I have noticed in the past (not with PostgreSQL per se) that ~3000 files
> in a directory is noticeably slower than a typical small directory.
A couple of further notes --- there are Unix filesystems that don't suck
with large directories, but I'm not sure whether any of the ones in
common use have smart directory handling. The typical approach is that
file opens, creates, and deletes all require a linear scan of the directory.
Depending on your Postgres usage, this may or may not pose any
noticeable performance problem for you, even with thousands of tables in
the DB. A backend that's only accessing a small-to-moderate number of
tables will open them only once per session, so in a lot of scenarios
the fact that the OS is a tad slow to open the files won't add up to
anything much.
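(To make the table-to-file mapping concrete: each table and index is
stored as its own file in the database's subdirectory under
$PGDATA/base, so that directory grows with the number of relations.
In 7.1 and later the file is named after the relation's relfilenode,
which you can look up with something like the query below --- 'mytable'
is just a placeholder, substitute your own table name.

    SELECT relname, relfilenode
    FROM pg_class
    WHERE relname = 'mytable';
)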
But still, you'd probably have performance problems once you got to the
range of hundreds of thousands of tables in a database.
Before 7.1, each "large object" counted as a separate table, so it
wasn't too hard to hit this range even with a simple DB design. We don't
do that anymore, though.
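If you want a rough idea of how many files a particular database is
contributing to its directory, counting pg_class entries comes close
(every table, index, and sequence has at least one file; views don't):

    SELECT relkind, count(*)
    FROM pg_class
    GROUP BY relkind;
    -- relkind: 'r' = table, 'i' = index, 'S' = sequence,
    --          't' = toast table, 'v' = view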
regards, tom lane