Lincoln Yeoh wrote:
> I'm not an expert.
>
> Turn off tab completion? It's probably scanning through all the
> possible table names, and the algorithm used probably isn't designed
> for that many. And with 42,000 tables, tab completion may not be
> that helpful anyway.
>
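If it's readline completion that hurts, an easy check is to run psql
with completion switched off. A minimal sketch - "mydb" is a
placeholder database name:

    # Disable readline entirely (no completion, no line editing):
    psql -n mydb

    # Or keep line editing but inhibit completion by adding this
    # readline stanza to ~/.inputrc:
    #   $if psql
    #     set disable-completion on
    #   $endif
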
> Don't use ext2/ext3? There are other filesystems on Linux which
> perform decently with thousands of files in a directory. AFAIK ext2
> and ext3 also put a cap on the size of a single file. I'm not sure
> whether PostgreSQL BLOBs would hit those filesystem limits, or
> whether PostgreSQL splits BLOBs or hits its own limits first - I'd
> just store multi-GB stuff out of the DB.
>
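On ext3 you may not even need to switch filesystems: hashed (htree)
directory indexes make lookups in huge directories cheap, and they
can be enabled on an existing filesystem. A sketch, assuming the data
lives on /dev/sda1 - substitute your own device; the re-index step
needs the filesystem unmounted:

    # Enable htree indexing for directories created from now on:
    tune2fs -O dir_index /dev/sda1

    # Re-index directories that already exist (unmounted fs only):
    e2fsck -fD /dev/sda1
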
> At 01:24 PM 2/20/2005 +0000, Phil Endecott wrote:
>
>> Dear Postgresql experts,
>>
>> I have a single database with one schema per user. Each user has a
>> handful of tables, but there are lots of users, so in total the
>> database has thousands of tables.
>>
>> I'm a bit concerned about scalability as this continues to grow.
>> For example, I find that tab-completion in psql is now unusably
>> slow; if anything more important has the same algorithmic
>> complexity, it will be causing problems too. There are 42,000
>> files in the database directory. This is enough that, with a
>> "traditional" unix filesystem like ext2/3, kernel operations on
>> directories take a significant time. (In other applications I've
>> generally used a guide of 100-1000 files per directory before
>> adding extra layers, but I don't know how valid this is.)
PostgreSQL breaks tables down into 1GB segments, and oversized
attributes get stored into TOAST tables, compressed.
I don't know if this helps in your case, however.
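You can inspect the segment and TOAST layout directly. A sketch,
assuming a table called "bigtable" and that $PGDATA points at your
data directory - both are placeholders:

    # Find the database OID and the table's on-disk file name:
    DBOID=$(psql -Atc "SELECT oid FROM pg_database \
                       WHERE datname = current_database()")
    RELNODE=$(psql -Atc "SELECT relfilenode FROM pg_class \
                         WHERE relname = 'bigtable'")

    # Segments show up as RELNODE, RELNODE.1, RELNODE.2, ... (1GB each):
    ls -l $PGDATA/base/$DBOID/$RELNODE*

    # The TOAST table that holds the oversized, compressed attributes:
    psql -c "SELECT reltoastrelid::regclass FROM pg_class \
             WHERE relname = 'bigtable'"
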
--
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 972-414-9812 E-Mail: ler@lerctr.org
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749