Thread: Create/Erase 5000 Tables in PostgreSQL at Execution Time

Create/Erase 5000 Tables in PostgreSQL at Execution Time

From
"Orlando Giovanny Solarte Delgado"
Date:
I am designing a system that pulls information from several databases
distributed in Interbase (an RDBMS). It is a web application, and each user
can run about 50 queries per session. I can have around 100 simultaneous
users, so I may have up to 5000 concurrent queries. Each query is joined to
a spatial component in PostGIS, so I need to store each query's result in
PostgreSQL to take full advantage of PostGIS. The question is whether, for
each query, I should create a table in PostgreSQL at execution time, use it,
and then drop it. Is a system efficient this way? Is it possible to have
5000 tables in PostgreSQL? What would the performance be like?
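
A minimal sketch of the per-query pattern being asked about, assuming a
hypothetical table and column layout (the names `query_1234_result`, `id`,
and `geom` are illustrative, not from the original post):

```sql
-- Hypothetical scratch table, created at execution time for one query:
-- load the rows fetched from Interbase, run the PostGIS spatial join,
-- then drop the table when the query is finished.
CREATE TABLE query_1234_result (
    id   integer,
    geom geometry      -- PostGIS geometry column for the spatial join
);

-- ... INSERT rows from Interbase, run spatial queries against geom ...

DROP TABLE query_1234_result;
```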

Thanks for your help!



Orlando Giovanny Solarte Delgado
Ingeniero en Electrónica y Telecomunicaciones
Universidad del Cauca, Popayan. Colombia.
E-mail Aux: orlandos@gmail.com






Re: Create/Erase 5000 Tables in PostgreSQL at execution

From
Sergey Moiseev
Date:
Orlando Giovanny Solarte Delgado wrote:
> I am designing a system that pulls information from several databases
> distributed in Interbase (an RDBMS). It is a web application, and each
> user can run about 50 queries per session. I can have around 100
> simultaneous users, so I may have up to 5000 concurrent queries. Each
> query is joined to a spatial component in PostGIS, so I need to store
> each query's result in PostgreSQL to take full advantage of PostGIS.
> The question is whether, for each query, I should create a table in
> PostgreSQL at execution time, use it, and then drop it. Is a system
> efficient this way? Is it possible to have 5000 tables in PostgreSQL?
> What would the performance be like?
>
Use TEMP tables.
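
For example, a session-local scratch table could look like this (the names
are hypothetical). A TEMP table is visible only to the session that created
it, so the ~100 concurrent users cannot collide on table names, and it is
dropped automatically at session end, or at commit if ON COMMIT DROP is
specified:

```sql
-- Visible only to this session; other connections can reuse the same name.
CREATE TEMP TABLE query_result (
    id   integer,
    geom geometry
) ON COMMIT DROP;  -- dropped automatically when the transaction commits

-- ... load the Interbase rows, run the PostGIS spatial join, COMMIT ...
-- no explicit DROP TABLE needed
```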

--
wbr, Sergey Moiseev

Re: Create/Erase 5000 Tables in PostgreSQL at execution

From
Christopher Browne
Date:
> Orlando Giovanny Solarte Delgado wrote:
>> I am designing a system that pulls information from several databases
>> distributed in Interbase (an RDBMS). It is a web application, and each
>> user can run about 50 queries per session. I can have around 100
>> simultaneous users, so I may have up to 5000 concurrent queries. Each
>> query is joined to a spatial component in PostGIS, so I need to store
>> each query's result in PostgreSQL to take full advantage of PostGIS.
>> The question is whether, for each query, I should create a table in
>> PostgreSQL at execution time, use it, and then drop it. Is a system
>> efficient this way? Is it possible to have 5000 tables in PostgreSQL?
>> What would the performance be like?
>>
> Use TEMP tables.

Hmm.  To what degree do temp tables leave dead tuples lying around in
pg_class, pg_attribute, and such?

I expect that each one of these connections will leave a bunch of dead
tuples lying around in the system tables.  The system tables will need
more vacuuming than if the data was placed in some set of
more-persistent tables...

None of this is necessarily bad; you just need to be sure that you
vacuum the right things :-).
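
Concretely, that means vacuuming the system catalogs that churn as tables
are created and dropped, for instance:

```sql
-- Each CREATE/DROP TABLE leaves dead rows in these catalogs;
-- vacuum them regularly so they don't bloat.
VACUUM ANALYZE pg_catalog.pg_class;
VACUUM ANALYZE pg_catalog.pg_attribute;
VACUUM ANALYZE pg_catalog.pg_type;
```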

It is a big drag if system tables get filled with vast quantities of
dead tuples; you can't do things like reindexing them without shutting
down the postmaster.
--
(reverse (concatenate 'string "moc.liamg" "@" "enworbbc"))
http://linuxdatabases.info/info/x.html
"Listen,  strange women, lyin'  in ponds,  distributin' swords,  is no
basis  for a  system of  government. Supreme  executive  power derives
itself from a mandate from  the masses, not from some farcical aquatic
ceremony."  -- Monty Python and the Holy Grail

Re: Create/Erase 5000 Tables in PostgreSQL at execution

From
Sergey Moiseev
Date:
Christopher Browne wrote:
>> Orlando Giovanny Solarte Delgado wrote:
>>> It is a web application, and each user can run about 50 queries per
>>> session. I can have around 100 simultaneous users, so I may have up
>>> to 5000 concurrent queries. Each query is joined to a spatial
>>> component in PostGIS, so I need to store each query's result in
>>> PostgreSQL to take full advantage of PostGIS. The question is
>>> whether, for each query, I should create a table in PostgreSQL at
>>> execution time, use it, and then drop it. Is a system efficient this
>>> way? Is it possible to have 5000 tables in PostgreSQL? What would
>>> the performance be like?

>> Use TEMP tables.

> Hmm.  To what degree do temp tables leave dead tuples lying around in
> pg_class, pg_attribute, and such?
> I expect that each one of these connections will leave a bunch of dead
> tuples lying around in the system tables.  The system tables will need
> more vacuuming than if the data was placed in some set of
> more-persistent tables...
> None of this is necessarily bad; you just need to be sure that you
> vacuum the right things :-).

Since pg_autovacuum exists, you don't need to think about it.
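
In releases where autovacuum is integrated into the server (8.1 and later)
rather than the contrib pg_autovacuum daemon, the relevant knobs live in
postgresql.conf; a sketch of a basic configuration:

```
# postgresql.conf -- integrated autovacuum (PostgreSQL 8.1 and later)
autovacuum = on
autovacuum_naptime = 60              # seconds between autovacuum runs
autovacuum_vacuum_threshold = 50     # min dead rows before vacuuming a table
autovacuum_vacuum_scale_factor = 0.2 # plus this fraction of the table size
```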

--
Wbr, Sergey Moiseev