Thread: Limit to number of tables and table drops
I am creating a web application that uses very small temporary tables
(i.e. tables that are created and then destroyed almost immediately) to
store state data for a particular browser "session". A table might have
four columns and four rows, each containing a small piece of text. The
"sessions" last a minute at best.

I can contemplate (hope for?) as many as 100,000 sessions in an hour.
That would mean 100,000 table creations and destructions per hour, or
getting on for 2,000 per minute.

Will that stress a Postgres database? By "stress" I mean slow response
times or running out of storage. What measures would one take to
minimize that stress? I have looked through the documentation but have
not found anything exactly apposite to this question.

Thanks,

John
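The per-session pattern described above presumably amounts to something
like the following sketch (the table and column names are illustrative
guesses, not taken from the thread):

    -- Created when a browser "session" starts ...
    CREATE TABLE session_ab12cd34 (
        slot1 text,
        slot2 text,
        slot3 text,
        slot4 text
    );

    -- ... holds a few small rows of text for about a minute ...
    INSERT INTO session_ab12cd34 VALUES
        ('r1c1', 'r1c2', 'r1c3', 'r1c4'),
        ('r2c1', 'r2c2', 'r2c3', 'r2c4');

    -- ... and is then dropped.
    DROP TABLE session_ab12cd34;

Whether these are TEMP tables or ordinary tables, each CREATE/DROP pair
adds and removes rows in the system catalogs (pg_class, pg_attribute,
and friends), which is the churn discussed in the reply below.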
John Gage <jsmgage@numericable.fr> writes:
> I am creating a web application that uses very small temporary tables
> (i.e. tables that are created and then destroyed nearly immediately)
> to store state data for a particular browser "session".

> A table might have four columns and four rows each containing a small
> piece of text.

> The "sessions" last a minute at best.

> I can contemplate (hope for?) as many as 100,000 sessions in an hour.
> That would mean 100,000 table creations and destructions in an hour
> (or getting on 2000 per minute).

Well, that's going to create an awful lot of churn in the system
catalogs. You could probably make it work with sufficiently aggressive
autovacuum settings, but I wonder why you are designing it like that.
Seems like it would be better to have one, persistent, table with the
four payload columns plus a session ID column.

			regards, tom lane
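For concreteness, a minimal sketch of the single persistent table Tom
describes, with a cleanup query; the table name, column names, index,
and expiry interval are assumptions for illustration, not from the
thread:

    -- One shared table for all sessions; a session_id column replaces
    -- the per-session CREATE/DROP.
    CREATE TABLE session_state (
        session_id  text        NOT NULL,
        slot1       text,
        slot2       text,
        slot3       text,
        slot4       text,
        created_at  timestamptz NOT NULL DEFAULT now()
    );

    CREATE INDEX session_state_session_id_idx
        ON session_state (session_id);

    -- Ending a session becomes an ordinary DELETE instead of DROP TABLE:
    DELETE FROM session_state WHERE session_id = 'ab12cd34';

    -- Or sweep expired sessions periodically:
    DELETE FROM session_state
     WHERE created_at < now() - interval '5 minutes';

With this layout the dead rows left behind by DELETE are ordinary table
bloat that autovacuum handles routinely, rather than churn in pg_class,
pg_attribute, and the other system catalogs.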
This is the answer I wanted.

The application is going to serve two different functions, teaching and
review. For teaching, these issues don't count, because the level of
activity will be very low. Review involves the entire cohort of students
and will require this redesign.

Thank you very much for answering,

John

On May 21, 2010, at 3:33 PM, Tom Lane wrote:
> Seems like it would be better to have one, persistent, table with the
> four payload columns plus a session ID column.