On Tue, Mar 1, 2016 at 10:52 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Aleksander Alekseev <a.alekseev@postgrespro.ru> writes:
>> There are applications that create and delete a lot of temporary
>> tables. Currently PostgreSQL doesn't handle such a use case well.
>
> True.
>
>> Fast temporary tables work almost as usual temporary tables but they
>> are not present in the catalog. Information about tables is stored in
>> shared memory instead. This way we solve a bloating problem.
>
> I think you have no concept how invasive that would be. Tables not
> represented in the catalogs would be a disaster, because *every single
> part of the backend* would have to be modified to deal with them as
> a distinct code path --- parser, planner, executor, loads and loads
> of utility commands, etc. I do not think we'd accept that. Worse yet,
> you'd also break client-side code that expects to see temp tables in
> the catalogs (consider psql \d, for example).
>
> I think a workable solution to this will still involve catalog entries,
> though maybe they could be "virtual" somehow.
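To make the bloat concrete, here is a minimal sketch of the workload being described (the table name t and the loop count are arbitrary). Each CREATE/DROP cycle inserts and then deletes rows in pg_class, pg_attribute, pg_type, pg_depend, and friends, and the deleted rows accumulate as dead tuples until the catalogs are vacuumed:

-- Churn temporary tables the way the problem workload does.
DO $$
BEGIN
    FOR i IN 1..10000 LOOP
        EXECUTE 'CREATE TEMP TABLE t (a int, b text)';
        EXECUTE 'DROP TABLE t';
    END LOOP;
END $$;

-- After the loop commits (and once the statistics collector has
-- caught up), the dead-tuple counts show the catalog bloat:
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_sys_tables
WHERE relname IN ('pg_class', 'pg_attribute', 'pg_type');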
Yeah, I have a really hard time believing this can ever work. There are MANY catalog tables potentially involved here - pg_class, pg_attribute, pg_attrdef, pg_description, pg_trigger, ... and loads more - and they all can have OID references to each other. If you create a bunch of fake relcache and syscache entries, you're going to need to give them OIDs, but where will those OIDs come from? What guarantees that they aren't in use, or won't be used later while your temporary object still exists? I think making this work would make parallel query look like a minor feature.
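As a hedged illustration of those cross-references (not from the thread itself): even a plain three-way catalog join requires the relation's OID to be consistent across pg_class, pg_attribute, and pg_attrdef, and the same holds for pg_description, pg_trigger, and the rest. 'some_temp_table' below is a placeholder name:

SELECT c.oid AS reloid,
       a.attname,
       d.adbin IS NOT NULL AS has_default
FROM pg_class c
JOIN pg_attribute a ON a.attrelid = c.oid     -- pg_attribute points at pg_class.oid
LEFT JOIN pg_attrdef d ON d.adrelid = c.oid   -- so does pg_attrdef ...
                      AND d.adnum = a.attnum
WHERE c.relname = 'some_temp_table'           -- placeholder
  AND a.attnum > 0;                           -- skip system columns

A fake relcache/syscache entry would have to answer every such join with an OID that the regular OID generator is guaranteed never to hand out while the temporary object exists, which is the hard part being pointed at here.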
Global temporary tables could reduce these issues. Only a little information would have to be private - and it could be accessed via an extra function call. Almost all of the information could be shared in the stable catalog.
The private data are the row counts, the column statistics, and the contents (the relfilenode). Everything else can be taken from the catalog.
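For reference, the model described here already exists elsewhere; the sketch below uses Oracle-style syntax, which is not valid PostgreSQL as of this thread. The definition is created once and lives in the ordinary shared catalog, while each session sees only its own rows:

CREATE GLOBAL TEMPORARY TABLE session_scratch (
    id   int,
    note text
) ON COMMIT DELETE ROWS;   -- contents are private per session/transaction

Because the catalog entry is permanent, thousands of sessions can churn through such a table without inserting or deleting a single catalog row; only the per-session relfilenode, row counts, and column statistics would need backend-private storage, which is exactly the split proposed above.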