Thread: temp schemas
I am noticing a large number of temp schemas in my database. We use temp
tables, but it doesn't appear that the schemas ever get dropped. This
greatly slows down how long it takes pgAdmin to connect, because it
retrieves thousands of pg_temp_% schemas. We mainly use ON COMMIT DROP and
see that the tables are gone after the transaction is complete.

Any ideas why we have so many temp schemas? Is it safe to just drop them all?

Jon
"Roberts, Jon" <Jon.Roberts@asurion.com> writes: > I am noticing a large number of temp schemas in my database. We use > temp tables but it doesn't appear that the schemas get dropped for some > reason. That's intentional. There doesn't seem a lot of value in dropping a catalog entry that'll just have to be created again later. > This greatly slows down how long it takes pgAdmin to connect > because it retrieves thousands of pg_temp_% schemas. Why have you got thousands of them? If you are running with thousands of active backends, may I suggest a connection pooler? (It might be a good idea to fix pgAdmin so it ignores other sessions' temp schemas, though.) regards, tom lane
On Thu, Aug 28, 2008 at 7:23 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> (It might be a good idea to fix pgAdmin so it ignores other sessions'
> temp schemas, though.)

It doesn't use them itself, so normally you'll never see any. It will
show all of them if the Show System Objects option is turned on (it
defaults to off).

--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com
> -----Original Message-----
>
> "Roberts, Jon" <Jon.Roberts@asurion.com> writes:
> > I am noticing a large number of temp schemas in my database. We use
> > temp tables but it doesn't appear that the schemas get dropped for some
> > reason.
>
> That's intentional. There doesn't seem a lot of value in dropping a
> catalog entry that'll just have to be created again later.

This is what I'm trying to understand. At what point does PostgreSQL
determine it needs to create a new temp schema versus reusing an existing
one? Maybe we are doing something incorrectly in our code.

> > This greatly slows down how long it takes pgAdmin to connect
> > because it retrieves thousands of pg_temp_% schemas.
>
> Why have you got thousands of them? If you are running with thousands
> of active backends, may I suggest a connection pooler?

I don't know. It looks like a bug to me: a temp table is created and
dropped on commit, but the next time the function executes and creates a
new temp table, it does so in another temp schema. It does this over and
over until I have thousands of temp schemas that aren't used. On Wednesday,
we had 170,243 temp schemas and today, we have 173,384.

> (It might be a good idea to fix pgAdmin so it ignores other sessions'
> temp schemas, though.)

It looks like this SQL is executed when connecting with pgAdmin, and it
doesn't exclude temp schemas. Even though I don't have pgAdmin configured
to show temp schemas, the SQL doesn't exclude these records. This is with
pgAdmin 1.8.2.
SELECT CASE WHEN nspname LIKE E'pg\\_temp\\_%%' THEN 1
            WHEN nspname LIKE E'pg\\_%' THEN 0
            ELSE 3
       END AS nsptyp,
       nsp.nspname, nsp.oid,
       pg_get_userbyid(nspowner) AS namespaceowner,
       nspacl, description,
       has_schema_privilege(nsp.oid, 'CREATE') AS cancreate
  FROM pg_namespace nsp
  LEFT OUTER JOIN pg_description des ON des.objoid = nsp.oid
 WHERE NOT ((nspname = 'pg_catalog' AND (SELECT count(*) FROM pg_class
               WHERE relname = 'pg_class' AND relnamespace = nsp.oid) > 0)
        OR (nspname = 'pgagent' AND (SELECT count(*) FROM pg_class
               WHERE relname = 'pga_job' AND relnamespace = nsp.oid) > 0)
        OR (nspname = 'information_schema' AND (SELECT count(*) FROM pg_class
               WHERE relname = 'tables' AND relnamespace = nsp.oid) > 0)
        OR (nspname = 'dbo' AND (SELECT count(*) FROM pg_class
               WHERE relname = 'systables' AND relnamespace = nsp.oid) > 0)
        OR (nspname = 'sys' AND (SELECT count(*) FROM pg_class
               WHERE relname = 'all_tables' AND relnamespace = nsp.oid) > 0))
 ORDER BY 1, nspname

Jon
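[Editor's note: a server-side predicate along the following lines would keep other sessions' temp schemas out of the result entirely. This is an illustrative sketch, not the actual pgAdmin change; the E'' syntax assumes PostgreSQL backslash-escape string literals, as used in the query above.]

```sql
-- Sketch: exclude temp (and toast-temp) schemas in the WHERE clause,
-- so thousands of pg_temp_N rows are never sent to the client at all.
SELECT nsp.nspname, nsp.oid
  FROM pg_namespace nsp
 WHERE nsp.nspname NOT LIKE E'pg\\_temp\\_%'
   AND nsp.nspname NOT LIKE E'pg\\_toast\\_temp\\_%'
 ORDER BY nsp.nspname;
```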
On Fri, Aug 29, 2008 at 1:40 PM, Roberts, Jon <Jon.Roberts@asurion.com> wrote:
>> Why have you got thousands of them? If you are running with thousands
>> of active backends, may I suggest a connection pooler?
>
> I don't know. It looks like a bug to me where a temp table is created
> and dropped on commit but the next time the function executes and
> creates a new temp table, it does this in another temp schema. It does
> this over and over until I have thousands of temp schemas that aren't
> used.

Is this vanilla PG, or your Greenplum install?

> It looks this SQL is executing when connecting with pgAdmin which
> doesn't exclude temp schemas. Even though I don't have it configured to
> show temp schemas, the SQL doesn't exclude these records. This is with
> 1.8.2 of pgAdmin.

Oh, that's weird - that code is filtering out system objects client
side for some reason. I'll look at that.

--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com
> On Fri, Aug 29, 2008 at 1:40 PM, Roberts, Jon <Jon.Roberts@asurion.com>
> wrote:
>
> > I don't know. It looks like a bug to me where a temp table is created
> > and dropped on commit but the next time the function executes and
> > creates a new temp table, it does this in another temp schema. It does
> > this over and over until I have thousands of temp schemas that aren't
> > used.
>
> Is this vanilla PG, or your Greenplum install?

Good memory. Both. GP is based on 8.2.6, while our PostgreSQL installs are
8.3.0 and 8.3.1. Here is an example from my local PG database:

C:\>psql elt0n elt0n
Welcome to psql 8.3.0, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help with psql commands
       \g or terminate with semicolon to execute query
       \q to quit

Warning: Console code page (437) differs from Windows code page (1252)
         8-bit characters might not work correctly. See psql reference
         page "Notes for Windows users" for details.

elt0n=# select nspname from pg_namespace where nspname like 'pg_temp%';
 nspname
---------
(0 rows)

elt0n=# create temporary table t1 on commit drop as select 'jon'::text as col1;
SELECT
elt0n=# select nspname from pg_namespace where nspname like 'pg_temp%';
  nspname
-----------
 pg_temp_1
(1 row)

elt0n=# \c elt0n elt0n
You are now connected to database "elt0n".
elt0n=# create temporary table t1 on commit drop as select 'jon'::text as col1;
SELECT
elt0n=# select nspname from pg_namespace where nspname like 'pg_temp%';
  nspname
-----------
 pg_temp_1
 pg_temp_2
(2 rows)

elt0n=# select version();
                       version
-----------------------------------------------------
 PostgreSQL 8.3.0, compiled by Visual C++ build 1400
(1 row)

elt0n=#

So now I have two pg_temp_% schemas that don't seem to ever get reused, and
if I have lots of functions using temp tables, this really slows down a
connection to PG with pgAdmin.

Jon
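[Editor's note: on the earlier question "Is it safe to just drop them all?" — a cautious approach is to generate DROP statements for review rather than dropping blindly. This sketch only produces text; low-numbered schemas may belong to live sessions, so inspect the output before executing any of it.]

```sql
-- Generate (but do not execute) DROP statements for temp schemas.
-- Review the output first: schemas numbered at or below max_connections
-- may be in use by active backends.
SELECT 'DROP SCHEMA ' || nspname || ' CASCADE;'
  FROM pg_namespace
 WHERE nspname LIKE E'pg\\_temp\\_%'
 ORDER BY nspname;
```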
"Roberts, Jon" <Jon.Roberts@asurion.com> writes: > This is what I'm trying to understand. At what point does PostgreSQL > determine it needs to create a new temp schema versus reusing an > existing one? Maybe we are doing something incorrectly in our code. They're associated with backends' slot numbers in a shared memory array ("MyBackendId" in the code). New backends always take the lowest unused slot, so what should happen is that the low-numbered pg_temp_n schemas are re-used constantly, and it should certainly never be possible for one numbered higher than max_connections to appear. > On Wednesday, we had 170,243 temp schemas and today, we have 173,384. This seems flat out impossible in a stock PG build. I see from elsewhere in the thread that you are running a Greenplum-modified backend. What I conclude is that Greenplum has broken the intended behavior, probably by redefining how MyBackendId is set. Better ask them. regards, tom lane
On Fri, Aug 29, 2008 at 2:07 PM, Dave Page <dpage@pgadmin.org> wrote:
> On Fri, Aug 29, 2008 at 1:40 PM, Roberts, Jon <Jon.Roberts@asurion.com> wrote:
>
> Oh, that's weird - that code is filtering out system objects client
> side for some reason. I'll look at that.

I've committed a change to filter out temp schemas server-side if 'show
system objects' is turned off. That should work around this oddity in the
way GP is creating the schemas.

--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com