Speeding up query pulling comments from pg_catalog - Mailing list pgsql-performance

From Ken Tanzer
Subject Speeding up query pulling comments from pg_catalog
Date
Msg-id CAD3a31UEEojeQ9VYZHO0u2j0eyyh5jaYd8Pc=7yfJEHrfzELFg@mail.gmail.com
Responses Re: Speeding up query pulling comments from pg_catalog
List pgsql-performance
Hi.  I've got an app that queries pg_catalog to find any table columns that have comments.  After setting up PgBadger, it was #2 on my list of time consuming queries, with min/max/avg duration of 199/2351/385 ms (across ~12,000 executions logged).

I'm wondering if there are any ways to speed this query up, including whether there's a better way to query this information in the first place.

I'm running PostgreSQL 9.6.14 on CentOS 7.

I've copied the EXPLAIN below.  Let me know if additional info would be helpful.  Thanks in advance!

Ken

ag_reach=> EXPLAIN (ANALYZE, VERBOSE, BUFFERS, TIMING, COSTS)
SELECT c.relname AS table, a.attname AS column, pg_catalog.col_description(a.attrelid, a.attnum) AS comment
FROM pg_catalog.pg_attribute a, pg_class c
WHERE  a.attrelid = c.oid
AND pg_catalog.col_description(a.attrelid, a.attnum) IS NOT NULL;
                                                                                                                                              QUERY PLAN                                                                                                                                              
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=197.09..22533.42 rows=39858 width=160) (actual time=92.538..386.047 rows=8 loops=1)
   Output: c.relname, a.attname, col_description(a.attrelid, (a.attnum)::integer)
   Hash Cond: (a.attrelid = c.oid)
   Buffers: shared hit=81066
   ->  Seq Scan on pg_catalog.pg_attribute a  (cost=0.00..11718.81 rows=41278 width=70) (actual time=76.069..369.410 rows=8 loops=1)
         Output: a.attrelid, a.attname, a.atttypid, a.attstattarget, a.attlen, a.attnum, a.attndims, a.attcacheoff, a.atttypmod, a.attbyval, a.attstorage, a.attalign, a.attnotnull, a.atthasdef, a.attisdropped, a.attislocal, a.attinhcount, a.attcollation, a.attacl, a.attoptions, a.attfdwoptions
         Filter: (col_description(a.attrelid, (a.attnum)::integer) IS NOT NULL)
         Rows Removed by Filter: 40043
         Buffers: shared hit=80939
   ->  Hash  (cost=144.82..144.82 rows=4182 width=68) (actual time=15.932..15.934 rows=4183 loops=1)
         Output: c.relname, c.oid
         Buckets: 8192  Batches: 1  Memory Usage: 473kB
         Buffers: shared hit=103
         ->  Seq Scan on pg_catalog.pg_class c  (cost=0.00..144.82 rows=4182 width=68) (actual time=0.015..7.667 rows=4183 loops=1)
               Output: c.relname, c.oid
               Buffers: shared hit=103
 Planning time: 0.408 ms
 Execution time: 386.148 ms
(18 rows)
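In case it helps frame suggestions: one rewrite I've been experimenting with drives the query from pg_catalog.pg_description directly, so col_description() isn't evaluated once per attribute row. This is only a sketch and I haven't verified it returns identical results in all cases:

```sql
-- Join pg_description directly instead of filtering on col_description(),
-- so we only touch rows that actually have comments.
SELECT c.relname      AS table,
       a.attname      AS column,
       d.description  AS comment
FROM pg_catalog.pg_description d
JOIN pg_catalog.pg_class c     ON c.oid = d.objoid
JOIN pg_catalog.pg_attribute a ON a.attrelid = d.objoid
                              AND a.attnum   = d.objsubid
WHERE d.classoid = 'pg_catalog.pg_class'::regclass  -- comments attached to relations
  AND d.objsubid > 0                                -- column comments, not table comments
  AND NOT a.attisdropped;                           -- skip dropped columns
```

With only 8 commented columns out of ~40,000 attributes, starting from pg_description should let the planner scan a much smaller set of rows.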



--
AGENCY Software  
A Free Software data system
By and for non-profits
(253) 245-3801

learn more about AGENCY or
follow the discussion.
