The query cache - first snapshot (long) - Mailing list pgsql-hackers

From Karel Zak
Subject The query cache - first snapshot (long)
Date
Msg-id Pine.LNX.3.96.1000719095207.25313A-100000@ara.zf.jcu.cz
Responses Re: The query cache - first snapshot (long)  (Bruce Momjian <pgman@candle.pha.pa.us>)
Re: The query cache - first snapshot (long)  (Karel Zak <zakkr@zf.jcu.cz>)
List pgsql-hackers
The Query Cache
~~~~~~~~~~~~~~~

(Excuse me if you receive this email twice; I first sent it with the patch
attached, but this list probably has a size limit, because that email has
still not appeared on the list. Hmm...)
Now, the patch is available at: 
   ftp://ftp2.zf.jcu.cz/users/zakkr/pg/pg_qcache-07182000.patch.tar.gz 

The patch applies to the current (07/18/2000) CVS version. Because the code in
CVS is under very active development, you can download the full PG source with
the query cache already applied from:

       ftp://ftp2.zf.jcu.cz/users/zakkr/pg/pg_qcache-07182000.tar.gz

or you can check out the source from CVS (as of 07/18/2000):

export CVSROOT=":pserver:anoncvs@postgresql.org:/home/projects/pgsql/cvsroot"
cvs login
cvs co -D "07/18/2000 12:00" pgsql
cd pgsql/src
patch -p1 < pgsql-qcache-07182000.patch

The Query Cache and new SPI description
=======================================
    Note: the cache is based on the new memory design.

  Implementation
  ~~~~~~~~~~~~~~

  The qCache allows saving the queryTree and queryPlan. Two spaces are
  available for data caching:

       LOCAL  - data are cached in backend non-shared memory and are not
                available to other backends.

       SHARE  - data are cached in backend shared memory and are visible
                to all backends.

  Because the size of the shared memory pool is limited and is set during
  postmaster start, the qCache must remove old plans when the pool is full.
  You can mark each entry as "REMOVEABLE" or "NOTREMOVEABLE". A removeable
  entry is removed when the pool is full and the entry is last in the list
  that keeps track of entry usage. A not-removeable entry must be removed
  via qCache_Remove() or one of the other routines; the qCache does not
  remove such an entry by itself.

  All records in the qCache are stored in a hash table under some key. The
  qCache knows two kinds of key --- "KEY_STRING" and "KEY_BINARY". A key
  must always be less than or equal to QCACHE_HASH_KEYSIZE (128 bytes).

  The qCache API does not allow direct access to shared memory; all cached
  plans that the API returns are copied to CurrentMemoryContext or to a
  given context. All qCache_* routines lock the shmem themselves (the
  exception is qCache_RemoveOldest_ShareRemoveAble()).

       - a spin lock is used for locking.
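  A minimal sketch of the eviction rule described above may help; none of
  the struct, field or function names below are taken from the patch - they
  are hypothetical illustrations of the REMOVEABLE/NOTREMOVEABLE idea only:

      #include <stdbool.h>
      #include <stddef.h>

      #define QCACHE_HASH_KEYSIZE 128       /* max key size, from the text above */

      /* One cached entry, linked into a usage list (LRU end first). */
      typedef struct qCacheEntrySketch
      {
          struct qCacheEntrySketch *next;   /* next entry toward the MRU end */
          bool    removeable;               /* REMOVEABLE vs. NOTREMOVEABLE */
          char    key[QCACHE_HASH_KEYSIZE]; /* KEY_STRING or KEY_BINARY bytes */
          void   *plan;                     /* cached queryTree / queryPlan */
      } qCacheEntrySketch;

      /*
       * When the pool is full, pick the oldest REMOVEABLE entry for
       * eviction.  NOTREMOVEABLE entries are never picked; they must be
       * freed explicitly (qCache_Remove() in the patch).  Returns NULL if
       * nothing can be evicted.
       */
      static qCacheEntrySketch *
      pick_entry_to_evict(qCacheEntrySketch *usage_lru_head)
      {
          qCacheEntrySketch *e;

          for (e = usage_lru_head; e != NULL; e = e->next)
          {
              if (e->removeable)
                  return e;
          }
          return NULL;
      }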
  Memory management
  ~~~~~~~~~~~~~~~~~

  The qCache's shared pool is organized via memory contexts that are
  independent of the standard aset/mcxt code but use a compatible API ---
  this allows the standard palloc() to be used (which is needed for basic
  plan-tree operations, for example copyObject()). The qCache memory
  management is very similar to the current aset.c code; it also uses
  chunked blocks, but the blocks are smaller - 1024 bytes.

  The number of blocks can be set in the postmaster 'argv' via the '-Z'
  option.

  A separate MemoryContext is used to store each plan. This is a good idea
  (Hiroshi's?), because creating a new context is simple and inexpensive,
  and it makes it easy to destroy (free) a cached plan. This method is used
  in my SPI overhaul instead of feeding TopMemoryContext.
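  To illustrate the per-plan context idea with the stock mcxt/aset API (the
  qCache's own shared-pool contexts use a different, palloc()-compatible
  implementation, so this is only an assumption about the shape of the code,
  not the patch itself):

      #include "postgres.h"
      #include "nodes/nodes.h"        /* copyObject() */
      #include "utils/memutils.h"     /* memory context API */

      /*
       * Sketch: copy a plan tree into its own MemoryContext so that the
       * whole cached plan can later be freed by destroying that context.
       */
      static MemoryContext
      save_plan_in_own_context(void *plan, void **saved_plan)
      {
          MemoryContext plancxt;
          MemoryContext oldcxt;

          plancxt = AllocSetContextCreate(TopMemoryContext,
                                          "cached plan (sketch)",
                                          0,          /* min context size  */
                                          1024,       /* initial block size */
                                          8 * 1024);  /* max block size     */
          oldcxt = MemoryContextSwitchTo(plancxt);
          *saved_plan = copyObject(plan);   /* the copy now lives in plancxt */
          MemoryContextSwitchTo(oldcxt);
          return plancxt;
      }

      /* Freeing the cached plan is just dropping its context. */
      static void
      free_saved_plan(MemoryContext plancxt)
      {
          MemoryContextDelete(plancxt);
      }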
 
  Postmaster
  ~~~~~~~~~~

  The query cache memory is initialized during postmaster startup. The size
  of the query cache pool is set via the '-Z <number-of-blocks>' switch ---
  the default is 100 blocks, where 1 block = 1024 bytes; that is sufficient
  for 20-30 cached plans. One query needs somewhere around 3-10 blocks; for
  example, a query like

       PREPARE sel AS SELECT * FROM pg_class;

  needs 10Kb, because the pg_class table has a great many columns.

  Note: for development I added an SQL function, "SELECT qcache_state();",
  which shows qCache usage.
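  For clarity, the sizing arithmetic implied by '-Z' (the constant and
  function names here are hypothetical, not taken from the patch):

      /*
       * '-Z <number-of-blocks>' sizes the shared query-cache pool:
       * pool bytes = blocks * 1024, so the default of 100 blocks gives a
       * 100 KB pool, and a plan needing 3-10 blocks uses roughly 3-10 KB.
       */
      #define QCACHE_BLOCK_SIZE     1024   /* bytes per block (from the text) */
      #define QCACHE_DEFAULT_BLOCKS 100    /* default for -Z */

      static long
      qcache_pool_bytes(int z_blocks)
      {
          if (z_blocks <= 0)
              z_blocks = QCACHE_DEFAULT_BLOCKS;
          return (long) z_blocks * QCACHE_BLOCK_SIZE;
      }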
SPI
~~~

  I slightly rewrote the SPI plan-saving code and removed the
  TopMemoryContext "feeding" (already discussed).

   Standard SPI:

       SPI_saveplan() - saves each plan into a separate standard memory
                        context.
       SPI_freeplan() - frees a plan.
   By-key SPI:

       This is the SPI interface to the query cache; it allows plans to be
       saved to the SHARED or LOCAL cache under an arbitrary key (string or
       binary). Routines:

       SPI_saveplan_bykey()     - save a plan to the query cache
       SPI_freeplan_bykey()     - remove a plan from the query cache
       SPI_fetchplan_bykey()    - fetch a plan saved in the query cache
       SPI_execp_bykey()        - execute (via SPI) a plan saved in the
                                  query cache

       - Users can now write functions that save plans to shared memory;
         these plans are visible in all backends and are persistent across
         connections.
 
       Example:
       ~~~~~~~~

       /* ----------
        * Save/exec a query from the shared cache via a string key
        * ----------
        */
       int      keySize = 0;
       int      flag    = SPI_BYKEY_SHARE | SPI_BYKEY_STRING;
       char    *key     = "my unique key";

       res = SPI_execp_bykey(values, nulls, tcount, key, flag, keySize);

       if (res == SPI_ERROR_PLANNOTFOUND)
       {
               /* --- plan not in cache - must create it --- */
               void *plan;

               plan = SPI_prepare(querystr, valnum, valtypes);
               SPI_saveplan_bykey(plan, key, keySize, flag);

               /* execute the freshly prepared plan */
               res = SPI_execp(plan, values, nulls, tcount);
       }

       elog(NOTICE, "Processed: %d", SPI_processed);
 

PREPARE/EXECUTE
~~~~~~~~~~~~~~~

  * Syntax:

       PREPARE <name> AS <query>
               [ USING type, ... typeN ]
               [ NOSHARE | SHARE | GLOBAL ]

       EXECUTE <name>
               [ INTO [ TEMPORARY | TEMP ] [ TABLE ] new_table ]
               [ USING val, ... valN ]
               [ NOSHARE | SHARE | GLOBAL ]

       DEALLOCATE PREPARE
               [ <name> [ NOSHARE | SHARE | GLOBAL ] ]
               [ ALL | ALL INTERNAL ]

    I know this is a little outside SQL92... (should we use CREATE/DROP PLAN
    instead?) --- what do the SQL standard gurus say?
  * Where:

       NOSHARE --- cached in the local backend query cache - not accessible
                   from other backends and not persistent across
                   connections.

       SHARE   --- cached in the shared query cache and accessible from all
                   backends that work on the same database.

       GLOBAL  --- cached in the shared query cache and accessible from all
                   backends and all databases.

       - the default is 'SHARE'

       Deallocate:

               ALL          --- deallocate all of the user's plans

               ALL INTERNAL --- deallocate all internal plans, such as plans
                                cached via SPI. This is needed if a user
                                alters/drops a table ...etc.
 
  * Parameters:

       The "USING" part of the PREPARE statement sets the data types of the
       parameters in the query. For example:

       PREPARE sel AS SELECT * FROM pg_class WHERE relname ~~ $1 USING text;

       EXECUTE sel USING 'pg%';

  * Limitation:

       - PREPARE/EXECUTE allows full SELECT/INSERT/DELETE/UPDATE statements.
       - unions, subselects, LIMIT, OFFSET and SELECT INTO are possible.
 

Performance
~~~~~~~~~~~

  * The SPI:

       - For my tests I slightly changed the RI triggers to use the by-key
         SPI API and save plans to the shared qCache instead of the internal
         RI hash table (a hedged sketch of this idea follows at the end of
         this section).

       The queries used by RI are very simple for the parser, so the qCache
       benefit is not very visible. It helps more if backends start up very
       often and the RI checks always touch the same tables; in that
       situation speed goes up by 10-12%. (This snapshot does not include
       this RI change.)

       But everything depends on how complicated the query in the trigger is
       for the parser.
  * PREPARE/EXECUTE:

       - For these tests I used a query that does not touch any table (the
         executor has nothing to do) but is hard work for the parser. An
         example:

       SELECT 'a text ' || (10*10+(100^2))::text || ' next text ' ||
              cast(date_part('year', timestamp 'now') AS text);

       - (10000 * this query):

       standard select:         54 sec
       via prepare/execute:      4 sec   (93% better)

       IMHO that is not bad.

       - For a standard query like:

       SELECT u.usename, r.relname FROM pg_class r, pg_user u
       WHERE r.relowner = u.usesysid;

       it is 10-20% faster with PREPARE/EXECUTE.
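  As a hedged sketch of the RI experiment mentioned under "The SPI" above
  (the actual change is not part of this snapshot, and the SPI_BYKEY_BINARY
  flag and every other name below are assumptions): an RI trigger could
  build a small binary key from the constraint and table OIDs and hand it to
  the by-key interface instead of keeping its own plan hash table.

      #include "postgres.h"
      #include "executor/spi.h"   /* plus the by_key declarations from the patch */

      /*
       * Hypothetical binary key for an RI check plan: it identifies the
       * constraint and the PK/FK tables, so every backend running the same
       * check can reuse the same plan from the shared cache.
       */
      typedef struct RIKeySketch
      {
          Oid     constraint_oid;
          Oid     pk_rel_oid;
          Oid     fk_rel_oid;
      } RIKeySketch;               /* must stay <= QCACHE_HASH_KEYSIZE bytes */

      static int
      ri_exec_check(RIKeySketch *rikey, Datum *values, char *nulls, int tcount)
      {
          /* SPI_BYKEY_BINARY is assumed to exist alongside SPI_BYKEY_STRING */
          int     flag = SPI_BYKEY_SHARE | SPI_BYKEY_BINARY;

          /* Execute the plan cached under this binary key, if there is one. */
          return SPI_execp_bykey(values, nulls, tcount,
                                 (char *) rikey, flag, sizeof(RIKeySketch));
      }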

I would be *very glad* if someone tried and tested the patch; some discussion
is wanted too.

Thanks.
                                  Karel

PS. Excuse me, my English is poor and this text is long --- not a good
    combination...
