Re: pg_stat_statements with query tree based normalization - Mailing list pgsql-hackers

From: Peter Geoghegan
Subject: Re: pg_stat_statements with query tree based normalization
Msg-id: CAEYLb_UC+D0VDJsNgjxwxCsta3GGyGfkwvRgCoZNvX=W3unRLw@mail.gmail.com
In response to: Re: pg_stat_statements with query tree based normalization (Peter Geoghegan <peter@2ndquadrant.com>)
List: pgsql-hackers
The latest revisions of the patch and of the regression tests are attached.

This patch handles synchronisation of the planner and executor
plugins. This was always going to be a problem, regardless of what
implementation was chosen (with the exception of plan hashing). I
think that the idea of hashing Plans themselves from a function hooked
into ExecutorEnd, while interesting and probably worth pursuing at
some other point, is not a good match for pg_stat_statements today.
Any useful tool that did that would be orthogonal to what
pg_stat_statements is supposed to be - it would be affected by
completely external factors like planner cost constants. What would
the canonicalized representation of the query string look like then,
and how would that interact with the selectivity of constants,
particularly over time? What would the DBA's workflow be to isolate
poorly performing plans - how exactly is this tool useful? These are
questions that do not have simple answers, and the fact that I'm not
aware of a successful precedent makes me hesitant to sink too much
time into the idea just yet.

The other proposal that Tom had in the other thread, to directly hash
the raw parse tree, had a number of disadvantages to my mind. Firstly,
it prevents us from benefiting from additional normalisation performed
by the parser itself in later stages, like ignoring noise words.
Secondly, it prevents us from creating a less-complete version of this
that can be used with Postgres 8.4+ stock binaries. We therefore
cannot get field experience/bug reports with the tool in advance of
the release of 9.2. Tom was pretty sceptical of this second reason.
However, I think that it's an issue of major concern to the PostgreSQL
community. For example, the immediate reaction of Depesz, writing
about pg_stat_statements when it was initially committed in 2009, was:

"To sum it up. I think that functionality of the module would be
greatly enhanced if it would store queries without parameters (instead
of “select 2 + 3″ -> “select $1 + $2″, or something like this) –
otherwise, with real-world databases, the buffer for queries will fill
up too soon, and it will not “catch” the fact that “select * from
table where id = 3″ and “select * from table where id = 23″ is
practically the same."

It would be great to have this functionality, but it would be even
better if we could get it to people today, rather than in a couple of
years' time, reassured by the fact that it is largely the same as the
pg_stat_statements that will ship with 9.2. That was not the primary
consideration that informed the design of this patch, though - I
genuinely thought that after the rewriting stage was the correct time
to hash the query tree, as it is at that point that the tree most
accurately reflects what the query will actually do, without involving
how it will do it. This made good sense to me.

I'm now sometimes using a selective serialisation (or "jumble", as
I've termed it to reflect the fact that it is selective), and other
times using the query string instead. Basically, I'll only use a
jumble if all of the following hold (a rough sketch of this check
appears after the list):

* The query is not nested (i.e. it is directly issued by a client).
This generally isn't a problem as nested queries are almost always
prepared. The one case where this is a problem, that I'm aware of, is
when someone dynamically executes SQL through SPI, for example with
plpgsql's EXECUTE. However, that pattern already has plenty of
problems (it doesn't set FOUND, etc.).

* We're not in legacy/compatibility mode (which we aren't by default)

* The query is not prepared. It's not possible to synchronise Planner
and Executor hooks as things stand, at least not without adding
additional infrastructure and complex logic for keeping track of
things.

* The query is not a utility statement (including DECLARE CURSOR
statements) - it is a SELECT, UPDATE, DELETE or INSERT.
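
To make those conditions concrete, here is a minimal sketch of the
check, assuming a helper that is told the nesting level, whether we're
in legacy mode, and whether the statement was prepared; the function
and parameter names are mine, not the patch's.

#include "postgres.h"
#include "nodes/parsenodes.h"

/*
 * Hypothetical helper mirroring the list above: only build a jumble for
 * top-level, non-prepared, non-utility optimizable statements, and fall
 * back to the query string otherwise.
 */
static bool
should_use_jumble(Query *query, int nesting_level, bool legacy_mode,
                  bool is_prepared)
{
    /* nested queries (e.g. dynamic SQL run through SPI) use the string */
    if (nesting_level > 0)
        return false;

    /* legacy/compatibility mode always uses the string */
    if (legacy_mode)
        return false;

    /* prepared queries: planner and executor hooks can't be synchronised */
    if (is_prepared)
        return false;

    /* only plain optimizable statements; utility statements use the string */
    switch (query->commandType)
    {
        case CMD_SELECT:
        case CMD_INSERT:
        case CMD_UPDATE:
        case CMD_DELETE:
            return true;
        default:
            return false;
    }
}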

You might find the fact that it serialises at this later stage
objectionable, because of the potential for that to interact with
things like search_path in ways that might not be appreciated - this
was something that I thought was an advantage. You might also find it
strange that it does one or the other: it might use a jumble for some
query, but the query string for another, similar query that happens to
be prepared. One solution to this could be to hash relationOids along
with the query string in pgss_ExecutorEnd for prepared queries
(sketched below). I don't think this inconsistency is especially worth
concerning ourselves with, though - a better solution might be simply
to document it, as it isn't hard to understand.
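
For what it's worth, the relationOids idea could look something like
the sketch below: fold the OIDs that the PlannedStmt depends on into
the hash of the query string, so that the same prepared text resolving
to different relations (say, after a search_path change) stays
distinct. The helper name and the exact mixing scheme are mine, not
part of the patch.

#include "postgres.h"
#include "access/hash.h"
#include "nodes/plannodes.h"

/* Hypothetical helper for pgss_ExecutorEnd, for prepared queries only. */
static uint32
hash_query_with_relations(const char *query_text, PlannedStmt *plan)
{
    uint32    h;
    ListCell *lc;

    /* start from the hash of the query text */
    h = DatumGetUInt32(hash_any((const unsigned char *) query_text,
                                (int) strlen(query_text)));

    /* fold in each relation OID that the plan depends on */
    foreach(lc, plan->relationOids)
    {
        Oid relid = lfirst_oid(lc);

        /* rotate before mixing so that the order of the OIDs matters */
        h = (h << 1) | (h >> 31);
        h ^= DatumGetUInt32(hash_uint32((uint32) relid));
    }

    return h;
}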

Other likely areas of concern with this patch are:

* My modifications to scan.l and, to a lesser extent, gram.y to
accommodate the new length field in Const nodes are not particularly
elegant. It would be useful to get an expert's input here. In
particular, note the strlen() calls within the SET_YYLLOC_LEN() macro.
It is unfortunate that making the YYLTYPE representation closer to the
default (which has column and row positions for both the start and end
of lexemes) reverberates further than I'd like, including
necessitating modifications to plpgsql, which is why the patch is so
large. It's a mostly mechanical change, though.

* The fact that JUMBLE_SIZE is a compile-time constant, whereas the
query size is not and never has been; this matters when determining
query equivalence for large queries. Basically, it's not obvious when
you've run out of space to store a query jumble, whereas it is obvious
when that happens with the query string (see the first sketch after
this list).

* The possibility of two substantively different queries having the
same jumble, and therefore being considered equivalent. The basic
problem is that I'm not serialising NULL pointers, so the
serialisation logic is currently faulty. It's pretty simple to
demonstrate the problem with something like a binary tree (see the
second sketch after this list), but I thought I'd get input here
before doing something better - it's probably possible to cheat and
not have to serialise every single NULL pointer, which would be quite
wasteful (consider the previous bullet point), but I wouldn't like to
attempt that without input.

* Lack of documentation. The code is fairly heavily commented though.
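
To illustrate the JUMBLE_SIZE point: a fixed-size jumble buffer needs
some explicit signal when it fills up, or two long queries that differ
only beyond the truncation point will silently collapse into one
entry. A minimal sketch of the issue, with names that are mine rather
than the patch's:

#include <stdbool.h>
#include <string.h>

#define JUMBLE_SIZE 1024        /* illustrative compile-time limit */

typedef struct JumbleState
{
    unsigned char buf[JUMBLE_SIZE];
    int           len;          /* bytes used so far */
    bool          overflowed;   /* ran out of room at some point */
} JumbleState;

/* Append one item to the jumble, remembering whether we had to truncate. */
static void
jumble_append(JumbleState *js, const void *item, int size)
{
    int room = JUMBLE_SIZE - js->len;

    if (size > room)
    {
        /*
         * Without this flag there is no obvious way to tell that two
         * distinct queries were cut off at the same point and therefore
         * produced identical jumbles.
         */
        js->overflowed = true;
        size = room;
    }
    memcpy(js->buf + js->len, item, size);
    js->len += size;
}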
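
And to illustrate the NULL-pointer point, the toy program below (plain
C, nothing to do with the real Query tree) walks two binary trees that
differ only in which child of the root is present. A pre-order walk
that skips NULL children serialises both as "AB"; emitting a sentinel
byte for each NULL child keeps them distinct, which is roughly the
property the jumbling logic needs.

#include <stdio.h>

typedef struct Node
{
    char         label;
    struct Node *left;
    struct Node *right;
} Node;

/* Serialise the tree, ignoring NULL children entirely. */
static void
walk_skipping_nulls(const Node *n)
{
    if (n == NULL)
        return;
    putchar(n->label);
    walk_skipping_nulls(n->left);
    walk_skipping_nulls(n->right);
}

/* Serialise the tree, emitting '.' as a sentinel for each NULL child. */
static void
walk_with_sentinels(const Node *n)
{
    if (n == NULL)
    {
        putchar('.');
        return;
    }
    putchar(n->label);
    walk_with_sentinels(n->left);
    walk_with_sentinels(n->right);
}

int
main(void)
{
    Node b = {'B', NULL, NULL};
    Node left_only = {'A', &b, NULL};   /* B is the left child  */
    Node right_only = {'A', NULL, &b};  /* B is the right child */

    walk_skipping_nulls(&left_only);  putchar('\n');  /* prints "AB"    */
    walk_skipping_nulls(&right_only); putchar('\n');  /* prints "AB"    */
    walk_with_sentinels(&left_only);  putchar('\n');  /* prints "AB..." */
    walk_with_sentinels(&right_only); putchar('\n');  /* prints "A.B.." */
    return 0;
}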

--
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services

