Thread: contrib/pg_stat_statements 1202
Here is an updated version of contrib/pg_stat_statements. New modifications from the last version are:

1. New counters in struct Instrumentation.
2. EXPLAIN ANALYZE VERBOSE shows buffer statistics in the 'actual' section.
3. Buffer statistics counters are not reset to zero anymore.

----
1. New counters in struct Instrumentation.
[in include/executor/instrument.h, backend/executor/instrument.c]

The following fields are added. They are used by pg_stat_statements and
EXPLAIN ANALYZE VERBOSE. getrusage() is called for each node.
Write-counters are not included because I think they are not so useful.

    buffer_gets;    /* # of buffer gets */
    buffer_hits;    /* # of buffer hits */
    buffile_reads;  /* # of buffile reads */
    utime;          /* user time in sec */
    stime;          /* sys time in sec */

----
2. EXPLAIN ANALYZE VERBOSE shows buffer statistics in the 'actual' section.
[in backend/commands/explain.c]

I borrowed the idea from Vladimir, "Buffer pool statistics in Explain Analyze".
http://archives.postgresql.org/message-id/1d709ecc0810111624i7419b179v1789b2ca681987c0@mail.gmail.com

Here is a sample output. We'd better add a linebreak before the 'actual'
section because the line is too wide to display.

=# EXPLAIN ANALYZE VERBOSE SELECT * FROM accounts;
                                                                       QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------
 Seq Scan on accounts  (cost=0.00..2688.29 rows=101829 width=97) (actual time=0.072..119.633 rows=100000 loops=1 gets=1670 reads=1638 local_reads=0 CPU=0.06/0.03 sec)
   Output: aid, bid, abalance, filler
 Total runtime: 209.556 ms
(3 rows)

----
3. Buffer statistics counters are not reset to zero anymore.
[in storage/buffer/bufmgr.c]

ResetBufferUsage() saves the current counters in global variables as a
baseline, and buffer statistics are measured as the difference from them,
because the counters are used by struct Instrumentation.

----
Comments welcome.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
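[For illustration, the per-node CPU accounting described above could look roughly like the sketch below. The Instrumentation field names follow the message; the start/stop helpers and the saved-rusage mechanics are assumptions for illustration, not code from the patch.]

    /* Sketch: accumulate getrusage() deltas per plan node (assumed names). */
    #include <sys/time.h>
    #include <sys/resource.h>

    typedef struct Instrumentation
    {
        double  utime;          /* user time in sec */
        double  stime;          /* sys time in sec */
        /* ... existing timing and row counters elided ... */
    } Instrumentation;

    static struct rusage start_ru;      /* snapshot taken at node start */

    static double
    tv_sec(struct timeval tv)
    {
        return tv.tv_sec + tv.tv_usec / 1000000.0;
    }

    void
    InstrStartNodeCpu(Instrumentation *instr)
    {
        getrusage(RUSAGE_SELF, &start_ru);      /* called for each node */
    }

    void
    InstrStopNodeCpu(Instrumentation *instr)
    {
        struct rusage end_ru;

        getrusage(RUSAGE_SELF, &end_ru);
        instr->utime += tv_sec(end_ru.ru_utime) - tv_sec(start_ru.ru_utime);
        instr->stime += tv_sec(end_ru.ru_stime) - tv_sec(start_ru.ru_stime);
    }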
> 2. EXPLAIN ANALYZE VERBOSE shows buffer statistics in the 'actual' section.
I do not get the point of "VERBOSE".

As far as I understand, explain analyze (without verbose) will add the overhead of calculating "gets/hits/cpu" anyway. Why discard that information in "non verbose" mode? Just to make the investigation more complex?
> Write-counters are not included because I think they are not so useful.
Never say never. I guess they (or just one counter for accumulated writes) could be useful for monitoring operations that spill to disk. For instance, PostgreSQL does not show the amount of temp space used for a join.
>     buffer_gets;    /* # of buffer gets */
>     buffer_hits;    /* # of buffer hits */
>     buffile_reads;  /* # of buffile reads */
I guess it makes sense to expand "buffile reads" into "buffer file reads" or just "file reads".
> Here is a sample output. We'd better add a linebreak before
> the 'actual' section because the line is too wide to display.
I wish there was a way to get the results of explain into some table. I wish it was the default output "format". That would make the life of pgAdmin easier, and improve readability even in psql. Don't you think there is something wrong with having "cost=... rows=... loops=..." in each and every row?
> ResetBufferUsage() saves the current counters in global variables as a
> baseline, and buffer statistics are measured as the difference from them,
> because the counters are used by struct Instrumentation.
That would definitely work well for Instrumentation (it will not notice ResetBufferUsage any more); however, that will not isolate the callers who do the reset. I am afraid the idea of having an API for "reset" is broken, and it makes sense to remove that function. However, that looks out of scope for this patch.
Regards,
Vladimir Sitnikov
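[For reference, the baseline scheme discussed above boils down to something like the sketch below. ReadBufferCount is one of the existing bufmgr.c counters; the *Base variable and the accessor are illustrative names, not the actual patch.]

    /* Sketch of the non-destructive reset: the counter itself keeps growing
     * (so struct Instrumentation can keep reading it), and "reset" only moves
     * the baseline that reports are measured against. */
    extern long ReadBufferCount;        /* existing, monotonically increasing */
    static long ReadBufferCountBase;    /* baseline saved by the last reset */

    void
    ResetBufferUsage(void)
    {
        ReadBufferCountBase = ReadBufferCount;  /* snapshot instead of zeroing */
    }

    long
    BufferUsageReads(void)
    {
        return ReadBufferCount - ReadBufferCountBase;
    }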
"Vladimir Sitnikov" <sitnikov.vladimir@gmail.com> writes: > I wish there was a way to get the results of explain into some table. I wish > it was the default output "format". That would make life of pgAdmin easier, > and improve readability even in psql. Do not you think there is something > wrong with having "cost=... rows=... loops=..." in each and every row? A number of people have suggesting we switch to XML. An alternative would be to build up a tuplestore of data and then send that to the client in a separate result set. That's kind of nice because it would give us a way to send both the real results and the explain results. And at least we already have an api for accessing result sets. Oracle's approach is to have the explain command stuff the results into a table. That has advantages for tools, especially if you want to be able to look at plans generated by other sessions. But it's pretty awkward for the simple case. I'm sure there are dozens of ways to skin this cat. Anyone have any more? We probably just have to pick one and run with it. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's Slony Replication support!
"Vladimir Sitnikov" <sitnikov.vladimir@gmail.com> writes:A number of people have suggesting we switch to XML.
> I wish there was a way to get the results of explain into some table. I wish
> it was the default output "format". That would make life of pgAdmin easier,
> and improve readability even in psql. Do not you think there is something
> wrong with having "cost=... rows=... loops=..." in each and every row?
I do not see much benefit of XML:
* XML is not human-readable
* Plain old result set is even easier to process since it is the main PostgreSQL interface at this point
The only benefit of XML I could imagine is that it could provide a nicer markup for sort/hash/etc nodes. It is not that nice to have a column "sort method" that would be empty for nearly all the rows. At the same time, it looks fine to have a column with XML inside for any additional information an execution node wants to provide (like topN/allrows sort, the number of batches in a hash join, or whatever).
> An alternative would be to build up a tuplestore of data and then send that to
> the client in a separate result set. That's kind of nice because it would give
> us a way to send both the real results and the explain results. And at least
> we already have an api for accessing result sets.
Sounds good. As for me, the current output of explain is not very easy to read: it suits only the "find timings for a particular node" workflow (I mean, the source is a particular node, and the result is timings/rows/buffers/etc). However, from my point of view, when it comes to query tuning the main workflow is "find node by suspicious timings". If all the relevant data were displayed in the same column it would be easier to read. Consider all the row counts in the very first column.
> Oracle's approach is to have the explain command stuff the results into a
> table. That has advantages for tools, especially if you want to be able to
> look at plans generated by other sessions.
I do not believe that workflow makes sense. I have never ever thought of it.

An external table makes sense if you have several output formats (say, create a formatting function for psql and let pgAdmin format the plan on its own).
Regards,
Vladimir Sitnikov
On Fri, 5 Dec 2008, Vladimir Sitnikov wrote:

>> Oracle's approach is to have the explain command stuff the results into a
>> table. That has advantages for tools, especially if you want to be able to
>> look at plans generated by other sessions.
>
> I do not believe that workflow makes sense. I have never ever thought of it.

The main benefit is that you can track how EXPLAIN plans change over time.
Archive a snapshot of what the plans for all your common queries look like
every day, and the day one of them blows up and starts doing something wrong
you've got a *lot* more information to work with for figuring out what
happened--whether it was a minor query change, some stats that got slowly
out of whack, etc. I wouldn't immediately dismiss that workflow as senseless
without thinking about it a bit first; there are some good advantages to it.

Greg Sabino Mullane had a neat blog entry on saving plans to tables in
PostgreSQL; unfortunately, the Planet PostgreSQL outage seems to have eaten it.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
> The main benefit is that you can track how EXPLAIN plans change over time.
It is not required to output the plan into some table to be able to track it over time. If EXPLAIN returns a table, it is up to you to perform "insert into history select * from explain(...)".
The workflow that does not make sense to me is "look at plans generated _into some plan_table_ by other sessions in Oracle".

I am 100% sure it really makes sense to have some view like pg_execute_plan that would reveal execution plans for currently running queries (see v$sql_plan in Oracle). However, I would stress once again that I have no idea what the sense could be in "one session explained into plan_table, while the other reads that plan".

Does that make sense?
Vladimir Sitnikov
On Tue, Dec 2, 2008 at 02:35, ITAGAKI Takahiro
<itagaki.takahiro@oss.ntt.co.jp> wrote:
> Here is an updated version of contrib/pg_stat_statements.

Hello again!

I was assigned to review this.

Submission review:
  Is the patch in standard form? Yes
  Does it apply cleanly to current HEAD? Yes (with fuzz)
  Does it include reasonable tests, docs, etc? Yes

Usability review:
  Does the patch actually implement that? Yes
  Do we want that? I think so
  Do we already have it? No
  Does it follow SQL spec, or the community-agreed behavior? Sure
  Are there dangers? No
  Have all the bases been covered? Yes

Feature test:
  Does the feature work as advertised? Yes
  Are there corner cases the author has failed to consider? No

Performance review:
  Does the patch slow down simple tests? Does not seem to...

  (test.sql)
  select * from tenk1 a join tenk1 b using (unique1);

  (dual core machine, --enable-debug, --enable-cassert build)
  pgbench -c 2 -T60 -n -f test.sql

  HEAD:  tps = 9.674423
  PATCH: tps = 9.695784

  If it claims to improve performance, does it?
  Does it slow down other things?

Coding review:
  Does it follow the project coding guidelines? Yes
  Are there portability issues? No
  Will it work on Windows/BSD etc? Think so
  Are the comments sufficient and accurate? I think so
  Does it do what it says, correctly? Yes
  Does it produce compiler warnings? No
  Can you make it crash? No

I'm not sure about the new counters in struct Instrumentation or the
hooks (but did not see anything obviously wrong with them)... A
committer can better comment on those.

Also find attached some very minor verbiage changes.

If there is nothing else on your todo list for this, I'll mark it as
Ready for Committer on the wiki.
"Alex Hunsaker" <badalex@gmail.com> wrote: > I was assigned to review this. Thanks for your reviewing. I assume that the basic concepts are ok and focus of discussion is in: - New counters in struct Instrumentation. (bufferusage and CPU usage) - Should EXPLAIN ANALYZE show those counters. > Performance review > HEAD: tps = 9.674423 > PATCH: tps = 9.695784 > > If it claims to improve performance, does it? > Does it slow down other things? The patch should not slow down normal use if you don't use pg_stat_statements module, but it might slow down EXPLAIN ANALYZE because some fields are added in struct Instrumentation and they are counted up per tuple in EXPLAIN ANALYZE. > Also find attached some very minor verbiage changes. Thanks. I'll apply your fixes. Regards, --- ITAGAKI Takahiro NTT Open Source Software Center
"Vladimir Sitnikov" <sitnikov.vladimir@gmail.com> wrote: > > 2. EXPLAIN ANALYZE VERBOSE shows buffer statistics in 'actual' section. > > > I do not get the point of "VERBOSE". > As far as I understand, explain analyze (without verbose) will anyway add > overhead for calculation of "gets/hits/cpu". Why discard that information in > "non verbose" mode? Just to make the investigation more complex? I thought that output of new counters are too wide and it brakes compatibility of EXPLAIN ANALYZE. On the other hand, we don't have to think of compatibility in EXPLAIN ANALYZE VERBOSE because it is newly added in 8.4. However, overheads should be avoided. We could have two kinds of instrumentations, time-only or all-stats. Another idea is to have buffer and cpu counters not in Instrumentation but in QueryDesc (i.e, only one per query). A new field for the counter will be initialized in pg_stat_statements module. EXPLAIN ANALYZE could also use the field but the result is printed only the end of query plan, as the follows. We can avoid too-wide-line problem with the approach. =# EXPLAIN ANALYZE SELECT ...; QUERY PLAN ---------------------------...Total runtime: 24.273 msCPU usage: user=20.2 sys=2.4 # Buffer usage: gets=100reads=10 writes=20 # added linesTempfile i/o: reads=50 writes=50 # (10 rows) > I wish there was a way to get the results of explain into some table. I wish > it was the default output "format". That would make life of pgAdmin easier, > and improve readability even in psql. Do not you think there is something > wrong with having "cost=... rows=... loops=..." in each and every row? I absolutely agree that XML-explain and TABLE-expalin are useful, but they are independent features from pg_stat_statements. So I'd like to discuss them in separated threads (and it will come in 8.5). Regards, --- ITAGAKI Takahiro NTT Open Source Software Center
> > > 2. EXPLAIN ANALYZE VERBOSE shows buffer statistics in the 'actual' section.
> >
> > I do not get the point of "VERBOSE".
> > As far as I understand, explain analyze (without verbose) will add the
> > overhead of calculating "gets/hits/cpu" anyway. Why discard that information
> > in "non verbose" mode? Just to make the investigation more complex?
>
> I thought that the output of the new counters is too wide and it breaks
> compatibility of EXPLAIN ANALYZE. On the other hand, we don't have to
> think of compatibility in EXPLAIN ANALYZE VERBOSE because it is newly
> added in 8.4. However, overheads should be avoided. We could have two
> kinds of instrumentation, time-only or all-stats.
I am not sure this impact (in terms of compatibility) is really that big. As far as I can tell, pgAdmin parses the modified explain analyze output well. However, pgAdmin does not support "verbose".
The impact in terms of "computational" overhead could be measured. I guess it depends only on the number of calls to the instrumentation (that is, either rows or loops). Thus, after explaining some basic select from generate_series, one could adjust the timings. We could even incorporate this into EXPLAIN ANALYZE, so it would display "cpu is 0.5s, while instrumentation cpu is 0.3s".
> Another idea is to have buffer and cpu counters not in Instrumentation
> but in QueryDesc (i.e., only one per query). A new field for the counters
> will be initialized in the pg_stat_statements module. EXPLAIN ANALYZE
> could also use the field, but the result would be printed only at the end
> of the query plan, as follows. We can avoid the too-wide-line problem with
> this approach.
A single number per query is sufficient only for pg_stat_statements' purposes. That will give an insight into what the top consuming queries are (by cpu time, by gets, etc).
However, a single "gets=... reads=..." is not sufficient to pinpoint the problem, especially in the case of a complex query (which is comparable to "the query returned N rows" vs "this plan node returned N rows").
> =# EXPLAIN ANALYZE SELECT ...;
>  QUERY PLAN
> ---------------------------
>  ...
>  Total runtime: 24.273 ms
>  CPU usage: user=20.2 sys=2.4                 #
>  Buffer usage: gets=100 reads=10 writes=20    # added lines
>  Tempfile i/o: reads=50 writes=50             #
> (10 rows)
I wish pgAdmin (or whatever client) had an option to fetch those counters for each and every SQL query and display the consumed resources on a separate tab. I mean, even before/after a plain "select" (without any explain). That would show you how the query behaves without any instrumentation.
Regards,
Vladimir Sitnikov
On Sun, 7 Dec 2008, Alex Hunsaker wrote:

> (dual core machine, --enable-debug, --enable-cassert build)
> pgbench -c 2 -T60 -n -f test.sql
>
> HEAD:  tps = 9.674423
> PATCH: tps = 9.695784

Two general suggestions here, not specific to this patch:

While it's good to do most testing with debug and cassert turned on, you
shouldn't report performance results with those two flags enabled. What if
the patch has some large amount of overhead that only shows up when compiled
with debug or asserts on? You'd end up reporting a performance loss that
doesn't actually exist in a real build. Unfortunately, the only way to get
good performance results is to have a parallel build done with those off, in
addition to the debug/assert one used to catch bugs.

The above pgbench is executing less than 600 actual tests (60 seconds @
9.7 TPS). That seems a bit short to me. If you sort out the above and run
this again, it would be good to let pgbench run for a lot longer than 1
minute, to see if the results show some more significant difference. With
this few TPS, it would be nice to let that run for 30 minutes or more if
you can find some time to schedule that.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
ITAGAKI Takahiro <itagaki.takahiro@oss.ntt.co.jp> writes:
> I thought that the output of the new counters is too wide and it breaks
> compatibility of EXPLAIN ANALYZE. On the other hand, we don't have to
> think of compatibility in EXPLAIN ANALYZE VERBOSE because it is newly
> added in 8.4. However, overheads should be avoided. We could have two
> kinds of instrumentation, time-only or all-stats.

I've got a serious problem with the way that this patch is being presented.
It's being named and described as though it's "just another contrib module",
but in fact it makes invasive, undocumented changes to the behavior of the
core EXPLAIN functionality --- changes that certainly cannot be claimed to
have been agreed to by the community, since most of us probably weren't
aware that there was any such thing going on inside this patch.

Please split this into two separate patches that can be separately evaluated.

			regards, tom lane
> On the other hand, we don't have to
> think of compatibility in EXPLAIN ANALYZE VERBOSE because it is newly
> added in 8.4.

Uh, it exists for me in 8.2.9.

Welcome to psql 8.2.9, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help with psql commands
       \g or terminate with semicolon to execute query
       \q to quit

portal=# explain analyze verbose select 1;
                                     QUERY PLAN
------------------------------------------------------------------------------------
   {RESULT
   :startup_cost 0.00
   :total_cost 0.01
   :plan_rows 1
   :plan_width 0
   :targetlist (
      {TARGETENTRY
      :expr
         {CONST
         :consttype 23
         :constlen 4
         :constbyval true
         :constisnull false
         :constvalue 4 [ 1 0 0 0 ]
         }
      :resno 1
      :resname ?column?
      :ressortgroupref 0
      :resorigtbl 0
      :resorigcol 0
      :resjunk false
      }
   )
   :qual <>
   :lefttree <>
   :righttree <>
   :initPlan <>
   :extParam (b)
   :allParam (b)
   :nParamExec 0
   :resconstantqual <>
   }

 Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.010..0.012 rows=1 loops=1)
 Total runtime: 0.244 ms
(35 rows)

...Robert
"Robert Haas" <robertmhaas@gmail.com> writes: >> On the other hand, we don't have to >> think of compatibility in EXPLAIN ANALYZE VERBOSE because it is newly >> added in 8.4. > > Uh, it exists for me in 8.2.9. The current behaviour is newly added in 8.4. In 8.2 it meant something completely different and quite useless for end-users in any case, so backwards compatibility isn't important. What strikes me as a convenient approach is basically using EXPLAIN VERBOSE as a playground where we feel free to add everything we think of. If people run a command marked "VERBOSE" and complain it prints too much... As stuff matures and becomes indispensable we could consider moving it to the regular EXPLAIN or implement some way to specify precisely which data the user wants. Or just say XML/table data/whatever will solve the problem for us. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's Slony Replication support!
"Robert Haas" <robertmhaas@gmail.com> writes: >> On the other hand, we don't have to >> think of compatibility in EXPLAIN ANALYZE VERBOSE because it is newly >> added in 8.4. > Uh, it exists for me in 8.2.9. EXPLAIN VERBOSE has existed at least back to 7.0, probably further. However, we've felt free to whack around what it outputs, so maybe the backwards-compatibility issue isn't very strong. A possibly stronger complaint is that ANALYZE and VERBOSE have always been orthogonal options to EXPLAIN, and now there'd be some interaction between them. regards, tom lane
> As stuff matures and becomes indispensable we could consider moving it to the
> regular EXPLAIN or implement some way to specify precisely which data the user
> wants. Or just say XML/table data/whatever will solve the problem for us.

I think some way to specify precisely which data the user wants is the way
to go. The amount of data that there is to be printed is only going to
continue to increase. If the only toggle is a boolean flag to display ALL
or NONE of it, then every time someone proposes a new type of output, we're
going to argue about whether it's useful enough to be worth the display
real estate.

I'm not sure what the best way is, though. I don't think continuing to add
keywords between EXPLAIN and the start of the query is very scalable.
Putting parentheses around the option list seems like it might eliminate a
lot of grammar headaches:

EXPLAIN (option, option, option...) SELECT ...

...Robert
> I'm not sure what the best way is, though. I don't think continuing to
> add keywords between EXPLAIN and the start of the query is very
> scalable. Putting parentheses around the option list seems like it
> might eliminate a lot of grammar headaches:
Do you think it is required to invent special grammar just for presentation purposes?
I guess the database should not deal with presentation. Provided "explain" returns a table, it is up to the client to do the formatting. I do not believe it makes sense creating several different explain outputs, and then redoing all the work in 8.5.

It could still make sense to have several options for "explain" if they would result in different instrumentation (e.g. "explain" vs "explain analyze").
Regards,
Vladimir Sitnikov
On Tue, Dec 9, 2008 at 01:20, Greg Smith <gsmith@gregsmith.com> wrote:
> On Sun, 7 Dec 2008, Alex Hunsaker wrote:
>
>> (dual core machine, --enable-debug, --enable-cassert build)
>> pgbench -c 2 -T60 -n -f test.sql
>>
>> HEAD:  tps = 9.674423
>> PATCH: tps = 9.695784
>
> Two general suggestions here, not specific to this patch:
>
> While it's good to do most testing with debug and cassert turned on, you
> shouldn't report performance results with those two flags enabled. What if
> the patch has some large amount of overhead that only shows up when compiled
> with debug or asserts on? You'd end up reporting a performance loss that
> doesn't actually exist in a real build. Unfortunately, the only way to get
> good performance results is to have a parallel build done with those off, in
> addition to the debug/assert one used to catch bugs.

Right, which is part of the reason I noted it was a cassert build.

> The above pgbench is executing less than 600 actual tests (60 seconds @
> 9.7 TPS). That seems a bit short to me. If you sort out the above and run
> this again, it would be good to let pgbench run for a lot longer than 1
> minute, to see if the results show some more significant difference. With
> this few TPS, it would be nice to let that run for 30 minutes or more if
> you can find some time to schedule that.

Ok, that's useful to know as well, thanks! (I'll go re-run them)

> --
> * Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
On Mon, Dec 8, 2008 at 23:28, ITAGAKI Takahiro
<itagaki.takahiro@oss.ntt.co.jp> wrote:
>
> "Alex Hunsaker" <badalex@gmail.com> wrote:
>
>> I was assigned to review this.
>
> Thanks for your review.
> I assume that the basic concepts are ok and the focus of discussion is on:
>  - the new counters in struct Instrumentation (buffer usage and CPU usage)
>  - whether EXPLAIN ANALYZE should show those counters

Right. I would split out your next patch in 3 parts: the hooks you need,
the contrib module, and the new counters. I think I saw older versions of
the patch that did this... did it just get lost for this version?

>> Performance review
>>   HEAD:  tps = 9.674423
>>   PATCH: tps = 9.695784
>>
>>   If it claims to improve performance, does it?
>>   Does it slow down other things?
>
> The patch should not slow down normal use if you don't use the
> pg_stat_statements module, but it might slow down EXPLAIN ANALYZE
> because some fields are added in struct Instrumentation and
> they are counted up per tuple in EXPLAIN ANALYZE.

Err, yes, sorry. I was just following
http://wiki.postgresql.org/wiki/Reviewing_a_Patch; those two did not seem
pertinent so I did not answer them.
Yes, this is one reasonable option, as is the idea of using XML or a table
and making it the client's problem. Neither are going to happen for this
release, I think.

And in any case it will always be useful to have an option to print all the
available information anyway, so we may as well do that with "verbose".

--
Greg

On 9 Dec 2008, at 16:35, "Robert Haas" <robertmhaas@gmail.com> wrote:

>> As stuff matures and becomes indispensable we could consider moving
>> it to the regular EXPLAIN or implement some way to specify precisely
>> which data the user wants. Or just say XML/table data/whatever will
>> solve the problem for us.
>
> I think some way to specify precisely which data the user wants is the
> way to go. The amount of data that there is to be printed is only
> going to continue to increase. If the only toggle is a boolean flag
> to display ALL or NONE of it, then every time someone proposes a new
> type of output, we're going to argue about whether it's useful enough
> to be worth the display real estate.
>
> I'm not sure what the best way is, though. I don't think continuing to
> add keywords between EXPLAIN and the start of the query is very
> scalable. Putting parentheses around the option list seems like it
> might eliminate a lot of grammar headaches:
>
> EXPLAIN (option, option, option...) SELECT ...
>
> ...Robert
On Tue, Dec 9, 2008 at 8:53 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Tue, Dec 9, 2008 at 12:44 PM, Greg Stark <greg.stark@enterprisedb.com> wrote:
>> Yes, this is one reasonable option, as is the idea of using XML or a table
>> and making it the client's problem. Neither are going to happen for this
>> release, I think.
>
> Agreed.
I 100% agree with that point. Thus I suggest outputting the additional information in "explain analyze", since:

1) it will require minimal code change
2) it will be consistent with previous behaviour
3) it looks like a natural improvement of EXPLAIN's features
4) it will be changed anyway when the "table" output for explain comes
>> And in any case it will always be useful to have an option to print all the
>> available information anyway, so we may as well do that with "verbose".
>
> Sounds very nice.
Can I ask my question once again?

Why don't you want to make "print all the info" the default output format?

On pgsql-performance, people used to recommend: "please, provide EXPLAIN ANALYZE, and not just EXPLAIN". If the default output format is not changed in 8.4, this will turn into "please, provide EXPLAIN ANALYZE VERBOSE, not just EXPLAIN ANALYZE or EXPLAIN". Do you really want that?
Regards,
Vladimir Sitnikov
On Tue, Dec 9, 2008 at 12:44 PM, Greg Stark <greg.stark@enterprisedb.com> wrote:
> Yes, this is one reasonable option, as is the idea of using XML or a table
> and making it the client's problem. Neither are going to happen for this
> release, I think.

Agreed.

> And in any case it will always be useful to have an option to print all the
> available information anyway, so we may as well do that with "verbose".

Sounds very nice.

...Robert
On Sun, Dec 7, 2008 at 19:13, Alex Hunsaker <badalex@gmail.com> wrote:
> On Tue, Dec 2, 2008 at 02:35, ITAGAKI Takahiro
> <itagaki.takahiro@oss.ntt.co.jp> wrote:
>> Here is an updated version of contrib/pg_stat_statements.
>
> Hello again!
>
> I was assigned to review this.
...

Some other things I accidentally left out.

#define GUCNAME(name) ("statistics." name)

Why statistics? Would not something like stat_statements make more sense?
Statistics seems fairly arbitrary...

Also, per the
/* XXX: Should USAGE_EXEC reflect execution time and/or buffer usage? */

Maybe it should be configurable; personally I would want something like
# of calls / time. Mainly because I don't, for instance, really care that
my backups get tracked, but would be more interested in the things that
get called most often that also take the longest. (aka the most bang for
the buck, as far as optimizing those goes...) ?
Tom Lane <tgl@sss.pgh.pa.us> wrote:

> Please split this into two separate patches that can be separately
> evaluated.

Sure. I want to discuss only where to add the counters of buffer usage and
cpu usage, or whether they should be added at all. However, it seems to
affect the future of EXPLAIN ANALYZE, so we might also need to discuss
EXPLAIN. I assume we have 3 choices here:

1. Add those counters to struct Instrumentation. We can get statistics for
   each line in EXPLAIN ANALYZE, but it might have overhead to update the
   counters.
2. Add those counters only to top instruments (one per query). We can get
   accumulated statistics for each query. It might be insufficient for
   complex queries.
3. Do not add any counters. No changes to core, but the usability of the
   pg_stat_statements module would be very poor...

Which should we take? Or is there another idea?

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
"Alex Hunsaker" <badalex@gmail.com> wrote: > #define GUCNAME(name) ("statistics." name) > > Why statistics? > Would not something like stat_statements make more sense? Statistics > seems fairly arbitrary... Not to use duplicated "statements" words; variable names contains "statements" already. - stat_statements.max_statements - stat_statements.track_statements seem to be ugly for me, but avoiding arbitrariness might be more important. If there are agreements, I will to change the prefix. > Also per the > /* XXX: Should USAGE_EXEC reflect execution time and/or buffer usage? */ > > Maybe it should be configurable, personally I would want something > like # of calls / time. Mainly because I don't for instance really > care that my backups get tracked but would be more interested in the > things that get called most often that also take the longest. (aka > the most bang for the buck, as far as optimizing those goes...) Configurability is better, but we need documentations of how to configure them and I have no clear idea for it. Also, we already have means for logging slow queries. We could use the logging for slow queries executed rarely and use the module queries executed many times. Excluding backup scripts is easy; You can create a database role for backup and disable statement-tracking for the user using ALTER ROLE. Regards, --- ITAGAKI Takahiro NTT Open Source Software Center
Here is an updated version of pg_stat_statements.

[Changes]
- A new GUC variable 'explain_analyze_format' is added.
- Statistics counters are consolidated into a variable 'BufferStats'.

"Vladimir Sitnikov" <sitnikov.vladimir@gmail.com> wrote:

> Can I ask my question once again?
> Why don't you want to make "print all the info" the default output format?

I added a new GUC variable 'explain_analyze_format' to print all the
statistics counters in EXPLAIN ANALYZE. We can set special variables in it
to get non-default counters. I don't think everyone always requires all
the info. Of course TABLE-explain and XML-explain could handle the
counters better, but this patch should not include them.

The variable 'BufferStats' cleans up the calculations over many separate
counters. It is exported (marked as PGDLLIMPORT) so that users can add a
user-defined view like pg_stat_session_buffer_usage if needed.

Comments welcome.

[Sample output of explain_analyze_format]

=# SET explain_analyze_format = ' gets=%g reads=%r temp=%t cpu=%p';
=# EXPLAIN ANALYZE SELECT * FROM accounts;
                                                                QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------
 Seq Scan on accounts  (cost=0.00..2640.00 rows=100000 width=97) (actual time=0.054..117.046 rows=100000 loops=1 gets=1640 reads=1608 temp=0 cpu=125.00)
 Total runtime: 208.167 ms
(2 rows)

The following special variables are available:
  %g : # of shared buffer gets (= %h + %r)
  %h : # of shared buffer hits
  %r : # of shared buffer reads
  %w : # of shared buffer writes
  %G : # of local buffer gets (= %H + %R)
  %H : # of local buffer hits
  %R : # of local buffer reads
  %W : # of local buffer writes
  %t : # of buffile reads
  %T : # of buffile writes
  %u : user cpu time
  %s : sys cpu time
  %p : total cpu time (= %u + %s)

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
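[A sketch of how such %-placeholders might be expanded. StringInfo, appendStringInfo and appendStringInfoChar are existing backend facilities (lib/stringinfo.h); the function name and the counters struct (with hits, reads, utime, stime fields) are assumptions for illustration, not the patch's actual code.]

    static void
    expand_stats_format(StringInfo buf, const char *fmt,
                        const BufferCounters *c)    /* struct assumed, see above */
    {
        const char *p;

        for (p = fmt; *p; p++)
        {
            if (*p != '%' || p[1] == '\0')
            {
                appendStringInfoChar(buf, *p);  /* ordinary character */
                continue;
            }
            switch (*++p)
            {
                case 'g':       /* shared buffer gets = hits + reads */
                    appendStringInfo(buf, "%ld", c->hits + c->reads);
                    break;
                case 'h':       /* shared buffer hits */
                    appendStringInfo(buf, "%ld", c->hits);
                    break;
                case 'r':       /* shared buffer reads */
                    appendStringInfo(buf, "%ld", c->reads);
                    break;
                case 'p':       /* total cpu = user + sys */
                    appendStringInfo(buf, "%.2f", c->utime + c->stime);
                    break;
                /* ... the remaining letters would follow the same pattern ... */
                default:
                    appendStringInfoChar(buf, *p);
                    break;
            }
        }
    }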
ITAGAKI Takahiro escreveu:
> - A new GUC variable 'explain_analyze_format' is added.

I'm afraid that this variable name doesn't tell what it means. What about
'explain_analyze_stats_format' or even 'explain_stats_format'?

--
Euler Taveira de Oliveira
http://www.timbira.com/
On Fri, Dec 12, 2008 at 02:51, ITAGAKI Takahiro
<itagaki.takahiro@oss.ntt.co.jp> wrote:
> Here is an updated version of pg_stat_statements.

I took the liberty of splitting this out into 3 patches for easier review:

patch: 001-hooks.patch
 - adds 2 new hooks (startup_hook and shutdown_hook)
 - fixes missing process_shared_preload_libraries on EXEC_BACKEND
 - changes int pgstat_track_activity_query_size to
   PGDLLIMPORT int pgstat_track_activity_query_size
 - adds sourceText to QueryDesc and modifies callers to populate it

patch: 002-explain-fmt.patch
 - adds new guc explain_analyze_format to print out statistic counters
 - adds rtime/utime to the explain analyze printout
 - moves statistic counters (ReadBufferCount etc...) into struct
   Instrumentation via a new struct BufferCounter (allows modules like
   the proposed one easy access)

patch: pg_stat_statements-1212.patch
 the contrib module

A few comments:

Is there a reason you add sourceText to QueryDesc? AFAICT you can do
ActivePortal->sourceText and it will always be populated correctly.

I think the explain_analyze_format guc is a clever way of getting around
the explain analyze verbose you proposed earlier. But I don't see any doc
updates for it.

I'm still not overly fond of the "statistics." custom guc name, but what
can you do...

Other than that it looks good, though I admit I have not had the time to
sit down and thoroughly test it yet...
"Alex Hunsaker" <badalex@gmail.com> wrote: > A few comments: > > Is there a reason you add sourceText to QueryDesc? AFAICT you can do > ActivePortal->sourceText and it will always be populated correctly. That's for nested statements (SQLs called in stored functions). ActivePortal->sourceText shows text of only top most query. > I think the explain_analyze_format guc is a clever way of getting > around the explain analyze verbose you proposed earlier. But I dont > see any doc updates for it. Sure, no docs for now. The guc approach is acceptable? (I'm not sure whether it is the best way...) If ok, I'll write docs for it. > Im still not overly fond of the "statistics." custom guc name, but > what can you do... I have no obsessions with the name. The "pg_stat_statements.*" might be better to avoid confliction of prefix. If so, we'd better to rename variables to kill duplication of "statements" from the names. Ex. statistics.max_statements -> pg_stat_statements.limit statistics.track_statements -> pg_stat_statements.target statistics.saved_file -> pg_stat_statements.saved_file > Other than that it looks good, though I admit I have not had the time > to sit down and thoroughly test it yet... I found another bug in my patch. [pg_stat_statements-1212.patch # pg_stat_statements()] SpinLockAcquire(&entry->mutex); values[i++] = Int64GetDatumFast(entry->counters.calls); values[i++] = Float8GetDatumFast(entry->counters.total_time); values[i++] =Float8GetDatumFast(entry->counters.cpu_user); values[i++] = Float8GetDatumFast(entry->counters.cpu_sys); values[i++]= Int64GetDatumFast(entry->counters.gets); values[i++] = Int64GetDatumFast(entry->counters.reads); values[i++]= Int64GetDatumFast(entry->counters.lreads); values[i++] = Int64GetDatumFast(entry->counters.rows); SpinLockRelease(&entry->mutex); The variables are not protected by spinlock actually when float64 and int64 are passed by reference (especially on 32bit platform). It'd be better to copy values: Counters tmp; /* copy the actual values in spinlock */ SpinLockAcquire(&entry->mutex); tmp = entry->counters; SpinLockRelease(&entry->mutex); /* create a tuple after lock is released. */ values[i++] = Int64GetDatumFast(tmp.calls); values[i++] = Float8GetDatumFast(tmp.total_time); ... Regards, --- ITAGAKI Takahiro NTT Open Source Software Center
On Mon, Dec 22, 2008 at 00:44, ITAGAKI Takahiro
<itagaki.takahiro@oss.ntt.co.jp> wrote:
>
> "Alex Hunsaker" <badalex@gmail.com> wrote:
>
>> A few comments:
>>
>> Is there a reason you add sourceText to QueryDesc? AFAICT you can do
>> ActivePortal->sourceText and it will always be populated correctly.
>
> That's for nested statements (SQL called in stored functions).
> ActivePortal->sourceText shows the text of only the topmost query.
>
>> I think the explain_analyze_format guc is a clever way of getting
>> around the explain analyze verbose you proposed earlier. But I don't
>> see any doc updates for it.
>
> Sure, no docs for now. Is the guc approach acceptable?
> (I'm not sure whether it is the best way...)
> If ok, I'll write docs for it.

I dunno; I'm hoping that with the patches split up and the change made
more visible, some more people might chime in :)

>> I'm still not overly fond of the "statistics." custom guc name, but
>> what can you do...
>
> I have no obsession with the name. The "pg_stat_statements.*" prefix
> might be better to avoid prefix conflicts. If so, we'd better rename the
> variables to remove the duplication of "statements" from the names.
>
> Ex.
>   statistics.max_statements   -> pg_stat_statements.limit
>   statistics.track_statements -> pg_stat_statements.target

How about just pg_stat_statements.track ?

>   statistics.saved_file       -> pg_stat_statements.saved_file

I do like the consistency of having the custom gucs be the same as the
module name; easy to grep or google for.

>> Other than that it looks good, though I admit I have not had the time
>> to sit down and thoroughly test it yet...
>
> I found another bug in my patch.
>
> [pg_stat_statements-1212.patch # pg_stat_statements()]
>
>     SpinLockAcquire(&entry->mutex);
>     values[i++] = Int64GetDatumFast(entry->counters.calls);
>     values[i++] = Float8GetDatumFast(entry->counters.total_time);
>     values[i++] = Float8GetDatumFast(entry->counters.cpu_user);
>     values[i++] = Float8GetDatumFast(entry->counters.cpu_sys);
>     values[i++] = Int64GetDatumFast(entry->counters.gets);
>     values[i++] = Int64GetDatumFast(entry->counters.reads);
>     values[i++] = Int64GetDatumFast(entry->counters.lreads);
>     values[i++] = Int64GetDatumFast(entry->counters.rows);
>     SpinLockRelease(&entry->mutex);
>
> The variables are not actually protected by the spinlock when float64 and
> int64 are passed by reference (especially on 32-bit platforms).
> It'd be better to copy the values:
>
>     Counters tmp;
>
>     /* copy the actual values under the spinlock */
>     SpinLockAcquire(&entry->mutex);
>     tmp = entry->counters;
>     SpinLockRelease(&entry->mutex);
>
>     /* create a tuple after the lock is released */
>     values[i++] = Int64GetDatumFast(tmp.calls);
>     values[i++] = Float8GetDatumFast(tmp.total_time);
>     ...

I've only been testing on 64-bit... maybe that's why I never ran into this.
Here is an updated version of the contrib/pg_stat_statements patch.

"Alex Hunsaker" <badalex@gmail.com> wrote:

>>> I think the explain_analyze_format guc is a clever way of getting
>>> around the explain analyze verbose you proposed earlier. But I don't
>>> see any doc updates for it.

Documentation is added.

> How about just pg_stat_statements.track ?

I renamed the variables to:
 - pg_stat_statements.limit
 - pg_stat_statements.track
 - pg_stat_statements.saved_file

I also modified assign_custom_variable_classes() to accept '_' as a prefix
character, not only 0-9A-Za-z.

> I do like the consistency of having the custom gucs be the same as the
> module name; easy to grep or google for.

Should I also rename the variables used in the auto_explain module?
It uses 'explain.*' now.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
On Thu, Dec 25, 2008 at 23:04, ITAGAKI Takahiro
<itagaki.takahiro@oss.ntt.co.jp> wrote:
> Here is an updated version of the contrib/pg_stat_statements patch.

I'm going to do some more testing, give the typedef Chunk stuff another
look (you did fix the race/not-protected-by-a-spinlock issue you found
earlier, right?), and if all looks good, mark it as Ready for Committer.
(Assuming I find time tonight :))

I think the other questions a committer needs to respond to are, namely:
 - the explain_analyze_format guc: do we want it?
 - the queryDesc->sourceText changes (they look good to me, though, and I
   don't see any other obvious way to do it)
 - rtime/utime in the explain analyze printout
 - moving the statistic counters (ReadBufferCount etc...) into struct
   Instrumentation via a new struct BufferCounter (looks like a good
   cleanup regardless...)

> Should I also rename the variables used in the auto_explain module?
> It uses 'explain.*' now.

Well, on second thought, you *do* have to manually define them in
custom_variable_classes, so maybe it's fine. Personally, though, I would
like them to be auto_explain.*... it seems harder to mistake them later as
having something to do with EXPLAIN.
On Mon, Dec 29, 2008 at 15:46, Alex Hunsaker <badalex@gmail.com> wrote:
> On Thu, Dec 25, 2008 at 23:04, ITAGAKI Takahiro
> <itagaki.takahiro@oss.ntt.co.jp> wrote:
>> Here is an updated version of the contrib/pg_stat_statements patch.
>
> I'm going to do some more testing, give the typedef Chunk stuff another
> look (you did fix the race/not-protected-by-a-spinlock issue you found
> earlier, right?), and if all looks good, mark it as Ready for Committer.
> (Assuming I find time tonight :))

Ok, I ran a portion of tpc-h with 4 simultaneous clients doing the same
query on a 32-bit dual core machine about a hundred times (let's call this
A). I then compared the # of calls, # of gets and # of rows from the
pg_stat_statements view to a single run (let's call this B) to make sure
they matched (i.e. made sure A.gets = A.calls * B.gets); that part all
looks good. So I think if there was a race, you squashed it.

My only nit this time around was patch complaining about (Stripping
trailing CRs from patch.), but that's no big deal.

So I'm going to mark it as Ready for Committer.
"Alex Hunsaker" <badalex@gmail.com> writes: > ... So Im going to mark it as > ready for commmiter. Has this patch been tested on Windows? (Or more generally, with EXEC_BACKEND?) The reason I ask is that eyeballing the code suggests a couple of major problems in that area: * the startup/shutdown hooks will be installed in the postmaster process, but the patch expects them to be executed in a child process. I think nothing will happen. * in an EXEC_BACKEND situation, we re-execute process_shared_preload_libraries() when starting a fresh backend (but not in other kinds of child processes, which is why the other problem is a problem). This means re-executing the _PG_init function, which will try to redefine the custom GUC variables, which will fail. I don't think this is really a bug in this patch per se, it's a bug in the custom-GUC support; but nonetheless it looks like a problem. regards, tom lane
On Thu, Jan 1, 2009 at 17:28, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> "Alex Hunsaker" <badalex@gmail.com> writes:
>> ... So I'm going to mark it as Ready for Committer.
>
> Has this patch been tested on Windows? (Or more generally, with EXEC_BACKEND?)

I was under the impression that's where Itagaki-san develops. You'll note
some other Windows-specific changes:
 - pgstat_track_activity_query_size gains PGDLLIMPORT
 - process_shared_preload_libraries() also seems of interest:
   http://archives.postgresql.org/pgsql-hackers/2008-12/msg01416.php
 - various carriage returns in the patch...

I could be wrong though.

> The reason I ask is that eyeballing the code suggests a couple of major
> problems in that area:
>
> * the startup/shutdown hooks will be installed in the postmaster
> process, but the patch expects them to be executed in a child process.
> I think nothing will happen.

I dunno about this one; I'm not very familiar with EXEC_BACKEND.

> * in an EXEC_BACKEND situation, we re-execute
> process_shared_preload_libraries() when starting a fresh backend
> (but not in other kinds of child processes, which is why the other
> problem is a problem). This means re-executing the _PG_init function,
> which will try to redefine the custom GUC variables, which will fail.
> I don't think this is really a bug in this patch per se, it's a bug
> in the custom-GUC support; but nonetheless it looks like a problem.

I see 3 options:
 - add a GUC_CUSTOM_REDEFINE flag
 - ignore redefines of custom gucs
 - change define_custom_variable() to return a bool (or something):
   true if it got added, false if it was already there

Seems to me we could probably just ignore any redefines of custom gucs
outright. I'm not too worried about some other module picking the same
custom guc. And frankly, the op should notice, especially when they go to
add it to custom_variable_classes.
I wrote:
> * in an EXEC_BACKEND situation, we re-execute
> process_shared_preload_libraries() when starting a fresh backend
> (but not in other kinds of child processes, which is why the other
> problem is a problem). This means re-executing the _PG_init function,
> which will try to redefine the custom GUC variables, which will fail.
> I don't think this is really a bug in this patch per se, it's a bug
> in the custom-GUC support; but nonetheless it looks like a problem.

Oh, never mind that part. I was thinking that the child process would
already know the real definition of the custom variable at the time it
tries to load the shared library, but actually the mechanism for pushing
GUC values into EXEC_BACKEND child processes doesn't transfer the whole
variable definition. It causes any such values to be loaded as
placeholders, and then the later load of the shared library converts the
placeholder to a regular variable. So it all works, or nearly anyway:
the code fails on a custom variable class whose name alphabetically
precedes "custom_variable_class".

http://archives.postgresql.org/pgsql-committers/2009-01/msg00008.php

			regards, tom lane
On Thu, Jan 1, 2009 at 19:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> I wrote:
>> * in an EXEC_BACKEND situation, we re-execute
>> process_shared_preload_libraries() when starting a fresh backend
>> (but not in other kinds of child processes, which is why the other
>> problem is a problem). This means re-executing the _PG_init function,
>> which will try to redefine the custom GUC variables, which will fail.
>> I don't think this is really a bug in this patch per se, it's a bug
>> in the custom-GUC support; but nonetheless it looks like a problem.
>
> Oh, never mind that part. I was thinking that the child process would
> already know the real definition of the custom variable at the time it
> tries to load the shared library, but actually the mechanism for pushing
> GUC values into EXEC_BACKEND child processes doesn't transfer the whole
> variable definition. It causes any such values to be loaded as
> placeholders, and then the later load of the shared library converts the
> placeholder to a regular variable. So it all works, or nearly anyway:
> the code fails on a custom variable class whose name alphabetically
> precedes "custom_variable_class".

Cool! Err, interesting...

> http://archives.postgresql.org/pgsql-committers/2009-01/msg00008.php

Yeah, I saw your commits just shortly after hitting send on my reply :)
I wrote:
> * the startup/shutdown hooks will be installed in the postmaster
> process, but the patch expects them to be executed in a child process.
> I think nothing will happen.

OK, I figured out what is happening there: the patch makes it work by
means of this expedient:

diff -cprN HEAD/src/backend/storage/lmgr/proc.c pg_stat_statements/src/backend/storage/lmgr/proc.c
*** HEAD/src/backend/storage/lmgr/proc.c	2008-12-10 02:03:02.366590000 +0900
--- pg_stat_statements/src/backend/storage/lmgr/proc.c	2008-12-26 14:51:58.062500000 +0900
*************** InitAuxiliaryProcess(void)
*** 381,386 ****
--- 381,390 ----
  	if (MyProc != NULL)
  		elog(ERROR, "you already exist");

+ #ifdef EXEC_BACKEND
+ 	process_shared_preload_libraries();
+ #endif
+
  	/*
  	 * We use the ProcStructLock to protect assignment and releasing of
  	 * AuxiliaryProcs entries.

I find this mighty Rube Goldbergian. We have a startup hook which is
declared in include/storage/ipc.h, but defined and called in bootstrap.c
(whence it will actually be executed down inside the startup process).
We have a shutdown hook which is also declared in include/storage/ipc.h,
but defined and called in bgwriter.c (executed in the bgwriter process,
of course). And to make those hooks work in the EXEC_BACKEND case, we
have a kluge inserted in proc.c, miles away from where the existing
process_shared_preload_libraries() calls are (in postmaster.c), and with
extremely high probability of someday resulting in duplicate preload
operations if the postmaster.c code gets shuffled.

I don't really care for the idea of initializing the pg_stat_statements
shared memory down inside the startup process. All the rest of shared
memory is initialized by the postmaster process itself, and I think
pg_stat_statements should probably work the same way. Normally I am
against having more functionality in the postmaster than absolutely
necessary, but I don't see any robustness loss in this situation: if we
have the pg_stat_statements init code in the startup subprocess, and it
dies, we'll abort startup anyway. So we might as well run it directly in
the postmaster.

I think the right place is probably at the bottom of
CreateSharedMemoryAndSemaphores(). This will serve two purposes: to
create the pg_stat_statements shared memory when run in the postmaster,
or to re-attach to it when starting an EXEC_BACKEND child. The existing
coding in the patch that will try to create or attach to the shared
memory at any old random instant ought to go away --- if one of these
functions is called and doesn't find the pgss shared memory pointer
ready-to-go, it should error out or return quietly as appropriate. That
would indicate that pg_stat_statements.so got loaded into an
already-running backend, which means that the shared memory situation
can't possibly be right.

As for the shutdown hook, I don't think we need it at all in this design.
When loaded into the postmaster process, pg_stat_statements can insert
itself into the on_proc_exit or on_shmem_exit hook lists ... it doesn't
need a private hook.

BTW, as for the question of where to call
process_shared_preload_libraries: we currently have postmaster.c doing
this for itself, and when spawning a regular backend child in the
EXEC_BACKEND case. We don't do it for other child process types; which is
the omission that Itagaki-san tried to work around with the above diff
hunk. You could argue that this is a case of premature optimization
(a/k/a the root of all evil).
I think the idea was that we'd not possibly need any loadable libraries
installed in special child processes --- but that seems more than a little
bit dubious if you think about a loadable library whose goal is to monitor
system activity, as opposed to implementing some SQL-callable
functionality. Moreover, it fails to duplicate what will happen in the
non-EXEC_BACKEND case; all child processes will inherit loadable libraries
then. So I'm thinking that Itagaki-san had the right idea in wanting to
make auxiliary processes load libraries, just the wrong implementation.

The right way to make that happen is to rearrange the coding in
SubPostmasterMain() so that process_shared_preload_libraries is done in
all cases, just after the read_nondefault_variables call.

Comments?

			regards, tom lane
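[For what it's worth, the create-or-attach behavior Tom describes is what the standard shmem API already gives a module at the point CreateSharedMemoryAndSemaphores() runs. A sketch of what the module-side code could look like is below; ShmemInitStruct(), AddinShmemInitLock, LWLockAcquire/Release and SpinLockInit() are existing core facilities, while the pgss names and struct contents are illustrative only.]

    typedef struct pgssSharedState
    {
        slock_t mutex;          /* protects the counters */
        /* ... hash table header, counters, etc. elided ... */
    } pgssSharedState;

    static pgssSharedState *pgss = NULL;

    static void
    pgss_shmem_startup(void)
    {
        bool    found;

        LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE);

        /* In the postmaster this creates the struct; in an EXEC_BACKEND
         * child it finds and re-attaches to the existing one. */
        pgss = ShmemInitStruct("pg_stat_statements",
                               sizeof(pgssSharedState), &found);
        if (!found)
        {
            /* first time through: initialize contents */
            memset(pgss, 0, sizeof(pgssSharedState));
            SpinLockInit(&pgss->mutex);
        }

        LWLockRelease(AddinShmemInitLock);
    }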
On Fri, Jan 2, 2009 at 20:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> I wrote:
>> * the startup/shutdown hooks will be installed in the postmaster
>> process, but the patch expects them to be executed in a child process.
>> I think nothing will happen.
>
> OK, I figured out what is happening there: the patch makes it work by
> means of this expedient:

<snip>

> I find this mighty Rube Goldbergian. We have a startup hook which is
> declared in include/storage/ipc.h, but defined and called in bootstrap.c
> (whence it will actually be executed down inside the startup process).
> We have a shutdown hook which is also declared in include/storage/ipc.h,
> but defined and called in bgwriter.c (executed in the bgwriter process,
> of course). And to make those hooks work in the EXEC_BACKEND case, we
> have a kluge inserted in proc.c, miles away from where the existing
> process_shared_preload_libraries() calls are (in postmaster.c), and with
> extremely high probability of someday resulting in duplicate preload
> operations if the postmaster.c code gets shuffled.

Kudos to Itagaki-san for getting that to work?

> As for the shutdown hook, I don't think we need it at all in this
> design. When loaded into the postmaster process, pg_stat_statements can
> insert itself into the on_proc_exit or on_shmem_exit hook lists ... it
> doesn't need a private hook.

Ok, cool.

> The right way to make that happen is to rearrange the coding in
> SubPostmasterMain() so that process_shared_preload_libraries is
> done in all cases, just after the read_nondefault_variables call.

This should also fix the rmgr hooks patch on EXEC_BACKEND.
ITAGAKI Takahiro <itagaki.takahiro@oss.ntt.co.jp> writes:
> "Alex Hunsaker" <badalex@gmail.com> wrote:
>> How about just pg_stat_statements.track ?

> I renamed the variables to:
>  - pg_stat_statements.limit
>  - pg_stat_statements.track
>  - pg_stat_statements.saved_file

Hmm, this has got a problem:

regression=# show pg_stat_statements.limit ;
ERROR:  syntax error at or near "limit"
LINE 1: show pg_stat_statements.limit ;
                                ^

LIMIT is a reserved word ... I thought max_statements was fine, but if
people think that's too long I guess we could go with just "max".

I'm not enamored of "saved_file" either; it seems like the wrong part of
speech somehow. Maybe "save_in_file"?

			regards, tom lane
I wrote:
> I'm not enamored of "saved_file" either; it seems like the wrong
> part of speech somehow. Maybe "save_in_file"?

Actually ... what is the point of letting users control the filename at
all? It seems like the only useful nondefault value is the empty string
(to suppress storing the stats). So why not simplify this to a boolean?
"pg_stat_statements.save" = on or off.

			regards, tom lane
ITAGAKI Takahiro <itagaki.takahiro@oss.ntt.co.jp> writes:
> Here is an updated version of the contrib/pg_stat_statements patch.

I've committed this with significant revisions. Other than the points
already mentioned in previous messages:

* I removed the proposed changes to the behavior of the core EXPLAIN code.
I think that that should be submitted and discussed as a separate patch,
not slide in under the misleading title of being a "contrib module". I'm
personally against those changes anyway, on two grounds:

1. The proposed change to track system/user CPU time presents an enormous
cost, and no argument has been made to show that there is any comparable
benefit. The change causes each EXPLAIN ANALYZE tracking call to invoke
getrusage() as well as gettimeofday(). I did a little bit of testing and
found that this is over seven times slower on Fedora 9 on x86_64 (Xeon
hardware) and over twenty-seven times slower on Darwin (on Core 2 Duo
hardware). Considering that EXPLAIN ANALYZE overhead is already higher
than anyone would like, you would need a pretty darn convincing argument
to persuade us to accept that kind of slowdown. At the very least the
code would need to be modified so that it doesn't execute getrusage()
unless the user is actually going to look at the results.

2. I'm unconvinced by the proposed changes to accumulate backend-local
I/O counters, too. The fact of the matter is that those counters are left
over from Berkeley days, a time when PG hackers tended to do their
performance measurements in standalone backends (!). They're obviously
not the full story now on write measurements, and I don't have any
confidence in them as read measurements either, particularly seeing that
the wave of the future is likely to be asynchronous read operations (with
the posix_fadvise patch and foreseeable follow-on work). I think those
counters should more likely be done away with than institutionalized in
EXPLAIN ANALYZE output. You can get more reliable information about
what's happening from the existing pgstats system-wide I/O counts.

* I changed the default track setting to "top". I don't see the
likelihood that someone would load this module into their server and not
want it turned on.

* I'm not entirely seeing the point of a server-wide tracking facility
that only counts SELECT/INSERT/UPDATE/DELETE. ISTM this should be
modified to count utility commands too; which probably means adding some
hooks around ProcessUtility (and what about autovacuum?). I left that
work for someone else to do, though.

* As already mentioned, I find the entry_dealloc logic pretty darn
dubious; but I didn't touch that either in this go-round. If we do keep
it in its current form, ISTM that usage ought to be proportional to total
execution time, not total call count.

			regards, tom lane
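[The relative-cost measurement in point 1 is easy to reproduce with a standalone micro-benchmark along these lines; this is a sketch, not code from the thread, and the actual ratios will vary by platform.]

    /* Time N back-to-back calls of gettimeofday() vs. getrusage(). */
    #include <stdio.h>
    #include <sys/time.h>
    #include <sys/resource.h>

    #define N 1000000

    static double
    elapsed(struct timeval a, struct timeval b)
    {
        return (b.tv_sec - a.tv_sec) + (b.tv_usec - a.tv_usec) / 1e6;
    }

    int
    main(void)
    {
        struct timeval t0, t1, tv;
        struct rusage ru;
        int i;

        gettimeofday(&t0, NULL);
        for (i = 0; i < N; i++)
            gettimeofday(&tv, NULL);
        gettimeofday(&t1, NULL);
        printf("gettimeofday: %.3f sec for %d calls\n", elapsed(t0, t1), N);

        gettimeofday(&t0, NULL);
        for (i = 0; i < N; i++)
            getrusage(RUSAGE_SELF, &ru);
        gettimeofday(&t1, NULL);
        printf("getrusage:    %.3f sec for %d calls\n", elapsed(t0, t1), N);

        return 0;
    }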
Thank you for all the work.

Tom Lane <tgl@sss.pgh.pa.us> wrote:

> 1. The proposed change to track system/user CPU time presents an
> enormous cost, and no argument has been made to show that there is any
> comparable benefit.

> 2. I'm unconvinced by the proposed changes to accumulate backend-local
> I/O counters, too. The fact of the matter is that those counters are
> left over from Berkeley days, a time when PG hackers tended to do their
> performance measurements in standalone backends (!).

Ok, I need to reconsider the performance and design of the I/O counters.
I think that information is still useful because we can determine not
only which query is bad, but also why the query is bad, *without
repeating the query on production servers*. But the reworking will be
done in 8.5...

> * I changed the default track setting to "top". I don't see the
> likelihood that someone would load this module into their server
> and not want it turned on.

Looks good. Setting shared_preload_libraries is enough for normal use.

> * I'm not entirely seeing the point of a server-wide tracking facility
> that only counts SELECT/INSERT/UPDATE/DELETE. ISTM this should be
> modified to count utility commands too; which probably means adding
> some hooks around ProcessUtility (and what about autovacuum?). I left
> that work for someone else to do, though.

Sure, but we should also consider that utility commands should not kick
out DML queries, because maintenance commands typically take a long time.

> * As already mentioned, I find the entry_dealloc logic pretty darn
> dubious; but I didn't touch that either in this go-round. If we do
> keep it in its current form, ISTM that usage ought to be proportional
> to total execution time, not total call count.

I think the current implementation is not ideal, either. I also don't
like using fixed-size shared memory. I'm thinking that shared memory
used by extension modules should be allocated from the shared buffer
pool in the future. If we could do that, memory management would be
flexible and we could load libraries after the server starts.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
Tom Lane <tgl@sss.pgh.pa.us> writes:
> 2. I'm unconvinced by the proposed changes to accumulate backend-local
> I/O counters, too. The fact of the matter is that those counters are
> left over from Berkeley days, a time when PG hackers tended to do their
> performance measurements in standalone backends (!). They're obviously
> not the full story now on write measurements, and I don't have any
> confidence in them as read measurements either, particularly seeing that
> the wave of the future is likely to be asynchronous read operations
> (with the posix_fadvise patch and foreseeable follow-on work). I think
> those counters should more likely be done away with than
> institutionalized in EXPLAIN ANALYZE output. You can get more reliable
> information about what's happening from the existing pgstats system-wide
> I/O counts.

It's clear that these counters are not the whole picture. But I think that
just means we need more counters with more sources of data, not that we
need to get rid of the ones we have. There's no reason we shouldn't have
counters for advised buffers along with reads. I'm also picturing using
dtrace to find out how many reads were satisfied from cache and how many
required physical reads, for example.

The system-wide stats satisfy a very different need from this. The
sysadmin or DBA might want to know about system-wide stats, but a
programmer or a DBA analyzing a specific query needs to know what that
query is doing.

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's RemoteDBA services!