Thread: SELECT constant; takes 15x longer on 9.0?
Hackers,

Continuing the performance test: DBD, like a number of monitoring systems,
does "pings" on the database which look like this:

    SELECT 'DBD::Pg ping test';

In our test, which does 5801 of these pings during the test, they take an
average of 15x longer to execute on 9.0 than on 8.4 (0.77ms vs. 0.05ms).

Any clue why this would be?

--
Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com
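[Editorial note, not part of the original mail: the quoted figures imply the following back-of-the-envelope numbers, using only the values from the message above.]

```python
# Quick sanity check of the figures quoted above
# (5801 pings, 0.77 ms/ping on 9.0 vs. 0.05 ms/ping on 8.4).
pings = 5801
per_ping_90 = 0.77e-3   # seconds per ping on 9.0
per_ping_84 = 0.05e-3   # seconds per ping on 8.4

ratio = per_ping_90 / per_ping_84                     # ~15.4x slowdown
extra_seconds = pings * (per_ping_90 - per_ping_84)   # ~4.2 s added overall

print(f"{ratio:.1f}x slower, {extra_seconds:.2f}s extra across the test")
```

So the per-ping difference, summed over the whole run, accounts for about four seconds of the test-suite slowdown discussed later in the thread.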
Josh Berkus <josh@agliodbs.com> wrote:

> SELECT 'DBD::Pg ping test';
>
> In our test, which does 5801 of these pings during the test, they take
> an average of 15x longer to execute on 9.0 than on 8.4 (0.77ms vs. 0.05ms).
>
> Any clue why this would be?

Did you use the same configure options between them? For example,
--enable-debug or --enable-cassert.

Regards,
---
Takahiro Itagaki
NTT Open Source Software Center
On Tue, Apr 6, 2010 at 1:47 AM, Josh Berkus <josh@agliodbs.com> wrote:
> Hackers,
>
> Continuing the performance test:
>
> DBD, like a number of monitoring systems, does "pings" on the database
> which look like this:
>
> SELECT 'DBD::Pg ping test';
>
> In our test, which does 5801 of these pings during the test, they take
> an average of 15x longer to execute on 9.0 than on 8.4 (0.77ms vs. 0.05ms).

Did your pings change? On my machine the query ';' completes in about
0.05ms, but any select takes 0.19 - 0.25ms. 0.77 is awfully high -- there
has to be an explanation.

merlin
Josh Berkus <josh@agliodbs.com> writes:
> Continuing the performance test:
> DBD, like a number of monitoring systems, does "pings" on the database
> which look like this:
> SELECT 'DBD::Pg ping test';
> In our test, which does 5801 of these pings during the test, they take
> an average of 15x longer to execute on 9.0 than on 8.4 (0.77ms vs. 0.05ms).

There's something wrong with your test setup. Or, if you'd like me to
think that there isn't, provide a self-contained test case. I ran a small
program that does

    for (i = 0; i < 10000; i++)
    {
        res = PQexec(conn, "SELECT 'DBD::Pg ping test'");
        PQclear(res);
    }

and I only see a few percent difference between HEAD and 8.4.3, on two
different machines. (It does appear that HEAD is a bit slower for this,
which might or might not be something to worry about.)

			regards, tom lane
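[Editorial note: Tom's C loop translates to a small driver-agnostic harness. This is an illustrative sketch, not from the thread; it is written so the timing logic is separable from any particular driver -- with psycopg2, for instance, you could pass a cursor's `execute` method as `run_query` (hypothetical usage, the thread itself used libpq directly).]

```python
import time

def avg_ping_ms(run_query, iterations=10000,
                sql="SELECT 'DBD::Pg ping test'"):
    """Average milliseconds per ping over `iterations` round trips.

    `run_query` is any callable that executes one SQL statement;
    the connection setup and teardown are left to the caller.
    """
    start = time.perf_counter()
    for _ in range(iterations):
        run_query(sql)
    return (time.perf_counter() - start) * 1000.0 / iterations
```

Running this against an 8.4 server and a 9.0 server, with identically built binaries, is the comparison the thread is making: the returned averages should come out within a few percent of each other.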
On Apr 6, 2010, at 2:32 AM, Takahiro Itagaki wrote:

>> In our test, which does 5801 of these pings during the test, they take
>> an average of 15x longer to execute on 9.0 than on 8.4 (0.77ms vs. 0.05ms).
>>
>> Any clue why this would be?
>
> Did you use the same configure options between them?

Yes.

> For example, --enable-debug or --enable-cassert.

No.

David
On Apr 6, 2010, at 6:07 AM, Merlin Moncure wrote:

>> In our test, which does 5801 of these pings during the test, they take
>> an average of 15x longer to execute on 9.0 than on 8.4 (0.77ms vs. 0.05ms).
>
> did your pings change?

No.

David
On Apr 6, 2010, at 7:28 AM, Tom Lane wrote:

> There's something wrong with your test setup. Or, if you'd like me to
> think that there isn't, provide a self-contained test case. I ran a
> small program that does
>
> for (i = 0; i < 10000; i++)
> {
>     res = PQexec(conn, "SELECT 'DBD::Pg ping test'");
>     PQclear(res);
> }
>
> and I only see a few percent difference between HEAD and 8.4.3,
> on two different machines. (It does appear that HEAD is a bit slower
> for this, which might or might not be something to worry about.)

I'm going to see if I can replicate it on a second box today.

Best,

David
Tom Lane <tgl@sss.pgh.pa.us> writes:
> Josh Berkus <josh@agliodbs.com> writes:
>> Continuing the performance test:
>> DBD, like a number of monitoring systems, does "pings" on the database
>> which look like this:
>> SELECT 'DBD::Pg ping test';
>> In our test, which does 5801 of these pings during the test, they take
>> an average of 15x longer to execute on 9.0 than on 8.4 (0.77ms vs. 0.05ms).

> There's something wrong with your test setup. Or, if you'd like me to
> think that there isn't, provide a self-contained test case.

I did some comparisons and oprofile work against this test case:

    for (i = 0; i < [lots]; i++)
    {
        res = PQexec(conn, "SELECT 'DBD::Pg ping test'");
        PQclear(res);
    }

In assert-enabled builds, HEAD seems about 10% slower than 8.4 branch tip,
but as far as I can tell this is all debug overhead associated with a
slightly larger number of catcache entries that are present immediately
after startup. In non-assert-enabled builds there's a difference of a
percent or so, which appears to be due to increased lexer overhead;
oprofile shows these top routines in HEAD:

samples  %       image name  symbol name
49787    7.0533  postgres    base_yyparse
35510    5.0307  postgres    AllocSetAlloc
29135    4.1275  postgres    hash_search_with_hash_value
24541    3.4767  postgres    core_yylex
15231    2.1578  postgres    PostgresMain
14710    2.0840  postgres    hash_seq_search
14340    2.0315  postgres    LockReleaseAll
13878    1.9661  postgres    MemoryContextAllocZeroAligned
10047    1.4234  postgres    ScanKeywordLookup
9866     1.3977  postgres    LWLockAcquire
9434     1.3365  postgres    LockAcquireExtended
8347     1.1825  postgres    hash_any
7954     1.1268  postgres    ExecInitExpr
7326     1.0379  postgres    MemoryContextAlloc
7243     1.0261  postgres    AllocSetFree
6787     0.9615  postgres    MemoryContextAllocZero
6501     0.9210  postgres    internal_flush
5956     0.8438  postgres    LWLockRelease

versus these in 8.4:

samples  %       image name  symbol name
51795    7.2589  postgres    AllocSetAlloc
37742    5.2894  postgres    base_yyparse
32558    4.5629  postgres    hash_search_with_hash_value
17250    2.4175  postgres    hash_seq_search
14933    2.0928  postgres    AllocSetFree
14902    2.0885  postgres    MemoryContextAllocZeroAligned
13219    1.8526  postgres    LockReleaseAll
12974    1.8183  postgres    SearchCatCache
10885    1.5255  postgres    PostgresMain
10592    1.4844  postgres    ResourceOwnerReleaseInternal
10462    1.4662  postgres    base_yylex
10007    1.4025  postgres    hash_any
9553     1.3388  postgres    MemoryContextAllocZero
8758     1.2274  postgres    LWLockAcquire
8237     1.1544  postgres    exec_simple_query
7410     1.0385  postgres    LockAcquire
7315     1.0252  postgres    MemoryContextCreate
7262     1.0177  postgres    MemoryContextAlloc
7220     1.0119  postgres    LWLockRelease

The only thing that seems to have changed by more than the noise level is
that core_yylex (formerly base_yylex) got slower. I suppose this is due to
changing over to a re-entrant scanner. The flex manual claims that %option
reentrant doesn't cost any performance --- so I suspect that what we are
seeing here is additional per-call overhead and not a slowdown that would
be important for lexing long queries. I don't think there's anything to
worry about there for nontrivial queries.

I also tried reconnecting to the server for each query. In that situation
HEAD seems to be about 10% slower than 8.4 even without asserts, which
might be an artifact of the changes to eliminate the "flat" authentication
files. I'm not particularly concerned about that either, since if you're
looking for performance, reconnecting to issue a trivial query is not what
you should be doing.

So I'm not sure where your 15x is coming from, but I don't see it.

			regards, tom lane
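[Editorial note: a quick way to spot what moved between two flat oprofile listings like the ones above is to diff the percentage columns per symbol. This helper is an illustrative sketch, not from the thread; the sample rows are a subset of Tom's output.]

```python
def parse_profile(text):
    """Map symbol name -> percent of samples from oprofile-style flat
    output lines of the form: 'samples  %  image name  symbol name'."""
    out = {}
    for line in text.strip().splitlines():
        parts = line.split()
        if len(parts) == 4 and parts[2] == "postgres":
            out[parts[3]] = float(parts[1])
    return out

def biggest_shifts(head, base, top=3):
    """Symbols whose share of samples changed most between two profiles."""
    syms = set(head) | set(base)
    deltas = {s: head.get(s, 0.0) - base.get(s, 0.0) for s in syms}
    return sorted(deltas.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top]

head = parse_profile("""
49787  7.0533  postgres  base_yyparse
24541  3.4767  postgres  core_yylex
35510  5.0307  postgres  AllocSetAlloc
""")
base = parse_profile("""
37742  5.2894  postgres  base_yyparse
10462  1.4662  postgres  base_yylex
51795  7.2589  postgres  AllocSetAlloc
""")
print(biggest_shifts(head, base))
```

One caveat this makes visible: a naive per-symbol diff treats renamed symbols (base_yylex vs. core_yylex here) as one routine disappearing and another appearing, which is exactly the pairing Tom resolved by hand above.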
On Tue, Apr 6, 2010 at 12:08 PM, David E. Wheeler <david@kineticode.com> wrote:
> On Apr 6, 2010, at 2:32 AM, Takahiro Itagaki wrote:
>
>>> In our test, which does 5801 of these pings during the test, they take
>>> an average of 15x longer to execute on 9.0 than on 8.4 (0.77ms vs. 0.05ms).
>>>
>>> Any clue why this would be?
>>
>> Did you use the same configure options between them?
>
> Yes.
>
>> For example, --enable-debug or --enable-cassert.

Hmm. SSL? (I don't see any interesting difference in time either, btw.)
Can you log in with psql and confirm the difference there with the \timing
switch?

merlin
On Apr 6, 2010, at 9:08 AM, David E. Wheeler wrote:

>> For example, --enable-debug or --enable-cassert.
>
> No.

Oh FFS! I was looking at the wrong build script. It was indeed built with
--enable-cassert --enable-debug. Grrr.

Well, that's likely the culprit right there. I'm rebuilding without those
now, and hopefully my tests will be back down to 45s.

Many apologies for the noise and wasted time.

Best,

David
On Apr 6, 2010, at 10:17 AM, Tom Lane wrote:

> So I'm not sure where your 15x is coming from, but I don't see it.

By stupidly having configured with --enable-cassert --enable-debug without
realizing it. I've just rebuilt without them and run the tests again using
the default postgresql.conf, and I'm back down to 57s and 46s over two
runs. Sorry for the wasted time. I knew there had to be a simple answer.

Best,

David
On Apr 6, 2010, at 10:52 AM, David E. Wheeler wrote:

> Oh FFS! I was looking at the wrong build script. It was indeed built
> with --enable-cassert --enable-debug. Grrr.
>
> Well, that's likely the culprit right there. I'm rebuilding without
> those now, and hopefully my tests will be back down to 45s.
>
> Many apologies for the noise and wasted time.

And just to close out this thread: I rebuilt without `--enable-cassert
--enable-debug` and now the tests pass in 57s and 46s over two runs, just
like on 8.4 (though with just the default postgresql.conf, unlike my 8.4
install). Phew! Knew it had to be something stup^H^H^Himple.

Best,

David
"David E. Wheeler" <david@kineticode.com> writes:
> On Apr 6, 2010, at 10:17 AM, Tom Lane wrote:
>> So I'm not sure where your 15x is coming from, but I don't see it.

> By stupidly having configured with --enable-cassert --enable-debug
> without realizing it. I've just rebuilt without them and run the tests
> again using the default postgresql.conf and I'm back down to 57s and
> 46s over two runs.

Huh. I'm still curious, because in my test the overhead of those options
seemed to be about 3x. So there's still something considerably different
between what you did and what I did. Are you testing a separate connection
per ping query? I think I neglected to compare that case with and without
assert overhead.

			regards, tom lane
David E. Wheeler wrote:
> By stupidly having configured with --enable-cassert --enable-debug
> without realizing it. I've just rebuilt without them and run the tests
> again using the default postgresql.conf and I'm back down to 57s and
> 46s over two runs.

Every performance test I run, regardless of where the binaries come from
or how I thought they were built, starts like this:

postgres=# show debug_assertions;
 debug_assertions
------------------
 off
(1 row)

It's a really good habit to get into, or even enforce in your testing
script if practical.

--
Greg Smith 2ndQuadrant US Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com www.2ndQuadrant.us
On Apr 6, 2010, at 11:15 AM, Tom Lane wrote:

>> By stupidly having configured with --enable-cassert --enable-debug
>> without realizing it. I've just rebuilt without them and run the tests
>> again using the default postgresql.conf and I'm back down to 57s and
>> 46s over two runs.
>
> Huh. I'm still curious, because in my test the overhead of those
> options seemed to be about 3x. So there's still something considerably
> different between what you did and what I did.

Are you doing this on a Mac?

> Are you testing a separate connection per ping query? I think I
> neglected to compare that case with and without assert overhead.

No, it should be one connection for the entire test suite.

Best,

David
"David E. Wheeler" <david@kineticode.com> writes:
> On Apr 6, 2010, at 11:15 AM, Tom Lane wrote:
>> Huh. I'm still curious, because in my test the overhead of those
>> options seemed to be about 3x. So there's still something considerably
>> different between what you did and what I did.

> Are you doing this on a Mac?

I hadn't, but since you mention it: 100000 iterations take about 7.5sec
with non-assert CVS HEAD and 15sec with asserts, on a 2008 MacBook Pro.
Color me still confused.

			regards, tom lane
On Apr 6, 2010, at 12:50 PM, Tom Lane wrote:

> I hadn't, but since you mention it: 100000 iterations take about 7.5sec
> with non-assert CVS HEAD and 15sec with asserts, on a 2008 MacBook Pro.
> Color me still confused.

Well, it's not just pings that the Bricolage tests were running, of
course.

Josh, might you have got the numbers wrong when trying to match up query
runtimes to their queries in the CSVLOG output? That might explain it.
Most of the queries were BINDs.

Best,

David
2010/4/6 Tom Lane <tgl@sss.pgh.pa.us>:
> "David E. Wheeler" <david@kineticode.com> writes:
>> On Apr 6, 2010, at 11:15 AM, Tom Lane wrote:
>>> Huh. I'm still curious, because in my test the overhead of those
>>> options seemed to be about 3x. So there's still something considerably
>>> different between what you did and what I did.
>
>> Are you doing this on a Mac?
>
> I hadn't, but since you mention it: 100000 iterations take about 7.5sec
> with non-assert CVS HEAD and 15sec with asserts, on a 2008 MacBook Pro.
> Color me still confused.

This is a little bit off-topic, but can we add info about assertions to
the version() output? Like:

postgres=# select version();
                                version
-------------------------------------------------------------------------
 PostgreSQL 9.0alpha4 on i686-pc-linux-gnu, compiled by GCC gcc (GCC)
 4.4.3 20100127 (Red Hat 4.4.3-4), 32-bit >>> with enabled assertions <<<
(1 row)

Regards

Pavel Stehule

>
>			regards, tom lane
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
Pavel Stehule <pavel.stehule@gmail.com> writes:
> This is a little bit off-topic, but can we add info about assertions to
> the version() output?

Greg has the right idea: show debug_assertions.

			regards, tom lane
> Josh, might you have got the numbers wrong when trying to match up
> query runtimes to their queries in the CSVLOG output? That might
> explain it. Most of the queries were BINDs.

I swept up some DEALLOCATEs by accident, but those don't appreciably
affect the final numbers.

--
Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com
On Tue, Apr 6, 2010 at 3:58 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Pavel Stehule <pavel.stehule@gmail.com> writes:
>> This is a little bit off-topic, but can we add info about assertions
>> to the version() output?
>
> Greg has the right idea: show debug_assertions.

Why not the entire set of configure options?

merlin
Merlin Moncure <mmoncure@gmail.com> writes:
> On Tue, Apr 6, 2010 at 3:58 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Greg has the right idea: show debug_assertions.

> why not the entire set of configure options?

We've discussed that before. pg_config already provides that info, and
there was some concern about security risks of exposing it inside the
database. (In particular, we currently go to some lengths to not expose
any file path information to non-superusers.) If there's a reason to
expose *individual* configuration options that aren't already easily
checkable, we could discuss that.

I would be against sticking it into version() output in any case. That
function's already overloaded beyond any sane interpretation of its
purpose, to the point where it's difficult to make use of the output
programmatically.

			regards, tom lane
Merlin Moncure wrote:
> On Tue, Apr 6, 2010 at 3:58 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Greg has the right idea: show debug_assertions.
>
> why not the entire set of configure options?

Given that the standard way to find those is pg_config, there's a couple
of reasons not to prefer that, on top of those Tom already mentioned:

1) pg_config isn't in the standard PostgreSQL package set in some
distributions (it's sometimes in the -devel package), so it may not be
available; debug_assertions is always there if you have psql. For my
goals, which include benchmarking scripts I often distribute to other
people, that matters.

2) It's easy to get pg_config output from your client that doesn't
actually match the running server, particularly when developing. That's
been the source of more than one of the times I was running a debug build
on the server but didn't notice it, and therefore would have produced
worthless performance numbers. Given that the main slowdowns from having
assertions turned on are server side, whether or not the local client
running things like psql has them turned on doesn't worry me as much.

3) It's a little easier to check the value of "show" in a script to
confirm you're not running a bad build than to parse the output from
pg_config. Here's the recipe I use for shell scripts:

#!/bin/sh
DEBUG=`psql -At -c "show debug_assertions"`
if [ "$DEBUG" = "on" ] ; then
    echo "Debug build - aborting performance test"
    exit 1
fi

Pushing this data into something like version() would solve the first two
items above, while making the issue of how to parse the results in a test
client even harder, given there's already too much junk in one big string
there. You couldn't make the above check much simpler, which makes it
hard to justify any alternative approach to grab this data.

--
Greg Smith 2ndQuadrant US Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com www.2ndQuadrant.us
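[Editorial note: Greg's shell guard carries over directly to test harnesses in other languages. This Python equivalent is an illustrative sketch, not from the thread; it assumes `psql` is on the PATH and can reach the server without a password prompt.]

```python
import subprocess

def is_debug_build(setting: str) -> bool:
    """True when 'show debug_assertions' reported an assert-enabled build."""
    return setting.strip().lower() == "on"

def assert_not_debug_build():
    """Abort a benchmark run if the server was built with --enable-cassert.

    Runs the same check as Greg's shell recipe: `psql -At -c "show
    debug_assertions"` prints just 'on' or 'off'."""
    result = subprocess.run(
        ["psql", "-At", "-c", "show debug_assertions"],
        capture_output=True, text=True, check=True)
    if is_debug_build(result.stdout):
        raise SystemExit("Debug build - aborting performance test")
```

Calling assert_not_debug_build() at the top of a benchmark driver gives the same protection as the shell version: the run dies immediately instead of producing worthless numbers.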
2010/4/7 Greg Smith <greg@2ndquadrant.com>:
> Merlin Moncure wrote:
>> On Tue, Apr 6, 2010 at 3:58 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> Greg has the right idea: show debug_assertions.
>>
>> why not the entire set of configure options?
>
> Given that the standard way to find those is pg_config, there's a couple
> of reasons not to prefer that, on top of those Tom already mentioned:
>
> 1) pg_config isn't in the standard PostgreSQL package set in some
> distributions (it's sometimes in the -devel package), so it may not be
> available; debug_assertions is always there if you have psql. For my
> goals, which include benchmarking scripts I often distribute to other
> people, that matters.
>
> 2) It's easy to get pg_config output from your client that doesn't
> actually match the running server, particularly when developing. That's
> been the source of more than one of the times I was running a debug
> build on the server but didn't notice it, and therefore would have
> produced worthless performance numbers. Given that the main slowdowns
> from having assertions turned on are server side, whether or not the
> local client running things like psql has them turned on doesn't worry
> me as much.
>
> 3) It's a little easier to check the value of "show" in a script to
> confirm you're not running a bad build than to parse the output from
> pg_config. Here's the recipe I use for shell scripts:
>
> #!/bin/sh
> DEBUG=`psql -At -c "show debug_assertions"`
> if [ "$DEBUG" = "on" ] ; then
>     echo "Debug build - aborting performance test"
>     exit 1
> fi
>
> Pushing this data into something like version() would solve the first
> two items above, while making the issue of how to parse the results in
> a test client even harder, given there's already too much junk in one
> big string there. You couldn't make the above check much simpler, which
> makes it hard to justify any alternative approach to grab this data.

Good idea. Can we do it for 9.0? It has zero impact on behavior, and it
could help protect us against this same mistake. I'm planning a night
test fest at the beginning of May in Prague, and I wouldn't want to make
the same mistake :). It is a really important feature -- maybe it could
even be signaled in the prompt in the future.

Regards

Pavel Stehule

> --
> Greg Smith 2ndQuadrant US Baltimore, MD
> PostgreSQL Training, Services and Support
> greg@2ndQuadrant.com www.2ndQuadrant.us