Re: Proposal: :SQL_EXEC_TIME (like :ROW_COUNT) Variable (psql) - Mailing list pgsql-hackers
From: Pavel Stehule
Subject: Re: Proposal: :SQL_EXEC_TIME (like :ROW_COUNT) Variable (psql)
Date:
Msg-id: CAFj8pRDsBzVZQSP+snX74UduCXa8RheVoEzPKyCLR_5UB+aCBQ@mail.gmail.com
In response to: Re: Proposal: :SQL_EXEC_TIME (like :ROW_COUNT) Variable (psql) (Kirk Wolak <wolakk@gmail.com>)
Responses: Re: Proposal: :SQL_EXEC_TIME (like :ROW_COUNT) Variable (psql)
List: pgsql-hackers
On Mon, Feb 27, 2023 at 5:08 AM, Kirk Wolak <wolakk@gmail.com> wrote:
On Fri, Feb 24, 2023 at 10:56 PM Kirk Wolak <wolakk@gmail.com> wrote:

On Fri, Feb 24, 2023 at 2:11 AM Gurjeet Singh <gurjeet@singh.im> wrote:

On Thu, Feb 23, 2023 at 8:42 PM Kirk Wolak <wolakk@gmail.com> wrote:
...
> I think like ROW_COUNT, it should not change because of internal commands.
...
By using \timing, the user is explicitly opting into any overhead caused by time-keeping. With this feature, the timing info will be collected all the time. So do consider evaluating the performance impact this can cause on people's workloads. They may not care about the impact in interactive mode, but in automated scripts even a moderate performance overhead would be a deal-breaker.

Excellent point. I run lots of long scripts, but I usually set \timing on, just because I turn off everything else.

I tested 2,000+ lines of "select 1;" (fast SQL should matter most; it is the most impacted). Honestly, the difference was imperceptible, approximating 0.01 seconds:

With timing on:  ~0.28 seconds
With timing off: ~0.27 seconds

The \timing incurs no realistic penalty at this point. The ONLY penalty we could face is the time to write it to the variable, and that cannot be tested until it is implemented. But I will do that, and I will report the results of the impact. I do not expect a big impact; we update SQL_COUNT without an issue, and that might be much more expensive to get.

Okay, I've written and tested this using SQL_EXEC_ELAPSED (suggested name improvement).

First, the instant you have ANY output, it swamps the impact. (To get no output, I settled on: SELECT 1 as v \gset xxx)

Second, the variability of running even a constant script is mind-blowing.

Third, I've limited the output... I built this in layers: init.sql initializes the psql variables I use, and run_100.sql runs another file (\i tst_2000.sql) 100 times, resulting in 200K selects.
This is the very worst case. But nobody will run 200K selects from psql - can you try a little bit more real, but still synthetic, test case?
create table foo(a int);
begin;
insert into foo values(1);
...
insert into foo values(200000);
commit;
Regards
Pavel
Executive Summary: 1,000,000 statements executed consume ~2 - 2.5 seconds of extra time (total).

So the per-statement cost is: 2.5 s / 1,000,000 = 0.0000025 s per statement, roughly 2.5 us.

Unfortunately, my test lines look like this:

Without timing:
done 0.198215 (500) total 98.862548 min 0.167614 avg 0.19772509600000000000 max 0.290659

With timing:
done 0.191583 (500) total 100.729868 min 0.163280 avg 0.20145973600000000000 max 0.275787

Notice that the run with timing had a lower min and a lower max, but a higher average. The distance between min and avg, and between min and max, is big (those are for 1,000 selects each).

Are these numbers at the "So What" level? While testing, I got the distinct impression that I am measuring something that changes, or that the variance in the system itself really swamps this on a per-statement basis. Its only impact is felt on millions of psql queries, and that's a couple of seconds...

Curious what others think before I take this any further.

regards, Kirk
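The arithmetic above can be checked directly from the numbers in the post. The "500 iterations x 1,000 selects" interpretation of the measured totals is my reading of the report, not something stated as a formula:

```python
# Check the overhead arithmetic from the post.
total_without = 98.862548    # seconds, "Without Timing" total (from the post)
total_with    = 100.729868   # seconds, "With Timing" total (from the post)

# Headline estimate: ~2.5 s of extra time per 1,000,000 statements.
per_stmt_headline = 2.5 / 1_000_000          # 2.5e-06 s = 2.5 us

# From the measured totals, assuming each of the 500 iterations covers
# 1,000 selects (my reading of the post), i.e. 500,000 statements:
statements = 500 * 1_000
per_stmt_measured = (total_with - total_without) / statements

print(f"headline: {per_stmt_headline * 1e6:.2f} us/statement")
print(f"measured: {per_stmt_measured * 1e6:.2f} us/statement")
```

The measured delta works out to roughly 3.7 us/statement, the same order of magnitude as the headline 2.5 us figure, which is consistent with Kirk's point that system variance swamps the per-statement cost.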