Thread: psql metaqueries with \gexec
Often, I'm faced with a long .sql script that builds some objects, then builds things on top of them.
This means that some of the queries I wish to run are dependent on the state of things that are unknown at the time of writing the script.
I could give up, and make a python script that mostly just strings together SQL statements. That's ugly and cumbersome.
I could do some wizardry like this:
$ create table foo( a integer, b text, c date);
$ select coalesce( ( select string_agg(format('create index on foo(%I);',attname),E'\n')
                       from pg_attribute
                      where attrelid = 'foo'::regclass
                        and attnum > 0
                      order by attnum),'') as sql_statements
\gset
:sql_statements
For those of you not willing to parse that, that's a data-dictionary query whose 1-column result set is formatted into SQL with a ';' appended, string-aggregated with a newline delimiter, and the final result coalesced with an empty string because \gset will error on an empty result set. I then immediately put that psql variable back into the command buffer, and hope that I meta-wrote valid SQL. If it hurt to read, you can imagine what it was like to write.
I could use \g and pipe the results to another psql session...but that will happen in another transaction where my objects might not exist yet.
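Roughly, that approach looks like this (a sketch only; the database name is a placeholder, and tuples-only unaligned output is needed so the piped text is plain SQL):

\t on
\pset format unaligned
select format('create index on foo(%I)', attname)
  from pg_attribute
 where attrelid = 'foo'::regclass
   and attnum > 0
\g |psql mydb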
I would also like the log to show what commands were run.
For those reasons, I created the psql command \gexec.
It is like \g and \gset in the sense that it executes the query currently in the buffer. However, it treats every cell in the result set as a query which itself should be immediately executed.
$ create temporary table gexec_temp( a int, b text, c date, d float);
CREATE TABLE
$ select format('create index on gexec_temp(%I)',attname)
    from pg_attribute
   where attrelid = 'gexec_temp'::regclass
     and attnum > 0
   order by attnum
\gexec
create index on gexec_temp(a)
CREATE INDEX
create index on gexec_temp(b)
CREATE INDEX
create index on gexec_temp(c)
CREATE INDEX
create index on gexec_temp(d)
CREATE INDEX
Execution order of the statements is top to bottom, left to right.
$ select 'select 1 as ones', 'select x.y, x.y*2 as double from generate_series(1,4) as x(y)'
union all
select 'select true as is_true', 'select ''2000-01-01''::date as party_over'
\gexec

ones
----
   1
(1 row)

 y | double
---+--------
 1 |      2
 2 |      4
 3 |      6
 4 |      8
(4 rows)

is_true
-------
t
(1 row)

party_over
----------
01-01-2000
(1 row)
Empty result sets do nothing:
$ select 'select 1 as expect_zero_rows ' where false
\gexec
The results are just strings which are sent to SendQuery(), where they succeed or fail on their own merits.
$ select 'do $$ begin raise notice ''plpgsql block executed''; end;$$' as block
    from generate_series(1,2)
\gexec
do $$ begin raise notice 'plpgsql block executed'; end;$$
NOTICE: plpgsql block executed
DO
do $$ begin raise notice 'plpgsql block executed'; end;$$
NOTICE: plpgsql block executed
DO
I am not sure that "gexec" is the right name for this command. Others considered were \execute_each, \meta, \gmeta, \geach, as well as adding a "<" parameter to the \g command.
Many thanks to Pavel Stěhule for giving me some direction in this endeavor, though he might not agree with the design.
Attachment
On 2/19/16 7:32 PM, Corey Huinker wrote:

Wow, and I thought *I* liked metaprogramming! :)

I like what you've proposed, though I am wondering if you considered doing something server-side instead? It seems a shame to do all this work and exclude all other tools.

I frequently find myself creating a function that is just a wrapper on EXECUTE for this purpose, but obviously that has transactional limitations.

FWIW, I also wish we had something better than format() for this stuff. I did create [1] towards that end, but it currently depends on some C code, which is cumbersome.

> I am not sure that "gexec" is the right name for this command. Others
> considered were \execute_each, \meta, \gmeta, \geach, as well as adding
> a "<" parameter to the \g command.

\gexec sounds fine to me. I would think \g < would be something done at the shell level...

[1] https://github.com/decibel/trunklet-format/blob/master/doc/trunklet-format.asc

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
I like what you've proposed, though I am wondering if you considered doing something server-side instead? It seems a shame to do all this work and exclude all other tools.
I have, but my solutions closely mirror the one you mention in the next paragraph.
I frequently find myself creating a function that is just a wrapper on EXECUTE for this purpose, but obviously that has transactional limitations.
...and query text visibility, and result visibility, and error handling, etc. In this case, we're leveraging the psql environment we'd already set up, and if there's an error, \set ECHO queries shows us the errant SQL as if we typed it ourselves.
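A minimal sketch of the kind of wrapper being described (the function name exec is arbitrary):

-- a bare-bones EXECUTE wrapper; it runs inside the calling transaction,
-- and psql never sees the generated statement text or its results
create or replace function exec(sql text) returns void
language plpgsql as $$
begin
    execute sql;
end;
$$;

-- usage: feed it the generated DDL
select exec(format('create index on foo(%I)', attname))
  from pg_attribute
 where attrelid = 'foo'::regclass
   and attnum > 0;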
FWIW, I also wish we had something better than format() for this stuff. I did create [1] towards that end, but it currently depends on some C code, which is cumbersome.
For the most part, I'm pretty thrilled with format(), though:
- I'll admit to being grumpy about the %1$s notation, but I have no better suggestion.
- I'd also like it if there were a %I variant that accepted schema-qualified names and %I-ed both parts, though I see the problem of not being able to tell a schema-separator dot from a dot that is actually part of the name.
- I'd love it if there were a %C format that took a pg_class oid and formatted the schema-qualified name. As it is, I just use %s and cast the parameter to ::regclass (sketch below).
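For illustration, that workaround looks roughly like this (reusing the foo table from the start of the thread):

-- %s plus a ::regclass cast renders the (possibly schema-qualified) table name,
-- while %I still quotes the plain column identifier
select format('create index on %s(%I)', 'foo'::regclass, attname)
  from pg_attribute
 where attrelid = 'foo'::regclass
   and attnum > 0;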
[1] https://github.com/decibel/trunklet-format/blob/master/doc/trunklet-format.asc
That's intense. I'll ask you about that in an off-list thread.
FWIW, I also wish we had something better than format() for this stuff. I did create [1] towards that end, but it currently depends on some C code, which is cumbersome.
For the most part, I'm pretty thrilled with format(), though:
- I'll admit to being grumpy about the %1$s notation, but I have no better suggestion.
- I'd also like it if there were a %I variant that accepted schema-qualified names and %I-ed both parts, though I see the problem of not being able to tell a schema-separator dot from a dot that is actually part of the name.
- I'd love it if there were a %C format that took a pg_class oid and formatted the schema-qualified name. As it is, I just use %s and cast the parameter to ::regclass.
The design of the "format" function is not closed. Try to send a prototype and a patch. The possibility of PostgreSQL-specific customization was a strong reason why we didn't implement "sprintf" and implemented "format" instead.
Regards
Pavel
Corey Huinker wrote:

> ...and query text visibility, and result visibility, and error handling,
> etc. In this case, we're leveraging the psql environment we'd already set
> up, and if there's an error, \set ECHO queries shows us the errant SQL as
> if we typed it ourselves.

BTW, about error handling, shouldn't it honor ON_ERROR_STOP ?

With the patch when trying this:

=> \set ON_ERROR_STOP on
=> select * from (values ('select 1/0', 'select 1/0')) AS n \gexec

it produces two errors:

ERROR: division by zero
ERROR: division by zero

I'd rather have the execution stop immediately after the first error,
like it's the case with successive queries entered normally via the
query buffer:

=> \set ON_ERROR_STOP on
=> select 1/0; select 1/0;
ERROR: division by zero

as opposed to:

=> \set ON_ERROR_STOP off
=> select 1/0; select 1/0;
ERROR: division by zero
ERROR: division by zero

Best regards,
--
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite
On Mon, Feb 22, 2016 at 10:08 AM, Daniel Verite <daniel@manitou-mail.org> wrote:
Corey Huinker wrote:
> ...and query text visibility, and result visibility, and error handling,
> etc. In this case, we're leveraging the psql environment we'd already set
> up, and if there's an error, \set ECHO queries shows us the errant SQL as
> if we typed it ourselves..
BTW, about error handling, shouldn't it honor ON_ERROR_STOP ?
With the patch when trying this:
=> \set ON_ERROR_STOP on
=> select * from (values ('select 1/0', 'select 1/0')) AS n \gexec
it produces two errors:
ERROR: division by zero
ERROR: division by zero
I'd rather have the execution stop immediately after the first error,
like it's the case with successive queries entered normally via the
query buffer:
=> \set ON_ERROR_STOP on
=> select 1/0; select 1/0;
ERROR: division by zero
as opposed to:
=> \set ON_ERROR_STOP off
=> select 1/0; select 1/0;
ERROR: division by zero
ERROR: division by zero
Yes, I would like it to honor ON_ERROR_STOP. I'll look into that.
Pavel Stehule wrote:

> The design of the "format" function is not closed. Try to send prototype
> and patch. The possibility to do PostgreSQL customization was strong reason
> why we didn't implemented "sprintf" and we implemented "format".

Probably not terribly useful here, but for the DDL-deparse patch I came
up with a syntax to format JSON objects, which used %-escapes; each
escaped element corresponds to a string literal, or to an object. So
you'd have %{table}D where the "table" element in the JSON object could
be a simple string which is expanded verbatim (plus quoting if
necessary), or it could be a JSON object with something like { schema =>
"public", name => "students" }, where each element is expanded and
quoted as necessary; if the "schema" is null or it doesn't exist, it
expands only the name, obviously omitting the dot.

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Mon, Feb 22, 2016 at 11:30 AM, Corey Huinker <corey.huinker@gmail.com> wrote:
On Mon, Feb 22, 2016 at 10:08 AM, Daniel Verite <daniel@manitou-mail.org> wrote:
> BTW, about error handling, shouldn't it honor ON_ERROR_STOP ?
Yes, I would like it to honor ON_ERROR_STOP. I'll look into that.
Well, that was easy enough. Turns out that pset.on_error_stop is checked in MainLoop, whereas the other pset.on_* vars are checked in SendQuery().
My original idea had been to push each cell into an in-memory temp file handle and call MainLoop() on each. Pavel suggested that temp files of any sort were a bad idea, hence using SendQuery instead. It's probably for the best.
# select 'select 1,2,3', 'select 1/0', 'select 4,5,6'...
# \gexec
 ?column? | ?column? | ?column?
----------+----------+----------
        1 |        2 |        3
(1 row)

Time: 0.151 ms
ERROR: 22012: division by zero
LOCATION: int4div, int.c:719
Time: 0.528 ms
 ?column? | ?column? | ?column?
----------+----------+----------
        4 |        5 |        6
(1 row)

Time: 0.139 ms
Time: 0.595 ms

# \set ON_ERROR_STOP 1
# select 'select 1,2,3', 'select 1/0', 'select 4,5,6' \gexec
 ?column? | ?column? | ?column?
----------+----------+----------
        1 |        2 |        3
(1 row)

Time: 0.137 ms
ERROR: 22012: division by zero
LOCATION: int4div, int.c:719
Time: 0.165 ms
Time: 0.284 ms
Does \set ON_ERROR_STOP mess up regression tests? If not, I'll add the test above (minus the \set VERBOSITY verbose-isms) to the regression.
In the meantime, updated patch attached.
In the meantime, updated patch attached.
Really attached this time.
Attachment
On 2/22/16 11:47 AM, Alvaro Herrera wrote:
> Pavel Stehule wrote:
>
>> The design of the "format" function is not closed. Try to send prototype
>> and patch. The possibility to do PostgreSQL customization was strong reason
>> why we didn't implemented "sprintf" and we implemented "format".
>
> Probably not terribly useful here, but for the DDL-deparse patch I came
> up with a syntax to format JSON objects, which used %-escapes; each
> escaped element corresponds to a string literal, or to an object. So
> you'd have %{table}D where the "table" element in the JSON object could
> be a simple string which is expanded verbatim (plus quoting if
> necessary), or it could be a JSON object with something like { schema =>
> "public", name => "students" }, where each element is expanded and
> quoted as necessary; if the "schema" is null or it doesn't exist, it
> expands only the name, obviously omitting the dot.

Where did the "D" in "%{table}D" come from?

BTW, the syntax I chose for [1] is similar to format's, except I elected to stick with % instead of $. So you do %parameter_name%type where type is s, L or I. I don't think it'd be hard to support an object with 'schema' and 'name' keys.

[1] https://github.com/decibel/trunklet-format/blob/master/doc/trunklet-format.asc#2-template-specification

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
Jim Nasby wrote:
> On 2/22/16 11:47 AM, Alvaro Herrera wrote:
> > Pavel Stehule wrote:
> >
> >> The design of the "format" function is not closed. Try to send prototype
> >> and patch. The possibility to do PostgreSQL customization was strong reason
> >> why we didn't implemented "sprintf" and we implemented "format".
> >
> > Probably not terribly useful here, but for the DDL-deparse patch I came
> > up with a syntax to format JSON objects, which used %-escapes; each
> > escaped element corresponds to a string literal, or to an object. So
> > you'd have %{table}D where the "table" element in the JSON object could
> > be a simple string which is expanded verbatim (plus quoting if
> > necessary), or it could be a JSON object with something like { schema =>
> > "public", name => "students" }, where each element is expanded and
> > quoted as necessary; if the "schema" is null or it doesn't exist, it
> > expands only the name, obviously omitting the dot.
>
> Where did the "D" in "%{table}D" come from?

The I in %{foo}I was for "identifier" (of course) and I *think* the D
was for "double identifiers" (that is, qualified). I expanded the idea
afterwards to allow for a third name for things like
catalog.schema.name, so I guess it's a misnomer already.

It's not released code yet. You can see an example here
https://www.postgresql.org/message-id/%3C20150224175152.GI5169%40alvh.no-ip.org%3E
just scroll down a few hundred lines to about 7/16ths of the page (yes, really)

(One thing I had to come up with was processing of arrays, which you
also see in that example JSON -- it's the specifiers that have a colon
inside the {}. The part after the colon is used as separator between
the array elements, and each element is expanded separately.)

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 2/22/16 5:13 PM, Alvaro Herrera wrote:
> Jim Nasby wrote:
>> Where did the "D" in "%{table}D" come from?
>
> The I in %{foo}I was for "identifier" (of course) and I *think* the D
> was for "double identifiers" (that is, qualified). I expanded the idea
> afterwards to allow for a third name for things like
> catalog.schema.name, so I guess it's a misnomer already.
>
> It's not released code yet. You can see an example here
> https://www.postgresql.org/message-id/%3C20150224175152.GI5169%40alvh.no-ip.org%3E
> just scroll down a few hundred lines to about 7/16ths of the page (yes, really)
>
> (One thing I had to come up with was processing of arrays, which you
> also see in that example JSON -- it's the specifiers that have a colon
> inside the {}. The part after the colon is used as separator between
> the array elements, and each element is expanded separately.)

Ahh, very interesting.

Something that would probably be helpful for these kinds of things is if we had a set of complex types available that represented things like the arguments to a function. Something like (parameter_mode enum(IN, OUT, INOUT), parameter_name name, parameter_type regtype, parameter_default text). A function might be represented by (function_schema name, function_name name, function_parameters ..., function_language, function_options, function_body).

In any case, having anything along these lines in core would be useful, assuming that the individual facility was exposed as well (as opposed to only being available inside an event trigger).

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
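A rough sketch of what such composite types might look like (every name here is illustrative only; nothing in core works this way today):

-- illustrative only: composite types describing a function's parameters
-- and the function itself, the kind of thing a deparse/format facility
-- could consume
create type param_mode as enum ('IN', 'OUT', 'INOUT');

create type function_parameter as (
    parameter_mode    param_mode,
    parameter_name    name,
    parameter_type    regtype,
    parameter_default text
);

create type function_definition as (
    function_schema     name,
    function_name       name,
    function_parameters function_parameter[],
    function_language   name,
    function_options    text[],
    function_body       text
);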
On 2/22/16 1:01 PM, Corey Huinker wrote:
> In the mean time, update patch attached.
>
> Really attached this time.

I'm getting a warning from this patch:

common.c:947:8: warning: variable 'success' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
        if (pset.gexec_flag)
            ^~~~~~~~~~~~~~~
common.c:995:9: note: uninitialized use occurs here
        return success;
               ^~~~~~~
common.c:947:4: note: remove the 'if' if its condition is always false
        if (pset.gexec_flag)
        ^~~~~~~~~~~~~~~~~~~~
common.c:937:15: note: initialize the variable 'success' to silence this warning
        bool success;
                    ^
                     = '\0'
1 warning generated.

(note that I'm using CC='ccache clang -Qunused-arguments -fcolor-diagnostics')

> 	for (r = 0; r < nrows; r++)
> 	{
> 		for (c = 0; c < ncolumns; c++)
> 		{

etc... Normally we don't use gratuitous {'s, and I don't think it's helping anything in this case. But I'll let whoever commits this decide.

> diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
> index 5f27120..0f87f29 100644
> --- a/src/bin/psql/tab-complete.c
> +++ b/src/bin/psql/tab-complete.c
> @@ -1280,8 +1280,8 @@ psql_completion(const char *text, int start, int end)
>  		"\\dm", "\\dn", "\\do", "\\dO", "\\dp", "\\drds", "\\ds", "\\dS",
>  		"\\dt", "\\dT", "\\dv", "\\du", "\\dx", "\\dy",
>  		"\\e", "\\echo", "\\ef", "\\encoding", "\\ev",
> -		"\\f", "\\g", "\\gset", "\\h", "\\help", "\\H", "\\i", "\\ir", "\\l",
> -		"\\lo_import", "\\lo_export", "\\lo_list", "\\lo_unlink",
> +		"\\f", "\\g", "\\gexec", "\\gset", "\\h", "\\help", "\\H", "\\i", "\\ir",
> +		"\\l", "\\lo_import", "\\lo_export", "\\lo_list", "\\lo_unlink",

FWIW, it's generally better to leave that kind of re-wrapping to the next pg_indent run.

I added tests for ON_ERROR_STOP. New patch attached.

The patch still needs to document this feature in the psql docs (and maybe the manpage? not sure how that's generated...)

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
Attachment
The following review has been posted through the commitfest application:

make installcheck-world:  tested, passed
Implements feature:       tested, passed
Spec compliant:           not tested
Documentation:            not tested

Still needs documentation.

The new status of this patch is: Waiting on Author
I'm getting a warning from this patch:
1 warning generated.
Fixed that one.
(note that I'm using CC='ccache clang -Qunused-arguments -fcolor-diagnostics')

for (r = 0; r < nrows; r++)
{
	for (c = 0; c < ncolumns; c++)
	{

etc...
Normally we don't use gratuitous {'s, and I don't think it's helping anything in this case. But I'll let whoever commits this decide.
Good to know in the future. I can remove or leave to the committer.
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 5f27120..0f87f29 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1280,8 +1280,8 @@ psql_completion(const char *text, int start, int end)
"\\dm", "\\dn", "\\do", "\\dO", "\\dp", "\\drds", "\\ds", "\\dS",
"\\dt", "\\dT", "\\dv", "\\du", "\\dx", "\\dy",
"\\e", "\\echo", "\\ef", "\\encoding", "\\ev",
- "\\f", "\\g", "\\gset", "\\h", "\\help", "\\H", "\\i", "\\ir", "\\l",
- "\\lo_import", "\\lo_export", "\\lo_list", "\\lo_unlink",
+ "\\f", "\\g", "\\gexec", "\\gset", "\\h", "\\help", "\\H", "\\i", "\\ir",
+ "\\l", "\\lo_import", "\\lo_export", "\\lo_list", "\\lo_unlink",
FWIW, it's generally better to leave that kind of re-wrapping to the next pg_indent run.
Good to know in the future. Not much point in undoing it now, I suppose.
I added tests for ON_ERROR_STOP. New patch attached.
I was wondering if ON_ERROR_STOP tests were verboten because you only get to kick the tires on one feature...
The patch still needs to document this feature in the psql docs (and maybe the manpage? not sure how that's generated...)
doc/src/sgml/ref/psql-ref.sgml is the source for both the html docs and the man pages.
I'm on it. I didn't expect the name "gexec" to survive first contact with the community.
Patch attached. Changes are thus:
- proper assignment of success var
- added documentation to psql manpage/html with examples pulled from regression tests.
Not changed are:
- exuberant braces, can remove if someone wants me to
- attempt at line-wrapping the enumerated slash commands, leave that to pg_indent
Attachment
On Mon, Mar 14, 2016 at 7:54 AM, Corey Huinker <corey.huinker@gmail.com> wrote:
> Patch attached. Changes are thus:
> - proper assignment of success var
> - added documentation to psql manpage/html with examples pulled from
>   regression tests.
>
> Not changed are:
> - exuberant braces, can remove if someone wants me to
> - attempt at line-wrapping the enumerated slash commands, leave that to
>   pg_indent

Jim, can you re-review this?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes:
> Jim, can you re-review this?

I'm not Jim, but I have a question: what's the motivation for the
Fortran-order traversal of the result (down rows before across columns)?
It seems less than intuitive to do it that way. Perhaps there's a good
reason, but I do not see any defense of this choice in the thread.

I also note that the patch seems to be missing resetting gexec_flag
in some error exit paths, possibly allowing the \gexec to be applied
to the next query unexpectedly. It should clear that in all the same
places where gfname or gset_prefix get cleared.

			regards, tom lane
On Sun, Apr 3, 2016 at 4:26 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Robert Haas <robertmhaas@gmail.com> writes:
> Jim, can you re-review this?
I'm not Jim, but I have a question: what's the motivation for the
Fortran-order traversal of the result (down rows before across columns)?
It seems less than intuitive to do it that way. Perhaps there's a good
reason, but I do not see any defense of this choice in the thread.
If I am understanding you correctly, it does work the way you find intuitive: all results from the first row are executed before any in the second row, so
SELECT a, b UNION ALL SELECT c, d
will execute the queries in order: a, b, c, d as is shown in the changes to the sgml and the test cases.
Did you get the impression of Fortran-ordering from the phrase "top-to-bottom, left-to-right order" in the sgml patch? If so, would calling it "rows first" or something else be more clear?
Or am I misunderstanding you and you find the order a, c, b, d more intuitive?
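A toy example of the ordering being described (contrived, purely for illustration):

select 'select 1 as a', 'select 2 as b'
union all
select 'select 3 as c', 'select 4 as d'
\gexec

runs the generated queries as a, b, c, d.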
I also note that the patch seems to be missing resetting gexec_flag
in some error exit paths, possibly allowing the \gexec to be applied
to the next query unexpectedly. It should clear that in all the same
places where gfname or gset_prefix get cleared.
Will do!
I'm only seeing one place where those two vars are deallocated and nulled, and that's at the tail end of SendQuery. Were you expecting more than just that?
Corey Huinker <corey.huinker@gmail.com> writes:
> On Sun, Apr 3, 2016 at 4:26 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> I'm not Jim, but I have a question: what's the motivation for the
>> Fortran-order traversal of the result (down rows before across columns)?

> If I am understanding you correctly, it does work the way you find
> intuitive: all results from the first row are executed before any in the
> second row, so

Oh, I hadn't checked the code closely enough to realize that, but I see
you're right. The patch's documentation seems very confusing on the
point, though:

+ The secondary queries are executed in top-to-bottom, left-to-right order, so the command

I took that as meaning what I said above.

>> It should clear that in all the same
>> places where gfname or gset_prefix get cleared.

> I'm only seeing one place where those two vars are deallocated and nulled,
> and that's at the tail end of SendQuery. Were you expecting more than just
> that?

That may be the only place; I've not looked around.

			regards, tom lane
+ The secondary queries are executed in top-to-bottom, left-to-right order, so the command
I took that as meaning what I said above.
Would using the term https://en.wikipedia.org/wiki/Row-major_order be more clear?
The secondary queries are executed in row-major order, so the command...
If so, it will probably aid in translation as well.
>> It should clear that in all the same
>> places where gfname or gset_prefix get cleared.
> I'm only seeing one place where those two vars are deallocated and nulled,
> and that's at the tail end of SendQuery. Were you expecting more than just
> that?
That may be the only place; I've not looked around.
Yeah, seems that there might have been multiple ones in the past, but all paths now funnel through the sendquery_cleanup: goto.
Corey Huinker <corey.huinker@gmail.com> writes:
>>> + The secondary queries are executed in top-to-bottom,
>>> left-to-right order, so the command

>> I took that as meaning what I said above.

> Would using the term https://en.wikipedia.org/wiki/Row-major_order be more
> clear?

Meh, I suspect a lot of people don't know that term. Perhaps something
like "The generated queries are executed in the order in which the rows
are returned, and left-to-right within each row if there is more than one
column."

			regards, tom lane
On Sun, Apr 3, 2016 at 8:42 PM, Corey Huinker <corey.huinker@gmail.com> wrote:
On Sun, Apr 3, 2016 at 7:43 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Corey Huinker <corey.huinker@gmail.com> writes:
>>> + The secondary queries are executed in top-to-bottom,
>>> left-to-right order, so the command
>> I took that as meaning what I said above.
> Would using the term https://en.wikipedia.org/wiki/Row-major_order be more
> clear?
Meh, I suspect a lot of people don't know that term. Perhaps something
like "The generated queries are executed in the order in which the rows
are returned, and left-to-right within each row if there is more than one
column."
regards, tom lane
I like it. Change forthcoming.
Changes since last submission:
Patch attached. Changes are thus:
- rebased
- pset.gexec_flag unconditionally set to false at end of SendQuery
- wording of documentation describing execution order of results
- rebasing allowed for undoing the re-wrap of enumerated slash commands.
Still not changed:
- exuberant braces, can remove if someone wants me to
Attachment
Corey Huinker <corey.huinker@gmail.com> writes:
> Patch attached. Changes are thus:
> - rebased
> - pset.gexec_flag unconditionally set to false at end of SendQuery
> - wording of documentation describing execution order of results
> - rebasing allowed for undoing the re-wrap of enumerated slash commands.

I whacked this around some and committed it. The main thing that was
broken is that it didn't work nicely at all if you'd set FETCH_COUNT.
I experimented with different approaches to that, and ultimately decided
that the best answer is to disable use of ExecQueryUsingCursor for the
\gexec master query. We can still let it be used for the individual
generated queries, though.

I didn't much like the regression test setup, either. Tests that
have to be at the end of their test files aren't very nice, unless
you give them their very own test file, which checking ON_ERROR_STOP
didn't seem worth. To me it's far more important that the code
respond to cancel_pressed (which, ahem, it wasn't) and we have no
mechanism for testing that in a pg_regress script. So I just dropped
that aspect of it and put the test in a more logical place in the file.

			regards, tom lane
On Mon, Apr 4, 2016 at 3:31 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Corey Huinker <corey.huinker@gmail.com> writes:
> Patch attached. Changes are thus:
> - rebased
> - pset.gexec_flag unconditionally set to false at end of SendQuery
> - wording of documentation describing execution order of results
> - rebasing allowed for undoing the re-wrap of enumerated slash commands.
I whacked this around some and committed it. The main thing that was
Hooray!
broken is that it didn't work nicely at all if you'd set FETCH_COUNT.
Mmm, yeah, I hadn't considered cursor fetches, but in the use cases (at least the ones I can imagine for this) you wouldn't want fetches.
I experimented with different approaches to that, and ultimately decided
that the best answer is to disable use of ExecQueryUsingCursor for the
\gexec master query. We can still let it be used for the individual
generated queries, though.
Fine by me.
I didn't much like the regression test setup, either. Tests that
have to be at the end of their test files aren't very nice, unless
you give them their very own test file, which checking ON_ERROR_STOP
didn't seem worth. To me it's far more important that the code
respond to cancel_pressed (which, ahem, it wasn't) and we have no
mechanism for testing that in a pg_regress script. So I just dropped
that aspect of it and put the test in a more logical place in the file.
I think it was Jim that added the ON_ERROR_STOP check. I wasn't sure how to properly test that.
Thanks for finding (and fixing) the cancel_pressed issue.