Thread: retrieving varchar size
Hello,

I was wondering if it would be possible, in the Postgres backend, to send
back the defined column size for the varchar data type (and possibly the
char() type, i.e., bpchar) on a query? Currently, it just sends back -1
for the size, which makes it difficult in the frontend (i.e., the ODBC
driver) to determine what the size of the column is.

Thank you,
Byron
Byron Nikolaidis wrote:
>
> Hello,
>
> I was wondering if it would be possible, in the Postgres backend, to
> send back the defined column size for the varchar data type (and
> possibly the char() type, i.e., bpchar) on a query? Currently, it just
> sends back -1 for the size, which makes it difficult in the frontend
> (i.e., the ODBC driver) to determine what the size of the column is.

While the right solution to this is of course getting the size from the
backend, there exists a workaround now (assuming that the query is not
too expensive). While ASCII cursors always hide the varchar sizes, binary
ones return the size in the actual data (by zero-padding the returned
data to max size), so one can determine the actual max sizes by opening
the query in a binary cursor and then examining enough records to get one
non-null field for each varchar field.

Hannu
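For concreteness, a minimal sketch of the workaround Hannu describes,
assuming a 6.3-era libpq client; the table and column names are
hypothetical, and (per the correction in the next message) the
zero-padding applies to char() fields as of 6.3:

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn   *conn = PQsetdb(NULL, NULL, NULL, NULL, "template1");
        PGresult *res;

        if (PQstatus(conn) != CONNECTION_OK)
            return 1;

        PQclear(PQexec(conn, "BEGIN"));
        PQclear(PQexec(conn,
            "DECLARE c BINARY CURSOR FOR SELECT name FROM mytable"));
        res = PQexec(conn, "FETCH 10 IN c");

        /* A non-null char(n) value in binary format is padded to its
         * declared width, so the field length reveals the column size. */
        if (PQntuples(res) > 0 && PQgetlength(res, 0, 0) > 0)
            printf("apparent column width: %d\n", PQgetlength(res, 0, 0));

        PQclear(res);
        PQclear(PQexec(conn, "END"));
        PQfinish(conn);
        return 0;
    }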
> Byron Nikolaidis wrote:
> >
> > Hello,
> >
> > I was wondering if it would be possible, in the Postgres backend, to
> > send back the defined column size for the varchar data type (and
> > possibly the char() type, i.e., bpchar) on a query? Currently, it just
> > sends back -1 for the size, which makes it difficult in the frontend
> > (i.e., the ODBC driver) to determine what the size of the column is.
>
> While the right solution to this is of course getting the size from the
> backend, there exists a workaround now (assuming that the query is not
> too expensive). While ASCII cursors always hide the varchar sizes,
> binary ones return the size in the actual data (by zero-padding the
> returned data to max size), so one can determine the actual max sizes
> by opening the query in a binary cursor and then examining enough
> records to get one non-null field for each varchar field.

As of 6.3, this is only true of char() fields. Varchar() is now variable
length.

--
Bruce Momjian                   | 830 Blythe Avenue
maillist@candle.pha.pa.us       | Drexel Hill, Pennsylvania 19026
+ If your life is a hard drive, | (610) 353-9879(w)
+ Christ can be your backup.    | (610) 853-3000(h)
> Hello,
>
> I was wondering if it would be possible, in the Postgres backend, to
> send back the defined column size for the varchar data type (and
> possibly the char() type, i.e., bpchar) on a query? Currently, it just
> sends back -1 for the size, which makes it difficult in the frontend
> (i.e., the ODBC driver) to determine what the size of the column is.

This is kind of tough to do.

--
Bruce Momjian                   | 830 Blythe Avenue
maillist@candle.pha.pa.us       | Drexel Hill, Pennsylvania 19026
+ If your life is a hard drive, | (610) 353-9879(w)
+ Christ can be your backup.    | (610) 853-3000(h)
Bruce Momjian wrote:
>
> > Byron Nikolaidis wrote:
> > >
> > > Hello,
> > >
> > > I was wondering if it would be possible, in the Postgres backend,
> > > to send back the defined column size for the varchar data type (and
> > > possibly the char() type, i.e., bpchar) on a query? Currently, it
> > > just sends back -1 for the size, which makes it difficult in the
> > > frontend (i.e., the ODBC driver) to determine what the size of the
> > > column is.
>
> This is kind of tough to do.

What makes it tough? Is this info not available where needed, or is
changing the protocol tough?

In the latter case, I would suggest an additional SQL command for open
cursors, or a pseudo table for open cursors where you could do a simple
select statement:

    DECLARE CURSOR FOO_CURSOR FOR SELECT * FROM MYTABLE;

    SELECT _FIELD_NAME, _FIELD_TYPE, _FIELD_SIZE
    FROM FOO_CURSOR_INFO_PSEUDOTABLE;

> > While the right solution to this is of course getting the size from
> > the backend, there exists a workaround now (assuming that the query
> > is not too expensive). While ASCII cursors always hide the varchar
> > sizes, binary ones return the size in the actual data (by
> > zero-padding the returned data to max size), so one can determine the
> > actual max sizes by opening the query in a binary cursor and then
> > examining enough records to get one non-null field for each varchar
> > field.
>
> As of 6.3, this is only true of char() fields. Varchar() is now
> variable length.

As knowing field size is quite essential for Borland applications, some
solution should be found for this.

Hannu
> In the latter case, I would suggest an additional SQL command for open
> cursors, or a pseudo table for open cursors where you could do a simple
> select statement:
>
>     DECLARE CURSOR FOO_CURSOR FOR SELECT * FROM MYTABLE;
>
>     SELECT _FIELD_NAME, _FIELD_TYPE, _FIELD_SIZE
>     FROM FOO_CURSOR_INFO_PSEUDOTABLE;

The information you want is in pg_attribute.atttypmod. It is normally -1,
but is set for char() and varchar() fields, and includes the 4-byte
length. See bin/psql/psql.c for a sample of its use.

--
Bruce Momjian                   | 830 Blythe Avenue
maillist@candle.pha.pa.us       | Drexel Hill, Pennsylvania 19026
+ If your life is a hard drive, | (610) 353-9879(w)
+ Christ can be your backup.    | (610) 853-3000(h)
Bruce Momjian wrote:
>
> > In the latter case, I would suggest an additional SQL command for
> > open cursors, or a pseudo table for open cursors where you could do a
> > simple select statement:
> >
> >     DECLARE CURSOR FOO_CURSOR FOR SELECT * FROM MYTABLE;
> >
> >     SELECT _FIELD_NAME, _FIELD_TYPE, _FIELD_SIZE
> >     FROM FOO_CURSOR_INFO_PSEUDOTABLE;
>
> The information you want is in pg_attribute.atttypmod. It is normally
> -1, but is set for char() and varchar() fields, and includes the 4-byte
> length. See bin/psql/psql.c for a sample of its use.

I see everyone writing in terms of length. You do mean precision, don't
you? For our purposes, this precision should arrive in the result header.
(Redundancy in each tuple could be overlooked.) The goal is to be able to
put realistic bounds on memory allocation before the entire result is
read in. For this to work, functions must also be able to propagate their
precision.

Did I spell doom to this idea?
David Hartwig wrote:
>
> Bruce Momjian wrote:
> >
> > > In the latter case, I would suggest an additional SQL command for
> > > open cursors, or a pseudo table for open cursors where you could do
> > > a simple select statement:
> > >
> > >     DECLARE CURSOR FOO_CURSOR FOR SELECT * FROM MYTABLE;
> > >
> > >     SELECT _FIELD_NAME, _FIELD_TYPE, _FIELD_SIZE
> > >     FROM FOO_CURSOR_INFO_PSEUDOTABLE;
> >
> > The information you want is in pg_attribute.atttypmod. It is normally
> > -1, but is set for char() and varchar() fields, and includes the
> > 4-byte length. See bin/psql/psql.c for a sample of its use.

Is this on the client side or server side? Last time I checked (it was in
the 6.2 protocol) it was not sent to the client.

What I need is the defined max length of varchar (or char), not just the
actual length of each field of that type. This is used by Borland's BDE,
and if this changes depending on the where clause, it breaks BDE.

> I see everyone writing in terms of length. You do mean precision,
> don't you?

In case varchars have precision, yes ;)

> For our purposes, this precision should arrive in the result header.
> (Redundancy in each tuple could be overlooked.) The goal is to be able
> to put realistic bounds on memory allocation before the entire result
> is read in. For this to work, functions must also be able to propagate
> their precision.

Yes, the functions should behave as objects, so that you can get metadata
on them. So functions should know, depending on the max lengths of their
arguments, how long the strings they return are. But even without this
functionality, having this info is essential to getting Borland stuff to
work.

> Did I spell doom to this idea?

I hope not.

Hannu
> > The information you want is in pg_attribute.atttypmod. It is normally
> > -1, but is set for char() and varchar() fields, and includes the
> > 4-byte length. See bin/psql/psql.c for a sample of its use.
>
> I see everyone writing in terms of length. You do mean precision, don't
> you? For our purposes, this precision should arrive in the result
> header. (Redundancy in each tuple could be overlooked.) The goal is to
> be able to put realistic bounds on memory allocation before the entire
> result is read in. For this to work, functions must also be able to
> propagate their precision.
>
> Did I spell doom to this idea?

Hmm. The problem is that many of us use the old 'text' type, which
doesn't have a defined length. Not sure how to handle this in a portable
libpq way?

--
Bruce Momjian                   | 830 Blythe Avenue
maillist@candle.pha.pa.us       | Drexel Hill, Pennsylvania 19026
+ If your life is a hard drive, | (610) 353-9879(w)
+ Christ can be your backup.    | (610) 853-3000(h)
> Is this on the client side or server side?
>
> Last time I checked (it was in the 6.2 protocol) it was not sent to the
> client.
>
> What I need is the defined max length of varchar (or char), not just
> the actual length of each field of that type. This is used by Borland's
> BDE, and if this changes depending on the where clause, it breaks BDE.

Can't you do:

    select atttypmod from pg_attribute
    where attrelid = 10003 and attname = 'col1';

That will give the length + 4 bytes.

--
Bruce Momjian                   | 830 Blythe Avenue
maillist@candle.pha.pa.us       | Drexel Hill, Pennsylvania 19026
+ If your life is a hard drive, | (610) 353-9879(w)
+ Christ can be your backup.    | (610) 853-3000(h)
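As an illustration of Bruce's suggestion, a minimal client-side sketch
assuming 6.3-era libpq; the join through pg_class (to avoid hard-coding
the relation OID) and the table/column names are assumptions for the
example, and subtracting 4 removes the varlena length header he mentions:

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Print the declared width of mytable.col1 via pg_attribute.atttypmod. */
    void
    print_declared_width(PGconn *conn)
    {
        PGresult *res = PQexec(conn,
            "select a.atttypmod - 4 "
            "from pg_attribute a, pg_class c "
            "where a.attrelid = c.oid "
            "  and c.relname = 'mytable' "
            "  and a.attname = 'col1'");

        if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1)
            printf("declared width: %s\n", PQgetvalue(res, 0, 0));
        PQclear(res);
    }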
Bruce Momjian wrote:

> Can't you do:
>
>     select atttypmod from pg_attribute
>     where attrelid = 10003 and attname = 'col1';
>
> That will give the length + 4 bytes.

The problem with that theory is this. If the frontend application just
executes some random query, such as "select * from table", you really do
not know anything about what is coming back. You must rely on the little
bit of information the protocol gives you. In the case of Postgres, it
gives you the fieldname, datatype, and size for each column in the
result. Unfortunately, for varchar and char(n), the size reports -1.
This is not very helpful for describing the result set.

Your above example works fine (in fact we use that already) when you know
the table and column name, as in metadata functions such as SQLColumns()
in the ODBC driver.

Byron
> The problem with that theory is this. If the frontend application just
> executes some random query, such as "select * from table", you really
> do not know anything about what is coming back. You must rely on the
> little bit of information the protocol gives you. In the case of
> Postgres, it gives you the fieldname, datatype, and size for each
> column in the result. Unfortunately, for varchar and char(n), the size
> reports -1. This is not very helpful for describing the result set.
>
> Your above example works fine (in fact we use that already) when you
> know the table and column name, as in metadata functions such as
> SQLColumns() in the ODBC driver.

Yep. We could pass back atttypmod as part of the PGresult. I can add
that to the TODO list. Would that help?

--
Bruce Momjian                   | 830 Blythe Avenue
maillist@candle.pha.pa.us       | Drexel Hill, Pennsylvania 19026
+ If your life is a hard drive, | (610) 353-9879(w)
+ Christ can be your backup.    | (610) 853-3000(h)
Bruce Momjian wrote:
>
> > The problem with that theory is this. If the frontend application
> > just executes some random query, such as "select * from table", you
> > really do not know anything about what is coming back. You must rely
> > on the little bit of information the protocol gives you. In the case
> > of Postgres, it gives you the fieldname, datatype, and size for each
> > column in the result. Unfortunately, for varchar and char(n), the
> > size reports -1. This is not very helpful for describing the result
> > set.
> >
> > Your above example works fine (in fact we use that already) when you
> > know the table and column name, as in metadata functions such as
> > SQLColumns() in the ODBC driver.
>
> Yep. We could pass back atttypmod as part of the PGresult. I can add
> that to the TODO list. Would that help?

Yes, that would do it!

Thank you for listening to our ravings on this issue.

Byron
> Bruce Momjian wrote:
> >
> > > The problem with that theory is this. If the frontend application
> > > just executes some random query, such as "select * from table", you
> > > really do not know anything about what is coming back. You must
> > > rely on the little bit of information the protocol gives you. In
> > > the case of Postgres, it gives you the fieldname, datatype, and
> > > size for each column in the result. Unfortunately, for varchar and
> > > char(n), the size reports -1. This is not very helpful for
> > > describing the result set.
> > >
> > > Your above example works fine (in fact we use that already) when
> > > you know the table and column name, as in metadata functions such
> > > as SQLColumns() in the ODBC driver.
> >
> > Yep. We could pass back atttypmod as part of the PGresult. I can add
> > that to the TODO list. Would that help?
>
> Yes, that would do it!
>
> Thank you for listening to our ravings on this issue.

Added to TODO:

    * Add pg_attribute.atttypmod/Resdom->restypmod to PGresult structure

This is a good suggestion.

--
Bruce Momjian                   | 830 Blythe Avenue
maillist@candle.pha.pa.us       | Drexel Hill, Pennsylvania 19026
+ If your life is a hard drive, | (610) 353-9879(w)
+ Christ can be your backup.    | (610) 853-3000(h)
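For illustration, a self-contained sketch of what the client side of this
TODO item might look like: if libpq stored the new per-attribute value,
an accessor analogous to PQfsize() could expose it. The struct layout and
the function name here are assumptions for the example, not the actual
6.3 libpq API:

    #include <stddef.h>

    /* Illustrative stand-ins for libpq's internal attribute descriptor,
     * extended with the proposed typmod field (an assumption). */
    typedef struct
    {
        char *name;     /* column name                          */
        int   adtid;    /* type OID                             */
        int   adtsize;  /* type length; -1 for varlena types    */
        int   typmod;   /* proposed: atttypmod from the backend */
    } AttDesc;

    typedef struct
    {
        int      numAttributes;
        AttDesc *attDescs;
    } Result;

    /* Accessor in the spirit of PQfsize(); returns -1 when unknown. */
    int
    result_fmod(const Result *res, int field_num)
    {
        if (res == NULL || field_num < 0 || field_num >= res->numAttributes)
            return -1;
        return res->attDescs[field_num].typmod;
    }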
Bruce Momjian <maillist@candle.pha.pa.us> writes:
> Added to TODO:
> * Add pg_attribute.atttypmod/Resdom->restypmod to PGresult structure
> This is a good suggestion.

This will require a frontend/backend protocol change, no?

If so, right now would be a great time to address it; I'm about halfway
through rewriting libpq for the asynchronous-query support we discussed
last week, and would be happy to make the client-side mods while I still
have the code in my head.

As long as we are opening up the protocol, there is an incredibly grotty
hack in libpq that I'd like to get rid of. It's hard for me to be sure
whether it's even necessary, but: when libpq gets a 'C' response (which
the documentation says is a "completed response") it assumes that this is
*not* the end of the transaction, and that the only way to be sure that
everything's been read is to send an empty query and wait for the empty
query's 'I' response to be returned.

    case 'C':       /* portal query command, no rows returned */
        /*
         * since backend may produce more than one result
         * for some commands, need to poll until clear.
         * Send an empty query down, and keep reading out of
         * the pipe until an 'I' is received.
         */

Does this ring a bell with anyone? I'm prepared to believe that it's
useless code, but have no easy way to be sure.

Needless to say, if there really is an ambiguity then the *right* answer
is to fix the protocol so that the end of a query/response cycle is
unambiguously determinable. It looks to me like this hack is costing us
an extra round trip to the server for every ordinary query. That sucks.

    regards, tom lane
Yes, it rings a bell alright. When you execute a multiple query (denoted
by semicolons) like "set geqo to 'off'; show datestyle; select * from
table", you get those multiple returns and MUST read until you get the
'I'. If you don't, you're screwed the next time you try to read anything,
because all that stuff is still in the pipe.

Question though: I didn't think my request would have caused a major
protocol change. I thought that the '-1' would simply be replaced by the
correct size?

Byron

Tom Lane wrote:

> Bruce Momjian <maillist@candle.pha.pa.us> writes:
> > Added to TODO:
> > * Add pg_attribute.atttypmod/Resdom->restypmod to PGresult structure
> > This is a good suggestion.
>
> This will require a frontend/backend protocol change, no?
>
> If so, right now would be a great time to address it; I'm about halfway
> through rewriting libpq for the asynchronous-query support we discussed
> last week, and would be happy to make the client-side mods while I
> still have the code in my head.
>
> As long as we are opening up the protocol, there is an incredibly
> grotty hack in libpq that I'd like to get rid of. It's hard for me to
> be sure whether it's even necessary, but: when libpq gets a 'C'
> response (which the documentation says is a "completed response") it
> assumes that this is *not* the end of the transaction, and that the
> only way to be sure that everything's been read is to send an empty
> query and wait for the empty query's 'I' response to be returned.
>
>     case 'C':       /* portal query command, no rows returned */
>         /*
>          * since backend may produce more than one result
>          * for some commands, need to poll until clear.
>          * Send an empty query down, and keep reading out of
>          * the pipe until an 'I' is received.
>          */
>
> Does this ring a bell with anyone? I'm prepared to believe that it's
> useless code, but have no easy way to be sure.
>
> Needless to say, if there really is an ambiguity then the *right*
> answer is to fix the protocol so that the end of a query/response
> cycle is unambiguously determinable. It looks to me like this hack is
> costing us an extra round trip to the server for every ordinary query.
> That sucks.
>
>     regards, tom lane
Byron Nikolaidis <byronn@insightdist.com> writes:
> Yes, it rings a bell alright. When you execute a multiple query
> (denoted by semicolons) like "set geqo to 'off'; show datestyle;
> select * from table", you get those multiple returns and MUST read
> until you get the 'I'. If you don't, you're screwed the next time you
> try to read anything, because all that stuff is still in the pipe.

That seems pretty bogus. What happens if you do

    select * from table1; select * from table2

? The way the code in libpq looks, I think the response from the first
select would get lost entirely (probably even cause a memory leak). It's
not set up to handle receipt of more than one command response in any
clean fashion. We'd need to revise the application API to make that work
right.

Playing around with psql, it seems that you can't actually get psql to
submit a multi-command line as a single query; it seems to break it up
into separate queries. Which is what libpq can cope with.

I think we should either forbid multiple commands per PQexec call, or
fix libpq to handle them properly (and hence be able to return a series
of PGresults, not just one).

> Question though: I didn't think my request would have caused a major
> protocol change. I thought that the '-1' would simply be replaced by
> the correct size?

I assumed we'd want to add the restypmod as a new field in PGresult and
in the protocol. But I'm just a newbie.

    regards, tom lane
> Yes, it rings a bell alright.
>
> When you execute a multiple query (denoted by semicolons) like "set
> geqo to 'off'; show datestyle; select * from table", you get those
> multiple returns and MUST read until you get the 'I'. If you don't,
> you're screwed the next time you try to read anything, because all that
> stuff is still in the pipe.

Good point. If we don't send the empty query, the queued-up results get
out of sync with the requests.

One solution is to handle it the way psql does. It keeps track of the
quotes, backslashes, and semicolons in the input string, and sends just
one query each time to the backend, and prints the results.

Now, with libpq, I think the proper solution would be to scan the input
string, count the number of queries being sent, send the whole string
(with the multiple queries), and retrieve that many answers from the
backend, discarding all but the last result. If you do that, I can remove
the stuff from psql.c.

> Question though: I didn't think my request would have caused a major
> protocol change. I thought that the '-1' would simply be replaced by
> the correct size?

Well, the -1 is in attlen, which is the type length. text, char, varchar
are all varlena (variable length)/-1. atttypmod is the length specified
at attribute creation time. It is similar, but not the same as the
length, and trying to put the typmod in the length field really messes up
the clarity of what is going on. We added atttypmod to clarify the code
in the backend, and it should be sent to the front end. Soon, maybe we
will have atttypmod specifying the precision of DECIMAL, or the currency
of MONEY.

As far as adding atttypmod to libpq, I say do it. If you look in the
backend's BeginCommand(), under the Remote case label, you will see it
sending the atttypid to the front end, using the TupleDesc that was
passed to it. Just after sending the atttyplen, I can send the atttypmod
value, which is an int16. I can do all the backend changes. There are a
few places where this would have to be changed in the backend.

Other front-end libraries reading this protocol will have to change to
accept this field.

--
Bruce Momjian                   | 830 Blythe Avenue
maillist@candle.pha.pa.us       | Drexel Hill, Pennsylvania 19026
+ If your life is a hard drive, | (610) 353-9879(w)
+ Christ can be your backup.    | (610) 853-3000(h)
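A minimal sketch of the splitting psql does, per Bruce's description:
walk the string, track quote and backslash state, and cut at top-level
semicolons. Simplified for illustration (the real psql.c handles more
cases, and send_one_query is a hypothetical stand-in for handing the
statement to PQexec):

    #include <stdio.h>

    /* Stand-in for handing one statement to the backend. */
    static void
    send_one_query(const char *q, int len)
    {
        printf("query: %.*s\n", len, q);
    }

    /* Split a multi-command string at semicolons outside quotes. */
    static void
    split_and_send(const char *s)
    {
        const char *start = s;
        int in_quote = 0;

        for (; *s; s++)
        {
            if (*s == '\\' && s[1] != '\0')
                s++;                    /* skip the escaped character */
            else if (*s == '\'')
                in_quote = !in_quote;
            else if (*s == ';' && !in_quote)
            {
                send_one_query(start, (int) (s - start));
                start = s + 1;
            }
        }
        if (s > start)
            send_one_query(start, (int) (s - start));
    }

    int
    main(void)
    {
        split_and_send("set geqo to 'off'; show datestyle; select * from t");
        return 0;
    }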
> Byron Nikolaidis <byronn@insightdist.com> writes:
> > Yes, it rings a bell alright. When you execute a multiple query
> > (denoted by semicolons) like "set geqo to 'off'; show datestyle;
> > select * from table", you get those multiple returns and MUST read
> > until you get the 'I'. If you don't, you're screwed the next time
> > you try to read anything, because all that stuff is still in the
> > pipe.
>
> That seems pretty bogus. What happens if you do
>
>     select * from table1; select * from table2
>
> ? The way the code in libpq looks, I think the response from the first
> select would get lost entirely (probably even cause a memory leak).
> It's not set up to handle receipt of more than one command response in
> any clean fashion. We'd need to revise the application API to make
> that work right.
>
> Playing around with psql, it seems that you can't actually get psql to
> submit a multi-command line as a single query; it seems to break it up
> into separate queries. Which is what libpq can cope with.

Yep, you figured it out. (See earlier posting.)

I have now thought about the problem some more, and I think an even
better solution would be that if the backend receives multiple commands
in a single query, it just returns the first or last result. There is no
mechanism in libpq to send a query and get multiple results back, so why
not just return one result. No need to count the number of queries sent,
and no reason to send empty queries to the backend looking for the last
result.

If you want me to do this for the backend, let me know and I will do it.
First or last result? What do we return now?

> I think we should either forbid multiple commands per PQexec call, or
> fix libpq to handle them properly (and hence be able to return a
> series of PGresults, not just one).
>
> > Question though: I didn't think my request would have caused a major
> > protocol change. I thought that the '-1' would simply be replaced by
> > the correct size?
>
> I assumed we'd want to add the restypmod as a new field in PGresult
> and in the protocol. But I'm just a newbie.

restypmod may not be available at the time of returning the result, but
the TupleDesc is, and it has the proper atttypmod.

--
Bruce Momjian                   | 830 Blythe Avenue
maillist@candle.pha.pa.us       | Drexel Hill, Pennsylvania 19026
+ If your life is a hard drive, | (610) 353-9879(w)
+ Christ can be your backup.    | (610) 853-3000(h)
On Sat, 25 Apr 1998, Bruce Momjian wrote:

> > Yes, it rings a bell alright.
> >
> > When you execute a multiple query (denoted by semicolons) like "set
> > geqo to 'off'; show datestyle; select * from table", you get those
> > multiple returns and MUST read until you get the 'I'. If you don't,
> > you're screwed the next time you try to read anything, because all
> > that stuff is still in the pipe.
>
> Good point. If we don't send the empty query, the queued-up results
> get out of sync with the requests.
>
> One solution is to handle it the way psql does. It keeps track of the
> quotes, backslashes, and semicolons in the input string, and sends
> just one query each time to the backend, and prints the results.
>
> Now, with libpq, I think the proper solution would be to scan the
> input string, count the number of queries being sent, send the whole
> string (with the multiple queries), and retrieve that many answers
> from the backend, discarding all but the last result. If you do that,
> I can remove the stuff from psql.c.

I think for libpq, that would be a good idea, but it would mean that
there is a difference in behaviour between the interfaces.

The JDBC spec allows for multiple ResultSets to be returned from a query,
and our driver handles this already.

Now is this the client libpq, or the backend libpq you are thinking of
changing? If it's the backend one, then this will break JDBC with
multiple result sets.

> > Question though: I didn't think my request would have caused a major
> > protocol change. I thought that the '-1' would simply be replaced by
> > the correct size?
>
> Well, the -1 is in attlen, which is the type length. text, char,
> varchar are all varlena (variable length)/-1. atttypmod is the length
> specified at attribute creation time. It is similar, but not the same
> as the length, and trying to put the typmod in the length field really
> messes up the clarity of what is going on. We added atttypmod to
> clarify the code in the backend, and it should be sent to the front
> end. Soon, maybe we will have atttypmod specifying the precision of
> DECIMAL, or the currency of MONEY.

That would be useful.

> As far as adding atttypmod to libpq, I say do it. If you look in the
> backend's BeginCommand(), under the Remote case label, you will see it
> sending the atttypid to the front end, using the TupleDesc that was
> passed to it. Just after sending the atttyplen, I can send the
> atttypmod value, which is an int16. I can do all the backend changes.
> There are a few places where this would have to be changed in the
> backend.
>
> Other front-end libraries reading this protocol will have to change to
> accept this field.

As soon as you do it, I'll convert JDBC.

--
Peter T Mount peter@retep.org.uk or petermount@earthling.net
Main Homepage: http://www.demon.co.uk/finder (moving soon to www.retep.org.uk)
************ Someday I may rebuild this signature completely ;-) ************
Work Homepage: http://www.maidstone.gov.uk Work EMail: peter@maidstone.gov.uk
> > One solution is to handle it the way psql does. It keeps track of
> > the quotes, backslashes, and semicolons in the input string, and
> > sends just one query each time to the backend, and prints the
> > results.
> >
> > Now, with libpq, I think the proper solution would be to scan the
> > input string, count the number of queries being sent, send the whole
> > string (with the multiple queries), and retrieve that many answers
> > from the backend, discarding all but the last result. If you do
> > that, I can remove the stuff from psql.c.
>
> I think for libpq, that would be a good idea, but it would mean that
> there is a difference in behaviour between the interfaces.
>
> The JDBC spec allows for multiple ResultSets to be returned from a
> query, and our driver handles this already.

Oh. That prevents us from changing the backend to ignore returning more
than one result for multiple queries in a PQexec. Perhaps we need a new
return query protocol character like 'J' to denote query returns that are
not the LAST return, so libpq can throw them away, and JDBC can process
them as normal, but also figure out when it gets the last one.

> Now is this the client libpq, or the backend libpq you are thinking of
> changing? If it's the backend one, then this will break JDBC with
> multiple result sets.
>
> > > Question though: I didn't think my request would have caused a
> > > major protocol change. I thought that the '-1' would simply be
> > > replaced by the correct size?
> >
> > Well, the -1 is in attlen, which is the type length. text, char,
> > varchar are all varlena (variable length)/-1. atttypmod is the
> > length specified at attribute creation time. It is similar, but not
> > the same as the length, and trying to put the typmod in the length
> > field really messes up the clarity of what is going on. We added
> > atttypmod to clarify the code in the backend, and it should be sent
> > to the front end. Soon, maybe we will have atttypmod specifying the
> > precision of DECIMAL, or the currency of MONEY.
>
> That would be useful.
>
> > As far as adding atttypmod to libpq, I say do it. If you look in the
> > backend's BeginCommand(), under the Remote case label, you will see
> > it sending the atttypid to the front end, using the TupleDesc that
> > was passed to it. Just after sending the atttyplen, I can send the
> > atttypmod value, which is an int16. I can do all the backend
> > changes. There are a few places where this would have to be changed
> > in the backend.
> >
> > Other front-end libraries reading this protocol will have to change
> > to accept this field.
>
> As soon as you do it, I'll convert JDBC.
>
> --
> Peter T Mount peter@retep.org.uk or petermount@earthling.net
> Main Homepage: http://www.demon.co.uk/finder (moving soon to www.retep.org.uk)
> ************ Someday I may rebuild this signature completely ;-) ************
> Work Homepage: http://www.maidstone.gov.uk Work EMail: peter@maidstone.gov.uk

--
Bruce Momjian                   | 830 Blythe Avenue
maillist@candle.pha.pa.us       | Drexel Hill, Pennsylvania 19026
+ If your life is a hard drive, | (610) 353-9879(w)
+ Christ can be your backup.    | (610) 853-3000(h)
On Sun, 26 Apr 1998, Bruce Momjian wrote:

[snip]

> > I think for libpq, that would be a good idea, but it would mean that
> > there is a difference in behaviour between the interfaces.
> >
> > The JDBC spec allows for multiple ResultSets to be returned from a
> > query, and our driver handles this already.
>
> Oh. That prevents us from changing the backend to ignore returning
> more than one result for multiple queries in a PQexec. Perhaps we need
> a new return query protocol character like 'J' to denote query returns
> that are not the LAST return, so libpq can throw them away, and JDBC
> can process them as normal, but also figure out when it gets the last
> one.

That should be easy enough to implement.

--
Peter T Mount peter@retep.org.uk or petermount@earthling.net
Main Homepage: http://www.demon.co.uk/finder (moving soon to www.retep.org.uk)
************ Someday I may rebuild this signature completely ;-) ************
Work Homepage: http://www.maidstone.gov.uk Work EMail: peter@maidstone.gov.uk
Bruce Momjian <maillist@candle.pha.pa.us> writes:
> Oh. That prevents us from changing the backend to ignore returning
> more than one result for multiple queries in a PQexec. Perhaps we need
> a new return query protocol character like 'J' to denote query returns
> that are not the LAST return, so libpq can throw them away, and JDBC
> can process them as normal, but also figure out when it gets the last
> one.

That would require the code processing an individual command in the
backend to know whether it was the last one or not, which seems like a
very undesirable interaction.

Instead, I'd suggest we simply add a new BE->FE message that says "I'm
done processing your query and am returning to idle state". This would be
sent at the end of *every* query, correct or failing. Trivial to
implement: send it at the bottom of the main loop in postgres.c.

The more general question is whether we ought to redesign libpq's API to
permit multiple command responses to be returned from one query. I think
that would be a good idea, if we can do it in a way that doesn't break
existing applications for the single-command-per-query case. (BTW, I'm
defining "query" as "string handed to PQexec"; perhaps this is backwards
from the usual terminology?)

Maybe have libpq queue up the results and return the first one, then
provide a function to pull the rest from the queue:

    result = PQexec(conn, query);
    // process result, eventually free it with PQclear
    while ((result = PQnextResult(conn)) != NULL)
    {
        // process result, eventually free it with PQclear
    }
    // ready to send new query

An app that didn't use PQnextResult would still work as long as it never
sent multiple commands per query. (Question: if the app sends a
multi-command query and doesn't call PQnextResult, the next PQexec will
know it because the result queue is nonempty. Should PQexec complain, or
just silently clear the queue?)

One thing that likely would *not* work very nicely is copy in/out as part
of a multi-command query, since there is currently no provision for
PQendcopy to return result(s). This is pretty braindead IMHO, but I'm not
sure we can change PQendcopy's API. Any thoughts? What I'd really like
to see is PQendcopy returning a PGresult that indicates success or
failure of the copy, and then additional results could be queued up
behind that for retrieval with PQnextResult.

>>>> Other front-end libraries reading this protocol will have to change
>>>> to accept this field.

And the end-of-query indicator. I think now is the time to do it if
we're gonna do it. Right now, it seems most code is using libpq rather
than seeing the protocol directly, so fixing these problems should be
pretty painless. But wasn't there some discussion recently of running the
protocol directly from Tcl code? If that gets popular it will become
much harder to change the protocol.

As long as we are opening up the issue, there are some other bits of bad
design in the FE/BE protocol:

1. 'B' and 'D' are used as message types for *both* result tuples and
StartCopyIn/StartCopyOut messages. You can only distinguish them by
context, ie, have you seen a 'T' lately. This is very bad. It's not like
we have to do this because we're out of possible message types.

2. Copy In and Copy Out data ought to be part of the protocol, that is,
every line of copy in/out data ought to be prefixed with a message type
code. Fixing this might be more trouble than it's worth however, if
there are any applications that don't go through PQgetline/PQputline.

BTW, I have made good progress with rewriting libpq in an asynchronous
style; the new code ran the regression tests on Friday. But I haven't
tested any actual async behavior yet.

    regards, tom lane
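To make the proposed "returning to idle" message concrete, here is a toy
consumer loop over a canned byte stream; the choice of 'Z' as the message
letter and the stream contents are assumptions for illustration only, not
part of the actual protocol at the time of this thread:

    #include <stdio.h>

    int
    main(void)
    {
        /* Pretend traffic for a two-command query string, followed by
         * the proposed back-to-idle marker. */
        const char stream[] = {'T', 'D', 'C', 'T', 'D', 'D', 'C', 'Z'};
        int nresults = 0;
        int i;

        for (i = 0; i < (int) sizeof(stream); i++)
        {
            switch (stream[i])
            {
                case 'C':       /* command complete: one PGresult */
                    nresults++;
                    break;
                case 'Z':       /* backend idle: whole string done */
                    printf("query string finished, %d result(s)\n",
                           nresults);
                    return 0;
                default:        /* 'T', 'D', etc.: result content */
                    break;
            }
        }
        return 0;
    }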
> Bruce Momjian <maillist@candle.pha.pa.us> writes:
> > Oh. That prevents us from changing the backend to ignore returning
> > more than one result for multiple queries in a PQexec. Perhaps we
> > need a new return query protocol character like 'J' to denote query
> > returns that are not the LAST return, so libpq can throw them away,
> > and JDBC can process them as normal, but also figure out when it
> > gets the last one.
>
> That would require the code processing an individual command in the
> backend to know whether it was the last one or not, which seems like a
> very undesirable interaction.

I think it is pretty easy to do.

> Instead, I'd suggest we simply add a new BE->FE message that says "I'm
> done processing your query and am returning to idle state". This would
> be sent at the end of *every* query, correct or failing. Trivial to
> implement: send it at the bottom of the main loop in postgres.c.

If you are happy with this, it is certainly better than my idea.

> The more general question is whether we ought to redesign libpq's API
> to permit multiple command responses to be returned from one query. I
> think that would be a good idea, if we can do it in a way that doesn't
> break existing applications for the single-command-per-query case.
> (BTW, I'm defining "query" as "string handed to PQexec"; perhaps this
> is backwards from the usual terminology?)

My idea is to make a PQexecv() just like PQexec, except it returns an
array of results, with the end of the array terminated with a NULL, sort
of like readv(), except you return an array rather than supplying one,
i.e.:

    PGresult **resarray;

    resarray = PQexecv(conn, "select * from test; select * from test2");

and it is handled by:

    PGresult **p;

    for (p = resarray; *p != NULL; p++)
        process_result_and_clear(*p);
    free(resarray);

You also have to free the array that holds the result pointers, as well
as the result pointers themselves.

> Maybe have libpq queue up the results and return the first one, then
> provide a function to pull the rest from the queue:
>
>     result = PQexec(conn, query);
>     // process result, eventually free it with PQclear
>     while ((result = PQnextResult(conn)) != NULL)
>     {
>         // process result, eventually free it with PQclear
>     }
>     // ready to send new query
>
> An app that didn't use PQnextResult would still work as long as it
> never sent multiple commands per query. (Question: if the app sends a
> multi-command query and doesn't call PQnextResult, the next PQexec
> will know it because the result queue is nonempty. Should PQexec
> complain, or just silently clear the queue?)

With my idea, we can properly handle or discard multiple results
depending on whether they use PQexec() or PQexecv().

> One thing that likely would *not* work very nicely is copy in/out as
> part of a multi-command query, since there is currently no provision
> for PQendcopy to return result(s). This is pretty braindead IMHO, but
> I'm not sure we can change PQendcopy's API. Any thoughts? What I'd
> really like to see is PQendcopy returning a PGresult that indicates
> success or failure of the copy, and then additional results could be
> queued up behind that for retrieval with PQnextResult.

Not sure on this one. If we change the API, we have to have a good
reason to do it. API additions are OK.

> >>>> Other front-end libraries reading this protocol will have to
> >>>> change to accept this field.
>
> And the end-of-query indicator. I think now is the time to do it if
> we're gonna do it. Right now, it seems most code is using libpq rather
> than seeing the protocol directly, so fixing these problems should be
> pretty painless. But wasn't there some discussion recently of running
> the protocol directly from Tcl code? If that gets popular it will
> become much harder to change the protocol.

Yep, let's change it now.

> As long as we are opening up the issue, there are some other bits of
> bad design in the FE/BE protocol:
>
> 1. 'B' and 'D' are used as message types for *both* result tuples and
> StartCopyIn/StartCopyOut messages. You can only distinguish them by
> context, ie, have you seen a 'T' lately. This is very bad. It's not
> like we have to do this because we're out of possible message types.

Yep, let's use distinct ones.

> 2. Copy In and Copy Out data ought to be part of the protocol, that
> is, every line of copy in/out data ought to be prefixed with a message
> type code. Fixing this might be more trouble than it's worth however,
> if there are any applications that don't go through
> PQgetline/PQputline.

Again, if we clearly document the change, we are far enough from 6.4 that
perl and other people will handle the change by the time 6.4 is released.
Changes that affect user apps are more difficult.

> BTW, I have made good progress with rewriting libpq in an asynchronous
> style; the new code ran the regression tests on Friday. But I haven't
> tested any actual async behavior yet.

Good. You may need a patch from me for the backend before you can test
some of your changes. Let me know what you decide, and I will send you a
patch for testing.

--
Bruce Momjian                   | 830 Blythe Avenue
maillist@candle.pha.pa.us       | Drexel Hill, Pennsylvania 19026
+ If your life is a hard drive, | (610) 353-9879(w)
+ Christ can be your backup.    | (610) 853-3000(h)
> > One thing that likely would *not* work very nicely is copy in/out
> > as part of a multi-command query, since there is currently no
> > provision for PQendcopy to return result(s). This is pretty
> > braindead IMHO, but I'm not sure we can change PQendcopy's API.
> > Any thoughts? What

Adding a return result to PQendcopy would not be a big deal. Just
document it so the few people who do this in an application know to free
the result. The interface libraries can handle the change.

--
Bruce Momjian                   | 830 Blythe Avenue
maillist@candle.pha.pa.us       | Drexel Hill, Pennsylvania 19026
+ If your life is a hard drive, | (610) 353-9879(w)
+ Christ can be your backup.    | (610) 853-3000(h)
Bruce Momjian <maillist@candle.pha.pa.us> writes:
> My idea is to make a PQexecv() just like PQexec, except it returns an
> array of results, with the end of the array terminated with a NULL,
> [ as opposed to my idea of returning PGresults one at a time ]

Hmm. I think the one-at-a-time approach is probably better, mainly
because it doesn't require libpq to have generated all the PGresult
objects before it can return the first one.

Here is an example in which the array approach doesn't work very well:

    QUERY: copy stdin to relation ; select * from relation

What we want is for the application to receive a PGRES_COPY_IN result,
perform the data transfer, call PQendcopy, and then receive a PGresult
for the select.

I don't see any way to make this work if the library has to give back an
array of results right off the bat. With the other method, PQendcopy
will read the select command's output and stuff it into the (hidden)
result queue. Then when the application calls PQnextResult, presto,
there it is. Correct logic for an application that submits multi-command
query strings would be something like

    result = PQexec(conn, query);

    while (result)
    {
        switch (PQresultStatus(result))
        {
            ...
            case PGRES_COPY_IN:
                // ... copy data here ...
                if (PQendcopy(conn))
                    reportError();
                break;
            ...
        }

        PQclear(result);
        result = PQnextResult(conn);
    }

Another thought: we might consider making PQexec return as soon as it's
received the first query result, thereby allowing the frontend to overlap
its processing of this result with the backend's processing of the rest
of the query string. Then, PQnextResult would actually read a new result
(or the "I'm done" message), rather than just return a result that had
already been stored. I wasn't originally thinking of implementing it
that way, but it seems like a mighty attractive idea. No way to do it if
we return results as an array.

>> What I'd really like to see is PQendcopy returning a PGresult that
>> indicates success or failure of the copy, and then additional results
>> could be queued up behind that for retrieval with PQnextResult.

> Not sure on this one. If we change the API, we have to have a good
> reason to do it. API additions are OK.

Well, we can settle for having PQendcopy return 0 or 1 as it does now.
It's not quite as clean as having it return a real PGresult, but it's
probably not worth breaking existing apps just to improve the consistency
of the API. It'd still be possible to queue up subsequent commands'
results (if any) in the result queue.

>> 2. Copy In and Copy Out data ought to be part of the protocol, that
>> is, every line of copy in/out data ought to be prefixed with a message
>> type code. Fixing this might be more trouble than it's worth however,
>> if there are any applications that don't go through
>> PQgetline/PQputline.

> Again, if we clearly document the change, we are far enough from 6.4
> that perl and other people will handle the change by the time 6.4 is
> released. Changes that affect user apps are more difficult.

I have mixed feelings about this particular item. It would make the
protocol more robust, but it's not clear that the gain is worth the risk
of breaking any existing apps. I'm willing to drop it if no one else is
excited about it.

    regards, tom lane
> Bruce Momjian <maillist@candle.pha.pa.us> writes:
> > My idea is to make a PQexecv() just like PQexec, except it returns
> > an array of results, with the end of the array terminated with a
> > NULL,
> > [ as opposed to my idea of returning PGresults one at a time ]
>
> Hmm. I think the one-at-a-time approach is probably better, mainly
> because it doesn't require libpq to have generated all the PGresult
> objects before it can return the first one.
>
> Here is an example in which the array approach doesn't work very well:
>
>     QUERY: copy stdin to relation ; select * from relation
>
> What we want is for the application to receive a PGRES_COPY_IN result,
> perform the data transfer, call PQendcopy, and then receive a PGresult
> for the select.
>
> I don't see any way to make this work if the library has to give back
> an array of results right off the bat. With the other method,
> PQendcopy will read the select command's output and stuff it into the
> (hidden) result queue. Then when the application calls PQnextResult,
> presto, there it is. Correct logic for an application that submits
> multi-command query strings would be something like

OK, you just need to remember to throw away any un-called-for results if
they do another PQexec without retrieving all the results returned by
the backend.

> Another thought: we might consider making PQexec return as soon as
> it's received the first query result, thereby allowing the frontend to
> overlap its processing of this result with the backend's processing of
> the rest of the query string. Then, PQnextResult would actually read a
> new result (or the "I'm done" message), rather than just return a
> result that had already been stored. I wasn't originally thinking of
> implementing it that way, but it seems like a mighty attractive idea.
> No way to do it if we return results as an array.

Yep.

> Well, we can settle for having PQendcopy return 0 or 1 as it does now.
> It's not quite as clean as having it return a real PGresult, but it's
> probably not worth breaking existing apps just to improve the
> consistency of the API. It'd still be possible to queue up subsequent
> commands' results (if any) in the result queue.

OK.

> > Again, if we clearly document the change, we are far enough from 6.4
> > that perl and other people will handle the change by the time 6.4 is
> > released. Changes that affect user apps are more difficult.
>
> I have mixed feelings about this particular item. It would make the
> protocol more robust, but it's not clear that the gain is worth the
> risk of breaking any existing apps. I'm willing to drop it if no one
> else is excited about it.

It's up to you.

--
Bruce Momjian                   | 830 Blythe Avenue
maillist@candle.pha.pa.us       | Drexel Hill, Pennsylvania 19026
+ If your life is a hard drive, | (610) 353-9879(w)
+ Christ can be your backup.    | (610) 853-3000(h)
The historical reason why the POSTGRES backend is required to send
multiple result sets is to support cursors on queries involving type
inheritance and anonymous target lists.

    begin
    declare c cursor for
        select e.oid, e.* from EMP* e
    fetch 10 in c
    ...

To handle the command sequence above, frontend applications would need to
be provided with a new result descriptor when the "fetch 10 in c" crosses
a result set boundary.
> Bruce Momjian <maillist@candle.pha.pa.us> writes:
> > My idea is to make a PQexecv() just like PQexec, except it returns
> > an array of results, with the end of the array terminated with a
> > NULL,
> > [ as opposed to my idea of returning PGresults one at a time ]
>
> Hmm. I think the one-at-a-time approach is probably better, mainly
> because it doesn't require libpq to have generated all the PGresult
> objects before it can return the first one.
>
> Here is an example in which the array approach doesn't work very well:
>
>     QUERY: copy stdin to relation ; select * from relation
>
> What we want is for the application to receive a PGRES_COPY_IN result,
> perform the data transfer, call PQendcopy, and then receive a PGresult
> for the select.
>
> I don't see any way to make this work if the library has to give back
> an array of results right off the bat. With the other method,
> PQendcopy will read the select command's output and stuff it into the
> (hidden) result queue. Then when the application calls PQnextResult,
> presto, there it is. Correct logic for an application that submits
> multi-command query strings would be something like
>
>     result = PQexec(conn, query);
>
>     while (result)
>     {
>         switch (PQresultStatus(result))
>         {
>             ...
>             case PGRES_COPY_IN:
>                 // ... copy data here ...
>                 if (PQendcopy(conn))
>                     reportError();
>                 break;
>             ...
>         }
>
>         PQclear(result);
>         result = PQnextResult(conn);
>     }
>
> Another thought: we might consider making PQexec return as soon as
> it's received the first query result, thereby allowing the frontend to
> overlap its processing of this result with the backend's processing of
> the rest of the query string. Then, PQnextResult would actually read a
> new result (or the "I'm done" message), rather than just return a
> result that had already been stored. I wasn't originally thinking of
> implementing it that way, but it seems like a mighty attractive idea.
> No way to do it if we return results as an array.

Or we might even make PQexec return as soon as the query is sent and
parsed. It could return a handle to the query that could be used to get
results later. This is pretty much exactly in line with the way the Perl
DBI stuff works, and I think also ODBC.

    queryhandle = PQexec(conn, querystring);

    while ((result = PQgetresult(queryhandle)) != NULL)
    {
        /* do stuff with result */
        PQclear(result);
    }

This protocol allows for multiple results per query, and asynchronous
operation before getting the result. Perhaps a polling form might be
added too:

    queryhandle = PQexec(conn, querystring);

    while (1)
    {
        handle_user_interface_events();

        if (PQready(queryhandle))
        {
            result = PQgetresult(queryhandle);
            if (result == NULL)
                break;
            /* do stuff with result */
            PQclear(result);
        }
    }

-dg

David Gould           dg@illustra.com          510.628.3783 or 510.305.9468
Informix Software (No, really)       300 Lakeside Drive  Oakland, CA 94612
"(Windows NT) version 5.0 will build on a proven system architecture and
incorporate tens of thousands of bug fixes from version 4.0."
                          -- <http://www.microsoft.com/y2k.asp?A=7&B=5>
Michael Hirohama <kamesan@ricochet.net> writes:
> The historical reason why the POSTGRES backend is required to send
> multiple result sets is to support cursors on queries involving type
> inheritance and anonymous target lists.
>
>     begin
>     declare c cursor for
>         select e.oid, e.* from EMP* e
>     fetch 10 in c
>     ...
>
> To handle the command sequence above, frontend applications would need
> to be provided with a new result descriptor when the "fetch 10 in c"
> crosses a result set boundary.

Hmm. I noted the place in libpq where it fails if multiple 'T' (tuple
descriptor) messages arrive during a query retrieval. But the comments
made it sound like the condition shouldn't occur. Does what you describe
actually work in the current backend?

The problem on the libpq side is basically that the PGresult structure is
not able to represent more than one tuple descriptor. AFAICS, we can't
tamper with that without breaking all existing applications. However, if
we make the changes being discussed in this thread then it would be a
simple matter to return a *series* of PGresult structures for this sort
of query.

Whether an application is capable of handling that is another story, but
at least the data could be passed through.

    regards, tom lane
Bruce Momjian <maillist@candle.pha.pa.us> writes:
> OK, you just need to remember to throw away any un-called-for results
> if they do another PQexec without retrieving all the results returned
> by the backend.

OK, so you feel the right behavior is "throw away unconsumed results"
and not "raise an error"?

I don't have a strong feeling either way; I'm just asking what the
consensus is.

    regards, tom lane
Tom Lane wrote:

> >>>> Other front-end libraries reading this protocol will have to
> >>>> change to accept this field.
>
> And the end-of-query indicator. I think now is the time to do it if
> we're gonna do it. Right now, it seems most code is using libpq rather
> than seeing the protocol directly, so fixing these problems should be
> pretty painless. But wasn't there some discussion recently of running
> the protocol directly from Tcl code? If that gets popular it will
> become much harder to change the protocol.

Hello,

Please remember that the ODBC driver handles the protocol directly (it
does not use libpq). I would assume that if you guys make protocol
changes you will post a summary of them on the interfaces list?

Byron
> Tom Lane wrote:
>
> > >>>> Other front-end libraries reading this protocol will have to
> > >>>> change to accept this field.
> >
> > And the end-of-query indicator. I think now is the time to do it if
> > we're gonna do it. Right now, it seems most code is using libpq
> > rather than seeing the protocol directly, so fixing these problems
> > should be pretty painless. But wasn't there some discussion recently
> > of running the protocol directly from Tcl code? If that gets popular
> > it will become much harder to change the protocol.
>
> Hello,
>
> Please remember that the ODBC driver handles the protocol directly (it
> does not use libpq). I would assume that if you guys make protocol
> changes you will post a summary of them on the interfaces list?

Absolutely. Guaranteed.

--
Bruce Momjian                   | 830 Blythe Avenue
maillist@candle.pha.pa.us       | Drexel Hill, Pennsylvania 19026
+ If your life is a hard drive, | (610) 353-9879(w)
+ Christ can be your backup.    | (610) 853-3000(h)
dg@illustra.com (David Gould) writes:
> Or we might even make PQexec return as soon as the query is sent and
> parsed. It could return a handle to the query that could be used to
> get results later.
> Perhaps a polling form might be added too:

We're way ahead of you ;-). See last week's discussion on "Proposal for
async support in libpq" (it was only on the hackers list, not
interfaces). I have already implemented the original proposal, though
not tested it fully.

The proposal will have to be modified some to deal with this notion of
returning multiple results from a single query. I haven't worked out
exactly what I'd like to see, but it won't be too far different from what
David is envisioning.

    regards, tom lane
> Does what you describe actually work in the current backend?
>
> The problem on the libpq side is basically that the PGresult structure
> is not able to represent more than one tuple descriptor. AFAICS, we
> can't tamper with that without breaking all existing applications.
> However, if we make the changes being discussed in this thread then it
> would be a simple matter to return a *series* of PGresult structures
> for this sort of query.
>
> Whether an application is capable of handling that is another story,
> but at least the data could be passed through.

I would move forward, and see if anything breaks. If the regression
tests pass, that is a good sign it works.

--
Bruce Momjian                   | 830 Blythe Avenue
maillist@candle.pha.pa.us       | Drexel Hill, Pennsylvania 19026
+ If your life is a hard drive, | (610) 353-9879(w)
+ Christ can be your backup.    | (610) 853-3000(h)
> Bruce Momjian <maillist@candle.pha.pa.us> writes:
> > OK, you just need to remember to throw away any un-called-for
> > results if they do another PQexec without retrieving all the results
> > returned by the backend.
>
> OK, so you feel the right behavior is "throw away unconsumed results"
> and not "raise an error"?
>
> I don't have a strong feeling either way; I'm just asking what the
> consensus is.

Throw them away. That is what we have always done, and if they wanted
them, they wouldn't have put them all in one PQexec().

--
Bruce Momjian                   | 830 Blythe Avenue
maillist@candle.pha.pa.us       | Drexel Hill, Pennsylvania 19026
+ If your life is a hard drive, | (610) 353-9879(w)
+ Christ can be your backup.    | (610) 853-3000(h)
> > > We could pass back atttypmod as part of the PGresult. I can add
> > > that to the TODO list. Would that help?
> >
> > Yes, that would do it!
> >
> > Thank you for listening to our ravings on this issue.
>
> Added to TODO:
>
>     * Add pg_attribute.atttypmod/Resdom->restypmod to PGresult structure
>
> This is a good suggestion.

How do we determine atttypmod for queries like

    select '123' || '456';

?? I might be able to address this with my upcoming type conversion
work, but I don't know if we have enough hooks for this right now...

- Tom
> > > > We could pass back atttypmod as part of the PGresult. I can add
> > > > that to the TODO list. Would that help?
> > >
> > > Yes, that would do it!
> > >
> > > Thank you for listening to our ravings on this issue.
> >
> > Added to TODO:
> >
> >     * Add pg_attribute.atttypmod/Resdom->restypmod to PGresult structure
> >
> > This is a good suggestion.
>
> How do we determine atttypmod for queries like
>
>     select '123' || '456';
>
> ?? I might be able to address this with my upcoming type conversion
> work, but I don't know if we have enough hooks for this right now...

No way, I think. This would have an atttypmod of -1, which is true
because there is no atttypmod size for this. Once a char()/varchar()
goes into a function, anything can come out.

--
Bruce Momjian                   | 830 Blythe Avenue
maillist@candle.pha.pa.us       | Drexel Hill, Pennsylvania 19026
+ If your life is a hard drive, | (610) 353-9879(w)
+ Christ can be your backup.    | (610) 853-3000(h)
On Mon, 27 Apr 1998, Byron Nikolaidis wrote:

> Tom Lane wrote:
>
> > >>>> Other front-end libraries reading this protocol will have to
> > >>>> change to accept this field.
> >
> > And the end-of-query indicator. I think now is the time to do it if
> > we're gonna do it. Right now, it seems most code is using libpq
> > rather than seeing the protocol directly, so fixing these problems
> > should be pretty painless. But wasn't there some discussion recently
> > of running the protocol directly from Tcl code? If that gets popular
> > it will become much harder to change the protocol.
>
> Hello,
>
> Please remember that the ODBC driver handles the protocol directly (it
> does not use libpq). I would assume that if you guys make protocol
> changes you will post a summary of them on the interfaces list?

The JDBC driver is the same, as it too handles the protocol directly.
It's the reason why I keep an eye on any discussion that may affect it.

--
Peter T Mount peter@retep.org.uk or petermount@earthling.net
Main Homepage: http://www.demon.co.uk/finder (moving soon to www.retep.org.uk)
************ Someday I may rebuild this signature completely ;-) ************
Work Homepage: http://www.maidstone.gov.uk Work EMail: peter@maidstone.gov.uk
Michael Hirohama <kamesan@ricochet.net> wrote:
> The historical reason why the POSTGRES backend is required to send
> multiple result sets is to support cursors on queries involving type
> inheritance and anonymous target lists.
>
>     begin
>     declare c cursor for
>         select e.oid, e.* from EMP* e
>     fetch 10 in c
>     ...
>
> To handle the command sequence above, frontend applications would need
> to be provided with a new result descriptor when the "fetch 10 in c"
> crosses a result set boundary.

I tried this and was unable to produce a failure. It looks like the
select only returns the set of fields applicable to the base class,
regardless of what additional fields may be possessed by some subclasses.
Which, in fact, is more or less what I'd expect.

Is Michael remembering some old behavior that is no longer implemented?
And if so, is the old or new behavior the correct one?

    regards, tom lane
Tom Lane <tgl@sss.pgh.pa.us> wrote on Thu, 30 Apr 1998:

> Michael Hirohama <kamesan@ricochet.net> wrote:
> > The historical reason why the POSTGRES backend is required to send
> > multiple result sets is to support cursors on queries involving type
> > inheritance and anonymous target lists.
> >
> >     begin
> >     declare c cursor for
> >         select e.oid, e.* from EMP* e
> >     fetch 10 in c
> >     ...
> >
> > To handle the command sequence above, frontend applications would
> > need to be provided with a new result descriptor when the "fetch 10
> > in c" crosses a result set boundary.
>
> I tried this and was unable to produce a failure. It looks like the
> select only returns the set of fields applicable to the base class,
> regardless of what additional fields may be possessed by some
> subclasses. Which, in fact, is more or less what I'd expect.
>
> Is Michael remembering some old behavior that is no longer
> implemented? And if so, is the old or new behavior the correct one?
>
>     regards, tom lane

Forgive me for my slow memory: I remember now that there was a decision
made to not support exploding the expansion of anonymous target lists
because of the extra complexity it would introduce into the parser and
executor. Thus, Postgres would return at most one result set per query
processed. Smart users and smart applications would be able to navigate
the inheritance hierarchy by explicitly specifying tables and columns as
needed.

~~ <kamesan@ricochet.net>