Thread: What about LIMIT in SELECT ?
Hi,

I took a look at MySQL and was very impressed with the possibility of limiting the number of rows returned from a SELECT. This is a very useful feature for Web applications where the user needs to browse the results of a selection page by page. In my application I have to do a full select every time the user presses the [Next] button, and then show the requested page using Perl. This works more or less OK for several thousand rows, but it is totally unusable for large selections. Now I'm about to work with a big database, and I don't know how I'll stay with Postgres :-) It just won't work if the customer has to wait several minutes only to browse to the next page. MySQL lacks some useful features Postgres has (subselects, transactions, ...), but for most Web applications I need just SELECT :-) I don't know how LIMIT is implemented in MySQL, and I know it's not in the SQL92 standard, but it is part of what makes MySQL very popular.

Is it difficult to implement this feature in Postgres?

Regards,

Oleg

_____________________________________________________________
Oleg Bartunov, sci. researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83
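For concreteness, here is a minimal sketch of the MySQL-style syntax Oleg is describing; the table and column names are invented for illustration, and PostgreSQL at this point has no such clause.

```sql
-- MySQL-style LIMIT: return at most 20 rows of the result to the client.
-- "articles" and its columns are hypothetical.
SELECT id, title
FROM articles
WHERE published = 1
ORDER BY id
LIMIT 20;
```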
Hi, my 2 cents...

I agree completely; LIMIT would be VERY useful in web-based apps, which is all I run. It does not matter to me that it is not part of a formal standard. The idea is so common that it is a de facto standard.

I would not expect it for this release, but could it get put on the TODO list for next time? I am even willing to work at an apprentice level on this with a more experienced person who knows this stuff.

A note on implementation: I *used to* :) work with VFP on NTs :( The way VFP did LIMIT, it would only return the number of rows asked for, BUT it still did the WHOLE search! So on a larger table, which we had (a property-tax database for the county), if someone put in too vague a query, it would try to collect ALL of the rows as the initial result set and then give you the first x rows of that.

This did save on pushing mass amounts of data out to the browser, but it would have been even better if it could have simply aborted the select after having found x rows.

Also, it did not have the concept of an offset, so one could not select 100 rows starting 200 rows in, which would be REALLY useful for "paging" through data (see the paging sketch after this message). I do not know if MySQL or any other database has such a concept either, but it would be nice.

So a properly implemented "LIMIT" could:

1. Save pushing mass amounts of data across the web that no one wants anyway.
2. Stop vague queries from bogging down the server. (On very large tables this could be critical!)
3. Enable "paging" of data (easier than doing it at the application level, as now).
4. Be a very nice feather in PostgreSQL's cap that could make it even more attractive to those looking at all the databases out there.

Have a great day.

On Tue, 13 Oct 1998, Oleg Bartunov wrote:
> I took a look at MySQL and was very impressed with the possibility
> of limiting the number of rows returned from a SELECT. [...]
> Is it difficult to implement this feature in Postgres?

Terry Mackintosh <terry@terrym.com>          http://www.terrym.com
sysadmin/owner   Please! No MIME encoded or HTML mail, unless needed.

Proudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3
-------------------------------------------------------------------
Success Is A Choice ... book by Rick Patino, get it, read it!
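A sketch of the offset-based paging Terry is asking for, written in the MySQL "LIMIT offset, count" form; the table, columns, and page size are hypothetical, and whether MySQL of this vintage accepts exactly this form would need checking against its manual.

```sql
-- Hypothetical "parcels" table; page size 100, showing the third page.
-- The two-argument form skips 200 rows and then returns the next 100.
SELECT parcel_id, owner, assessed_value
FROM parcels
WHERE owner LIKE 'SM%'
ORDER BY owner, parcel_id
LIMIT 200, 100;
```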
What is wrong with the already implemented FETCH command?

On Tue, 13 Oct 1998, Terry Mackintosh wrote:
> I agree completely; LIMIT would be VERY useful in web-based apps, which
> is all I run. [...]
> So a properly implemented "LIMIT" could: [...]

Marc G. Fournier                               scrappy@hub.org
Systems Administrator @ hub.org
scrappy@{postgresql|isc}.org                   ICQ#7615664
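For readers who, like Terry below, have not met FETCH: a minimal sketch of the cursor approach Marc is referring to. The table name is invented, and the exact FETCH syntax may vary slightly between versions; the general DECLARE/FETCH/CLOSE pattern is what PostgreSQL already provides.

```sql
-- Cursors only work inside a transaction block.
BEGIN;

-- Declare a cursor over the full query; no rows are sent to the client yet.
DECLARE next_page CURSOR FOR
    SELECT id, title
    FROM articles          -- hypothetical table
    ORDER BY id;

-- Ship only the first 20 rows to the client.
FETCH 20 IN next_page;

-- A later FETCH continues where the previous one stopped, which is how
-- [Next]-style paging can be done within a single session.
FETCH 20 IN next_page;

-- Note: the server still evaluates (and sorts) the whole query;
-- the cursor only limits what is transferred to the client.
CLOSE next_page;
END;
```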
Terry Mackintosh wrote:
> Hi, my 2 cents...
>
> I agree completely; LIMIT would be VERY useful in web-based apps, which
> is all I run. [...]
> I would not expect it for this release, but could it get put on the TODO
> list for next time? [...]

I assume everyone has read the FAQ item:

	How do I SELECT only the first few rows of a query?

	See the fetch manual page.

This only prevents all row results from being transferred to the client. The entire query must still be evaluated, even if you want only the first few rows. Consider a query that has an ORDER BY: there is no way to return any rows until the entire query is evaluated and sorted.

--
Bruce Momjian                          |  http://www.op.net/~candle
maillist@candle.pha.pa.us              |  (610) 853-3000
+  If your life is a hard drive,       |  830 Blythe Avenue
+  Christ can be your backup.          |  Drexel Hill, Pennsylvania 19026
On Tue, 13 Oct 1998, Marc G. Fournier wrote:
> What is wrong with the already implemented FETCH command?

Ah ... I did not know about it :)  Guess I should RTFM.

Terry Mackintosh <terry@terrym.com>          http://www.terrym.com
On Tue, 13 Oct 1998, Bruce Momjian wrote:

> See the fetch manual page.

OK, I will.

> This only prevents all row results from being transferred to the client.

Yes, this is good, but this is only half the problem ...

> The entire query must still be evaluated, even if you want only the first

... and this is the other half.

> few rows. Consider a query that has an ORDER BY: there is no way
> to return any rows until the entire query is evaluated and sorted.

This is where I was hoping one of you guru types might have some insight: how to stop a query short at X rows, even if it has an ORDER BY. No way?

Terry Mackintosh <terry@terrym.com>          http://www.terrym.com
Hi all.

I didn't follow all the posts in this thread, so this post may be off center.

I think current PostgreSQL lacks concern for the response time needed to get the first rows quickly. For example, queries with an ORDER BY clause always include a sort step and must process all target rows before returning even the first row. So I modified my copy of the code for the ORDER BY cases and am using it on trial. I don't fully understand the PostgreSQL sources, so my code is not complete. I modified the code for the following two cases.

1. In many cases the following query uses an index scan (where (key) is an index):

	SELECT * FROM ... WHERE key > ...;

   If so, we can omit the sort step from the access plan for the following query:

	SELECT * FROM ... WHERE key > ... ORDER BY key;

   Currently, cursors without sort steps may be sensitive, unlike cursors with sort steps, but no one seems to mind that.

2. In many cases the following query uses an index scan, the same as case 1 (where (key) is an index):

	SELECT * FROM ... WHERE key < ...;

   If so, and if we scan the index backward, we can omit the sort step from the access plan for the following query:

	SELECT * FROM ... WHERE key < ... ORDER BY key DESC;

   To achieve this (backward scan), I used hidden (provided for the future?) code that is never executed and is not necessarily correct.

In the following case I did not modify the code to use an index scan, because I could not work out how to tell the PostgreSQL optimizer whether what is needed is the response time to get the first rows, or the throughput to process sufficiently many target rows.

3. The access plan made by the current PostgreSQL optimizer for a query with an ORDER BY clause doesn't include an index scan.

I thought of using Tatsuo's QUERY_LIMIT to decide that response time is what is needed. It is sufficient, but perhaps not necessary? In Oracle, the hints FIRST_ROWS and ALL_ROWS are used for this.

Thanks.

Hiroshi Inoue
Inoue@tpf.co.jp
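As a concrete illustration of Hiroshi's cases 1 and 2: the table, column, and index names below are invented, and whether the planner actually chooses the index scan depends on the data and on his experimental changes.

```sql
-- Hypothetical table and index for illustration.
CREATE TABLE events (key int4, payload text);
CREATE INDEX events_key_idx ON events (key);

-- Case 1: if this predicate is satisfied by a forward scan of events_key_idx,
-- the rows already come back in ascending key order, so the ORDER BY needs
-- no separate sort step and the first rows can be returned immediately.
SELECT * FROM events WHERE key > 1000 ORDER BY key;

-- Case 2: with a backward scan of the same index, rows come back in
-- descending key order, so this ORDER BY ... DESC could also skip the sort.
SELECT * FROM events WHERE key < 1000 ORDER BY key DESC;
```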