Re: Result set granularity.. - Mailing list pgsql-sql
From: Rasmus Aveskogh
Subject: Re: Result set granularity..
Date:
Msg-id: 34062.193.243.134.6.1064859820.squirrel@webmail.defero.se
In response to: Result set granularity.. ("Rasmus Aveskogh" <rasmus@defero.se>)
List: pgsql-sql
Well, there is a rowid and a rownum in Oracle. I don't remember which, but one of them is significant only to the current result set. Hence you can use it to manipulate and restrict the result set rather than the data you're receiving from the tables, i.e. rownum 1 will always be the first row in the result set received by the SELECT.

OIDs in PostgreSQL are, as you also mentioned, an object ID associated with a certain object in the database and are therefore of no use in the situation I mentioned in my earlier question. There is no predictability regarding what OID a certain row in a table will have.

-ra

> there is a secret column called "oid" added to tables (unless you
> tell postgres not to.)
>
> select oid from my_table limit 10;
>
>    oid
> ----------
> 77412298
> 77416344
> 77417164
> 77451631
> 77455960
> 77474124
> 77480067
> 77501568
> 77503239
> 77507326
> (10 rows)
>
> the oid is a 32-bit unsigned int that is incremented for each object in
> the database. it is the postgres version of Oracle's rownum.
>
> search the docs for more info.
>
>
> On Sat, 27 Sep 2003, Rasmus Aveskogh wrote:
>> Since I went from Oracle to PostgreSQL I've been missing the "invisible"
>> column 'rownum'. I often used it to lower the granularity of my data.
>> For example, say I have a large table containing some sort of
>> statistical data and want to plot a graph using it.
>> If the graph is 600 pixels wide I might as well lower the granularity of
>> my incoming data to 600 measure points before plotting.
>>
>> In Oracle I used to do this by using the modulus operator on the rownum
>> column as a restriction.
>>
>> SELECT <column> FROM <table> WHERE mod(rownum, 5) = 0;
>>
>> The query above would give me every fifth row of the original result set
>> and would save me from sending all the data over my database connection
>> and doing the lowering of the granularity in the application.
>>
>> I have two questions, one dependent on the answer to the other one..
>>
>> 1) Is it possible to achieve this any other _easy_ way?
>> Perhaps it would be feasible to write a wrapper using a counter which
>> makes the SELECT and then returns every fifth row to the SELECT calling
>> the wrapper. But then I assume the data still has to "travel" one step,
>> which adds some overhead.
>>
>> 2) Would it be possible to add one more limit argument to the
>> non-standard set of LIMIT and OFFSET - a 'SCALE n' or 'GRANUL n'
>> argument which would return every nth row of the initial result set?
>> I think that would be gladly accepted by folks working with
>> statistical data.
>>
>> -ra
>>
>> ---------------------------(end of broadcast)---------------------------
>> TIP 9: the planner will ignore your desire to choose an index scan if
>> your joining column's datatypes do not match
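[Editor's note: the "wrapper using a counter" from question 1 can be sketched client-side. This is a minimal illustration, not a real driver API; the names `every_nth` and `rows` are made up, and the list stands in for a cursor's result set.]

```python
from itertools import islice

def every_nth(rows, n):
    """Yield every nth row (the nth, 2nth, ...) of an iterable of rows."""
    # islice with step n keeps one row out of every n; starting at index
    # n - 1 mirrors Oracle's mod(rownum, n) = 0, since rownum is 1-based.
    return islice(rows, n - 1, None, n)

# Stand-in for a cursor's result set: 20 (ts, value) rows.
rows = [(i, float(i * i)) for i in range(1, 21)]
sampled = list(every_nth(rows, 5))   # keeps rows 5, 10, 15, 20
```

As the question notes, all rows still travel over the database connection; only the application-side plotting data shrinks.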
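[Editor's note: a server-side equivalent of mod(rownum, 5) = 0 later became possible with window functions (PostgreSQL 8.4+, which postdates this thread): number the ordered result set with row_number() and keep every fifth row. The sketch below uses SQLite (3.25+) purely so it is self-contained; the SQL has the same shape in PostgreSQL. The table and column names are hypothetical.]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (ts INTEGER, value REAL)")
conn.executemany(
    "INSERT INTO measurements VALUES (?, ?)",
    [(i, float(i * i)) for i in range(1, 21)],   # 20 sample rows
)

# Number the rows of the ordered result set, then keep every fifth one,
# so only the thinned-out data crosses the connection.
rows = conn.execute("""
    SELECT value FROM (
        SELECT value,
               row_number() OVER (ORDER BY ts) AS rn
        FROM measurements
    ) AS numbered
    WHERE rn % 5 = 0
""").fetchall()
# rows -> [(25.0,), (100.0,), (225.0,), (400.0,)]
```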