Thread: Limit doesn't work with select unions
I'm running the following query:

    select a.id, cfs.id from cfs where a.id = cfs.oid
    union
    select b.id, cfs.id from cfs where b.id = cfs.oid
    limit 1;

The LIMIT basically doesn't work: I get all the rows back. If I instead try

    select a.id, cfs.id from cfs where a.id = cfs.oid limit 1
    union
    select b.id, cfs.id from cfs where b.id = cfs.oid limit 1;

psql says the syntax is incorrect. Any ideas on how to use LIMIT with UNION queries?

Thanks,
Rich
It's fixed in the current 6.5 sources. Wait for 6.5.3, or just get the REL6_5 tree from CVS.

Oleg

On Mon, 25 Oct 1999, Postgres wrote:
> The limit basically doesn't work. I get the maximum amount of rows back.
> [...]
> Any ideas on how to use limit with union queries?

_____________________________________________________________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83
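For reference, with that fix a single trailing LIMIT applies to the whole UNION result. A sketch of the intended query, assuming tables a, b, and cfs as in the original post (note that a and b are listed explicitly in the FROM clause here, rather than relying on the old implicit range-table behavior):

```sql
-- One LIMIT at the end constrains the combined UNION output:
SELECT a.id, cfs.id FROM a, cfs WHERE a.id = cfs.oid
UNION
SELECT b.id, cfs.id FROM b, cfs WHERE b.id = cfs.oid
LIMIT 1;
```

A per-branch LIMIT (the second form in the post) is not accepted by this syntax; the branches of a UNION take a single, shared LIMIT clause.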
I have a table with about 30 columns and 200K rows. When I perform a vacuum on it (no analyze), it usually takes a little more than an hour. Since I do it every day, this basically means my database, which now gets about a million queries a month, is down for an hour each day: all selects on that table wait until the vacuum is done.

I have smaller tables (10 columns, 5K rows) which seem to be very quick to vacuum. Any ideas on how to speed vacuum up?

Thanks,
Rich
Postgres <postgres@weblynk.com> writes:
> I have a table with about 30 columns and 200K rows. When I perform a vacuum
> on it (no analyze) it takes usually a little more than 1 hour.

Hmm, doesn't seem all that large. One question is whether you are all the way up-to-date on Postgres --- some performance problems in vacuum were fixed recently. (The problems were excessive memory usage, actually, but that could translate to long runtime if the process started to swap. Does the backend that's doing the vacuum seem to grow to a size much larger than it starts at?) I don't recall whether this patch is in 6.5.2 or not, but it will be in 6.5.3, or you could pull the current REL6_5 branch sources from the CVS server.

A performance problem that still remains is that vacuum seems unreasonably slow at updating indexes. Some people have found that dropping and recreating indexes around a vacuum nets out faster than letting vacuum do it. You should also ask yourself whether each index on the table is earning its keep --- each one costs time on every insert or update, quite aside from vacuum. Only the indexes that actually get used for your common queries are likely to be worth their overhead.

			regards, tom lane
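The drop-and-recreate pattern described above might look like this (the table and index names here are made up for illustration; substitute your own):

```sql
-- Hypothetical names: big_table with an index on ownerid.
DROP INDEX big_table_ownerid_idx;
VACUUM big_table;
CREATE INDEX big_table_ownerid_idx ON big_table (ownerid);
```

Bear in mind that queries running between the DROP and the CREATE will not have the index available.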
Instead of using CVS, will ftp'ing the postgresql.snapshot.tar.gz file do? I looked around for an explanation of what this file is, but didn't find any.

----- Original Message -----
From: Tom Lane <tgl@sss.pgh.pa.us>
Sent: Monday, October 25, 1999 7:45 AM
Subject: Re: [SQL] Vacuum takes more than 1 hr

> I don't recall whether this patch is in 6.5.2 or not, but it will be in
> 6.5.3, or you could pull the current REL6_5 branch sources from the CVS
> server.
> [...]
Rich Ryan <postgres@weblynk.com> writes:
> Instead of using CVS, will ftp'ing the postgresql.snapshot.tar.gz file do? I
> looked around for an explanation of what this file is, but didn't find any.

That's a nightly snapshot of the current development sources --- not of the stable REL6_5 branch, which is what you probably want. I thought Marc had made up a beta-test tarball of 6.5.3 recently, but I don't see one on the FTP server...

			regards, tom lane
Anyone know if there is an operator class for the money type? As in:

    create index pcfs on cfs using btree ( price money_ops );

Thanks,
Rich
The following command takes about 30 min. to complete:

    CREATE INDEX "cfsoid" ON "cfs" USING hash ( "ownerid" "float8_ops" );

The table cfs has about 30 columns and 200K rows at the time of creation. Does the time for this command sound right? I'm using the REL6_5 branch from CVS.
I'm trying to do a >/< comparison on a date field, but I'm not having any luck. My query looks like:

    select * from a where a.cdate > Date('10-20-1999');

I've tried every date string format I can think of --- no dashes, colons, different orderings of the month, day, and year, etc. --- with no luck. Any hints?

Thanks,
Rich
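For what it's worth, one form that generally works is a quoted literal with an explicit cast rather than a Date(...) function call. A sketch, assuming the table a and column cdate from the post:

```sql
-- ISO-format literal with an explicit cast:
SELECT * FROM a WHERE a.cdate > '1999-10-20'::date;

-- Equivalent standard-SQL literal syntax:
SELECT * FROM a WHERE a.cdate > date '1999-10-20';
```

How an ambiguous literal like '10-20-1999' is parsed also depends on the DateStyle setting, so the unambiguous year-first form above is the safer choice.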
When I do something like:

    select cdate into table newt from oldt;

and, say, column cdate in table oldt had a default value of 'now', the new table newt doesn't have the default value anymore. This makes sense, since it is copying just the data, not the meta-data, into the new table. So, is there a way to alter the cdate column in the new table to have a default? I could not find an alter table command for this.

Thanks,
Rich
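If your release supports it, ALTER TABLE can attach a default to an existing column after the fact. A sketch, assuming the table and column names from the post (check the ALTER TABLE page of your version's documentation for the exact syntax available):

```sql
-- Re-establish the default on the copied column:
ALTER TABLE newt ALTER COLUMN cdate SET DEFAULT now();
```

New rows inserted into newt without an explicit cdate would then get the current timestamp; existing rows are unaffected.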