Thread: INSERT and SELECT
Hi,

I've tried this query with PostgreSQL, but it doesn't work. Could anyone explain why, and what I must do to get the same result?

Query example:

INSERT INTO table VALUES (
  (SELECT oid FROM membre WHERE email="toto@toto.com"),
  "test");

I get the error: "parse error near select".

I've searched the manual and deja.com but I haven't found anything about this. This query works with MySQL (and it's the only query that works with this RDBMS ;-)

Thanks,

Éric.

--
Éric BARROCA               | Logiciels Libres, Conception Intranet/Extranet,
E-Mail: eric@netsystems.fr | Sites Web dynamiques, Commerce électronique,
Tel: +33 (0)2 48 21 54 67  | Formations (Unix/Linux, Perl, etc.), Réseaux...
GSM: +33 (0)6 16 35 33 25  | NetSystems : <http://www.netsystems.fr>
If you go into psql and type \h INSERT or \h SELECT you'll get the syntax for these:

test_capture=> \h INSERT
Command:     insert
Description: insert tuples
Syntax:
    INSERT INTO class_name [(attr1, ...attrN)]
    VALUES (expr1, ...exprN) |
    SELECT [DISTINCT [ON attrN]]
        expr1, ...exprN
        [FROM from_clause]
        [WHERE qual]
        [GROUP BY group_list]
        [HAVING having_clause]
        [UNION [ALL] SELECT ...];

So what you want is:

INSERT INTO table VALUES SELECT oid, 'test' FROM membre WHERE
email="toto@toto.com";

Assuming table 'table' already exists. Or:

test_capture=> \h SELECT
Command:     select
Description: retrieve tuples
Syntax:
    SELECT [DISTINCT [ON attrN]] expr1 [AS attr1], ...exprN
        [INTO [TABLE] class_name]
        [FROM from_list]
        [WHERE qual]
        [GROUP BY group_list]
        [HAVING having_clause]
        [ORDER BY attr1 [ASC|DESC] [USING op1], ...attrN ]
        [UNION [ALL] SELECT ...];

SELECT oid, 'test' INTO other_table FROM membre WHERE
email="toto@toto.com");

which will create the table for you.

HTH, S.

On 10 Dec 1999, Eric BARROCA wrote:
> [original message quoted in full; trimmed]

Stuart C. G. Rison
Department of Biochemistry and Molecular Biology
6th floor, Darwin Building, University College London (UCL)
Gower Street, London, WC1E 6BT, United Kingdom
Tel. 0207 504 2303, Fax. 0207 380 7193
e-mail: rison@biochem.ucl.ac.uk
Change the double quotation marks into single ones:

email='toto@toto.com', 'test'

This is SQL92 as well, so it's not pgsql's fault ;-).

On 10 Dec 1999, Eric BARROCA wrote:
> [original message quoted in full; trimmed]
Type:

database=> \h select

Vladimir
Stuart Rison <rison@biochemistry.ucl.ac.uk> writes:
> INSERT INTO table VALUES SELECT oid, 'test' FROM membre WHERE
> email="toto@toto.com";

I've tried it, but it doesn't work either ("parser error near or at select"), even if I replace " by '.

The sample table "table":
- ID int4
- login varchar()
- password varchar()

I'd like to put into the ID column the result of (SELECT ...), into the login column 'test', and into the password one 'testpass'. I've tried many queries like:

INSERT INTO table (id, login, password)
  VALUES SELECT oid, 'test', 'testpass' FROM membre WHERE email='toto@toto.com';

or

INSERT INTO table (id, login, password)
  VALUES ((SELECT oid FROM membre WHERE email='toto@toto.com'), 'test', 'testpass');

or

INSERT INTO table (id, login, password)
  VALUES (SELECT oid, 'test', 'testpass' FROM membre WHERE email='toto@toto.com');

...

But none of them gives me a result. I always get "parse error near select", and I can't figure out why.

Best Regards,

Éric.
As far as I know, PgSQL supports subqueries only in the WHERE part.

--------------------------------------------------------------
The reboots are for hardware upgrades!
http://www.nmmm.nu <nmmm@nmmm.nu>

----- Original Message -----
From: Eric BARROCA <eric@netsystems.fr>
To: <pgsql-sql@postgreSQL.org>
Sent: Friday, December 10, 1999 05:46
Subject: [SQL] INSERT and SELECT

> [original message quoted in full; trimmed]
I wasn't going to respond to this, but there are two, almost correct, replies, so I thought I should edit a bit:

On Fri, Dec 10, 1999 at 04:31:32PM +0000, Stuart Rison wrote:
> If you go into psql and type \h INSERT or \h SELECT you'll get the syntax
> for these:

Good idea ;-)

> [\h INSERT syntax trimmed]
>
> So what you want is:
>
> INSERT INTO table VALUES SELECT oid, 'test' FROM membre WHERE
> email="toto@toto.com";

Oops, the | is between the VALUES and SELECT clauses, so that should be:

INSERT INTO table SELECT oid, 'test' FROM membre WHERE
email='toto@toto.com';

Note that the other followup, about quote marks, is also correct.

> Assuming table 'table' already exists. Or
>
> [\h SELECT syntax trimmed]
>
> SELECT oid, 'test' INTO other_table FROM membre WHERE
> email="toto@toto.com");

SELECT oid, 'test' INTO other_table FROM membre WHERE
email='toto@toto.com';

Just quote marks here (and a stray closing parenthesis).
> Stuart Rison <rison@biochemistry.ucl.ac.uk> writes:
> > INSERT INTO table VALUES SELECT oid, 'test' FROM membre WHERE
> > email="toto@toto.com";
>
> I've tried it, but it doesn't work either ("parser error near or at
> select"), even if I replace " by '.

Because VALUES and SELECT are mutually exclusive at that point. Just do

INSERT INTO table SELECT ...

or

INSERT INTO table (attrib, ...) SELECT ...

and the parser should be happy.

Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#========================================= wieck@debis.com (Jan Wieck) #
On Fri, 10 Dec 1999, Nikolay Mijaylov wrote:
> As far as I know, PgSQL supports subqueries only in the WHERE part.

Subquery? ;) No.

> > I've tried this query with PostgreSQL, but it doesn't work. Could anyone
> > explain why, and what I must do to get the same result?
> > Query example: "INSERT INTO table VALUES (
> > (SELECT oid FROM membre WHERE email="toto@toto.com"),
> > "test");"

Try:

=> INSERT INTO table   -- NO values, but
-> SELECT oid FROM membre WHERE ... ;

It may work.

--nek;(
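[Editor's note: the corrected INSERT ... SELECT form is standard SQL92, so it can be demonstrated outside Postgres. Below is a minimal sketch using Python's sqlite3 module; the schema and names (membre, accounts, the sample e-mail) are invented for illustration, and SQLite's rowid-backed id stands in for Postgres's oid.]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical schema mirroring the thread's example.
cur.execute("CREATE TABLE membre (id INTEGER PRIMARY KEY, email TEXT)")
cur.execute("CREATE TABLE accounts (id INTEGER, login TEXT, password TEXT)")
cur.execute("INSERT INTO membre (email) VALUES ('toto@toto.com')")

# INSERT ... SELECT: no VALUES keyword; the SELECT list supplies the row.
# Note the single quotes around string literals, as SQL92 requires.
cur.execute("""
    INSERT INTO accounts (id, login, password)
    SELECT id, 'test', 'testpass' FROM membre WHERE email = 'toto@toto.com'
""")

print(cur.execute("SELECT id, login, password FROM accounts").fetchall())
# → [(1, 'test', 'testpass')]
```

Writing VALUES SELECT ..., or wrapping the SELECT in parentheses inside a VALUES list, trips the same parse error Éric saw; VALUES and SELECT are alternatives, not companions.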
I have a problem that I'm hoping someone can help me with. I've built a web-based application that is back-ended by PGsql. It works great; I couldn't ask for better performance.

However, I just recently coded another module for it, and this module needed to accept text files (or just large text blocks, let's say). I ran into the 8k query limit barrier. I need these blocks of text to be searchable and am having to do this with a flat-file based solution right now; it's slow and cumbersome. I would much rather have the text stored in a PG field so I don't have to stray from the design of the search engine (which is designed to search only the pg database, but has been hacked to search flat files too). I could break the texts up, but they really all need to be in a single field, not broken up. I can't think of a way to do it with the 8k query limit.

Another problem I see is that, well, there are a lot of text files (4000 now, with about 500 added a week), many over 8k in size. Would this be too much for PG to handle, assuming I could get all the text inserted?

Also feel free to suggest some kind of indexing system for the flat files; the only problem with that will be integration into my existing application, but I've left some hooks open so I can deal with it.

Thanks guys!

-Mitch
"Mitch Vincent" <mitch@venux.net> writes: > Also feel free to suggest some kind of indexing system for the flat files, Glimpse, http://glimpse.cs.arizona.edu/ If you are trying for a full-text index over a large amount of text, Postgres is really not the right tool --- we don't have the right kind of index support for searching for individual words/phrases within text fields. You can fake it with LIKE or regexp matches but performance will be awful, because the searches will have to scan the whole database. (There is code that partially addresses this in contrib/fulltextindex, but performance will still be nothing to write home about.) Much better to use something that's actually designed for the purpose. Glimpse is a really nice tool: speedy, and its index files are quite small. I've used it for years to keep full-text indexes of all my files. regards, tom lane
> I have a problem that I'm hoping someone can help me with. I've built a
> web-based application that is back-ended by PGsql. It works great; I
> couldn't ask for better performance.

And Bruce just said (on the hackers list) that big tuples aren't an item of big pressure. Reading this less than 24 hours later tells another story.

> However, I just recently coded another module for it, and this module
> needed to accept text files (or just large text blocks, let's say). I ran
> into the 8k query limit barrier. [...] I could break the texts up, but
> they really all need to be in a single field, not broken up. I can't
> think of a way to do it with the 8k query limit.

If you really need to store big texts in a table, a combination of a view, update rules, and PL functions can definitely do it. It might look a little complex at first, but after understanding the trick, it's pretty neat. Look at this:

pgsql=> \d bttest
View    "bttest"
 Attribute | Type | Extra
-----------+------+-------
 key       | int4 |
 content   | text |
View definition: SELECT bttest_shadow."key", bigtext_fetch(bttest_shadow.btid) AS content FROM bttest_shadow;

pgsql=> select key, length(content) from bttest;
 key | length
-----+--------
   1 |      3
   2 |      9
   4 | 135840
(3 rows)

Yes, the row with key=4 really contains a text of that size. And I can get it back exactly as it was inserted. And yes, I can do INSERT/UPDATE/DELETE on bttest. Everything behaves as if it were a regular table.

But it needs the unlimited query size, supported only in the CURRENT development tree, so you'll have to wait for 7.0 anyway.
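[Editor's note: Jan's view-plus-rules trick is Postgres-specific, but the general shape — a shadow table hidden behind a writable view — can be sketched with SQLite, which uses INSTEAD OF triggers where Postgres uses rewrite rules. All table, view, and trigger names below are invented; the bigtext_fetch PL function has no counterpart here.]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Shadow table holding the real storage (stands in for bttest_shadow).
    CREATE TABLE bttest_shadow (key INTEGER, content TEXT);

    -- The view the application actually talks to.
    CREATE VIEW bttest AS SELECT key, content FROM bttest_shadow;

    -- SQLite's counterpart to a Postgres INSERT rule: reroute writes
    -- on the view into the shadow table.
    CREATE TRIGGER bttest_ins INSTEAD OF INSERT ON bttest
    BEGIN
        INSERT INTO bttest_shadow (key, content)
        VALUES (NEW.key, NEW.content);
    END;
""")

conn.execute("INSERT INTO bttest (key, content) VALUES (1, 'abc')")
print(conn.execute("SELECT key, length(content) FROM bttest").fetchall())
# → [(1, 3)]
```

In Jan's real setup the trigger/rule layer would additionally slice the text into under-8k pieces on the way in and reassemble them on the way out, which is exactly the part that needs the unlimited query size of the 7.0 tree.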
> Also feel free to suggest some kind of indexing system for the flat files;
> the only problem with that will be integration into my existing
> application, but I've left some hooks open so I can deal with it.

That's another story, and Tom's comment on it says it all.

Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#========================================= wieck@debis.com (Jan Wieck) #
I wrote:
> Look at this:
>
> pgsql=> select key, length(content) from bttest;
>  key | length
> -----+--------
>    1 |      3
>    2 |      9
>    4 | 135840
> (3 rows)

Rereading my own comment, I just realized that it would be possible to add a real BIG text type with a slightly different technique. This would require another (not syscached) pg_ relation plus support in heap access and some other parts. And it would be a DEFINITELY NOT indexable type. Also, it would be subject to removal someday after we implement large tuples. But it would drop the pressure from the large-tuple item close to zero, IMHO. So if I can implement it in a way that ensures it is easily removable again, why not?

Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#========================================= wieck@debis.com (Jan Wieck) #
"Mitch Vincent" wrote: > I have a problem that I'm hoping someone can help me with. I've built a > web-based application that is back-ended by PGsql.. It works great, I > couldn't ask for better preformance. > > However, I just recently coded another module for it and this module needed > to accept text files (or just large text blocks lets say). I ran into the 8k > query limit barrier. I need these blocks of text to be searchable and am > having to do this with a flat-file based solution right now, it's slow and > cumbersome. I realize my stuff does not qualify as anything for Tom Lane to write home about. However, there's a chance that the method I am using fits the purpose you describe (especially considering the quantities you mentioned). The following query is fairly complex, but it takes just a few seconds to complete on our grossly overloaded server: http://wit.mcs.anl.gov/EMP/select_emp_advanced.cgi?E1.ec_code=ec&E1.ec_code.op=%7E%09is+in+range&E1.ec_code.patt=3.4&ec_code.count=1&T1.text=or&T1.text.op=%7E*%09matches+regex.%2C+case-insensitive&T1.text.patt=rattus&T2.text=ocn&T2.text.op=%7E*%09matches+regex.%2C+case-insensitive&T2.text.patt=rat&T3.text=en&T3.text.op=%7E*%09matches+regex.%2C+case-insensitive&T3.text.patt=protease&text.count=1&N1.seg=km&N1.seg.op=%7E%09contained+in&N1.seg.patt=0+..+10&seg.count=1&constraint=E1+%26+%28T1+%7C+T2%29+%26+T3+%26+N1&do=Run+the+query It would have taken at least as much to query any individual term with Glimpse, and it would not be feasible to query things like E1 and N1 with anything but postgres because these queries are based on special data types. Also, I can imagine some difficulty in doing all but trivial relational operations using your average full-text engine. The idea is simple: 1. Do not store the original files in postgres, or, if you are desperate, store them as large objects. 
The only win with large objects is that they can be passed to the server and back through a protocol supported by all postgres clients, but you trade that for the headache of maintaining the large objects, which do need some care. Note that there is no win at all if your application is already web-based by design.

A feasible alternative to both large objects and external file storage is to use compression and uu or base64 encoding, which allows one to split the files into nice-looking numbered 61- or 76-character lines. Postgres can handle an awesome lot of those. Check out the MIME::Base64 and Compress::Bzip2 modules if you are building your clients in perl.

2. Having decided how to store and transfer the files, split them up into words, load those into postgres, and build indices on them.

Word indices can be quite efficient because you can use ^-anchored expressions with them. Also, parts of the data in your text can be numeric, and it would be natural to index those as numbers. Furthermore, if there are portions of your text that represent distinct classes of data, it would be equally natural to store those in separate index tables. There are about 300 classes of data in the database I am talking about, which is derived from published research articles. The following page roughly outlines how I did it:

http://wit.mcs.anl.gov/EMP/indexing.html

This is not yet a full-text index, but it would be trivial to make one based on such a paradigm. It won't allow you to search for arbitrary substrings, but if you record the sequence in addition to the absolute location of words, you will be able to search for "phrases" of the AltaVista sort, or even use proximity criteria like they do.

--Gene
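[Editor's note: the compress-encode-split scheme in step 1 can be sketched in a few lines. Gene suggests perl's MIME::Base64 and Compress::Bzip2; here Python's zlib and base64 stand in, and the function names are invented for illustration.]

```python
import base64
import textwrap
import zlib

def encode_for_storage(text: str, width: int = 76) -> list:
    """Compress, base64-encode, and split into fixed-width lines
    suitable for storage as numbered rows in a table."""
    encoded = base64.b64encode(zlib.compress(text.encode())).decode()
    return textwrap.wrap(encoded, width)

def decode_from_storage(lines: list) -> str:
    """Rejoin the rows in order and reverse the encoding."""
    return zlib.decompress(base64.b64decode("".join(lines))).decode()

doc = "some large document text " * 1000   # well past the 8k limit
rows = encode_for_storage(doc)
assert all(len(row) <= 76 for row in rows)
assert decode_from_storage(rows) == doc
print(len(doc), "chars stored as", len(rows), "rows")
```

Each row easily fits under the 8k query limit, so the file can be inserted line by line and reassembled by ordering on the row number.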
i know this is in the list archives (i've done this before, and can't remember now), but they seem to be down right now...

how can i manually set the currval of a serial column? it's something like:

select nextval('column_name')....

isn't it....

tia,
rob
the search does not work right, but the mbox is still there. if you click that, you can search it even in your IE, I assume you are using that ;-)

On Thu, 16 Dec 1999, rob caSSon wrote:
> i know this is in the list archives (i've done this before, and can't
> remember now), but they seem to be down right now...
>
> how can i manually set the currval of a serial column? it's something
> like:
>
> select nextval('column_name')....
>
> isn't it....
>
> tia,
> rob
yeah... i'll scour the mboxes, although i'm actually using netscrape 4.7 on a debian box... ;-)

rob

On Thu, 16 Dec 1999 kaiq@realtyideas.com wrote:
> the search does not work right, but the mbox is still there. if you click
> that, you can search it even in your IE, I assume you are using that ;-)
> [nested quote trimmed]
On Thu, 16 Dec 1999, rob caSSon wrote:
> yeah... i'll scour the mboxes, although i'm actually using netscrape 4.7
> on a debian box... ;-)

select setval(sequence_name, value);

ie: select setval('groups_seq', 89231);

\df via psql shows:

result | function | arguments | description
-------+----------+-----------+---------------------
...
int4   | setval   | text int4 | sequence set value
...

---
Howie <caffeine@toodarkpark.org>  URL: http://www.toodarkpark.org
"I've learned that you cannot make someone love you. All you can do
is stalk them and hope they panic and give in."