Thread: Postgres SQL Syntax
I have a Postgres statement that reads:

CREATE TABLE "channel" (
  "chanid" int NOT NULL default '0',
  "channum" varchar(10) NOT NULL default '',
  "freqid" varchar(10) default NULL,
  "sourceid" int default NULL,
  . . .
  PRIMARY KEY ("chanid"),
  KEY "channel_src" ("channum","sourceid")
);

Postgres is choking on the KEY clause at the end:

> ERROR: table "channel" does not exist
> ERROR: syntax error at or near "(" at character 886

What's the proper Postgres syntax for a composite key like this? Where can I find good examples of Postgres syntax?
"Jim C." <jlistnews@gmail.com> writes: > I've a postgres statement that reads: > CREATE TABLE "channel" ( > "chanid" int NOT NULL default '0', > "channum" varchar(10) NOT NULL default '', > "freqid" varchar(10) default NULL, > "sourceid" int default NULL, > . > . > . > PRIMARY KEY ("chanid"), > KEY "channel_src" ("channum","sourceid") > ); That's not Postgres, and it's not SQL either ... it's a MySQL-ism. Use a separate CREATE INDEX statement. > Where can I find good examples of postgres syntax? In the manual? http://www.postgresql.org/docs/8.2/interactive/index.html (adjust URL to match your PG version) regards, tom lane
On 02/01/07 19:02, Jim C. wrote:
> I've a postgres statement that reads:
>
> CREATE TABLE "channel" (

Do you *need* object names to be case-sensitive? If not, it's a bad habit to get into. It makes more work for you and the developers.

> "chanid" int NOT NULL default '0',

Besides what Tom says, '0' is a string, not an integer. PG takes it, but it's a bad habit.
> Besides what Tom says, '0' is a string, not an integer. PG takes
> it, but it's a bad habit.

Maybe it is and maybe it isn't. I wouldn't know. I'm merely the unfortunate soul chosen to convert this from MySQL to Postgres. :-/

I've been working on it for a week now. I've got to say that it pains me to know that there is apparently no open standard in use for importing/exporting data from one db to another. XML would do the job, wouldn't it?

If I'm wrong, I sure would like to hear about it.

Jim C.
> CREATE TABLE "credits" ( > "person" integer NOT NULL default '0', > "chanid" int NOT NULL default '0', > "starttime" timestamp NOT NULL default '1970-01-01 00:00:00+00', > "role" set('actor','director','producer','executive_producer','writer','guest_star','host','adapter','presenter','commentator','guest') NOTNULL default '' > -- CONSTRAINT "chanid_constraint0" UNIQUE ("chanid","starttime","person","role") > -- UNIQUE KEY "chanid" ("chanid","starttime","person","role"), > -- KEY "person" ("person","role") > ); I'm doing this table by table, line by line. Each table, I learn something new about the differences between MySQL and Postgres, I mentally catalog it and I can always look it up in my own code next time for examples. I've a tool that is providing some help but sometimes it chokes. It choked on this one for example. I could use some clues as to how to go about converting this MySQL implementation of roles to Postgres. So far I've been through 5 tables and it is getting easier but I'm still getting stuck now and then. Jim C.
In article <epvm6a$9ee$1@sea.gmane.org>, "Jim C." <jlistnews@gmail.com> writes:

> Maybe it is and maybe it isn't. I wouldn't know. I'm merely the
> unfortunate soul chosen to convert this from MySQL to Postgres. :-/
> I've been working on it for a week now. I've got to say that it pains me
> to know that there is apparently no open standard in use for
> importing/exporting data from one db to another.

Regarding just the data, you could use "mysqldump --tab=DIR" on the MySQL side and "COPY FROM" on the PostgreSQL side.

> XML would do the job, wouldn't it?

If you have a problem and try to solve it with XML, then you have two problems.
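[Editor's note: the COPY side of that suggestion might look like the following sketch. The directory path is hypothetical; mysqldump --tab writes tab-delimited files with \N for NULL, which matches COPY's text-format defaults.]

```sql
-- Load the tab-delimited file that mysqldump --tab=/tmp/dump
-- produced for the "channel" table (path is an assumption):
COPY "channel" FROM '/tmp/dump/channel.txt' WITH NULL AS '\N';
```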
This is one instance where I think pgAdmin would really help. You know what the schema needs to be, yes? Create it with pgAdmin and you can see what some well-formatted PG code looks like.

The majority of the differences in syntax between MySQL and PG are *generally* MySQL's fault. MySQL has more unique extensions and shortcuts for SQL than any other RDBMS I know of.

Keep in mind, though, that no database is 100% ANSI SQL only. Indeed, I don't know of any database that is 100% ANSI SQL compliant. The relational model does not translate into computer data very well, and the requirements to implement it contradict some pretty basic computer restrictions. At several points along the way, the demands of the theoretical relational model break down and practical applications are used instead. The limitations of modern computers make a true RDB as envisioned by Boyce and Codd a virtual impossibility (the most obvious reason being that a computer database can only store computerized data).

--
Brandon Aiken
CS/IT Systems Engineer

-----Original Message-----
From: pgsql-general-owner@postgresql.org [mailto:pgsql-general-owner@postgresql.org] On Behalf Of Jim C.
Sent: Friday, February 02, 2007 11:37 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Postgres SQL Syntax

> CREATE TABLE "credits" (
> [...]
> I'm doing this table by table, line by line.
Jim C. wrote: > > I'm doing this table by table, line by line. Each table, I learn > something new about the differences between MySQL and Postgres, I > mentally catalog it and I can always look it up in my own code next time > for examples. > > I've a tool that is providing some help but sometimes it chokes. It > choked on this one for example. > > I could use some clues as to how to go about converting this MySQL > implementation of roles to Postgres. > > So far I've been through 5 tables and it is getting easier but I'm still > getting stuck now and then. You probably want one of the mysql converter projects, e.g. http://pgfoundry.org/projects/mysql2pgsql/ Also read the "converting from other databases" section here: http://www.postgresql.org/docs/techdocs -- Richard Huxton Archonet Ltd
> You probably want one of the mysql converter projects, e.g.
> http://pgfoundry.org/projects/mysql2pgsql/
> Also read the "converting from other databases" section here:
> http://www.postgresql.org/docs/techdocs

I tried several conversion tools and did get some minor success with one or two, but mostly they didn't work. The article at that site seems good at first blush, but then one notices that it doesn't appear to have been updated at any time in the last five years.

Jim C.
Jim C. wrote: >> You probably want one of the mysql converter projects, e.g. >> http://pgfoundry.org/projects/mysql2pgsql/ >> Also read the "converting from other databases" section here: >> http://www.postgresql.org/docs/techdocs > > I tried several conversion tools and did get some minor success with one > or two but mostly they didn't work. The article at this site seems good > at first blush but then one notices that it doesn't seem to have been > updated any time in the last five years. I have to admit it's been at least a couple of years since I've used any of the mysql2pg scripts. -- Richard Huxton Archonet Ltd
Jim C. wrote: > Richard Huxton wrote: >> Jim C. wrote: >>>> You probably want one of the mysql converter projects, e.g. >>>> http://pgfoundry.org/projects/mysql2pgsql/ >>>> Also read the "converting from other databases" section here: >>>> http://www.postgresql.org/docs/techdocs >>> I tried several conversion tools and did get some minor success with one >>> or two but mostly they didn't work. The article at this site seems good >>> at first blush but then one notices that it doesn't seem to have been >>> updated any time in the last five years. Techdocs are user-contributed. Many are single articles. >> I have to admit it's been at least a couple of years since I've used any >> of the mysql2pg scripts. > > Perhaps this might be an issue the developers wish to address? "The" developers? The various pgfoundry projects all have their own developers. If you think about it, core PostgreSQL developers are unlikely to have a lot of MySQL databases to convert, so they'd be a poor choice to build such a tool anyway. > No > sarcasm here. I know they are busy and may not have the resources any > time soon. If it hasn't been much under consideration for several years > they may want to think about it though. Well, I said that *I* haven't used any for a couple of years, but mysql2pgsql seems to have been updated *today*. http://pgfoundry.org/projects/mysql2pgsql/ What I'd suggest is running it, identifying what doesn't work for you, and submitting a bug-report. Better still, a bug-report with a patch. I'm sure Jose and Joe would appreciate any improvements. -- Richard Huxton Archonet Ltd
Richard Huxton wrote: > Jim C. wrote: >>> You probably want one of the mysql converter projects, e.g. >>> http://pgfoundry.org/projects/mysql2pgsql/ >>> Also read the "converting from other databases" section here: >>> http://www.postgresql.org/docs/techdocs >> >> I tried several conversion tools and did get some minor success with one >> or two but mostly they didn't work. The article at this site seems good >> at first blush but then one notices that it doesn't seem to have been >> updated any time in the last five years. > > I have to admit it's been at least a couple of years since I've used any > of the mysql2pg scripts. Perhaps this might be an issue the developers wish to address? No sarcasm here. I know they are busy and may not have the resources any time soon. If it hasn't been much under consideration for several years they may want to think about it though. Jim C.
On 2/2/07, Jim C. <jlistnews@gmail.com> wrote:
> > Besides what Tom says, '0' is a string, not an integer. PG takes
> > it, but it's a bad habit.
>
> Maybe it is and maybe it isn't. I wouldn't know. I'm merely the
> unfortunate soul chosen to convert this from MySQL to Postgres. :-/
>
> I've been working on it for a week now. I've got to say that it pains me
> to know that there is apparently no open standard in use for
> importing/exporting data from one db to another. XML would do the job,
> wouldn't it?
>
> If I'm wrong, I sure would like to hear about it.

the open standard to convert data from one database to another, unfortunately, is SQL. SQL is incomplete, illogical, and obscure, so here we are.

moving data from mysql to postgresql is easy...it's the table schemas that are tough. If you have the table schemas done, you can

mysqldump --compatible=postgresql | psql

which should work for 90% of tables, because mysql supports only a subset of the types postgresql supports.

the schemas are a bit trickier...you have to do them by hand or use a conversion tool. one such tool is DTS. if you look around you might find something else though.

merlin
----- Original Message ----- From: "Merlin Moncure" <mmoncure@gmail.com>
>
> the open standard to convert data from one database to another,
> unfortunately, is SQL. SQL is incomplete, illogical, and obscure, so
> here we are.
>

The same can be said about any programming language, can it not?

Even languages as powerful and expressive as C++ and Perl have elements that seem illogical. That they are incomplete is substantially demonstrated by the existence of the Boost library, most of which ought to be added to the C++ standard library, and CPAN. But for Perl, there is no standard, so it is hard to say definitively where the language ends and developers' libraries begin. I am sure that all of the programming students I have taught have found C++ template metaprogramming obscure and very difficult to fathom, but I don't think it is possible at this time to beat Perl, and especially its object model, for obscurity. I will concede, though, that this impression may be an artifact of my being used to the object models in C++ and Java, and that therefore the Perl object model just appears to me to be a bit weird.

For the same reason, I find some aspects of SQL difficult to fathom. That may be because I am not as familiar with set theory as I am with algebra and calculus and analytic geometry, or it may be an artifact of the languages with which I am most comfortable.

I don't know if you can do statistical analyses, such as nonlinear least squares, time series analysis or chi-squared tests, within SQL, but when faced with such a problem I tend to follow the path of least resistance and export the data from the database into my client code and do whatever analysis I need using my C++ or Java code. But if there is a lot of data, I suspect there would be much less impact on network traffic, and probably better performance, if the analysis could be done in a SQL stored procedure.
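[Editor's note: in-database smoothing of the kind described above can be sketched with window functions. These arrived in PostgreSQL 8.4, after the 8.2-era of this thread, so this is an assumption about later versions; the table and column names here are hypothetical.]

```sql
-- A 7-row moving average over a time series (PostgreSQL 8.4+):
SELECT ts,
       value,
       avg(value) OVER (ORDER BY ts
                        ROWS BETWEEN 6 PRECEDING AND CURRENT ROW)
         AS moving_avg
FROM readings
ORDER BY ts;
```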
One of the more common analyses I need to do involves time series analysis, sometimes with some kind of smoothing (such as a moving average) applied before the analysis proper.

> moving data from mysql to postgresql is easy...its the table schemas
> that are tough. If you have the table schemas done, you can

When I design my databases, I normally assume that I may have to migrate the data from one RDBMS product to another, for whatever reason. Therefore, I have the ones I am most likely to have to support running here. I create a SQL script to create my databases, aiming to use the highest common factor across the RDBMS' SQL, and test it on each to verify that I succeeded in finding the highest common factor. I have, then, a simple Perl script to execute the script, and it "knows" which tools to use based on a configuration file that has the specifics for using MySQL or PostgreSQL or MS SQL Server, and a command-line parameter that specifies which DB to use.

> mysqldump --compatible=postgresql | psql
>
> which should work for 90% of tables, because mysql supports only a
> subset of the types postgresql supports.
>
> the schemas are a bit trickier...you have to do them by hand or use a
> conversion tool. one such tool is DTS. if you look around you might
> find something else though.

There's many more than one way to skin a cat. My inclination is to take a brute-force, albeit brain-dead, approach. One of my irritants is that none of the RDBMS products appear to support the same ways of importing data, e.g. from a flat file (CSV files, for instance). But at least they all do it in some way, and they all are able to export a table to, e.g., a CSV file. Therefore, it becomes trivially easy to move data, even in the absence of a tool like DTS, by exporting the data to a CSV file and then reading that file into the target RDBMS. This is something I've done countless times.
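[Editor's note: on the Postgres side, the CSV round trip described above is a single command in each direction. A sketch; the path is hypothetical, and the CSV options shown require PostgreSQL 8.1 or later.]

```sql
-- Export a table to CSV, and load it back in:
COPY "channel" TO '/tmp/channel.csv' WITH CSV HEADER;
COPY "channel" FROM '/tmp/channel.csv' WITH CSV HEADER;
```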
Not having found a way to easily determine the schema programmatically, so that my Perl or Java or C++ code can be generic enough to apply to any of my databases, I find myself taking an hour or two to write a simple script, usually in Perl, to handle each table in sequence.

There are, of course, things to watch, such as keeping the table creation statements in a different script from the one that creates indices and especially foreign keys, so the tables are ready when the data is to be loaded, but the constraints are created after the data is loaded, purely for performance reasons. I learned the hard way that loading data slows down dramatically if the database has to continually check constraints, so I create constraints after loading valid data, and leave them in place only while new data is to be stored.

I guess I am recommending that the schemas be done by hand, regardless of whether you're creating a new database or repairing or migrating an old one, even if the original developer wasn't considerate enough to create, or provide, an appropriate SQL script to create the database de novo.

It seems to me to be risky to assume that the tools we like now will still exist ten years from now. Anyone remember IBM's OS/2? That was my favourite OS, and it was arguably better than any flavour of Windows available at the time. But even though I used it exclusively ten years ago, it is no longer a viable option for any machine I now use. I am sure anyone reading this who is of my generation could produce a long list of products that they used and liked which no longer exist or which are no longer commercially viable. C++ is still a good option for many kinds of application development, but I don't think there are very many people using Watcom's compiler to produce commercial applications. SQL will likely exist ten years from now, but will our favourite RDBMS?
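[Editor's note: the load-first, constrain-later ordering described above can be sketched as follows. The table, file, and index names are hypothetical.]

```sql
-- Create the bare table first, with no constraints or indexes:
CREATE TABLE "channel_import" (
  "chanid"  int,
  "channum" varchar(10)
);

-- Bulk-load while there is nothing to check per row:
COPY "channel_import" FROM '/tmp/channel.csv' WITH CSV;

-- Only after the load, add constraints and indexes:
ALTER TABLE "channel_import" ADD PRIMARY KEY ("chanid");
CREATE INDEX "channel_import_channum" ON "channel_import" ("channum");
```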
I won't make that prediction, but I think I can minimize the risk by creating SQL scripts that use the highest common factor across the RDBMS products I have at my disposal. Therefore, I should be able to use my scripts, with minimal pain, regardless of what RDBMS products are available at that time.

I do much the same for my C++ coding. I try to use the highest common factor in the language, as implemented by the suite of compilers I use, and keep the implementation-specific stuff to a minimum in completely separate compilation units (and even in different directories). I just see it as inevitable that significant refactoring will be required, especially when migrating from one suite of tools to another, so I plan an architecture for my code that should make it as easy as possible, and similarly, for moving data around, I developed my approach to be as easy and reliable as possible, even if it is not the most elegant or efficient.

Cheers,

Ted
Ted Byers wrote: > > ----- Original Message ----- From: "Merlin Moncure" <mmoncure@gmail.com> >> >> the open standard to convert data from one database to another, >> unfortunately, is SQL. SQL is incomplete, illogical, and obscure, so >> here we are. >> > The same can be said about any programming language, can it not? > > Even languages as powerful and expressive as C++ and Perl have elements > that seem illogical. Perl has elements that are illogical? Surely you jest! Joshua D. Drake -- === The PostgreSQL Company: Command Prompt, Inc. === Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240 Providing the most comprehensive PostgreSQL solutions since 1997 http://www.commandprompt.com/ Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate PostgreSQL Replication: http://www.commandprompt.com/products/