Thread: MySQL -> pgsql

MySQL -> pgsql

From: "David Reid"
Can someone point me at any resources on the web that can help with such a
transformation?  I have a series of tables I want to convert.  Thanks.

Oh, and I'm not on the list so can you email direct! :)

david


Re: MySQL -> pgsql

From: Michelle Murrain
>Can someone point me at any resources on the web that can help with such a
>transformation?  I have a series of tables I want to convert.  Thanks.
>
>Oh, and I'm not on the list so can you email direct! :)

I don't know MySQL, but I've converted tables from mSQL to pgsql by
just using the dump tool in mSQL (dumps the schema and data in SQL
format into a text file), then go into psql and execute the text file
that comes from the dump. I'm sure MySQL has a tool to do that.
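
For the record, MySQL's counterpart to that dump tool is mysqldump. A minimal sketch of the same workflow (the database name and the sample line are invented for illustration):

```shell
# Dump schema + data from MySQL as SQL text (db name invented):
#   mysqldump mydb > mydb.sql
# Hand-edit (or sed) mydb.sql where the two dialects differ, then
# replay the edited file in PostgreSQL:
#   psql mydb -f mydb.sql
# The editing step can be rehearsed on a single sample line first,
# e.g. stripping MySQL's timestamp display width:
echo "stamp timestamp(14) NOT NULL," | sed -E 's/timestamp\([0-9]+\)/timestamp/'
```

The echo/sed line prints `stamp timestamp NOT NULL,` — the kind of small dialect fix-up the hand-editing step consists of.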

Michelle
--
--------------------------------
Michelle Murrain Ph.D., President
Norwottuck Technology Resources
mpm@norwottuck.com
http://www.norwottuck.com

RE: MySQL -> pgsql

From: "Diehl, Jeffrey"
I'm also interested, so could you post to the list?

Thanx,
Mike Diehl,
Network Monitoring Tool Devl.
284-3137
jdiehl@sandia.gov


> -----Original Message-----
> From: David Reid [mailto:dreid@jetnet.co.uk]
> Sent: October 19, 2000 5:31 AM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] MySQL -> pgsql
>
>
> Can someone point me at any resources on the web that can help with such a
> transformation?  I have a series of tables I want to convert.  Thanks.
>
> Oh, and I'm not on the list so can you email direct! :)
>
> david
>


RE: MySQL -> pgsql

From: "Diehl, Jeffrey"
I don't think this will work in all cases.  The index syntax is different
between mysql and psql.  For example, I specify indexing in the create
clause under mysql; I use a separate statement to create an index under
psql.  Am I missing something?
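
To make the difference concrete (all names invented): MySQL accepts an inline `KEY idx (col)` clause inside CREATE TABLE, while PostgreSQL wants a separate CREATE INDEX statement. Even that rewrite is scriptable, for instance with sed:

```shell
# Turn a MySQL inline KEY clause into a standalone CREATE INDEX
# statement (index/table names are made up; a real converter would
# track the enclosing table name from the CREATE TABLE line):
table='log'
echo 'KEY host_idx (host)' \
  | sed -E "s/^KEY ([a-z_]+) \((.*)\)\$/CREATE INDEX \1 ON $table (\2);/"
```

This prints `CREATE INDEX host_idx ON log (host);`, i.e. the separate statement PostgreSQL expects.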

Thanx,
Mike Diehl,
Network Monitoring Tool Devl.
284-3137
jdiehl@sandia.gov


> -----Original Message-----
> From: Michelle Murrain [mailto:mpm@norwottuck.com]
> Sent: October 19, 2000 12:46 PM
> To: David Reid
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] MySQL -> pgsql
>
>
> >Can someone point me at any resources on the web that can help with such a
> >transformation?  I have a series of tables I want to convert.  Thanks.
> >
> >Oh, and I'm not on the list so can you email direct! :)
>
> I don't know MySQL, but I've converted tables from mSQL to pgsql by
> just using the dump tool in mSQL (dumps the schema and data in SQL
> format into a text file), then go into psql and execute the text file
> that comes from the dump. I'm sure MySQL has a tool to do that.
>
> Michelle
> --
> --------------------------------
> Michelle Murrain Ph.D., President
> Norwottuck Technology Resources
> mpm@norwottuck.com
> http://www.norwottuck.com
>


RE: MySQL -> pgsql

From: Michelle Murrain
At 2:43 PM -0600 10/19/00, Diehl, Jeffrey wrote:
>I don't think this will work in all cases.  The index syntax is different
>between mysql and psql.  For example, I specify indexing in the create
>clause under mysql; I use a separate statement to create an index under
>psql.  Am I missing something?

Yeah, sorry I was oversimplifying the process. I had to go in and use
a text editor and change some of the data type definitions. mSQL is
such a simple DBMS, though, so the tables were really easy.

On the burner for development - conversion tools?

Michelle
--
--------------------------------
Michelle Murrain Ph.D., President
Norwottuck Technology Resources
mpm@norwottuck.com
http://www.norwottuck.com

RE: MySQL -> pgsql

From: "Diehl, Jeffrey"
The database I am hoping to migrate has a few tables with around 50K
records.  These databases get updated every hour, 24/7.  I don't think I can
do the migration with a text editor.  So, I'm still looking for some other
method if it exists.

Conversion tools?  Na, that's just a small part of what I do.  I write
network management tools which populate a database.  I'm needing to get away
from mysql because it isn't proving to be stable at large scale.  Psql has
managed quite well in one of my other duties where the databases required
several MILLION records...

Any ideas for migration will be most appreciated.

Mike Diehl,
Network Monitoring Tool Devl.
284-3137
jdiehl@sandia.gov


> -----Original Message-----
> From: Michelle Murrain [mailto:mpm@norwottuck.com]
> Sent: October 19, 2000 3:15 PM
> To: Diehl, Jeffrey; David Reid
> Cc: pgsql-general@postgresql.org
> Subject: RE: [GENERAL] MySQL -> pgsql
>
>
> At 2:43 PM -0600 10/19/00, Diehl, Jeffrey wrote:
> >I don't think this will work in all cases.  The index syntax is different
> >between mysql and psql.  For example, I specify indexing in the create
> >clause under mysql; I use a separate statement to create an index under
> >psql.  Am I missing something?
>
> Yeah, sorry I was oversimplifying the process. I had to go in and use
> a text editor and change some of the data type definitions. mSQL is
> such a simple DBMS, though, so the tables were really easy.
>
> On the burner for development - conversion tools?
>
> Michelle
> --
> --------------------------------
> Michelle Murrain Ph.D., President
> Norwottuck Technology Resources
> mpm@norwottuck.com
> http://www.norwottuck.com
>


Re: MySQL -> pgsql

From: Neil Conway
On Thu, Oct 19, 2000 at 03:18:50PM -0600, Diehl, Jeffrey wrote:
> The database I am hoping to migrate has a few tables with around 50K
> records.  These databases get updated every hour, 24/7.  I don't think I can
> do the migration with a text editor.  So, I'm still looking for some other
> method if it exists.

Why not use a text editor? The idea was to edit the table schema, not
the entire contents of the database. If you only have a ``few'' tables,
you should probably dump the table schema first, change it to PgSQL's
variant of SQL, and then test the schema. Once you're sure it works,
dump the data from MySQL into PgSQL (you could even use a pipe
straight from 'mysql' into 'psql').
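
A sketch of that pipeline (database name invented; the sed rule shown is one example of an in-stream fix-up, not a complete translator):

```shell
# One-shot migration pipeline (db name invented):
#   mysqldump mydb | sed -f mysql2pg.sed | psql mydb
# One rule such a mysql2pg.sed might carry, tried on a sample line --
# MySQL's auto_increment column becomes PostgreSQL's SERIAL:
echo "id int(11) DEFAULT '0' NOT NULL auto_increment," \
  | sed -E "s/int\([0-9]+\) DEFAULT '?0'? NOT NULL auto_increment/SERIAL/"
```

The sample line comes out as `id SERIAL,`; with the fix-ups scripted, the whole dump can flow from mysqldump straight into psql without a hand-editing stop.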

HTH,

Neil

--
Neil Conway <neilconway@home.com>
Get my GnuPG key from: http://klamath.dyndns.org/mykey.asc
Encrypted mail welcomed

Don't worry about people stealing your ideas.  If your ideas are any
good, you'll have to ram them down people's throats.
        -- Howard Aiken


RE: MySQL -> pgsql

From: KuroiNeko
> Conversion tools? Na, that's just a small part of what I do.

 Since it's a network monitor, chances are you're not using exotic
datatypes that are incompatible with, or non-existent in, PGSQL? If so, the
fastest way would be to get a `dump' schema + data script from MySQL and
adjust it with sed or PERL where required.
 On the other hand, just porting the tables is probably not what you want.
That will still be MySQL, only executed by PGSQL: no DRI, no rules, no
triggers, nothing. It's probably worth designing the schema from scratch,
using the natural features of PGSQL, and working out the migration
procedures then.
 Not knowing exactly what data you operate on, it's hard to tell more.


--

 Well I tried to be meek
  And I have tried to be mild
 But I spat like a woman
  And I sulked like a child
 I have lived behind the walls
  That have made me alone
 Striven for peace
  Which I never have known

 Dire Straits, Brothers In Arms, The Man's Too Strong (Knopfler)


RE: MySQL -> pgsql

From: "Diehl, Jeffrey"
But remember, that mysql doesn't have rules, triggers and such.  Moving the
data is all I need to do.  I guess I could write a tool to select from mysql
and insert into psql...  I was hoping someone else had already done it. ;^)

Mike Diehl,
Network Monitoring Tool Devl.
284-3137
jdiehl@sandia.gov


> -----Original Message-----
> From: KuroiNeko [mailto:evpopkov@carrier.kiev.ua]
> Sent: October 19, 2000 3:38 PM
> To: pgsql-general@postgresql.org
> Subject: RE: [GENERAL] MySQL -> pgsql
>
>
> > Conversion tools? Na, that's just a small part of what I do.
>
>  Since it's a network monitor, chances are you're not using exotic
> datatypes incompatible, or non-existant in PGSQL? If so, the fastest way
> would be to get a `dump' schema + data script from MySQL and adjust it
> with sed or PERL where it's required.
>  On the other hand, just porting the tables is probably not what you want.
> This will remain MySQL, only executed by PGSQL: no DRI, no rules, no
> triggers, nothing. Probably it's worth designing the schema from scratch,
> using natural features of PGSQL and work out migration procedures then?
>  Not knowing exactly what data you operate it's hard to tell more.
>
>
> --
>
>  Well I tried to be meek
>   And I have tried to be mild
>  But I spat like a woman
>   And I sulked like a child
>  I have lived behind the walls
>   That have made me alone
>  Striven for peace
>   Which I never have known
>
>  Dire Straits, Brothers In Arms, The Man's Too Strong (Knopfler)
>


RE: MySQL -> pgsql

From: "Diehl, Jeffrey"
Yup.  Seems like that is what I need to do.

Thanx,
Mike Diehl,
Network Monitoring Tool Devl.
284-3137
jdiehl@sandia.gov


> -----Original Message-----
> From: Neil Conway [mailto:nconway@klamath.dyndns.org]
> Sent: October 19, 2000 3:30 PM
> To: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] MySQL -> pgsql
>
>
> On Thu, Oct 19, 2000 at 03:18:50PM -0600, Diehl, Jeffrey wrote:
> > The database I am hoping to migrate has a few tables with around 50K
> > records.  These databases get updated every hour, 24/7.  I don't think I can
> > do the migration with a text editor.  So, I'm still looking for some other
> > method if it exists.
>
> Why not use a text editor? The idea was to edit the table schema, not
> the entire contents of the database. If you only have a ``few'' tables,
> you should probably dump the table schema first, change it to PgSQL's
> variant of SQL, and then test the schema. Once you're sure it works,
> dump the data from MySQL into PgSQL (you could even use a pipe
> straight from 'mysql' into 'psql').
>
> HTH,
>
> Neil
>
> --
> Neil Conway <neilconway@home.com>
> Get my GnuPG key from: http://klamath.dyndns.org/mykey.asc
> Encrypted mail welcomed
>
> Don't worry about people stealing your ideas.  If your ideas are any
> good, you'll have to ram them down people's throats.
>         -- Howard Aiken
>


RE: MySQL -> pgsql

From: KuroiNeko
> But remember, that mysql doesn't have rules, triggers and such.

 That's exactly what I mean when I suggest re-designing the schema from
scratch. Otherwise, you only get `a half of it.'

> I guess I could write a tool to select from mysql
> and insert into psql...

 This can be easy, up to piping, as already offered. But you're bound to
improve the schema sooner or later, and now seems to be the best time to.


--

 Well I tried to be meek
  And I have tried to be mild
 But I spat like a woman
  And I sulked like a child
 I have lived behind the walls
  That have made me alone
 Striven for peace
  Which I never have known

 Dire Straits, Brothers In Arms, The Man's Too Strong (Knopfler)


Re: MySQL -> pgsql

From: "David Reid"
OK.  Thanks to all who replied.  Here are my first impressions...

- pgsql is fussier about names of columns than mysql.  e.g. a column name of
position was rejected by pgsql but seems to be OK in mysql.
- PRIMARY_KEY() syntax is OK for pgsql (which was a relief)
- There seem to be a lot more types in mysql than pgsql.
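
On the first point: `position` collides with a built-in in PostgreSQL (POSITION(... IN ...)), so double-quoting the identifier in the dump, or renaming the column, gets it past the parser. A one-line sketch (the column type here is invented):

```shell
# Double-quote a reserved column name so PostgreSQL accepts it:
echo 'position int,' | sed -E 's/^position/"position"/'
```

This prints `"position" int,` — though renaming the column is usually the saner long-term fix, since every later query would need the quotes too.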

Questions that have cropped up already!!

- can tinyint(4) be considered to be int4?
- what should an int(11) be?
- if it uses "int(11) DEFAULT 0 NOT NULL auto_increment" is that
functionally the same as serial?
- is there an easy way to duplicate enums?  this is one of the ones I need
to copy...
"enum('R','L') DEFAULT 'R' NOT NULL"

I'll try to put all the help I get and my findings into a web page and make
it available if people feel it would be useful.  Of course a tool to
automatically convert would be brilliant, but that's far off into the future
for me! :)

Oh, and I've subscribed now!

david

----- Original Message -----
From: "Diehl, Jeffrey" <jdiehl@sandia.gov>
To: "'David Reid'" <dreid@jetnet.co.uk>; <pgsql-general@postgresql.org>
Sent: Thursday, October 19, 2000 8:03 PM
Subject: RE: [GENERAL] MySQL -> pgsql


> I'm also interested, so could you post to the list?
>
> Thanx,
> Mike Diehl,
> Network Monitoring Tool Devl.
> 284-3137
> jdiehl@sandia.gov
>
>
> > -----Original Message-----
> > From: David Reid [mailto:dreid@jetnet.co.uk]
> > Sent: October 19, 2000 5:31 AM
> > To: pgsql-general@postgresql.org
> > Subject: [GENERAL] MySQL -> pgsql
> >
> >
> > Can someone point me at any resources on the web that can help with such a
> > transformation?  I have a series of tables I want to convert.  Thanks.
> >
> > Oh, and I'm not on the list so can you email direct! :)
> >
> > david
> >
>
>


RE: MySQL -> pgsql

From: Alex Pilosov
I hacked up a small perl script that does a first-pass attempt at
converting mysql's database dump to something psql will accept.

It is dirty, it will probably not work for you, but it'll make a first
approximation of the conversion, if you are lucky.

-alex

On Thu, 19 Oct 2000, Diehl, Jeffrey wrote:

> I'm also interested, so could you post to the list?
>
> Thanx,
> Mike Diehl,
> Network Monitoring Tool Devl.
> 284-3137
> jdiehl@sandia.gov
>
>
> > -----Original Message-----
> > From: David Reid [mailto:dreid@jetnet.co.uk]
> > Sent: October 19, 2000 5:31 AM
> > To: pgsql-general@postgresql.org
> > Subject: [GENERAL] MySQL -> pgsql
> >
> >
> > Can someone point me at any resources on the web that can help with such a
> > transformation?  I have a series of tables I want to convert.  Thanks.
> >
> > Oh, and I'm not on the list so can you email direct! :)
> >
> > david
> >
>
>


Re: MySQL -> pgsql

From: Alex Pilosov
On Thu, 19 Oct 2000, David Reid wrote:

> OK.  Thanks to all who replied.  Here are my first impressions...
>
> - pgsql is fussier about names of columns than mysql.  e.g. a column name of
> position was rejected by pgsql but seems to be OK in mysql.
> - PRIMARY_KEY() syntax is OK for pgsql (which was a relief)
> - There seem to be a lot more types in mysql than pgsql.
>
> Questions that have cropped up already!!
>
> - can tinyint(4) be considered to be int4?
my script thinks that if it's *int(1) to *int(4), it'll be replaced by int2;
if it's *int(5) to *int(11), it'll be replaced by int4; otherwise numeric.
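
That width rule can be written down directly as a pair of sed substitutions. It's only a heuristic, since MySQL's (N) is a display width rather than a storage size, and the sample line is invented:

```shell
# Map MySQL integer display widths to PostgreSQL integer types,
# following the script's rule: (1)-(4) -> int2, (5)-(11) -> int4.
rules='s/[a-z]*int\(([1-4])\)/int2/g; s/[a-z]*int\(([5-9]|1[01])\)/int4/g'
echo 'flags tinyint(4), hits int(11)' | sed -E "$rules"
```

This prints `flags int2, hits int4`; anything wider than (11) would fall through untouched, to be mapped to numeric by a further rule.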

> - what should an int(11) be?
> - if it uses "int(11) DEFAULT 0 NOT NULL auto_increment" is that
> functionally the same as serial?
My script generates necessary functions to imitate mysql's behavior.
(i.e. value inserted will be one+maxval in table)

The _right_ way is, of course, to use sequences, but they have a quirk:
sometimes (i.e. after a backed-out transaction) you'll end up with holes
in the numbering...

> - is there an easy way to duplicate enums?  this is one of the ones I need
> to copy...
> "enum('R','L') DEFAULT 'R' NOT NULL"
My script is utterly stupid: if it's enum(true,false), convert it to
boolean; otherwise, convert it to char(5). Feel free to modify it and do
it the proper way (at least char(proper size) with an appropriate CHECK
clause).
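
For the enum('R','L') case quoted above, a slightly less blunt rewrite is char(1) plus a CHECK constraint. The column name ("side") is invented here, so this is purely illustrative; a real tool would capture the name and width:

```shell
# Rewrite a MySQL enum as char(1) plus a CHECK constraint.
# "side" is an invented column name for the sake of the example.
echo "side enum('R','L') DEFAULT 'R' NOT NULL" \
  | sed -E "s/enum\(([^)]*)\)/char(1) CHECK (side IN (\1))/"
```

This prints `side char(1) CHECK (side IN ('R','L')) DEFAULT 'R' NOT NULL`, keeping the default and NOT NULL clauses intact.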

I think you'll find that my script isn't easily modifiable... since it
doesn't have a real parser, just a hack that was written in an hour. The
proper fix would be to rewrite it using a mysql-ish grammar... but then
again, I've had all the use of that script I need, and all my data is now
safely in pgsql, so I don't need to use it ever again ;)

-alex



Re: MySQL -> pgsql

From: Ron Chmara
"Diehl, Jeffrey" wrote:
> The database I am hoping to migrate has a few tables with around 50K
> records.  These databases get updated every hour, 24/7.  I don't think I can
> do the migration with a text editor.  So, I'm still looking for some other
> method if it exists.

I'm a bit late on this thread, but I'm currently wrangling a large set of migrations:
 postgreSQL-> Oracle
 mySQL-> Oracle
 mySQL-> postgreSQL
 LDAP-> postgreSQL
 mySQL-> LDAP
all at once. All live. 350+ websites, a few thousand hits a minute. (Yippee?)
Acceptable downtime is under 3 minutes for the entire system. I do lots
of migration. Here's my general workflow, HTH:

1. Plan any large table rollovers ahead of time on test boxes. Build
your schema in your destination db before you migrate, so you're just
moving recent record sets into pre-built environments. It helps if
your new data is hosted on a new box, so you aren't down if the new
server tanks.

2. Before you roll out: Rewrite your update web/app code to apply to both
sets. If you use an rsync script, you can roll this into the "live"
site/app fairly quickly, in between hits. You'll want to do the schema
changes and rewritten db access anyways, to optimize use of each
engine (postgres is slow if you write mySQL style for it, i.e., redo
your statements to make use of sub-selects, joins, etc.). Plan the code
migration in this order: 1. adds, 2. updates, 3. selects. Save
selects for last as your ghost records (for testing add and update)
won't be accurate.

3. To handle updates, make sure you have "ghost" records in the new box,
until you can pull a clean dump. Ghost records are fast, as they
only need minimal data. Once you have all this setup in the test
environment, roll a hidden pilot out. Monitor it. You still won't have good
data in both, but you won't have to throw a switch and pray you
don't lose anything..... now:
Roll back and correct that horrible mistake you made that brought
it all to its knees and we won't tell anyone about ;-). It's the
first pilot, these things happen, it's why you test.
(Note: using a disk backup is really nice for building test
machines that are copies of live ones... you get exact software
configs that way)

4. Once you *can* roll forward and accept all adds, selects, updates,
etc. into the test systems, plan for a brief outage. I like 3 a.m.,
others like midnight... just target a slow period.

5. Coffee, scary movies, whatever, roll out late at night. Test.
Go to sleep when sun comes. A cot in the machine room is helpful.

6. Watch the boxes for a week. Make sure your cron scripts work.
Make sure rarely used components are all accounted for, and fix the
forgotten ones. You will probably have a few web pages or whatever
that only get used every few days, so your new box isn't quite
"ready" until you have all these. Keep the old data box as your
"master", with everything being duplicated by your app code onto
the "slave".

7. Plan your second rollout, confident that you have almost
everything. You probably won't, so pull an extra set of backups on
the old data box. Change your app code to not scream bloody
murder when it can't find the old db.... it sometimes helps to
have three sets: 1. current code, 2. rolling transition code,
3. new code.

8. Rollout to point to new box as "master". Take everyone out
for drinks, congratulate them on all their help. They will not
trust the new system, so build trust in other ways. Occasionally
shut off the old box, fix what broke, until the old box can be
left off for a few days.

9. Plan to spend a few more weeks ironing out kinks like
the-web-app-in-accounting-that-checks-30-records-once-a-month. Expect panic,
and be graceful.

Reduced version:
Old box live
Old box live / Old box mirrored in testing
Old box live / Old box mirror in testing, new box in testing
Old box live / Old box mirror in testing /Roll out duplication code pilot in testing
Once pilot is clean:
Roll out duplication code.
Old box live / New box Live
Old box live / fix new box ;-)
Second switch:
New box primary, old box as backup.
New Box live

-Bop
--
Brought to you from iBop the iMac, a MacOS, Win95, Win98, LinuxPPC machine,
which is currently in MacOS land.  Your bopping may vary.

Re: MySQL -> pgsql

From: KuroiNeko
> all at once. All live. 350+ websites, a few thousand hits a minute.
> (Yippee?)
> Acceptable downtime is under 3 minutes for the entire system. I do lots
> of migration. Here's my general workflow, HTH:

 You da man, Bop :)

> 1. Plan any large table rollovers ahead of time on test boxes. Build
> your schema in your destination db before you migrate

 (nods)

> 2. Before you roll out: Rewrite your update web/app code to apply to both
> sets. If you use an rsync script, you can roll this into the "live"
> site/app fairly quickly, in between hits.

 Forgive me my ignorance, what's an rsync script?

> You'll want to do the schema
> changes and rewritten db access anyways, to optimize use of each
> engine (postgres is slow if you write mySQL style for it, i.e., redo
> your statments to make use of sub-selects, joins, etc.).

 That's it.

> 3. To handle updates, make sure you have "ghost" records in the new box,
> until you can pull a clean dump. Ghost records are fast, as they
> only need minimal data.

 Ghost records? Would you please elaborate on that? Also, what if we have
two machines (assuming MySQL -> PGSQL), with pretty much identical UI parts
(eg HTTP server + CGI), and one of them is running the old DB, whereas the
other has PGSQL as DB server? Then we use Balance to distribute the
requests in round-robin manner between the two, check the logs, and monitor
behaviour of the new server to catch the errors.
 With this scheme: a) you can always pull the plug on the new server with
no downtime; b) if the new server fails somewhere, the user will repeat the
request and it'll be balanced to the old server automagically with p=0.5

> 4. Once you *can* roll forward and accept all add, selects, updates,
> etc. into the test systems, plan for a brief outage. I like 3.am.,
> others like mignight...just target a slow period.

 Watch the timezones.

> 5. Coffee, scary movies, whatever, roll out late at night. Test.
> Go to sleep when sun comes. A cot in the machine room is helpful.

 Uh? Meaning you _don't_ live in machine room? :)


--

 Well I tried to be meek
  And I have tried to be mild
 But I spat like a woman
  And I sulked like a child
 I have lived behind the walls
  That have made me alone
 Striven for peace
  Which I never have known

 Dire Straits, Brothers In Arms, The Man's Too Strong (Knopfler)


Re: MySQL -> pgsql

From: Ron Chmara
KuroiNeko wrote:
> > 2. Before you roll out: Rewrite your update web/app code to apply to both
> > sets. If you use an rsync script, you can roll this into the "live"
> > site/app fairly quickly, in between hits.
>  Forgive me my ignorance, what's an rsync script?

rsync: It's an r-services piece of software (like rsh, rcp, etc.) designed
to do file/directory synchronizations. It runs over ssh, as well. You set
up rsync on each server (basically, you install/compile it) and you can
sync entire directory trees on a code-line by code-line basis. It sends
only the changes to files, so it's very efficient. You can run it at a low
enough user level that all permissions, dates, etc. remain unchanged...
I use it for code rollouts and server backups. Updating a backup of a 36
Gb server, over a frac T-1, takes 5-10 minutes. It's not really suitable
for backing up/rolling forward a postgres database, but for html files,
web code, anything static, it's just dandy.

> > 3. To handle updates, make sure you have "ghost" records in the new box,
> > until you can pull a clean dump. Ghost records are fast, as they
> > only need minimal data.
>  Ghost records? Would you please elaborate  that?

It's to speed up the testing cycle. Rather than grabbing a full data dump
with each and every record, like:
'Joe Q Longname III' 'jlong'  'Joe likes to vacation in Brazil' '2/5/70'
1 55678299 367773 'Manager of Departmental Services, Western region' 1211
'Cell Phone' 'Coffe Cake' 22344628288826
(times 100,000 records)

You can make simplified, smaller, "ghost" records, which are faster to
move around and test on. For example, in the above record, much of the
data may be completely irrelevant to most testing, and you can use a much
smaller version....
'J Lo' 'jlong'  'Joe li' '2/5/70' 1 55 3667773 'Man' 1211 'Cell' 'Coff' 223

With only the important join keys being used in the test version. You don't
even have to use a subsection of the live data for all records:
'A' 'a'  'a' '2/5/70' 1 1 3667773 'a' 1 'a' 'a' 1

Will let you test your important join, your basic fields types, your add
and update code, and to a certain extent, your selects. It is helpful to
start with _some_ percentage of your live data (to test for field lengths)
but you don't need the whole data set. I call them "ghost" records because
they're only a vague representation of the real data set... and when you're
shoving around 200Mb data files, it's nice to only have a 2Mb test set
for fast drop/reloading, for general testing.

> Also, what if we have two
> machines (assuming MySQL -> PGSQL), with  pretty much identical UI part (eg
> HTTP server + CGI),  and one of them is running an  old DB, whereas another
> one has PGSQL as DB server. Then  we use Balance to distribute the requests
> in round-robin manner between the two? Check the logs, monitor behaviour of
> the new server to catch the errors.
>  With this scheme: a)  you can always pull the plug on  the new server with
> no downtime;  b) if new  server fails somewhere,  the user will  repeat the
> request and it'll be balanced to the old server automagically with p=0.5

Well, this could work, but:
You risk a request going to the new server, winding up in a bit-bucket, and
the user/customer/whatever giving up and leaving. I'm not sure how it is for
others, but I know I also have users who assume errors are only there for
aesthetic reasons (i.e., they assumed it worked, even if they had a blinking,
<H1>FAILURE</H1> error, just because it always worked before...<sigh>)

It all depends on your goals, your environment.... I do some updates this way,
for less mission critical stuff (or errors that users _can't_ ignore) just
because it _IS_ easier and faster. More dangerous, though (what if the new
server has captured the transaction wrong, and the same requests never
make it to the old server?).

> > 4. Once you *can* roll forward and accept all add, selects, updates,
> > etc. into the test systems, plan for a brief outage. I like 3.am.,
> > others like mignight...just target a slow period.
>  Watch the timezones.

Yup. Use traffic monitoring for your definition of "slow". Poland is peaking
on my db's at 3am, but they're slower traffic and less demanding.

-Bop

--
Brought to you from boop!, the dual boot Linux/Win95 Compaq Presario 1625
laptop, currently running RedHat 6.1. Your bopping may vary.

Re: MySQL -> pgsql

From: Andrew Evans
On Fri, Oct 20, 2000 at 12:43:32AM -0700, Ron Chmara wrote:

> I'm a bit late on this thread, but I'm currently wranging a large
> set of migrations:
>  postgreSQL-> Oracle

Would you mind explaining why your company's migrating a PostgreSQL
database to Oracle?  I'm hoping to go the other direction, replacing
Oracle with PostgreSQL for a few commercial web sites.

--
Andrew Evans, Zembu Labs <andrew@zembu.com>

Re: MySQL -> pgsql

From: Ron Chmara
Andrew Evans wrote:
> On Fri, Oct 20, 2000 at 12:43:32AM -0700, Ron Chmara wrote:
> > I'm a bit late on this thread, but I'm currently wranging a large
> > set of migrations:
> >  postgreSQL-> Oracle
> Would you mind explaining why your company's migrating a PostgreSQL
> database to Oracle?  I'm hoping to go the other direction, replacing
> Oracle with PostgreSQL for a few commercial web sites.

Scalability concerns.

Specifically, about 2 GB of new record data+indices per month. 2 GB
of tiny records (it's for data-mining *massive* website logfiles, in
realtime), putting the live records into the 160-million-records-per-table
count.

When we went searching for "large data set" information, we couldn't
find anybody doing this on postgresql, and decided that we didn't
want to risk being first. :-)

Oracle, IMNSHO, sucks rocks for most things, but for scalability, the
table-as-file metaphor has real problems once a single table needs
to hit those insane levels.

-Bop

--
Brought to you from iBop the iMac, a MacOS, Win95, Win98, LinuxPPC machine,
which is currently in MacOS land.  Your bopping may vary.

RE: MySQL -> pgsql

From: "Diehl, Jeffrey"
Well, I'm working on a database that gets about 5 Million new records a day
and I've had no scale-related problems.  I've found that indexing is sort of
expensive for write performance, but I've worked around that issue.  I have
had to put a day's data in a separate table, though.  I've found PostgreSQL
to be rock solid; that's why I'm planning on migrating my smaller mysql
databases to PostgreSQL.

I can't really talk about my application, but if you have any large database
questions you want to ask, feel free.

Mike Diehl.

-----Original Message-----
From: Ron Chmara
To: Andrew Evans
Cc: pgsql-general@postgresql.org
Sent: 10/21/00 1:53 AM
Subject: Re: [GENERAL] MySQL -> pgsql

Andrew Evans wrote:
> On Fri, Oct 20, 2000 at 12:43:32AM -0700, Ron Chmara wrote:
> > I'm a bit late on this thread, but I'm currently wranging a large
> > set of migrations:
> >  postgreSQL-> Oracle
> Would you mind explaining why your company's migrating a PostgreSQL
> database to Oracle?  I'm hoping to go the other direction, replacing
> Oracle with PostgreSQL for a few commercial web sites.

Scalability concerns.

Specifically, about 2 GB of new record data+indicies per month. 2 Gb
of tiny records (it's for data-mining *massive* website logfiles, in
realtime), putting the live records into the 160-million-records-per-table
count.

When we went searching for "large data set" information, we couldn't
find anybody doing this on postgresql, and decided that we didn't
want to risk being first. :-)

Oracle, IMNSHO, sucks rocks for most things, but for scalabilty, the
table-as-file metaphor has real problems once a single table needs
to hit those insane levels.

-Bop

--
Brought to you from iBop the iMac, a MacOS, Win95, Win98, LinuxPPC machine,
which is currently in MacOS land.  Your bopping may vary.


Re: MySQL -> pgsql

From: Peter Eisentraut
Ron Chmara writes:

> When we went searching for "large data set" information, we couldn't
> find anybody doing this on postgresql, and decided that we didn't
> want to risk being first. :-)

Several people have had databases of 60 GB or more.

> Oracle, IMNSHO, sucks rocks for most things, but for scalabilty, the
> table-as-file metaphor has real problems once a single table needs
> to hit those insane levels.

When your tables exceed 1 GB, you get more than one file.

--
Peter Eisentraut      peter_e@gmx.net       http://yi.org/peter-e/


Vexing DBD::Pg problem

From: J B Bell
I've spent several hours today reading lots of docs, learning sneaky ways
to grant access on a whole database's worth of tables at once, and so on,
but have made next to no headway against this exact same error, which comes
up every time I try to run a certain module:

from apache logs:

[Wed Oct 25 14:47:08 2000] [error] DBD::Pg::st execute failed: pqReadData() -- backend closed the channel unexpectedly.
        This probably means the backend terminated abnormally
        before or while processing the request.

I am running:

postgres 7.0.2 (compiled from source)
    running with -i -d2 > postgres.log 2>&1
Linux kernel 2.2.17 on a Debian system

The postgres log shows absolutely nothing; my reading on the web on similar
errors to this seems to show it as being an authorization failure of a kind.
However, auth failures get logged faithfully using a normal client connection.

I assume this somehow has to do with mod_perl.  I can use the module I wrote
(Apache::AuthCookieHandler.pm, part of the Apache::AuthCookie system) from
the command line after modifying it so it doesn't need the Apache->request
object.  Ordinary CGI programs can access the db in question, with the same
authentication information, without apparent difficulty.

I'm quite exhausted & would welcome with great relief any new avenues to check
out with this.  I'm on the list, so no need to Cc: me.  Thanks kindly to all.

--JB

--
------------------------------------------------------------------
                J B Bell            |              /~\
          Systems Administrator     |       ASCII  \ /  Against
            Internet Gateway        |      Ribbon   X   HTML
                                    |    Campaign  / \  Mail

Re: Vexing DBD::Pg problem

From
Tom Lane
Date:
J B Bell <jbbell@intergate.ca> writes:
> [Wed Oct 25 14:47:08 2000] [error] DBD::Pg::st execute failed: pqReadData() -- backend closed the channel
unexpectedly.

That looks a lot like a backend crash.  What query are you issuing?

> The postgres log shows absolutely nothing; my reading on the web on similar
> errors to this seems to show it as being an authorization failure of a kind.
> However, auth failues get logged faithfully using a normal client connection.

Nothing at all?  No reports of child process crashes, nor "unexpected
EOF", nor anything?  If you run the postmaster with -d2, do you see the
queries you're issuing in the log?

            regards, tom lane

Re: Vexing DBD::Pg problem

From
J B Bell
Date:
Hi all.

I have sorted this problem out; it was authentication, as I suspected,
but I stupidly left out a crucial step: when loading modules right
into the Apache server with mod_perl, it's necessary to fully stop and
start the server to re-load a module after you rewrite it . . . sigh.

Tracking the problem was made easier by setting my module to log to a
file; this was before I figured out the joys of $r->log_error in the
Apache module.

If anyone's interested, I have come out the other end with a rather nice
implementation of Apache::AuthCookie that uses MD5 hashes and postgres for
user authentication and authorization.
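(The module itself isn't reproduced here, but a minimal sketch of the hashing
side of such a setup might look like the following. It assumes passwords are
stored as hex MD5 digests; the use of coreutils md5sum and the sample
password are purely illustrative, not the actual module's code:)

```shell
# Sketch: derive the hex MD5 digest of a submitted password, for comparison
# against a digest stored in a postgres column. printf avoids the trailing
# newline that echo would add (which would change the hash).
hash=$(printf '%s' 'password' | md5sum | cut -d' ' -f1)
echo "$hash"   # -> 5f4dcc3b5aa765d61d8327deb882cf99
```

The handler would then compare this digest against the stored one rather than
ever handling a cleartext password column.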

--JB

--
------------------------------------------------------------------
                J B Bell            |              /~\
          Systems Administrator     |       ASCII  \ /  Against
            Internet Gateway        |      Ribbon   X   HTML
                                    |    Campaign  / \  Mail

Re: MySQL -> pgsql

From
Bruce Momjian
Date:
We have added contrib/mysql to 7.1beta, which converts MySQL dumps.  If
you can, please add anything you consider appropriate to it and send
in a diff.  It is written in Perl too.

Thanks.

> I hacked up a small perl script that does a first-pass attempt on
> converting mysql's database dump to
>
> It is dirty, it will probably not work for you, but it'll make the first
> approximation on the conversion, if you are lucky.
>
> -alex
>
> On Thu, 19 Oct 2000, Diehl, Jeffrey wrote:
>
> > I'm also interested, so could you post to the list?
> >
> > Thanx,
> > Mike Diehl,
> > Network Monitoring Tool Devl.
> > 284-3137
> > jdiehl@sandia.gov
> >
> >
> > > -----Original Message-----
> > > From: David Reid [mailto:dreid@jetnet.co.uk]
> > > Sent: October 19, 2000 5:31 AM
> > > To: pgsql-general@postgresql.org
> > > Subject: [GENERAL] MySQL -> pgsql
> > >
> > >
> > > Can someone point me at any resources on the web that can
> > > help with such a
> > > transformation?  I have a series of tables I want to convert.  Thanks.
> > >
> > > Oh, and I'm not on the list so can you email direct! :)
> > >
> > > david
> > >
> >
> >
Content-Description:

[ Attachment, skipping... ]


--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
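(The contrib script itself isn't shown here, but a minimal sketch of the kind
of first-pass rewrite such a converter performs might be the following; the
sed pattern, file name, and table definition are invented for illustration
and cover only one of the many MySQL-isms a real converter handles:)

```shell
# Sketch: rewrite a MySQL-style auto_increment column to a postgres SERIAL.
# A real converter also has to deal with quoting, types, ENUMs, indexes, etc.
cat > /tmp/mysql_dump.sql <<'EOF'
CREATE TABLE hosts (
  id int(11) NOT NULL auto_increment,
  name varchar(64),
  PRIMARY KEY (id)
);
EOF
converted=$(sed -e 's/int([0-9]*) NOT NULL auto_increment/SERIAL/' \
    /tmp/mysql_dump.sql)
printf '%s\n' "$converted"   # "id int(11) NOT NULL auto_increment," becomes "id SERIAL,"
```

The PRIMARY KEY clause can stay as-is, since postgres accepts it inside
CREATE TABLE; the separate-CREATE-INDEX issue mentioned earlier in the thread
applies to MySQL's KEY/INDEX clauses, not to PRIMARY KEY.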

Re: MySQL -> pgsql

From
Alex Pilosov
Date:
My script does essentially the same thing; unfortunately, it's hard to
merge since we use different ways to "parse" the SQL. The script in
contrib is good enough and not much different from what I have, so just
keep it ;)

-alex

On Fri, 19 Jan 2001, Bruce Momjian wrote:

>
> We have added contrib/mysql to 7.1beta that converts MySQL dumps.  If
> you can, please add anything you consider appropriate to that and send
> in a diff.  It is written in Perl too.
>
> Thanks.
>
> > I hacked up a small perl script that does a first-pass attempt on
> > converting mysql's database dump to
> >
> > It is dirty, it will probably not work for you, but it'll make the first
> > approximation on the conversion, if you are lucky.
> >
> > -alex
> >
> > On Thu, 19 Oct 2000, Diehl, Jeffrey wrote:
> >
> > > I'm also interested, so could you post to the list?
> > >
> > > Thanx,
> > > Mike Diehl,
> > > Network Monitoring Tool Devl.
> > > 284-3137
> > > jdiehl@sandia.gov
> > >
> > >
> > > > -----Original Message-----
> > > > From: David Reid [mailto:dreid@jetnet.co.uk]
> > > > Sent: October 19, 2000 5:31 AM
> > > > To: pgsql-general@postgresql.org
> > > > Subject: [GENERAL] MySQL -> pgsql
> > > >
> > > >
> > > > Can someone point me at any resources on the web that can
> > > > help with such a
> > > > transformation?  I have a series of tables I want to convert.  Thanks.
> > > >
> > > > Oh, and I'm not on the list so can you email direct! :)
> > > >
> > > > david
> > > >
> > >
> > >
> Content-Description:
>
> [ Attachment, skipping... ]
>
>
>