Thread: 7.1 Release Date

7.1 Release Date

From
Miguel Omar Carvajal
Date:
Hi there,
   When will Postgresql 7.1 be released?

Miguel

Re: 7.1 Release Date

From
The Hermit Hacker
Date:
On Mon, 28 Aug 2000, Miguel Omar Carvajal wrote:

> Hi there,
>    When will Postgresql 7.1 be released?

right now, we're looking at October-ish for going beta, so most likely
November-ish for a release ...



Re: 7.1 Release Date

From
The Hermit Hacker
Date:
On 29 Aug 2000, Trond Eivind Glomsrød wrote:

> The Hermit Hacker <scrappy@hub.org> writes:
>
> > On Mon, 28 Aug 2000, Miguel Omar Carvajal wrote:
> >
> > > Hi there,
> > >    When will Postgresql 7.1 be released?
> >
> > right now, we're looking at October-ish for going beta, so most likely
> > November-ish for a release ...
>
> Will there be a clean upgrade path this time, or
> yet another dump-initdb-restore procedure?

IMHO, upgrading a database server is like upgrading an operating system
... you schedule downtime, back it all up and upgrade ...

there is the pg_upgrade script, which some people have had varying
degrees of success with, but I've personally never used it ...
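In shell terms, the schedule-downtime/back-it-up/upgrade routine described above looks roughly like this. This is a dry-run sketch that only prints the commands it would run; the paths, the dump location, and the exact command set are assumptions, not an official PostgreSQL procedure:

```shell
#!/bin/sh
# Dry-run sketch of the classic dump/initdb/restore upgrade.
# PGDATA and DUMPFILE are illustrative defaults, not project standards.
PGDATA=${PGDATA:-/var/lib/pgsql/data}
DUMPFILE=${DUMPFILE:-/var/tmp/pg-upgrade-dump.sql}

plan_upgrade() {
    # 1) dump everything with the OLD binaries while the old server runs
    echo "pg_dumpall > $DUMPFILE"
    # 2) stop the old server and move its data directory out of the way
    echo "pg_ctl -D $PGDATA stop"
    echo "mv $PGDATA $PGDATA.old"
    # 3) install the new release, then create a fresh cluster
    echo "initdb -D $PGDATA"
    # 4) start the new server and reload the dump
    echo "pg_ctl -D $PGDATA start"
    echo "psql -f $DUMPFILE template1"
}

plan_upgrade
```

Until the reload succeeds, $PGDATA.old and the dump file are the only copies of the data, which is why the comparison to an operating-system upgrade (scheduled downtime plus a full backup) holds.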



Re: 7.1 Release Date

From
teg@redhat.com (Trond Eivind Glomsrød)
Date:
The Hermit Hacker <scrappy@hub.org> writes:

> On 29 Aug 2000, Trond Eivind Glomsrød wrote:
>
> > The Hermit Hacker <scrappy@hub.org> writes:
> >
> > > On Mon, 28 Aug 2000, Miguel Omar Carvajal wrote:
> > >
> > > > Hi there,
> > > >    When will Postgresql 7.1 be released?
> > >
> > > right now, we're looking at October-ish for going beta, so most likely
> > > November-ish for a release ...
> >
> > Will there be a clean upgrade path this time, or
> > yet another dump-initdb-restore procedure?
>
> IMHO, upgrading a database server is like upgrading an operating system
> ... you schedule downtime, back it all up and upgrade ...

The problem is, this doesn't play that well with upgrading the
database when upgrading the OS, like in most Linux distributions.

--
Trond Eivind Glomsrød
Red Hat, Inc.

Re: 7.1 Release Date

From
teg@redhat.com (Trond Eivind Glomsrød)
Date:
The Hermit Hacker <scrappy@hub.org> writes:

> On Mon, 28 Aug 2000, Miguel Omar Carvajal wrote:
>
> > Hi there,
> >    When will Postgresql 7.1 be released?
>
> right now, we're looking at October-ish for going beta, so most likely
> November-ish for a release ...

Will there be a clean upgrade path this time, or
yet another dump-initdb-restore procedure?

Unclean upgrades are one of the major disadvantages of PostgreSQL for
the time being, IMHO.
--
Trond Eivind Glomsrød
Red Hat, Inc.

Re: 7.1 Release Date

From
The Hermit Hacker
Date:
On 29 Aug 2000, Trond Eivind Glomsrød wrote:

> The Hermit Hacker <scrappy@hub.org> writes:
>
> > On 29 Aug 2000, Trond Eivind Glomsrød wrote:
> >
> > > The Hermit Hacker <scrappy@hub.org> writes:
> > >
> > > > On Mon, 28 Aug 2000, Miguel Omar Carvajal wrote:
> > > >
> > > > > Hi there,
> > > > >    When will Postgresql 7.1 be released?
> > > >
> > > > right now, we're looking at October-ish for going beta, so most likely
> > > > November-ish for a release ...
> > >
> > > Will there be a clean upgrade path this time, or
> > > yet another dump-initdb-restore procedure?
> >
> > IMHO, upgrading a database server is like upgrading an operating system
> > ... you schedule downtime, back it all up and upgrade ...
>
> The problem is, this doesn't play that well with upgrading the
> database when upgrading the OS, like in most Linux distributions.

why not?  pg_dump; pkgrm old; pkgadd new; load ... no?

I use both Solaris and FreeBSD, and it's pretty much "that simple" for both
of those ...



Re: 7.1 Release Date

From
teg@redhat.com (Trond Eivind Glomsrød)
Date:
The Hermit Hacker <scrappy@hub.org> writes:

> On 29 Aug 2000, Trond Eivind Glomsrød wrote:
>
> > The Hermit Hacker <scrappy@hub.org> writes:
> >
> > > On 29 Aug 2000, Trond Eivind Glomsrød wrote:
> > >
> > > > The Hermit Hacker <scrappy@hub.org> writes:
> > > >
> > > > > On Mon, 28 Aug 2000, Miguel Omar Carvajal wrote:
> > > > >
> > > > > > Hi there,
> > > > > >    When will Postgresql 7.1 be released?
> > > > >
> > > > > right now, we're looking at October-ish for going beta, so most likely
> > > > > November-ish for a release ...
> > > >
> > > > Will there be a clean upgrade path this time, or
> > > > yet another dump-initdb-restore procedure?
> > >
> > > IMHO, upgrading a database server is like upgrading an operating system
> > > ... you schedule downtime, back it all up and upgrade ...
> >
> > The problem is, this doesn't play that well with upgrading the
> > database when upgrading the OS, like in most Linux distributions.
>
> why not?  pg_dump; pkgrm old; pkgadd new; load ... no?

Because the system is down during this upgrade - the database isn't
running. Also, an automated dump might lead to data loss if space becomes
an issue.

--
Trond Eivind Glomsrød
Red Hat, Inc.

Re: 7.1 Release Date

From
The Hermit Hacker
Date:
On 29 Aug 2000, Trond Eivind Glomsrød wrote:

> The Hermit Hacker <scrappy@hub.org> writes:
>
> > On 29 Aug 2000, Trond Eivind Glomsrød wrote:
> >
> > > The Hermit Hacker <scrappy@hub.org> writes:
> > >
> > > > On 29 Aug 2000, Trond Eivind Glomsrød wrote:
> > > >
> > > > > The Hermit Hacker <scrappy@hub.org> writes:
> > > > >
> > > > > > On Mon, 28 Aug 2000, Miguel Omar Carvajal wrote:
> > > > > >
> > > > > > > Hi there,
> > > > > > >    When will Postgresql 7.1 be released?
> > > > > >
> > > > > > right now, we're looking at October-ish for going beta, so most likely
> > > > > > November-ish for a release ...
> > > > >
> > > > > Will there be a clean upgrade path this time, or
> > > > > yet another dump-initdb-restore procedure?
> > > >
> > > > IMHO, upgrading a database server is like upgrading an operating system
> > > > ... you scheduale downtime, back it all up and upgrade ...
> > >
> > > The problem is, this doesn't play that well with upgrading the
> > > database when upgrading the OS, like in most Linux distributions.
> >
> > why not?  pg_dump; pkgrm old; pkgadd new; load ... no?
>
> Because the system is down during this upgrade - the database isn't
> running. Also, an automated dump might lead to data loss if space becomes
> an issue.

woah, I'm confused here ... are you saying that you want to upgrade the
database server at the same time, and in conjunction with, upgrading the
Operating System?


Re: 7.1 Release Date

From
Tom Lane
Date:
teg@redhat.com (Trond Eivind Glomsrød) writes:
> Will there be a clean upgrade path this time, or
> yet another dump-initdb-restore procedure?

Still TBD, I think --- right now pg_upgrade would still work, but if
Vadim finishes WAL there's going to have to be a dump/reload for that.

Another certain dump/reload in the foreseeable future will come from
adding tablespace support/changing file naming conventions.

> Unclean upgrades are one of the major disadvantages of PostgreSQL for
> the time being, IMHO.

You can always stick to Postgres 6.5 :-).  There are certain features
that just cannot be added without redoing the on-disk table format.
I don't think we will ever want to promise "no more dump/reload";
if we do, it will mean that Postgres has stopped improving.

            regards, tom lane

Re: 7.1 Release Date

From
teg@redhat.com (Trond Eivind Glomsrød)
Date:
Tom Lane <tgl@sss.pgh.pa.us> writes:

> teg@redhat.com (Trond Eivind Glomsrød) writes:
> > Will there be a clean upgrade path this time, or
> > yet another dump-initdb-restore procedure?
>
> Still TBD, I think --- right now pg_upgrade would still work, but if
> Vadim finishes WAL there's going to have to be a dump/reload for that.
>
> Another certain dump/reload in the foreseeable future will come from
> adding tablespace support/changing file naming conventions.
>
> > Unclean upgrades are one of the major disadvantages of PostgreSQL for
> > the time being, IMHO.
>
> You can always stick to Postgres 6.5 :-).  There are certain features
> that just cannot be added without redoing the on-disk table format.
> I don't think we will ever want to promise "no more dump/reload";
> if we do, it will mean that Postgres has stopped improving.

Not necessarily - one could either design an on-disk format with room
for expansion or create migration tools to add new fields.

--
Trond Eivind Glomsrød
Red Hat, Inc.

Re: 7.1 Release Date

From
Tom Lane
Date:
teg@redhat.com (Trond Eivind Glomsrød) writes:
> Tom Lane <tgl@sss.pgh.pa.us> writes:
>> You can always stick to Postgres 6.5 :-).  There are certain features
>> that just cannot be added without redoing the on-disk table format.
>> I don't think we will ever want to promise "no more dump/reload";
>> if we do, it will mean that Postgres has stopped improving.

> Not necessarily - one could either design an on-disk format with room
> for expansion or create migration tools to add new fields.

"Room for expansion" isn't necessarily the issue --- sometimes you
just have to fix wrong decisions.  The table-file-naming business is
a perfect example.

Migration tools might ease the pain, sure (though I'd still recommend
doing a full backup before a major version upgrade, just on safety
grounds; so the savings afforded by a tool might not be all that much).

Up to now, the attitude of the developer community has mostly been
that our TODO list is a mile long and we'd rather spend our limited
time on bug fixes and new features than on migration tools --- both
because it seemed like the right set of priorities for the project,
and because fixes/features are fun while tools are just work ;-).
But perhaps that is an area where Great Bridge and PostgreSQL Inc can
make some contributions using support-contract funding.

            regards, tom lane

Re: 7.1 Release Date

From
Lamar Owen
Date:
Tom Lane wrote:

> Migration tools might ease the pain, sure (though I'd still recommend
> doing a full backup before a major version upgrade, just on safety
> grounds; so the savings afforded by a tool might not be all that much).

What is needed, IMHO, is a replacement for the pg_upgrade script that can
do the following:
1.)    Read _any_ previous version's format data files;
2.)    Write the current version's data files (without a running
postmaster).

This replacement (call it pg_upgrade to confuse everybody) would be
called as: pg_upgrade OLDPGDATA NEWPGDATA and would simply Do The Right
Thing for that directory -- including making an ASCII dump (command line
switch, perhaps), checking disk space, robust error detection, and
_seamless_ upgrading of system catalogs and indices (all it needs to do
is call initdb on the NEWPGDATA tree, right?).  The key is seamless.
The second key is _without_ a running postmaster.  Much of pg_dump's
code would be needed as well, to generate an ASCII dump.
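A minimal sketch of how such a standalone tool might be invoked and what it would have to do. Everything below is hypothetical: `convert-heap-files` and `rebuild-indices` are invented placeholder names for the format-translation work described above, which does not exist yet:

```shell
#!/bin/sh
# Hypothetical plan for "pg_upgrade OLDPGDATA NEWPGDATA".
# convert-heap-files and rebuild-indices are invented names for
# the on-disk format translation described in the text.
pg_upgrade_plan() {
    old=$1
    new=$2
    # refuse to run against something that isn't a data directory
    [ -f "$old/PG_VERSION" ] || { echo "no PG_VERSION in $old" >&2; return 1; }
    echo "initdb -D $new                 # fresh system catalogs, no postmaster"
    echo "convert-heap-files $old $new   # hypothetical: old on-disk format -> new"
    echo "rebuild-indices -D $new        # hypothetical: indices get rebuilt, not copied"
}
```

Invocation would then be, for example, `pg_upgrade_plan /var/lib/pgsql/data.old /var/lib/pgsql/data` -- the point being that the tool reads the old files directly and never needs a running postmaster.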

Now, this new pg_upgrade would have to know a great deal about data file
formats (but, of course, since we're on CVS, getting the old code to do
the old formats is as simple as checking out the old version, right?).

HOWEVER, I see no way around the fact that a core developer needs
to be the one to do this utility.  In particular, the developer to write
this utility needs to know the backend code as well or better than any
other developer -- and, Tom, that person sounds like you.

Now, it _may_ be possible for another developer to do this -- and, if I
thought my grasp of the backend was good enough I would go ahead and
volunteer -- in fact, if I can get the help I need to do it, and the
time to do it in, I _will_ volunteer.  Of course, it will take me much
longer to make a working tool, as I'm going to have to learn what Tom
(and others) already know -- but I am willing to put in the time to make
this work _right_.  This upgrade issue has been a thorn in my side far
too long.

And, to answer the questions:  currently, the RPM's have to upgrade the
way they do due to the fact that they are called during an OS upgrade
cycle -- if you are running RedHat 6.2 with the 6.5.3-6 PostgreSQL RPM's
installed, and you upgrade to Pinstripe (the RH 7 public beta), which
give you 7.0.2 RPM's, the binaries necessary to extract the data from
PGDATA are going to be wiped away by the upgrade -- currently, they are
being backed up by the RPM's pre-install script so that an upgrade
script can then call them into service after the hapless user has
figured out that PostgreSQL doesn't upgrade smoothly.  This is fine and
good as long as Pinstripe can run the old binaries -- which might not be
true for the next dot-oh RedHat upgrade!

Actually, that is true _now_ if a RedHat 4.x user attempts to upgrade to
Pinstripe -- correct me if I'm wrong, Trond.

We NEED this 'pg_upgrade'-on-steroids program that simply Does The Right
Thing.  Furthermore, with a little work, this program could be used to
salvage broken databases.  But imagine upgrading from Postgres95 1.01 to
PostgreSQL 7.1.0 with a single pg_upgrade command AFTER loading 7.1.0
(besides, there's many bugs in pre-6.3 pg_dump, right?  A dump/restore
won't work there anyway).  Imagine a simple upgrade for those folks who
use large objects.  It should be doable.

Note that ANY RPM-based distribution is going to have this same
problem.  Yes, Tom, the RPM-based OS's upgrade procedures are
brain-dead.  But, it can also be argued that our dump/restore upgrade
procedure is also brain-dead.

I think it's high time that the dump/initdb/restore cycle was
retired as a normal upgrading step.

Or, to put it into 'fighting words', 'mysql doesn't have this problem.'

--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

Re: 7.1 Release Date

From
Brook Milligan
Date:
   We NEED this 'pg_upgrade'-on-steroids program that simply Does The Right
   Thing.

   I think it's high time that the dump/initdb/restore cycle was
   retired as a normal upgrading step.

YOU (i.e., people relying on the RH stuff to do everything at once)
may need such a thing, but it seems like you are overstating the case
just a bit.  If this project gets adopted by core developers, it would
seem to conflict drastically with the goal of developing the core
functionality.  Thus, it's not quite "high time" for this.

There is nothing inherently different (other than implementation
details) about the basic procedure for upgrading the database as
compared to upgrading user data of any sort.  In each case, you need
to go through the steps of 1) dump data to a secure place, 2) destroy
the old stuff, 3) add new stuff, and 4) restore the old data.  In the
case of "normal" user data (home directories and such) the
dump/restore sequence can be performed using exactly those commands or
tar or dd or whatever.  In the case of the database we have the
pg_dump/psql commands.  In either case, the person doing the upgrade
must have enough of a clue to have made an appropriate dump in the
first place before trashing their system.  If the person lacks such a
clue, the solution is education (e.g., make the analogy explicit, show
the tools required, make pg_dump more robust, ...) not redirecting the
precious resources of core developers to duplicate the database system
in a standalone program for upgrades.

Cheers,
Brook


Re: 7.1 Release Date

From
teg@redhat.com (Trond Eivind Glomsrød)
Date:
Lamar Owen <lamar.owen@wgcr.org> writes:

> And, to answer the questions:  currently, the RPM's have to upgrade the
> way they do due to the fact that they are called during an OS upgrade
> cycle -- if you are running RedHat 6.2 with the 6.5.3-6 PostgreSQL RPM's
> installed, and you upgrade to Pinstripe (the RH 7 public beta), which
> give you 7.0.2 RPM's, the binaries necessary to extract the data from
> PGDATA are going to be wiped away by the upgrade -- currently, they are
> being backed up by the RPM's pre-install script so that an upgrade
> script can then call them into service after the hapless user has
> figured out that PostgreSQL doesn't upgrade smoothly.  This is fine and
> good as long as Pinstripe can run the old binaries -- which might not be
> true for the next dot-oh RedHat upgrade!
>
> Actually, that is true _now_ if a RedHat 4.x user attempts to upgrade to
> Pinstripe -- correct me if I'm wrong, Trond.

For Red Hat 4.x, that would be true - we don't support the ancient
libc5 anymore (OTOH, we didn't include Postgres95 at the time either).

> Note that ANY RPM-based distribution is going to have this same
> problem.

Not just RPM-based - any distribution that upgrades when the system is
offline.

> Yes, Tom, the RPM-based OS's upgrade procedures are brain-dead.

No, it's not - it's just not making assumptions like "enough space is
present to dump everything somewhere" (if you have a multiGB database,
dumping it to upgrade sounds like a bad idea), "the database server is
running, so I can just dump the data" etc.

> But, it can also be argued that our dump/restore upgrade procedure
> is also brain-dead.

This is basically "no upgrade path. But you can dump your old data
before upgrading. And you can insert data in the new database".



--
Trond Eivind Glomsrød
Red Hat, Inc.

Re: 7.1 Release Date

From
teg@redhat.com (Trond Eivind Glomsrød)
Date:
Brook Milligan <brook@biology.nmsu.edu> writes:

> YOU (i.e., people relying on the RH stuff to do everything at once)
> may need such a thing, but it seems like you are overstating the case
> just a bit.  If this project gets adopted by core developers, it would
> seem to conflict drastically with the goal of developing the core
> functionality.

Upgradability is also functionality.

> There is nothing inherently different (other than implementation
> details) about the basic procedure for upgrading the database as
> compared to upgrading user data of any sort.  In each case, you need
> to go through the steps of 1) dump data to a secure place, 2) destroy
> the old stuff, 3) add new stuff, and 4) restore the old data.  In the
> case of "normal" user data (home directories and such) the
> dump/restore sequence can be performed using exactly those commands or
> tar or dd or whatever.

You usually don't do that at all - the home directories and the users'
data stay just the way they are.

--
Trond Eivind Glomsrød
Red Hat, Inc.

Re: 7.1 Release Date

From
Alfred Perlstein
Date:
* Brook Milligan <brook@biology.nmsu.edu> [000829 12:07] wrote:
>    We NEED this 'pg_upgrade'-on-steroids program that simply Does The Right
>    Thing.
>
>    I think it's high time that the dump/initdb/restore cycle was
>    retired as a normal upgrading step.
>
> YOU (i.e., people relying on the RH stuff to do everything at once)
> may need such a thing, but it seems like you are overstating the case
> just a bit.  If this project gets adopted by core developers, it would
> seem to conflict drastically with the goal of developing the core
> functionality.  Thus, it's not quite "high time" for this.
>
> There is nothing inherently different (other than implementation
> details) about the basic procedure for upgrading the database as
> compared to upgrading user data of any sort.  In each case, you need
> to go through the steps of 1) dump data to a secure place, 2) destroy
> the old stuff, 3) add new stuff, and 4) restore the old data.  In the
> case of "normal" user data (home directories and such) the
> dump/restore sequence can be performed using exactly those commands or
> tar or dd or whatever.  In the case of the database we have the
> pg_dump/psql commands.  In either case, the person doing the upgrade
> must have enough of a clue to have made an appropriate dump in the
> first place before trashing their system.  If the person lacks such a
> clue, the solution is education (e.g., make the analogy explicit, show
> the tools required, make pg_dump more robust, ...) not redirecting the
> precious resources of core developers to duplicate the database system
> in a standalone program for upgrades.

Actually, you make the process sound way too evil; a slightly more
complex procedure can leave you fully operational if anything goes wrong:

install new postgresql
start new version on alternate port
suspend updating data (but not queries)
do a direct pg_dump into the new version
       (I think you need to export PGPORT to use the alternate port)
suspend all queries
shutdown old version
restart new version on default port
resume queries

if (problems == 0) {
  resume updates;
} else {
  stop updates and queries;
  shutdown new
  restart old
  resume normal operations
}

Ok, it's a LOT more complex, but with careful planning pain may be
kept to an acceptable minimum.
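The outline above, rendered as a dry-run shell sketch that only prints the commands it would run. The ports, the paths, and passing the port via -p are assumptions, not a tested recipe:

```shell
#!/bin/sh
# Dry-run sketch of the parallel-server migration outlined above.
OLD_PORT=5432                       # the production server
NEW_PORT=5433                       # spare port for the new version
NEWDATA=/var/lib/pgsql/data-new    # illustrative path for the new cluster

plan_migration() {
    # new binaries are already installed alongside the old ones
    echo "initdb -D $NEWDATA"
    echo "postmaster -D $NEWDATA -p $NEW_PORT &"
    # updates suspended; copy the data while the old server still answers queries
    echo "pg_dumpall -p $OLD_PORT | psql -p $NEW_PORT template1"
    # brief outage: take both down, restart the new one on the default port
    echo "pg_ctl stop                        # old server on $OLD_PORT"
    echo "pg_ctl -D $NEWDATA stop"
    echo "postmaster -D $NEWDATA -p $OLD_PORT &"
    # rollback: the old data directory was never modified, so if anything
    # goes wrong, stop the new server and restart the old one as-is
}

plan_migration
```

The rollback branch is what makes this safer than dumping and restoring in place: the old cluster stays untouched until the new one has proven itself.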

--
-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]
"I have the heart of a child; I keep it in a jar on my desk."

Re: 7.1 Release Date

From
Lamar Owen
Date:
Brook Milligan wrote:

>    We NEED this 'pg_upgrade'-on-steroids program that simply Does The Right
>    Thing.

>    I think it's high time that the dump/initdb/restore cycle was
>    retired as a normal upgrading step.

> YOU (i.e., people relying on the RH stuff to do everything at once)
> may need such a thing, but it seems like you are overstating the case
> just a bit.  If this project gets adopted by core developers, it would
> seem to conflict drastically with the goal of developing the core
> functionality.  Thus, it's not quite "high time" for this.

Does a dump/restore from 6.5.3 to 7.1.0 properly handle large objects
yet?  (I know Philip Warner is working on it -- but that is NOT going to
help the person running an old version wanting to upgrade.)  I would
dare say that there are more users of PostgreSQL running on RedHat than
all other platforms combined.

That's fine -- if PostgreSQL doesn't want to cater to newbies who simply
want it to work, then someone else will cater to them.  Personally, I
believe the 'newbie niche' is one of many niches that PostgreSQL fills
very effectively -- until the hapless newbie upgrades his OS and trashes
his database in the process.  Then he goes and gets someone else's
database and badmouths PostgreSQL. (as to those other niches, I benched
my OpenACS installation yesterday at 10.5 pages per second -- where each
page involved 7-10 SQL queries -- with a concurrent load of 50
connections.  PostgreSQL's speed and scalability are major benefits --
its relative ease of installation and administration is another major
benefit).

Education is nice -- but, tell me, first of all, how is the newbie to
find it?  Release notes that don't get put on the disk until it's
already too late to do a proper dump/restore?  Sure, old hands at
PostgreSQL know the drill -- I know to uncheck PostgreSQL during OS
upgrades.  But, even that doesn't help if the new version of the OS
can't run the old version's binaries.

This is not the first time I've mentioned this -- nor is it the first
time it has been called into question.  This upgrading issue is already
wearing thin at RedHat (or didn't you notice Trond's message?) -- it
would not surprise me in the least to see PostgreSQL dropped from the
RedHat distribution in favor of InterBase or MySQL if this issue isn't
fixed for 7.1.  Sure, it's their loss -- unless you actually want
PostgreSQL to be more popular, which I would like.  Even if RedHat drops
PostgreSQL, I'm likely to remain with it -- at least until InterBase's
AOLserver driver is up to par, and OpenACS is fully ported over to
InterBase.  Well, even then I'll likely remain with PostgreSQL, as it
works, I know it (relatively well), and the development community is
great to work with.

> first place before trashing their system.  If the person lacks such a
> clue, the solution is education (e.g., make the analogy explicit, show
> the tools required, make pg_dump more robust, ...) not redirecting the
> precious resources of core developers to duplicate the database system
> in a standalone program for upgrades.

No one outside the PostgreSQL developer community understands why it is
such an issue to require dump/restore at _every_single_ minor update --
oops, sorry, major update where their minor is our major.  Or, to put
it differently -- mysql doesn't have this problem.  Sure, mysql has
plenty of problems, but this isn't one of them.

Did you also miss where I'm willing to do the legwork myself?  I've reached
that point of aggravation over this -- but, then again, I get the 100+
emails a week about the RPM set, and I get the ire of newbies who are
dumbfounded that they have to be _that_careful_ during updates. Maybe I
_am_ a little too vehement over this -- but, I am not alone.  I know
Trond shares my frustration -- amongst others.

Just how long would such a program take to write, anyway?  Probably not
nearly as long as you might suspect, since such a program is just a
translator, taking input in one format and rewriting it to another
format. You just have to know what to translate and how to translate --
there are details of course (such as pg_log handling), but the basics
are already coded in the existing backends of the many versions. There's
no SQL parsing or executing to deal with -- just reading in one format
and writing in another.

In fact, you would only need to support upgrades from 9 versions (1.01,
1.09, 6.0, 6.1, 6.2, 6.3, 6.4, 6.5, 7.0) to make this work -- and some of
those versions have the same binary format (am I right on that, Tom,
Bruce, Thomas, or Vadim?).  IIRC, the binary format changed at 6.5 -- so
you basically have pre-6.5 and post-6.5 data to worry about, as the other
changes that require the dump/initdb/restore are system catalog issues,
right?  Since the new pg_upgrade would do an initdb as part of its
operation (in the new directory), the old system catalogs will only have
to be read for certain things, I would think.

Comments?

If we don't do it, someone else will. Yes, maybe I overstated the issue
-- unless you agree that RedHat's continued distribution of PostgreSQL
is a good thing.

If such a program were already written, wouldn't you use it, Brook?

--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

Re: 7.1 Release Date

From
Lamar Owen
Date:
Trond Eivind Glomsrød wrote:
> For Red Hat 4.x, that would be true - we don't support the ancient
> libc5 anymore (OTOH, we didn't include Postgres95 at the time either).

There were RPM's at the time, though -- I ran 6.1.1 for nearly a year on
RedHat 4.1 until I upgraded to RH 5 (which shipped 6.2.1) after being
cracked.  Good thing it was a reinstall from scratch -- those 6.1.1 RPMs
were _very_ different from what RedHat shipped in 5.0.

> > Note that ANY RPM-based distribution is going to have this same
> > problem.

> Not just RPM-based - any distribution that upgrades when the system is
> offline.

Like Debian.  Of course, the RPM postgresql-dump script came from the
Debian packages -- so Oliver knows where I'm coming from.  However,
Debian upgrading is more intelligent in many areas than RPM upgrading
is.

> > Yes, Tom, the RPM-based OS's upgrade procedures are brain-dead.

> No, it's not - it's just not making assumptions like "enough space is
> present to dump everything somewhere" (if you have a multiGB database,
> dumping it to upgrade sounds like a bad idea), "the database server is
> running, so I can just dump the data" etc.

'Brain-dead' meaning WRT upgrading RPMs...:
1.)    I can't start a backend to dump data if the RPM is installing under
anaconda;
2.)    I can't check to see if a backend is running (as an RPM pre or post
script can't use ps or cat /proc reliably (according to Jeff Johnson) if
that pre or post script is running under anaconda);
3.)    I can't even check to see if the RPM is installing under anaconda!
(ie, to have a more interactive upgrade if the RPM -U is from the
command line, a check for the dump, or a confirmation from the user that
he/she knows what they're getting ready to do) -- in fact, I would
prefer to abort the upgrade of postgresql RPM's in anaconda as it
currently stands -- but that might easily abort the whole install!
4.)    I'm not guaranteed of package upgrade order with split packages;
5.)    I'm not even guaranteed to have basic system commands available,
unless I Prereq: them in the RPM (which is the fix for that);
6.)    The installation chroot system is flaky (again, according to Jeff
Johnson) -- the fewer things you do, the better.  My current backing up
of the old executables was really more than Jeff wanted to see.  Maybe
this is fixed in Pinstripe.
7.)    The requirements and script orders are not as well documented as one
might want.
8.)    If I need to do complex operations to upgrade a package, it
shouldn't be a problem to do so in a pre install script -- but it is a
big problem.  There _are_ other packages that require some _interesting_
steps to upgrade....

> > But, it can also be argued that our dump/restore upgrade procedure
> > is also brain-dead.

> This is basically "no upgrade path. But you can dump your old data
> before upgrading. And you can insert data in the new database".

Vegetable upgrades.  You have really trimmed it to essentials --
PostgreSQL has no upgrade path in actuality.  I seem to remember several
messages to this list in the past about problems with restoring data
dumped under older versions....

Upgrades should just be this simple:
Install new version.
Start new version's postmaster, which issues a 'pg_upgrade' in safest
mode.
If pg_upgrade fails for any reason, get DBA intervention, otherwise,
just start the postmaster already!

This could just as easily be:
Install new version.
Run pg_upgrade if required.
Start postmaster, and it just runs.

It SHOULD be that simple.  It CAN be that simple.  Effort HAS been
expended already on this issue -- there is a pg_upgrade script already
written that tries to do some of this, but without actually translating
the contents of the relation files.  Maybe we should file this as a bug
against pg_upgrade :-).
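The "it should be that simple" flow could be a thin wrapper around the server start script. Everything here is hypothetical (a version-aware pg_upgrade of this kind does not exist); the sketch only shows how little logic the happy path would need:

```shell
#!/bin/sh
# Hypothetical start script implementing: install new version,
# run pg_upgrade if required, then just start the postmaster.
PGDATA=${PGDATA:-/var/lib/pgsql/data}
PGLIB=${PGLIB:-/usr/lib/pgsql}

start_postmaster() {
    installed=$(cat "$PGLIB/PG_VERSION")   # hypothetical: version of the binaries
    ondisk=$(cat "$PGDATA/PG_VERSION")     # version that created the data directory
    if [ "$installed" != "$ondisk" ]; then
        # run the (hypothetical) upgrader in safest mode; on failure,
        # stop and demand DBA intervention instead of starting anyway
        pg_upgrade "$PGDATA" || {
            echo "pg_upgrade failed; DBA intervention required" >&2
            return 1
        }
    fi
    postmaster -D "$PGDATA"
}
```

An init script or RPM post-install would then only ever call start_postmaster; the user never sees the dump/initdb/restore cycle at all.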
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

Re: 7.1 Release Date

From
Andrew Sullivan
Date:
On Tue, Aug 29, 2000 at 03:33:48PM -0400, Lamar Owen wrote:

> This upgrading issue is already wearing thin at RedHat (or didn't
> you notice Trond's message) -- it would not surprise me in the
> least to see PostgreSQL dropped from the RedHat distribution in
> favor of InterBase or MySQL if this issue isn't fixed for 7.1.

Why don't they just do a test, and then echo an explanation of why
the old Postgres can't be updated?  That's the way it works in
Debian, and I don't see anything wrong with it.  I can't believe that
Red Hat figures its package management is so good that it will
handle all cases, and then blames everyone else when the package
management breaks the packages.

A

--
Andrew Sullivan                                      Computer Services
<sullivana@bpl.on.ca>                        Burlington Public Library
+1 905 639 3611 x158                                   2331 New Street
                                   Burlington, Ontario, Canada L7R 1J4

Re: 7.1 Release Date

From
teg@redhat.com (Trond Eivind Glomsrød)
Date:
Lamar Owen <lamar.owen@wgcr.org> writes:

> 'Brain-dead' meaning WRT upgrading RPMs...:
> 1.)    I can't start a backend to dump data if the RPM is installing under
> anaconda;

You can try, but I don't see it as a good idea.

> 2.)    I can't check to see if a backend is running (as an RPM pre or post
> script can't use ps or cat /proc reliably (according to Jeff Johnson) if
> that pre or post script is running under anaconda);

This should work, I think.

> 3.)    I can't even check to see if the RPM is installing under anaconda!

That should be irrelevant, actually - RPM is designed to be
non-interactive. The best place to do this would probably be in the
condrestart, which is usually run when upgrading and restarts the
server if it is already running.

> (ie, to have a more interactive upgrade if the RPM -U is from the
> command line, a check for the dump, or a confirmation from the user that
> he/she knows what they're getting ready to do)

rpm is non-interactive by design.

> 4.)    I'm not guaranteed of package upgrade order with split packages;

Prereq versions of the other components.

> 5.)    I'm not even guaranteed to have basic system commands available,
> unless I Prereq: them in the RPM (which is the fix for that);

Yup.

> 6.)    The installation chroot system is flaky (again, according to Jeff
> Johnson) -- the fewer things you do, the better.

No. Yes.

> 7.)    The requirements and script orders are not as well documented as one
> might want.

More documentation is being worked on.
>
> Upgrades should just be this simple:
> Install new version.
> Start new version's postmaster, which issues a 'pg_upgrade' in safest
> mode.
> If pg_upgrade fails for any reason, get DBA intervention, otherwise,
> just start the postmaster already!

I would love that.
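The ideal flow quoted above could be sketched as a small wrapper script. This is a hypothetical sketch only: the pg_upgrade of this era is a shell script whose exact options vary between versions, and PGDATA and the invocation shown are assumptions, not the real tool's interface.

```shell
#!/bin/sh
# Hypothetical sketch of the "upgrade in safest mode, then just start
# the postmaster" flow quoted above.  The pg_upgrade flags and paths
# here are assumptions, not the actual script's options.
PGDATA=/usr/local/pgsql/data

if pg_upgrade -d "$PGDATA"; then
    # Upgrade succeeded: just start the postmaster already!
    postmaster -D "$PGDATA" &
else
    echo "pg_upgrade failed -- DBA intervention required" >&2
    exit 1
fi
```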

--
Trond Eivind Glomsrød
Red Hat, Inc.

Re: 7.1 Release Date

From
Lamar Owen
Date:
Trond Eivind Glomsrød wrote:
> Lamar Owen <lamar.owen@wgcr.org> writes:
> > 'Brain-dead' meaning WRT upgrading RPMs...:
> > 1.)   I can't start a backend to dump data if the RPM is installing under
> > anaconda;

> You can try, but I don't see it as a good idea.

Oh, it's a very impractical idea from the standpoint of the glibc
version, the incompleteness of the system state mid-upgrade, etc.
However, from the point of view of the PostgreSQL dataset, it would be
nice to dump it BEFORE the old package is blown away, rather than deal
with the mess we have now...

> > 2.)   I can't check to see if a backend is running (as an RPM pre or post
> > script can't use ps or cat /proc reliably (according to Jeff Johnson) if
> > that pre or post script is running under anaconda);

> This should work, I think.

Quoting Jeff Johnson:
Jeff Johnson wrote:
> The Red Hat install environment is a chroot. That means no daemons,
> no network, no devices, nothing. Even sniffing /proc can be problematic
> in certain cases.

ps, of course, uses /proc....

> > 3.)   I can't even check to see if the RPM is installing under anaconda!

> That should be irrelevant, actually - RPM is designed to be
> non-interactive. The best place to do this would probably be in the
> condrestart, which is usually run when upgrading and restarts the
> server if it is already running.

But condrestart doesn't exist in the old version....of course, the new
version initscript is in place by then....

> > (ie, to have a more interactive upgrade if the RPM -U is from the
> > command line, a check for the dump, or a confirmation from the user that
> > he/she knows what they're getting ready to do)

> rpm is non-interactive by design.

And, IMHO, it is brain-dead to preclude user interaction when
interaction is necessary.  Up until now, PostgreSQL upgrades have been
difficult to automate -- maybe that can be fixed by 7.1 (PostgreSQL
release).

> > 4.)   I'm not guaranteed of package upgrade order with split packages;

> Prereq versions of the other components.

Well, it wasn't quite that simple with the RH 6.0-> 6.1 upgrade
(PostgreSQL 6.4.2-> PostgreSQL 6.5.2), as the number and names of the
packages themselves changed.

> > 7.)   The requirements and script orders are not as well documented as one
> > might want.

> More documentation is being worked on.

Good.

> > Upgrades should just be this simple:
> > Install new version.
> > Start new version's postmaster, which issues a 'pg_upgrade' in safest
> > mode.
> > If pg_upgrade fails for any reason, get DBA intervention, otherwise,
> > just start the postmaster already!

> I would love that.

So would I, and many other folk, even those who are not using
prepackaged binary distributions.  In fact, I just saw a message about
the upgrade procedure float by....

--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

Re: 7.1 Release Date

From
Tom Lane
Date:
Lamar Owen <lamar.owen@wgcr.org> writes:
> Did you also miss where I'm willing to do the legwork myself?

If you want to write and maintain such an update program, no one is
going to stand in your way ;-).

Personally I am not going to work on such a thing; I have a very long
list of to-do items that I consider more pressing and more interesting.
But hey, it's open source: *you* can fix the problems that *you*
consider pressing and interesting.  Go for it.

            regards, tom lane

Re: 7.1 Release Date

From
Tom Lane
Date:
Lamar Owen <lamar.owen@wgcr.org> writes:
>> Personally I am not going to work on such a thing; I have a very long
>> list of to-do items that I consider more pressing and more interesting.

> Like Outer Joins?

Yup.  I'm hacking on that right now, in fact (when not reading email...)

Ultimately, there's no point in having a spiffy upgrade process unless
you have a new version that's worth upgrading to ... so I hope you won't
mind too much if people concentrate on features/fixes instead.

            regards, tom lane

Re: 7.1 Release Date

From
Lamar Owen
Date:
Tom Lane wrote:
> Lamar Owen <lamar.owen@wgcr.org> writes:
> > Did you also miss where I'm willing to do the legwork myself?

> If you want to write and maintain such an update program, no one is
> going to stand in your way ;-).

I was afraid you'd say that. :-)  As long as I can get questions
answered here about the gory details, and without laughing too hard at
my missteps, I'll see if I can tackle this.

> Personally I am not going to work on such a thing; I have a very long
> list of to-do items that I consider more pressing and more interesting.

Like Outer Joins? (which I also consider more pressing and more
interesting -- and more out of my reach....).

> But hey, it's open source: *you* can fix the problems that *you*
> consider pressing and interesting.  Go for it.

Pressing, yes.  Interesting?  Not particularly.  Useful? Most
definitely. Educational?  I'm liable to learn quite a bit.

--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

RE: 7.1 Release Date

From
Bill Barnes
Date:
Oh, if only I could be so sanguine about my learning curve in matters of
Linux, PostgreSQL, Enhydra, Glade, gnome-db, bonobo, HTML, XML, DHCP, NIS,
NFS, DNS, ABC, XYZ, ETC, ETC, ETC.

Bill

>===== Original Message From Lamar Owen <lamar.owen@wgcr.org> =====
>Tom Lane wrote:
>> Lamar Owen <lamar.owen@wgcr.org> writes:
>> > Did you also miss where I'm willing to do the legwork myself?
>
>> If you want to write and maintain such an update program, no one is
>> going to stand in your way ;-).
>
>I was afraid you'd say that. :-)  As long as I can get questions
>answered here about the gory details, and without laughing too hard at
>my missteps, I'll see if I can tackle this.
>
>> Personally I am not going to work on such a thing; I have a very long
>> list of to-do items that I consider more pressing and more interesting.
>
>Like Outer Joins? (which I also consider more pressing and more
>interesting -- and more out of my reach....).
>
>> But hey, it's open source: *you* can fix the problems that *you*
>> consider pressing and interesting.  Go for it.
>
>Pressing, yes.  Interesting?  Not particularly.  Useful? Most
>definitely. Educational?  I'm liable to learn quite a bit.
>
>--
>Lamar Owen
>WGCR Internet Radio
>1 Peter 4:11


Re: 7.1 Release Date

From
Lamar Owen
Date:
Bill Barnes wrote:
>
> Oh, if only I could be so sanguine about my learning curve in matters of
> Linux, PostgreSQL, Enhydra, Glade, gnome-db, bonobo, HTML, XML, DHCP, NIS,
> NSF, DNS, ABC, XYZ, ETC, ETC, ETC.
>
> Bill

:-)

It's like I told a client about learning PHP (which I had had little
experience with before going full-bore writing pages for a
database-backed website): "It's just another programming language."
Took about fifteen man-hours to get real comfortable with it.  Still
catching myself with Perlisms, but not too bad.  But, then again, that
client's previous website was in perl, which before taking on the client
I had had little experience with.  Go figure.  I kept trying awkisms
instead (and I won't go into why I was awkified that day....).  Does
that make my code awkward, perhaps?

Same with learning Tcl -- took about four hours to get the hang of it,
and another ten or so to get comfortable -- although, with Tcl, you are
writing in a completely different style: perl and php are functional
expression languages, whereas Tcl is a command string procedure
language.

No biggie.  Took much longer to learn Z80 machine language with a hex
editor/debugger.....

Programming is programming, regardless of language.  Now, learning the
ins and outs of a package written in a particular language is a little
harder -- as anyone who knows a dozen or so languages can attest,
writing code is much easier than reading someone else's code --
although, I find that, if I stare at code segments long enough, I just
intuitively grok the meaning of the segment -- one of those things, I
guess.

Once you grasp basic programming constructs such as indirection (with
its assortment of linked lists, stacks, trees, etc), arrays, strings,
variables, etc, you've got 90% of learning any programming language
whipped.  I use indirection as a catch-all for Perl's references and C's
pointers....it helps to have done all that indirection stuff on a 6502
and on a Z80 (speaking of different approaches to a problem!) in
assembler.

At the moment I'm learning the ins and outs of the 250KLOC OpenACS
package -- given its heavy reliance on a 15KLOC SQL datamodel and
several thousand embedded Tcl (ADP) HTML pages (with associated Tcl
procedure libraries and registered URLS), it's a bit tougher than most,
but I'm making headway.

All it takes is time and a little concentration....

--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

Re: 7.1 Release Date

From
g
Date:
Tom, I for one appreciate the fact that the developers would rather spend
their time working on features! Keep at it, everyone is doing a *great*
job. Postgres is a joy to use.

I don't really know what all the controversy is over here. Dumping your
data before a version upgrade of your database is pretty standard
procedure.

To the People Who Don't Backup Their Data Before Upgrading: you're playing
with fire. Even if you have some application which claims that you don't
need to dump your db and back it up, you're STILL playing with fire.

-----------------------------------------
Water overcomes the stone;
Without substance it requires no opening;
This is the benefit of taking no action.
            Lao-Tse

Brian Knox
Senior Systems Engineer
brian@govshops.com

On Tue, 29 Aug 2000, Tom Lane wrote:

> teg@redhat.com (Trond Eivind Glomsrød) writes:
> > Tom Lane <tgl@sss.pgh.pa.us> writes:
> >> You can always stick to Postgres 6.5 :-).  There are certain features
> >> that just cannot be added without redoing the on-disk table format.
> >> I don't think we will ever want to promise "no more dump/reload";
> >> if we do, it will mean that Postgres has stopped improving.
>
> > Not necessarily - one could either design an on-disk format with room
> > for expansion or create migration tools to add new fields.
>
> "Room for expansion" isn't necessarily the issue --- sometimes you
> just have to fix wrong decisions.  The table-file-naming business is
> a perfect example.
>
> Migration tools might ease the pain, sure (though I'd still recommend
> doing a full backup before a major version upgrade, just on safety
> grounds; so the savings afforded by a tool might not be all that much).
>
> Up to now, the attitude of the developer community has mostly been
> that our TODO list is a mile long and we'd rather spend our limited
> time on bug fixes and new features than on migration tools --- both
> because it seemed like the right set of priorities for the project,
> and because fixes/features are fun while tools are just work ;-).
> But perhaps that is an area where Great Bridge and PostgreSQL Inc can
> make some contributions using support-contract funding.
>
>             regards, tom lane
>


Re: 7.1 Release Date

From
Karl DeBisschop
Date:
Andrew Sullivan wrote:
>
> On Tue, Aug 29, 2000 at 03:33:48PM -0400, Lamar Owen wrote:
>
> > This upgrading issue is already wearing thin at RedHat (or didn't
> > you notice Trond's message) -- it would not surprise me in the
> > least to see PostgreSQL dropped from the RedHat distribution in
> > favor of InterBase or MySQL if this issue isn't fixed for 7.1.
>
> Why don't they just do a test, and then echo an explanation of why
> the old Postgres can't be updated?  That's the way it works in
> Debian, and I don't see anything wrong with it.

Having done both Red Hat and Debian installs, I have to say I'm much more
fond of Red Hat's process. In a work environment, you need to be able to
put the CD in the server that sits in the server room, then walk away. A
Red Hat install takes half an hour. When I did a Debian install at home
it took days, mostly because the machine kept waiting for me to answer
some silly little question. Red Hat has its pitfalls as well -- overall
I'm pretty neutral about which I prefer, but there are important virtues
to Red Hat's strategy.

> I can't believe that
> Red Hat figures it's package management is so good that it will
> handle all cases, and then blame everyone else when the package
> management breaks the packages.

As a developer for a GPL'd POSIX network monitor, I felt pretty much the
same way when some Mandrake RPM users held my feet to the
non-interactive fire over the RPM spec. But it turned out to be a doable
thing, and the install process as a whole is better for it. With the
wrong attitude, things become adversarial. When I got over that, I found
it educational and good for the product to collaborate with people who
had given more thought to packaging issues than I had.

Just my $0.02

--
Karl DeBisschop                    kdebisschop@alert.infoplease.com
Family Education Network/Information Please    http://www.infoplease.com
Netsaint Plugin Developer            kdebisschop@users.sourceforge.net

Re: 7.1 Release Date

From
"Sander Steffann"
Date:
Hi Lamar,

> I was afraid you'd say that. :-)  As long as I can get questions
> answered here about the gory details, and without laughing too hard at
> my missteps, I'll see if I can tackle this.

I think you would make a lot of people very happy with this!
Sander.



Re: 7.1 Release Date

From
Elmar Haneke
Date:
Trond Eivind Glomsrød wrote:

> No, it's not - it's just not making assumptions like "enough space is
> present to dump everything somewhere" (if you have a multiGB database,
> dumping it to upgrade sounds like a bad idea), "the database server is
> running, so I can just dump the data" etc.

On every big database server there should be a way to dump the data;
the compressed dump is far smaller than the files used by the
database. If it is impossible to dump it to disk, you can dump it to
tape. If you cannot dump your data anywhere, you should simulate a
disk crash and restart with an empty database :-)
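Both escape routes amount to one-liners; a minimal sketch, assuming pg_dumpall is on the PATH, the postmaster is running, and /dev/nst0 is the (hypothetical) tape device:

```shell
# Compressed dump to disk -- usually far smaller than the raw data files:
pg_dumpall | gzip > /var/backups/pg-$(date +%Y%m%d).sql.gz

# No free disk space?  Stream the compressed dump straight to tape:
pg_dumpall | gzip | dd of=/dev/nst0 bs=64k
```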

Elmar

Re: 7.1 Release Date

From
Andreas Tille
Date:
On Tue, 29 Aug 2000, Karl DeBisschop wrote:

> Having done both Red Hat and Debian installs, I have to say I'm much more
> fond of Red Hat's process. In a work environment, you need to be able to
> put the CD in the server that sits in the server room, then walk away. A
> Red Hat install takes half an hour. When I did a Debian install at home
> it took days, mostly because the machine kept waiting for me to answer
> some silly little question. Red Hat has its pitfalls as well -- overall
> I'm pretty neutral about which I prefer, but there are important virtues
> to Red Hat's strategy.
To be exact: the *old* Debian strategy.  Debconf in Potato (2.2) does
a great job and will be even better in the future.  There is also
a project called FAI, which enables a fully automatic install.

No intention to start a distribution flamewar, just to state some
facts, which I feel obliged to do as a Debian maintainer ;-).
Just use your favourite distribution and have fun with the nice
PostgreSQL server.

Kind regards

          Andreas.



Re: 7.1 Release Date

From
Bruce Momjian
Date:
> Tom Lane wrote:
>
> > Migration tools might ease the pain, sure (though I'd still recommend
> > doing a full backup before a major version upgrade, just on safety
> > grounds; so the savings afforded by a tool might not be all that much).
>
> What is needed, IMHO, is a replacement to the pg_upgrade script that can
> do the following:
> 1.)    Read _any_ previous version's format data files;
> 2.)    Write the current version's data files (without a running
> postmaster).

Let me ask.  Could people who need to be up all the time dump their
data, install PostgreSQL on another machine, load that in, then quickly
copy the new version to the live machine and restart?  Seems like
downtime would be minimal.
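That two-machine shuffle might look something like this; the hostnames and paths are hypothetical, and it assumes both postmasters accept TCP connections:

```shell
# 1. Dump the live cluster (db-old) straight into the scratch machine
#    (db-new) running the new PostgreSQL version:
pg_dumpall -h db-old | psql -h db-new template1

# 2. After verifying the load, stop both postmasters, then copy the
#    freshly built data directory back onto the live machine:
rsync -a db-new:/usr/local/pgsql/data/ /usr/local/pgsql/data/

# 3. Restart the new version's postmaster on the live machine.
```

The window of downtime is only steps 2 and 3; step 1 runs while the old server is still serving queries.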

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026

Re: 7.1 Release Date

From
Jim Mercer
Date:
On Mon, Oct 16, 2000 at 12:00:59PM -0400, Bruce Momjian wrote:
> > > Migration tools might ease the pain, sure (though I'd still recommend
> > > doing a full backup before a major version upgrade, just on safety
> > > grounds; so the savings afforded by a tool might not be all that much).
> >
> > What is needed, IMHO, is a replacement to the pg_upgrade script that can
> > do the following:
> > 1.)    Read _any_ previous version's format data files;
> > 2.)    Write the current version's data files (without a running
> > postmaster).
>
> Let me ask.  Could people who need to be up all the time dump their
> data, install PostgreSQL on another machine, load that in, then quickly
> copy the new version to the live machine and restart.  Seems like
> downtime would be minimal.

sure, i've only got 25+ gig of tables, two of which are 10+ gig each.

8^(

it certainly would be nice to have a quicker process than dump/reload.

--
[ Jim Mercer                 jim@reptiles.org              +1 416 410-5633 ]
[          Reptilian Research -- Longer Life through Colder Blood          ]
[  Don't be fooled by cheap Finnish imitations; BSD is the One True Code.  ]

Re: 7.1 Release Date

From
Bruce Momjian
Date:
> On Mon, Oct 16, 2000 at 12:00:59PM -0400, Bruce Momjian wrote:
> > > > Migration tools might ease the pain, sure (though I'd still recommend
> > > > doing a full backup before a major version upgrade, just on safety
> > > > grounds; so the savings afforded by a tool might not be all that much).
> > >
> > > What is needed, IMHO, is a replacement to the pg_upgrade script that can
> > > do the following:
> > > 1.)    Read _any_ previous version's format data files;
> > > 2.)    Write the current version's data files (without a running
> > > postmaster).
> >
> > Let me ask.  Could people who need to be up all the time dump their
> > data, install PostgreSQL on another machine, load that in, then quickly
> > copy the new version to the live machine and restart.  Seems like
> > downtime would be minimal.
>
> sure, i've only got 25+ gig of tables, two of which are 10+ gig each.
>
> 8^(
>
> it certainly would be nice to have a quicker process than dump/reload.

I see.  Hmmm.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026

Re: 7.1 Release Date

From
Jim Mercer
Date:
On Mon, Oct 16, 2000 at 12:18:15PM -0400, Bruce Momjian wrote:
> > > Let me ask.  Could people who need to be up all the time dump their
> > > data, install PostgreSQL on another machine, load that in, then quickly
> > > copy the new version to the live machine and restart.  Seems like
> > > downtime would be minimal.
> >
> > sure, i've only got 25+ gig of tables, two of which are 10+ gig each.
> >
> > 8^(
> >
> > it certainly would be nice to have a quicker process than dump/reload.
>
> I see.  Hmmm.

oh, forgot to mention that some of my indexes take 2+ hours to rebuild
from scratch.

--
[ Jim Mercer                 jim@reptiles.org              +1 416 410-5633 ]
[          Reptilian Research -- Longer Life through Colder Blood          ]
[  Don't be fooled by cheap Finnish imitations; BSD is the One True Code.  ]

Re: 7.1 Release Date

From
"A farmer using BSD, eh!"
Date:
Bruce Momjian wrote:
>
> Let me ask.  Could people who need to be up all the time dump their
> data, install PostgreSQL on another machine, load that in, then quickly
> copy the new version to the live machine and restart.  Seems like
> downtime would be minimal.
>
Rsync to mirror the node first and make any changes you like before
switching this new node in for the currently running node.
--
Don't login as root, use sudo