State of Beta 2

From
Andrew Rawnsley
Date:
Anyone out there using beta 2 in production situations? Comments on
stability? I am rolling out a project in the next 4 weeks, and really
don't want to go through an upgrade soon after it's released on an
unsuspecting client, so I would LIKE to start working with 7.4.

--------------------

Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com


Re: State of Beta 2

From
"Marc G. Fournier"
Date:
Beta2 is running archives.postgresql.org right now ... >4 gig worth of
data, and it seems to be performing pretty well, with no crashes that
I've been made aware of ...

 On Tue, 9 Sep 2003, Andrew Rawnsley wrote:

>
> Anyone out there using beta 2 in production situations? Comments on
> stability? I am rolling out a project in the next 4 weeks, and really
> don't want to go though an upgrade soon after its released on an
> Unsuspecting Client, so I would LIKE to start working with 7.4.
>
> --------------------
>
> Andrew Rawnsley
> President
> The Ravensfield Digital Resource Group, Ltd.
> (740) 587-0114
> www.ravensfield.com
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 2: you can get off all lists at once with the unregister command
>     (send "unregister YourEmailAddressHere" to majordomo@postgresql.org)
>

Re: State of Beta 2

From
Vivek Khera
Date:
>>>>> "AR" == Andrew Rawnsley <ronz@ravensfield.com> writes:

AR> Anyone out there using beta 2 in production situations? Comments on
AR> stability? I am rolling out a project in the next 4 weeks, and really
AR> don't want to go though an upgrade soon after its released on an
AR> Unsuspecting Client, so I would LIKE to start working with 7.4.

I'm pondering doing the same, but I'm not 100% sure there won't be any
dump/restore-required changes to it before it goes gold.  From the
tuning tests I've been running on it, it appears to be extremely fast
and stable.

--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.                Khera Communications, Inc.
Internet: khera@kciLink.com       Rockville, MD       +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/

Re: State of Beta 2

From
"scott.marlowe"
Date:
On Wed, 10 Sep 2003, Vivek Khera wrote:

> >>>>> "AR" == Andrew Rawnsley <ronz@ravensfield.com> writes:
>
> AR> Anyone out there using beta 2 in production situations? Comments on
> AR> stability? I am rolling out a project in the next 4 weeks, and really
> AR> don't want to go though an upgrade soon after its released on an
> AR> Unsuspecting Client, so I would LIKE to start working with 7.4.
>
> I'm pondering doing the same, but I'm not 100% sure there won't be any
> dump/restore-required changes to it before it goes gold.  From my
> tuning tests I've been running on it, it appears to be extremely fast
> and stable.

Yeah, right now it's looking like the only thing you'll have to do is
reindex hash indexes between beta2 and beta3.


Re: State of Beta 2

From
Tom Lane
Date:
On Wed, 10 Sep 2003, Vivek Khera wrote:
> "AR" == Andrew Rawnsley <ronz@ravensfield.com> writes:
>> AR> Anyone out there using beta 2 in production situations?
>>
>> I'm pondering doing the same, but I'm not 100% sure there won't be any
>> dump/restore-required changes to it before it goes gold.

As you shouldn't be ...

There's some major-league whining going on right now in the jdbc list
about the fact that "int8col = 42" isn't indexable.  While we know that
solving this problem in the general case is hard, it occurred to me this
afternoon that fixing it just for int8 might not be so hard --- maybe
just taking out the int8-vs-int4 comparison operators would improve
matters.  I might be willing to advocate another initdb to do that,
if it seems to help that situation without introducing other issues.
It's not well tested as yet, but stay tuned ...

            regards, tom lane

Re: State of Beta 2

From
Vivek Khera
Date:
>>>>> "sm" == scott marlowe <scott.marlowe> writes:

>> I'm pondering doing the same, but I'm not 100% sure there won't be any
>> dump/restore-required changes to it before it goes gold.  From my
>> tuning tests I've been running on it, it appears to be extremely fast
>> and stable.

sm> Yeah, right now it's looking like the only thing you'll have to do is
sm> reindex hash indexes between beta2 and beta3.


Sean had grumbled something about making the page size 16k on FreeBSD
for 7.4, but it seems unlikely.  I'll just patch it locally, since it
does seem to offer some improvement.

Re: State of Beta 2

From
"Marc G. Fournier"
Date:

On Thu, 11 Sep 2003, Vivek Khera wrote:

> >>>>> "sm" == scott marlowe <scott.marlowe> writes:
>
> >> I'm pondering doing the same, but I'm not 100% sure there won't be any
> >> dump/restore-required changes to it before it goes gold.  From my
> >> tuning tests I've been running on it, it appears to be extremely fast
> >> and stable.
>
> sm> Yeah, right now it's looking like the only thing you'll have to do is
> sm> reindex hash indexes between beta2 and beta3.
>
>
> Sean had grumbled something about making pagesize 16k on FreeBSD for
> 7.4 but it seems unlikely.  I'll just locally patch it since it does
> seem to offer some improvement.

Without a fair amount of testing, especially on other platforms, it most
likely won't happen in the distribution itself ... one of the things that
was bandied about for after v7.4 is released is seeing how increasing it
on the various platforms fares, and possibly just raising the default to
16k or 32k (Tatsuo mentioned a 15% improvement at 32k) ...

But we'll need broader testing before that happens ...

Re: State of Beta 2

From
Vivek Khera
Date:
>>>>> "MGF" == Marc G Fournier <scrappy@postgresql.org> writes:

MGF> Without a fair amount of testing, especially on other platforms, it most
MGF> likely won't happen in the distribution itself ... one of the things that
MGF> was bantered around for after v7.4 is released is seeing how increasing it
MGF> on the various platforms fairs, and possibly just raising the default to
MGF> 16k or 32k (Tatsuo mentioned a 15% improvement at 32k) ...

MGF> But, we'll need broader testing before that happens ...

Well... if we had a good load generator (many threads; many small,
medium, and large transactions; many inserts; many reads) I'd run it to
death on my idle server until 7.4 is released, at which point that
server won't be idle anymore.

I tried building one of the OSDL DB benchmarks, but after installing
the dependencies (which are only announced by configure failing to
run), it errored out with a C syntax error...  at that point I gave
up.



Re: State of Beta 2

From
Sean Chittenden
Date:
> > >> I'm pondering doing the same, but I'm not 100% sure there won't
> > >> be any dump/restore-required changes to it before it goes gold.
> > >> From my tuning tests I've been running on it, it appears to be
> > >> extremely fast and stable.
> >
> > sm> Yeah, right now it's looking like the only thing you'll have to do is
> > sm> reindex hash indexes between beta2 and beta3.
> >
> > Sean had grumbled something about making pagesize 16k on FreeBSD
> > for 7.4 but it seems unlikely.  I'll just locally patch it since
> > it does seem to offer some improvement.
>
> Without a fair amount of testing, especially on other platforms, it
> most likely won't happen in the distribution itself ... one of the
> things that was bantered around for after v7.4 is released is seeing
> how increasing it on the various platforms fairs, and possibly just
> raising the default to 16k or 32k (Tatsuo mentioned a 15%
> improvement at 32k) ...
>
> But, we'll need broader testing before that happens ...

I haven't had a chance to sit down and do any exhaustive testing yet
and don't think I will for a while.  That said, once 7.4 goes gold,
I'm going to provide databases/postgresql-devel with a tunable that
will allow people to choose what block size they would like (4K, 8K,
16K, 32K, or 64K) when they build the port.  Hopefully people will
chime in with their results at that time.  With things so close to 7.4
and Tom worried about digging up possible bugs, I'm not about to
destabilize 7.4 for FreeBSD users.

My gut feeling is that 8K or 4K block sizes will be a win for some
loads, but that bigger block sizes will result in more efficient
overall operation in cases where IO is more expensive than CPU (which
changes with hardware and workload).

In the future table spaces implementation, I think it would be a HUGE
win for DBAs if the block size could be specified on a per-table
basis.  I know that won't be an easy change, but I do think it would
be beneficial for different workloads and filesystems.

-sc

--
Sean Chittenden

Re: State of Beta 2

From
"Marc G. Fournier"
Date:
> I haven't had a chance to sit down and do any exhaustive testing yet and
> don't think I will for a while.  That said, once 7.4 goes gold, I'm
> going to provide databases/postgresql-devel with a tunable that will
> allow people to choose what block size they would like (4k, 8K, 16K,
> 32K, or 64K) when they build the port.

If you do this, you *have* to put in a very, very big warning that
databases created with non-PostgreSQL-standard block sizes may not be
transferrable to a standard PostgreSQL install ... that is Tom's major
problem: cross-platform/system dump/restores may not work if the
database schema was designed with a 16k block size in mind ...


Re: State of Beta 2

From
Sean Chittenden
Date:
> > I haven't had a chance to sit down and do any exhaustive testing
> > yet and don't think I will for a while.  That said, once 7.4 goes
> > gold, I'm going to provide databases/postgresql-devel with a
> > tunable that will allow people to choose what block size they
> > would like (4k, 8K, 16K, 32K, or 64K) when they build the port.
>
> If you do this, you *have* to put in a very very big warning that
> databases created with non-PostgreSQL-standard block sizes may not
> be transferrable to a standard-PostgreSQL install ... that is Tom's
> major problem, is cross-platform/system dump/restores may no work is
> the database schema was designed with a 16k block size in mind ...

Agreed, but anyone who has a table with close to 1600 columns is
either nuts or knows what they're doing.  If someone has >1600
columns, that is an issue, but not one that I think can be easily
fended off without the backend being able to adapt on the fly to
different block sizes, which seems like something that could be done
with a rewrite of some of this code when table spaces are introduced.

-sc

--
Sean Chittenden

Re: State of Beta 2

From
Manfred Koizar
Date:
On Thu, 11 Sep 2003 14:24:25 -0700, Sean Chittenden
<sean@chittenden.org> wrote:
>Agreed, but if anyone has a table with close to 1600 columns in a
>table...

This 1600 column limit has nothing to do with block size.  It is
caused by the fact that a heap tuple header cannot be larger than 255
bytes, so there is a limited number of bits in the null bitmap.

Servus
 Manfred

Re: State of Beta 2

From
Bruce Momjian
Date:
Manfred Koizar wrote:
> On Thu, 11 Sep 2003 14:24:25 -0700, Sean Chittenden
> <sean@chittenden.org> wrote:
> >Agreed, but if anyone has a table with close to 1600 columns in a
> >table...
>
> This 1600 column limit has nothing to do with block size.  It is
> caused by the fact that a heap tuple header cannot be larger than 255
> bytes, so there is a limited number of bits in the null bitmap.

Are you sure?  Then our max would be:

    255 * 8 = 2040

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073

Re: State of Beta 2

From
Manfred Koizar
Date:
On Thu, 11 Sep 2003 00:25:53 -0400, Tom Lane <tgl@sss.pgh.pa.us>
wrote:
>"int8col = 42" isn't indexable.  [...] --- maybe
>just taking out the int8-vs-int4 comparison operators would improve
>matters.  I might be willing to advocate another initdb to do that,

You mean

    DELETE FROM pg_operator WHERE oid in (15, 36, 416, 417);

and possibly some more oids?  Does this really require an initdb?  If
we were willing to tell people who roll a 7.4Beta2 database cluster
into 7.4Beta3 (or 7.4 production) to execute this query once per
database, we could get away without increasing CATALOG_VERSION_NO.

Servus
 Manfred

Re: State of Beta 2

From
Tom Lane
Date:
Manfred Koizar <mkoi-pg@aon.at> writes:
> On Thu, 11 Sep 2003 00:25:53 -0400, Tom Lane <tgl@sss.pgh.pa.us>
> wrote:
>> "int8col = 42" isn't indexable.  [...] --- maybe
>> just taking out the int8-vs-int4 comparison operators would improve
>> matters.  I might be willing to advocate another initdb to do that,

> You mean
>     DELETE FROM pg_operator WHERE oid in (15, 36, 416, 417);
> and possibly some more oids?  Does this really require an initdb?

I think so.  Consider for instance stored views that contain references
to those operators.  In any case, I don't really want to have to ask
people who complain about 7.4 performance problems whether they've done
the above.

            regards, tom lane

Re: State of Beta 2

From
Manfred Koizar
Date:
On Fri, 12 Sep 2003 10:22:32 -0400 (EDT), Bruce Momjian <pgman@candle.pha.pa.us>
wrote:
>> This 1600 column limit has nothing to do with block size.  It is
>> caused by the fact that a heap tuple header cannot be larger than 255
>> bytes, so there is a limited number of bits in the null bitmap.
>
>Are you sure.

No, never!  ;-)

Sollte einer auch einst die vollkommenste Wahrheit verkünden,
Wissen könnt' er das nicht: Es ist alles durchwebt von Vermutung.

For even if by chance he were to utter the final truth,
He would himself not know it: For it is but a woven web of guesses.
                     -- Xenophanes, translation by K. R. Popper

But in this case I have htup.h on my side:

/*
 * MaxTupleAttributeNumber limits the number of (user) columns in a tuple.
 * The key limit on this value is that the size of the fixed overhead for
 * a tuple, plus the size of the null-values bitmap (at 1 bit per column),
 * plus MAXALIGN alignment, must fit into t_hoff which is uint8.  On most
 * machines the upper limit without making t_hoff wider would be a little
 * over 1700.  We use round numbers here and for MaxHeapAttributeNumber
 * so that alterations in HeapTupleHeaderData layout won't change the
 * supported max number of columns.
 */
#define MaxTupleAttributeNumber 1664    /* 8 * 208 */

/*----------
 * MaxHeapAttributeNumber limits the number of (user) columns in a table.
 * This should be somewhat less than MaxTupleAttributeNumber.  It must be
 * at least one less, else we will fail to do UPDATEs on a maximal-width
 * table (because UPDATE has to form working tuples that include CTID).
 * In practice we want some additional daylight so that we can gracefully
 * support operations that add hidden "resjunk" columns, for example
 * SELECT * FROM wide_table ORDER BY foo, bar, baz.
 * In any case, depending on column data types you will likely be running
 * into the disk-block-based limit on overall tuple size if you have more
 * than a thousand or so columns.  TOAST won't help.
 *----------
 */
#define MaxHeapAttributeNumber    1600    /* 8 * 200 */

Servus
 Manfred
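The t_hoff arithmetic in that comment can be sketched in a few lines. This is only an illustration: the 27-byte fixed header overhead and MAXALIGN of 8 are assumed values, not the exact figures from any particular release, but they reproduce the "a little over 1700" ballpark the comment describes.

```python
# Rough reconstruction of the t_hoff limit from htup.h.
# ASSUMPTIONS (illustrative, version-dependent): fixed
# HeapTupleHeaderData overhead of 27 bytes, MAXALIGN = 8.
T_HOFF_MAX = 255          # t_hoff is a uint8
MAXALIGN = 8
FIXED_OVERHEAD = 27       # assumed header bytes before the null bitmap

# t_hoff is MAXALIGNed, so the largest usable value is rounded down.
aligned_max = (T_HOFF_MAX // MAXALIGN) * MAXALIGN    # 248
bitmap_bytes = aligned_max - FIXED_OVERHEAD          # bytes left for the null bitmap
max_columns = bitmap_bytes * 8                       # 1 bit per column

print(max_columns)   # "a little over 1700", before rounding down to 1664/1600
```

The round numbers 1664 and 1600 in the header then leave headroom below this raw bound, as the comment explains.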

Re: State of Beta 2

From
Bruce Momjian
Date:
Manfred Koizar wrote:
> On Fri, 12 Sep 2003 10:22:32 -0400 (EDT), Bruce Momjian <pgman@candle.pha.pa.us>
> wrote:
> >> This 1600 column limit has nothing to do with block size.  It is
> >> caused by the fact that a heap tuple header cannot be larger than 255
> >> bytes, so there is a limited number of bits in the null bitmap.
> >
> >Are you sure.
>
> No, never!  ;-)
>
> Sollte einer auch einst die vollkommenste Wahrheit verkünden,
> Wissen könnt' er das nicht: Es ist alles durchwebt von Vermutung.
>
> For even if by chance he were to utter the final truth,
> He would himself not know it: For it is but a woven web of guesses.
>                      -- Xenophanes, translation by K. R. Popper
>
> But in this case I have htup.h on my side:
>
> /*
>  * MaxTupleAttributeNumber limits the number of (user) columns in a tuple.
>  * The key limit on this value is that the size of the fixed overhead for
>  * a tuple, plus the size of the null-values bitmap (at 1 bit per column),
>  * plus MAXALIGN alignment, must fit into t_hoff which is uint8.  On most
>  * machines the upper limit without making t_hoff wider would be a little
>  * over 1700.  We use round numbers here and for MaxHeapAttributeNumber
>  * so that alterations in HeapTupleHeaderData layout won't change the
>  * supported max number of columns.
>  */
> #define MaxTupleAttributeNumber 1664    /* 8 * 208 */
>
> /*----------
>  * MaxHeapAttributeNumber limits the number of (user) columns in a table.
>  * This should be somewhat less than MaxTupleAttributeNumber.  It must be
>  * at least one less, else we will fail to do UPDATEs on a maximal-width
>  * table (because UPDATE has to form working tuples that include CTID).
>  * In practice we want some additional daylight so that we can gracefully
>  * support operations that add hidden "resjunk" columns, for example
>  * SELECT * FROM wide_table ORDER BY foo, bar, baz.
>  * In any case, depending on column data types you will likely be running
>  * into the disk-block-based limit on overall tuple size if you have more
>  * than a thousand or so columns.  TOAST won't help.
>  *----------
>  */
> #define MaxHeapAttributeNumber    1600    /* 8 * 200 */

Oh, interesting.  I thought it was based on the maximum number of
columns we could pack into a block.  I realize that our limit could be
much less than 1600 if you pick wide columns like TEXT.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073

Re: State of Beta 2

From
Tom Lane
Date:
Manfred Koizar <mkoi-pg@aon.at> writes:
> On Thu, 11 Sep 2003 14:24:25 -0700, Sean Chittenden
> <sean@chittenden.org> wrote:
>> Agreed, but if anyone has a table with close to 1600 columns in a
>> table...

> This 1600 column limit has nothing to do with block size.  It is
> caused by the fact that a heap tuple header cannot be larger than 255
> bytes, so there is a limited number of bits in the null bitmap.

Right, but that's not the only limit on number of columns.  A tuple has
to be able to fit into a page.  If all your columns are toastable types,
and you toast every one of them, then the toast pointers are 20 bytes
each, so with 8K block size the maximum usable number of columns is
somewhere around 400.  If the columns were all int8 or float8 the limit
would be about 1000 columns; etc.  But raise the page size, and these
limits increase, possibly allowing the 1600 number to become the actual
limiting factor.

            regards, tom lane
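Tom's round numbers are easy to sanity-check with back-of-envelope division. The 24-byte tuple-header overhead below is an assumption for illustration (the real overhead varies by version); the point is only that page size divided by per-column width lands near his figures.

```python
# Back-of-envelope check of the per-page column limits described above.
# ASSUMPTIONS (illustrative): 8K page, ~24-byte tuple header + bitmap,
# 20-byte TOAST pointer, 8-byte int8/float8.
PAGE = 8192
HEADER = 24   # assumed fixed per-tuple overhead

def max_cols(per_col_bytes, page=PAGE, header=HEADER):
    """How many columns of a given width fit in one heap page."""
    return (page - header) // per_col_bytes

print(max_cols(20))   # all columns toasted -> "somewhere around 400"
print(max_cols(8))    # all int8/float8    -> "about 1000"
```

Raise PAGE to 32768 and the same division shows how the 1600 ceiling can become the binding limit instead.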

Re: State of Beta 2

From
Tom Lane
Date:
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> Manfred Koizar wrote:
>> This 1600 column limit has nothing to do with block size.  It is
>> caused by the fact that a heap tuple header cannot be larger than 255
>> bytes, so there is a limited number of bits in the null bitmap.

> Are you sure.  Then our max would be:
>     255 * 8 = 2040

I assure you, Manfred knows heap tuple headers inside and out ;-)
See the comments at the top of src/include/access/htup.h.

            regards, tom lane

Re: State of Beta 2

From
Andrew Rawnsley
Date:
Small soapbox moment here...

ANYTHING that can be done to eliminate having to do an initdb on
version changes would make a lot of people do cartwheels. 'Do a
dump/reload' sometimes comes across a bit casually on the lists (my
apologies if it isn't meant to be), but it can be incredibly onerous
to do on a large production system. That's probably why you run across
people running stupid-old versions.

I am, of course, speaking from near-complete ignorance about what it
takes to avoid the whole problem.


On Friday, September 12, 2003, at 10:37 AM, Tom Lane wrote:

> Manfred Koizar <mkoi-pg@aon.at> writes:
>> On Thu, 11 Sep 2003 00:25:53 -0400, Tom Lane <tgl@sss.pgh.pa.us>
>> wrote:
>>> "int8col = 42" isn't indexable.  [...] --- maybe
>>> just taking out the int8-vs-int4 comparison operators would improve
>>> matters.  I might be willing to advocate another initdb to do that,
>
>> You mean
>>     DELETE FROM pg_operator WHERE oid in (15, 36, 416, 417);
>> and possibly some more oids?  Does this really require an initdb?
>
> I think so.  Consider for instance stored views that contain references
> to those operators.  In any case, I don't really want to have to ask
> people who complain about 7.4 performance problems whether they've done
> the above.
>
>             regards, tom lane
>




--------------------

Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com


Re: State of Beta 2

From
Manfred Koizar
Date:
On Fri, 12 Sep 2003 11:16:58 -0400, Tom Lane <tgl@sss.pgh.pa.us>
wrote:
>> This 1600 column limit has nothing to do with block size.
>
>Right, but that's not the only limit on number of columns.

I just wanted to make clear that increasing the page size does not get
you beyond that 1600-column limit.  This not-so-uncommon misconception
is ... well, I wouldn't say caused, but at least not contradicted, by
http://www.postgresql.org/users-lounge/limitations.html

|   Maximum number of         250 - 1600 depending
|   columns in a table        on column types
| [...]
| The maximum table size and maximum number of columns can be
| increased if the default block size is increased to 32k.

>But raise the page size, and these
>limits increase, possibly allowing the 1600 number to become the actual
>limiting factor.

Theoretically, with int2 or "char" columns the 1600-column limit can
be reached even without changing the page size.  Figuring out a use
case for such a table is another story ...

Servus
 Manfred
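A quick arithmetic check of Manfred's point: 1600 two-byte columns fit comfortably in a default 8K page, so the column-count limit binds before the page size does. The header figure is an assumption for illustration.

```python
# Does a 1600-column int2 tuple fit in a default 8K page?
# ASSUMPTIONS (illustrative): ~24-byte fixed tuple header.
PAGE = 8192
HEADER = 24
NULL_BITMAP = 1600 // 8      # 1 bit per column -> 200 bytes

tuple_size = HEADER + NULL_BITMAP + 1600 * 2   # int2 is 2 bytes wide
print(tuple_size, tuple_size < PAGE)           # fits with room to spare
```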

Re: State of Beta 2

From
Manfred Koizar
Date:
On Fri, 12 Sep 2003 10:37:20 -0400, Tom Lane <tgl@sss.pgh.pa.us>
wrote:
>>> int8-vs-int4 comparison operators
>Consider for instance stored views that contain references
>to those operators.

I'm not able to produce a test case for what I think you mean;  must
have missed something.  Doesn't matter.  Just move on ...

Servus
 Manfred

Re: State of Beta 2

From
Ron Johnson
Date:
On Fri, 2003-09-12 at 10:50, Andrew Rawnsley wrote:
> Small soapbox moment here...
>
> ANYTHING that can be done to eliminate having to do an initdb on
> version changes would make a lot of people do cartwheels. 'Do a
> dump/reload' sometimes comes across a bit casually on the lists (my
> apologies if it isn't meant to be), but it can be be incredibly onerous
> to do on a large production system. That's probably why you run across
> people running stupid-old versions.

And this will become even more of an issue as PG's popularity grows
with large and 24x7 databases.

--
-----------------------------------------------------------------
Ron Johnson, Jr. ron.l.johnson@cox.net
Jefferson, LA USA

An ad run by the NEA (the US's biggest public school TEACHERS
UNION) in the Spring and Summer of 2003 asks a teenager if he
can find sodium and *chloride* in the periodic table of the elements.
And they wonder why people think public schools suck...


Re: State of Beta 2

From
"Nigel J. Andrews"
Date:
On Fri, 12 Sep 2003, Ron Johnson wrote:

> On Fri, 2003-09-12 at 10:50, Andrew Rawnsley wrote:
> > Small soapbox moment here...
> >
> > ANYTHING that can be done to eliminate having to do an initdb on
> > version changes would make a lot of people do cartwheels. 'Do a
> > dump/reload' sometimes comes across a bit casually on the lists (my
> > apologies if it isn't meant to be), but it can be be incredibly onerous
> > to do on a large production system. That's probably why you run across
> > people running stupid-old versions.
>
> And this will become even more of an issue as it's PG's popularity
> grows with large and 24x7 databases.

And dump/reload isn't always such a casual operation to do. I initialise a
database from a dump, but I have to fiddle with the SQL on the reload to
make it work.  The odd thing is I never thought of it as a bug, just
something to work around, until someone else started pursuing it on the
list as one (it's the create schema thing).


--
Nigel J. Andrews


Re: State of Beta 2

From
Dennis Gearon
Date:
Ron Johnson wrote:

>On Fri, 2003-09-12 at 10:50, Andrew Rawnsley wrote:
>
>
>>Small soapbox moment here...
>>
>>ANYTHING that can be done to eliminate having to do an initdb on
>>version changes would make a lot of people do cartwheels. 'Do a
>>dump/reload' sometimes comes across a bit casually on the lists (my
>>apologies if it isn't meant to be), but it can be be incredibly onerous
>>to do on a large production system. That's probably why you run across
>>people running stupid-old versions.
>>
>>
>
>And this will become even more of an issue as it's PG's popularity
>grows with large and 24x7 databases.
>
>
He is right, it might be a good idea to head this problem 'off at the
pass'. I am usually pretty good at predicting technological trends. I've
made some money at it. And I predict that Postgres will eclipse MySQL
and be in the top 5 of all databases deployed. But it does have some
Achilles tendons.


Re: State of Beta 2

From
Kaare Rasmussen
Date:
> He is right, it might be a good idea to head this problem 'off at the
> pass'. I am usually pretty good at predicting technilogical trends. I've

Well, the only solution I can see is to make an inline conversion tool that
knows about every step from earlier versions.

I believe this has been discussed before, but it does not seem to be a small
or an easy task to implement.

--
Kaare Rasmussen            --Linux, spil,--        Tlf:        3816 2582
Kaki Data                tshirts, merchandize      Fax:        3816 2501
Howitzvej 75               Åben 12.00-18.00        Email: kar@kakidata.dk
2000 Frederiksberg        Lørdag 12.00-16.00       Web:      www.suse.dk

Upgrading (was Re: State of Beta 2)

From
Ron Johnson
Date:
On Fri, 2003-09-12 at 17:01, Kaare Rasmussen wrote:
> > He is right, it might be a good idea to head this problem 'off at the
> > pass'. I am usually pretty good at predicting technilogical trends. I've
>
> Well, the only solution I can see is to make an inline conversion tool that
> knows about every step from earlier versions.
>
> I believe this has been discussed before, but it does not seem to be a small
> or an easy task to implement.

Does the "on-disk structure" really change that much between major
versions?

--
-----------------------------------------------------------------
Ron Johnson, Jr. ron.l.johnson@cox.net
Jefferson, LA USA

"Vanity, my favorite sin."
Larry/John/Satan, "The Devil's Advocate"


Re: State of Beta 2

From
"Joshua D. Drake"
Date:
Hello,

  The initdb is not always a bad thing. In reality the idea of just
being able to "upgrade" is not a good thing. Just think about the
differences between 7.2.3 and 7.3.x... The most annoying (although
appropriate) one being that integers can no longer be ''.

  If we provide the ability to do a wholesale upgrade many things would
just break. Heck even the connection protocol is different for 7.4.


J

Dennis Gearon wrote:

> Ron Johnson wrote:
>
>> On Fri, 2003-09-12 at 10:50, Andrew Rawnsley wrote:
>>
>>
>>> Small soapbox moment here...
>>>
>>> ANYTHING that can be done to eliminate having to do an initdb on
>>> version changes would make a lot of people do cartwheels. 'Do a
>>> dump/reload' sometimes comes across a bit casually on the lists (my
>>> apologies if it isn't meant to be), but it can be be incredibly
>>> onerous to do on a large production system. That's probably why you
>>> run across people running stupid-old versions.
>>>
>>
>>
>> And this will become even more of an issue as it's PG's popularity
>> grows with large and 24x7 databases.
>>
>>
> He is right, it might be a good idea to head this problem 'off at the
> pass'. I am usually pretty good at predicting technilogical trends.
> I've made some money at it. And I predict that Postgres will eclipse
> MySQL and be in the top 5 of all databases deployed. But it does have
> some achilles tendon's.
>
>


--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
The most reliable support for the most reliable Open Source database.



need for in-place upgrades (was Re: State of Beta 2)

From
Ron Johnson
Date:
On Fri, 2003-09-12 at 17:48, Joshua D. Drake wrote:
> Hello,
>
>   The initdb is not always a bad thing. In reality the idea of just
> being able to "upgrade" is not a good thing. Just think about the
> differences between 7.2.3 and 7.3.x... The most annoying (although
> appropriate) one being that integers can no longer be ''.

But that's just not going to cut it if PostgreSQL wants to be
a serious "player" in the enterprise space, where 24x7 systems
are common, and you just don't *get* 12/18/24/whatever hours to
dump/restore a 200GB database.

For example, there are some rather large companies whose factories
are run 24x365 on rather old versions of VAX/VMS and Rdb/VMS, because
the DBAs can't even get the 3 hours to do in-place upgrades to Rdb,
much less the time the SysAdmin needs to upgrade VAX/VMS to
VAX/OpenVMS.

In our case, we have systems that have multiple 300+GB databases
(working in concert as one big system), and dumping all of them,
then restoring (which includes creating indexes on tables with
row-counts in the low 9 digits, and one which has gone as high
as 2+ billion records) is just totally out of the question.

>   If we provide the ability to do a wholesale upgrade many things would
> just break. Heck even the connection protocol is different for 7.4.

But what does a *closed* database care about changed communications
protocols?  When you reopen the database after an upgrade, the
postmaster and client libs start yakking away in a slightly different
language, but so what?

> Dennis Gearon wrote:
>
> > Ron Johnson wrote:
> >
> >> On Fri, 2003-09-12 at 10:50, Andrew Rawnsley wrote:
> >>
> >>
> >>> Small soapbox moment here...
> >>>
> >>> ANYTHING that can be done to eliminate having to do an initdb on
> >>> version changes would make a lot of people do cartwheels. 'Do a
> >>> dump/reload' sometimes comes across a bit casually on the lists (my
> >>> apologies if it isn't meant to be), but it can be be incredibly
> >>> onerous to do on a large production system. That's probably why you
> >>> run across people running stupid-old versions.
> >>>
> >>
> >>
> >> And this will become even more of an issue as it's PG's popularity
> >> grows with large and 24x7 databases.
> >>
> >>
> > He is right, it might be a good idea to head this problem 'off at the
> > pass'. I am usually pretty good at predicting technilogical trends.
> > I've made some money at it. And I predict that Postgres will eclipse
> > MySQL and be in the top 5 of all databases deployed. But it does have
> > some achilles tendon's.


--
-----------------------------------------------------------------
Ron Johnson, Jr. ron.l.johnson@cox.net
Jefferson, LA USA

"The UN couldn't break up a cookie fight in a Brownie meeting."
Larry Miller


Re: State of Beta 2

From
Tom Lane
Date:
Kaare Rasmussen <kar@kakidata.dk> writes:
> I believe this has been discussed before, but it does not seem to be a small
> or an easy task to implement.

Yes, it's been discussed to death, and it isn't easy.  See the archives
for Lamar Owen's eloquent rants on the subject, and various hackers'
followups as to the implementation issues.

What it comes down to IMHO is that (a) there are still a lot of bad,
incomplete, or shortsighted decisions embedded in Postgres, which cannot
really be fixed in 100% backwards compatible ways; (b) there are not all
that many people competent to work on improving Postgres, and even fewer
who are actually being paid to do so; and (c) those who are volunteers
are likely to work on things they find interesting to fix.  Finding ways
to maintain backwards compatibility without dump/reload is not in the
"interesting" category.  It is in the category of things that will only
happen if people pony up money to pay someone to do uninteresting work.
And for all the ranting, I've not seen any ponying.

            regards, tom lane

Re: State of Beta 2

From
Alvaro Herrera
Date:
On Fri, Sep 12, 2003 at 03:48:48PM -0700, Joshua D. Drake wrote:

>  The initdb is not always a bad thing. In reality the idea of just
> being able to "upgrade" is not a good thing. Just think about the
> differences between 7.2.3 and 7.3.x... The most annoying (although
> appropriate) one being that integers can no longer be ''.

But it would be much easier if one wasn't forced to create a dump and
then restore it.  One would still need to change the applications, but
that doesn't force downtime.
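For what it's worth, the dump-and-reload step under discussion can at least be streamed between the old and new servers, so no intermediate dump file has to be staged on disk. A minimal sketch (the hostnames and ports are placeholder assumptions, not details from this thread):

```shell
# Stream the dump from the old server straight into the new one,
# avoiding a multi-hundred-GB dump file on disk.
# "oldhost"/"newhost" and both ports are placeholder assumptions.
OLD_OPTS="-h oldhost -p 5432"
NEW_OPTS="-h newhost -p 5433"
pg_dumpall $OLD_OPTS | psql $NEW_OPTS -d template1
```

The new server must already be initdb'd and running under the new release; as noted above, the applications still have to be changed either way.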


>  If we provide the ability to do a wholesale upgrade many things would
> just break. Heck even the connection protocol is different for 7.4.

But the new client libpq _can_ talk to older servers.

--
Alvaro Herrera (<alvherre[a]dcc.uchile.cl>)
FOO MANE PADME HUM

Re: State of Beta 2

From
Kaare Rasmussen
Date:
Hi

> Yes, it's been discussed to death, and it isn't easy.  See the archives

That's what I thought.

> "interesting" category.  It is in the category of things that will only
> happen if people pony up money to pay someone to do uninteresting work.
> And for all the ranting, I've not seen any ponying.

Just for the record now that there's an argument that big companies need 24x7
- could you or someone else with knowledge of what's involved give a
guesstimate of how many ponies we're talking. Is it one man month, one man
year, more, or what?

Just in case there is a company with enough interest in this matter.

Next question would of course be if anyone would care to do it even though
they're paid, but one hypothetical question at a time :-)

--
Kaare Rasmussen            --Linux, spil,--        Tlf:        3816 2582
Kaki Data                tshirts, merchandize      Fax:        3816 2501
Howitzvej 75               Åben 12.00-18.00        Email: kar@kakidata.dk
2000 Frederiksberg        Lørdag 12.00-16.00       Web:      www.suse.dk

Re: need for in-place upgrades (was Re: State of Beta 2)

From
"Marc G. Fournier"
Date:
On Fri, 12 Sep 2003, Ron Johnson wrote:

> On Fri, 2003-09-12 at 17:48, Joshua D. Drake wrote:
> > Hello,
> >
> >   The initdb is not always a bad thing. In reality the idea of just
> > being able to "upgrade" is not a good thing. Just think about the
> > differences between 7.2.3 and 7.3.x... The most annoying (although
> > appropriate) one being that integers can no longer be ''.
>
> But that's just not going to cut it if PostgreSQL wants to be
> a serious "player" in the enterprise space, where 24x7 systems
> are common, and you just don't *get* 12/18/24/whatever hours to
> dump/restore a 200GB database.
>
> For example, there are some rather large companies whose factories
> are run 24x365 on rather old versions of VAX/VMS and
> Rdb/VMS, because the DBAs can't even get the 3 hours to do
> in-place upgrades to Rdb, much less the time the SysAdmin needs
> to upgrade VAX/VMS to VAX/OpenVMS.
>
> In our case, we have systems that have multiple 300+GB databases
> (working in concert as one big system), and dumping all of them,
> then restoring (which includes creating indexes on tables with
> row-counts in the low 9 digits, and one which has gone as high
> as 2+ billion records) is just totally out of the question.

'k, but is it out of the question to pick up a duplicate server, and use
something like eRServer to replicate the databases between the two
systems, with the new system having the upgraded database version running
on it, and then cutting over once it's all in sync?



Re: need for in-place upgrades (was Re: State of Beta 2)

From
Ron Johnson
Date:
On Sat, 2003-09-13 at 10:10, Marc G. Fournier wrote:
> On Fri, 12 Sep 2003, Ron Johnson wrote:
>
> > On Fri, 2003-09-12 at 17:48, Joshua D. Drake wrote:
> > > Hello,
> > >
> > >   The initdb is not always a bad thing. In reality the idea of just
> > > being able to "upgrade" is not a good thing. Just think about the
> > > differences between 7.2.3 and 7.3.x... The most annoying (although
> > > appropriate) one being that integers can no longer be ''.
> >
> > But that's just not going to cut it if PostgreSQL wants to be
> > a serious "player" in the enterprise space, where 24x7 systems
> > are common, and you just don't *get* 12/18/24/whatever hours to
> > dump/restore a 200GB database.
> >
> > For example, there are some rather large companies whose factories
> > are run 24x365 on rather old versions of VAX/VMS and
> > Rdb/VMS, because the DBAs can't even get the 3 hours to do
> > in-place upgrades to Rdb, much less the time the SysAdmin needs
> > to upgrade VAX/VMS to VAX/OpenVMS.
> >
> > In our case, we have systems that have multiple 300+GB databases
> > (working in concert as one big system), and dumping all of them,
> > then restoring (which includes creating indexes on tables with
> > row-counts in the low 9 digits, and one which has gone as high
> > as 2+ billion records) is just totally out of the question.
>
> 'k, but is it out of the question to pick up a duplicate server, and use
> something like eRServer to replicate the databases between the two
> systems, with the new system having the upgraded database version running
> on it, and then cutting over once its all in sync?

So instead of 1TB of 15K fiber channel disks (and the requisite
controllers, shelves, RAID overhead, etc), we'd need *two* TB of
15K fiber channel disks (and the requisite controllers, shelves,
RAID overhead, etc) just for the 1 time per year when we'd upgrade
PostgreSQL?

Not a chance.

--
-----------------------------------------------------------------
Ron Johnson, Jr. ron.l.johnson@cox.net
Jefferson, LA USA

Thanks to the good people in Microsoft, a great deal of the data
that flows is dependent on one company. That is not a healthy
ecosystem. The issue is that creativity gets filtered through
the business plan of one company.
Mitchell Baker, "Chief Lizard Wrangler" at Mozilla


Re: need for in-place upgrades (was Re: State of Beta 2)

From
"Marc G. Fournier"
Date:

On Sat, 13 Sep 2003, Ron Johnson wrote:

> So instead of 1TB of 15K fiber channel disks (and the requisite
> controllers, shelves, RAID overhead, etc), we'd need *two* TB of 15K
> fiber channel disks (and the requisite controllers, shelves, RAID
> overhead, etc) just for the 1 time per year when we'd upgrade
> PostgreSQL?

Ah, see, the post that I was responding to dealt with 300GB of data,
and a disk array for that is relatively cheap ... :)

But even with 1TB of data, do you not have a redundant system?  If you
can't afford 3 hours to dump/reload, can you actually afford any better
the cost of the server itself going poof?


Re: State of Beta 2

From
Tom Lane
Date:
Kaare Rasmussen <kar@kakidata.dk> writes:
>> "interesting" category.  It is in the category of things that will only
>> happen if people pony up money to pay someone to do uninteresting work.
>> And for all the ranting, I've not seen any ponying.

> Just for the record now that there's an argument that big companies need 24x7
> - could you or someone else with knowledge of what's involved give a
> guesstimate of how many ponies we're talking. Is it one man month, one man
> year, more, or what?

Well, the first thing that needs to happen is to redesign and
reimplement pg_upgrade so that it works with current releases and is
trustworthy for enterprise installations (the original script version
depended far too much on being run by someone who knew what they were
doing, I thought).  I guess that might take, say, six months for one
well-qualified hacker.  But it would be an open-ended commitment,
because pg_upgrade only really solves the problem of installing new
system catalogs.  Any time we do something that affects the contents or
placement of user table and index files, someone would have to figure
out and implement a migration strategy.

Some examples of things we have done recently that could not be handled
without much more work: modifying heap tuple headers to conserve
storage, changing the on-disk representation of array values, fixing
hash indexes.  Examples of probable future changes that will take work:
adding tablespaces, adding point-in-time recovery, fixing the interval
datatype, generalizing locale support so you can have more than one
locale per installation.

It could be that once pg_upgrade exists in a production-ready form,
PG developers will voluntarily do that extra work themselves.  But
I doubt it (and if it did happen that way, it would mean a significant
slowdown in the rate of development).  I think someone will have to
commit to doing the extra work, rather than just telling other people
what they ought to do.  It could be a permanent full-time task ...
at least until we stop finding reasons we need to change the on-disk
data representation, which may or may not ever happen.

            regards, tom lane

Re: need for in-place upgrades (was Re: State of Beta 2)

From
Dennis Gearon
Date:
>'k, but is it out of the question to pick up a duplicate server, and use
>something like eRServer to replicate the databases between the two
>systems, with the new system having the upgraded database version running
>on it, and then cutting over once its all in sync?
>
>
>
>
>
That's just what I was thinking. It might be an easy way around the
whole problem, for a while, to set up the replication to be as version
independent as possible.


Re: need for in-place upgrades (was Re: State of Beta 2)

From
Ron Johnson
Date:
On Sat, 2003-09-13 at 11:21, Marc G. Fournier wrote:
> On Sat, 13 Sep 2003, Ron Johnson wrote:
>
> > So instead of 1TB of 15K fiber channel disks (and the requisite
> > controllers, shelves, RAID overhead, etc), we'd need *two* TB of 15K
> > fiber channel disks (and the requisite controllers, shelves, RAID
> > overhead, etc) just for the 1 time per year when we'd upgrade
> > PostgreSQL?
>
> Ah, see, the post that I was responding to dealt with 300GB of data,
> and a disk array for that is relatively cheap ... :)
>
> But even with 1TB of data, do you not have a redundant system?  If you
> can't afford 3 hours to dump/reload, can you actually afford any better
> the cost of the server itself going poof?

We've survived all h/w issues so far w/ minimal downtime, running
in degraded mode (i.e., having to yank out a CPU or RAM board) until
HP could come out and install a new one.  We also have dual-redundant
disk and storage controllers, even though it's been a good long time
since I've seen one of them die.

And I strongly dispute the notion that it would only take 3 hours
to dump/restore a TB of data.  This seems to point to a downside
of MVCC: the inability to do "page-level" database backups, which
allow for "rapid" restores, since all of the index structures are
part of the backup, and don't have to be created, in serial, as part
of the pg_restore.

--
-----------------------------------------------------------------
Ron Johnson, Jr. ron.l.johnson@cox.net
Jefferson, LA USA

"...always eager to extend a friendly claw"


Re: need for in-place upgrades (was Re: State of Beta 2)

From
Doug McNaught
Date:
Ron Johnson <ron.l.johnson@cox.net> writes:

> And I strongly dispute the notion that it would only take 3 hours
> to dump/restore a TB of data.  This seems to point to a downside
> of MVCC: the inability to do "page-level" database backups, which
> allow for "rapid" restores, since all of the index structures are
> part of the backup, and don't have to be created, in serial, as part
> of the pg_restore.

If you have a filesystem capable of atomic "snapshots" (Veritas offers
this I think), you *should* be able to do this fairly safely--take a
snapshot of the filesystem and back up the snapshot.  On a restore of
the snapshot, transactions in progress when the snapshot happened will
be rolled back, but everything that committed before then will be there
(same thing PG does when it recovers from a crash).  Of course, if you
have your database cluster split across multiple filesystems, this
might not be doable.

Note: I haven't done this, but it should work and I've seen it talked
about before.  I think Oracle does this at the storage manager level
when you put a database in backup mode; doing the same in PG would
probably be a lot of work.

This doesn't help with the upgrade issue, of course...

-Doug

Re: State of Beta 2

From
Lamar Owen
Date:
Joshua D. Drake wrote:
>  The initdb is not always a bad thing. In reality the idea of just being
> able to "upgrade" is not a good thing. Just think about the differences
> between 7.2.3 and 7.3.x... The most annoying (although appropriate) one
> being that integers can no longer be ''.

>  If we provide the ability to do a wholesale upgrade many things would
> just break. Heck even the connection protocol is different for 7.4.

Strawmen.  If we provide a good upgrade capability, we would simply
have to think about upgrades before changing features like that.  The
upgrade code could be cognizant of these sorts of things; and should be,
in fact.
--
Lamar Owen
Director of Information Technology
Pisgah Astronomical Research Institute



Re: need for in-place upgrades (was Re: State of Beta 2)

From
Lamar Owen
Date:
Marc G. Fournier wrote:
> 'k, but is it out of the question to pick up a duplicate server, and use
> something like eRServer to replicate the databases between the two
> systems, with the new system having the upgraded database version running
> on it, and then cutting over once its all in sync?

Can eRserver replicate a 7.3.x to a 7.2.x?  Or 7.4.x to 7.3.x?

Having the duplicate server is going to be a biggie; in my own case,
where I am contemplating a very large dataset (>100TB potentially), I am
being very thoughtful as to the storage mechanism, OS, etc.  eRserver
figures in to my plan, incidentally.  I am still in the early design
phase of this system; PostgreSQL may just be storing the index and the
metadata, and not the actual image data.  In which case we're only
talking a few million records.  The image data will be huge.  While I
_will_ have a redundant server (in a separate building), I'm not 100%
sure I'm going to do it at the application level.  As I have vast
amounts and numbers of 50/125 micron fiber run between buildings, as well as
a good amount of singlemode, I may be running a large SAN with Fibre
Channel (depending upon how cheaply the switches and HBA's can be
acquired).  I already have in place a fully meshed OC-12 network, which
I am expanding, to meet the regular data needs.  But ATM on OC-12 is
suboptimal for SAN use; really need fibre channel.

Now before anyone gets the idea that 'hey, you got money; buy another
server!' you might want to know that PARI is a non-profit; those OC-12
switches are either donated or surplus 3Com CoreBuilder 7000's
(available ridiculously cheaply on eBay), and the fiber was already here
when we acquired the site.  We are not rolling in dough, so to speak.
So there will be no surplus drives in the array, or surplus CPU's
either, to run a spare 'migration' server.  And I really don't want to
think about dump/restore of 100TB (if PostgreSQL actually stores the
image files, which it might).

As most everyone here knows, I am a big proponent of in-place upgrades,
and have been so for a very long time.  Read the archives; I've said my
piece, and am not going to rehash at this time.
--
Lamar Owen
Director of Information Technology
Pisgah Astronomical Research Institute



Re: need for in-place upgrades (was Re: State of Beta 2)

From
Dennis Gearon
Date:
Lamar Owen wrote:

>
> As most everyone here knows, I am a big proponent of in-place
> upgrades, and have been so for a very long time.  Read the archives;
> I've said my piece, and am not going to rehash at this time.

I look forward to when or if a sponsor can add in-place upgrades to
Postgres. Big projects like that, vs. upgrades, take focused, mid- or
long-term efforts with people who are committed to only that project.
Translation: money and skills.


Re: need for in-place upgrades (was Re: State of Beta 2)

From
"Marc G. Fournier"
Date:

On Sat, 13 Sep 2003, Lamar Owen wrote:

> Marc G. Fournier wrote:
> > 'k, but is it out of the question to pick up a duplicate server, and use
> > something like eRServer to replicate the databases between the two
> > systems, with the new system having the upgraded database version running
> > on it, and then cutting over once its all in sync?
>
> Can eRserver replicate a 7.3.x to a 7.2.x?  Or 7.4.x to 7.3.x?

I thought we were talking about upgrades here?


Re: State of Beta 2

From
Network Administrator
Date:
Not that I know anything about the internal workings of PG, but it seems like a
big part of the issue is the on-disk representation of the database.  I've never
had a problem with the whole dump/restore process, and in fact anyone who has
been doing this long enough will remember when that process was gospel for
db upgrades.  However, with 24x7 operations, or in general for anyone who
simply can NOT tolerate the downtime of an upgrade, I'm wondering if there is
perhaps a way to abstract the on-disk representation of PG data so that 1)
future upgrades do not have to maintain the same structure if another
representation is deemed better, and 2) upgrades could be done in place.

The abstraction I am talking about would be a logical layer that handles
disk I/O, including the format of that data (let's call this the ADH).  By
abstracting that information, the upgrade concern *could* become, "If I
upgrade from, say, 7.2.x to 7.3.x or 7.4.x, do I *want* to take advantage of
the new disk representation?"  If yes, then you would go through the necessary
process of upgrading the database, which would always default to the most
current representation.  If not, then because the ADH is abstract to the
application, it could run in a 7.2.x or 7.3.x "compatibility mode" so that you
would not *need* to do the dump and restore.

Again, I am completely ignorant of how this really works (and I don't have time
to read through the code), but what I think I'm getting at is a DBI/DBD type
scenario.  As a result, there would be another layer of complexity, and I would
think some performance loss as well; but how much complexity and performance
loss is, to me, the question.  When you juxtapose that against the ability to do
upgrades without the dump/restore, I would think many organizations would say,
"ok, I'll take the x% performance hit and wait until I have the resources to
upgrade the disk representation".

One of the things I'm involved with in Philadelphia is
providing IT services to social service programs for outsourced agencies of the
local government.  In particular, there have been and are active moves in PA to
have these social service data warehouses go up.  Even though it will probably
take years to actually realize this, by that time, once you aggregate all the
local agency databases together, we're going to be talking about very large
datasets.  That means that (at least for) social service programs, IT is going
to have to take into account this whole upgrade question from what I think will
be a standpoint of availability.  In short, I don't think it is too far off to
consider that the "little guys" will need to do reliable "in place" upgrades
with 100% confidence.

Hopefully, I was clear on my macro-concept even if I got the micro-concepts wrong.

Quoting Tom Lane <tgl@sss.pgh.pa.us>:

> Kaare Rasmussen <kar@kakidata.dk> writes:
> >> "interesting" category.  It is in the category of things that will only
> >> happen if people pony up money to pay someone to do uninteresting work.
> >> And for all the ranting, I've not seen any ponying.
>
> > Just for the record now that there's an argument that big companies need 24x7
> > - could you or someone else with knowledge of what's involved give a
> > guesstimate of how many ponies we're talking. Is it one man month, one man
> > year, more, or what?
>
> Well, the first thing that needs to happen is to redesign and
> reimplement pg_upgrade so that it works with current releases and is
> trustworthy for enterprise installations (the original script version
> depended far too much on being run by someone who knew what they were
> doing, I thought).  I guess that might take, say, six months for one
> well-qualified hacker.  But it would be an open-ended commitment,
> because pg_upgrade only really solves the problem of installing new
> system catalogs.  Any time we do something that affects the contents or
> placement of user table and index files, someone would have to figure
> out and implement a migration strategy.
>
> Some examples of things we have done recently that could not be handled
> without much more work: modifying heap tuple headers to conserve
> storage, changing the on-disk representation of array values, fixing
> hash indexes.  Examples of probable future changes that will take work:
> adding tablespaces, adding point-in-time recovery, fixing the interval
> datatype, generalizing locale support so you can have more than one
> locale per installation.
>
> It could be that once pg_upgrade exists in a production-ready form,
> PG developers will voluntarily do that extra work themselves.  But
> I doubt it (and if it did happen that way, it would mean a significant
> slowdown in the rate of development).  I think someone will have to
> commit to doing the extra work, rather than just telling other people
> what they ought to do.  It could be a permanent full-time task ...
> at least until we stop finding reasons we need to change the on-disk
> data representation, which may or may not ever happen.
>
>             regards, tom lane


--
Keith C. Perry
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com

____________________________________
This email account is being host by:
VCSN, Inc : http://vcsn.com

Re: need for in-place upgrades (was Re: State of Beta 2)

From
Lincoln Yeoh
Date:
At 07:16 PM 9/13/2003 -0400, Lamar Owen wrote:
>'migration' server.  And I really don't want to think about dump/restore
>of 100TB (if PostgreSQL actually stores the image files, which it might).

Hmm. Just curious, do people generally back up 100TB of data, or once most
reach this point do they have to hope that it's just hardware failures they'll
deal with and not software/other issues?

100TB sounds like a lot of backup media and time... Not to mention ensuring
that the backups will work with available and functioning backup hardware.

Head hurts just to think about it,

Link.

Re: State of Beta 2

From
Tom Lane
Date:
Network Administrator <netadmin@vcsn.com> writes:
> The abstraction I am talking about would be a logical layer that would handle
> disk I/O including the format of that data (lets call this the ADH).

This sounds good in the abstract, but I don't see how you would define
such a layer in a way that was both thin and able to cope with large
changes in representation.  In a very real sense, "handle disk I/O
including the format of the data" describes the entire backend.  To
create an abstraction layer that will actually give any traction for
maintenance, you'd have to find a way to slice it much more narrowly
than that.

Even if the approach can be made to work, defining such a layer and then
revising all the existing code to go through it would be a huge amount
of work.

Ultimately there's no substitute for hard work :-(

            regards, tom lane

Re: need for in-place upgrades (was Re: State of Beta 2)

From
Martin Marques
Date:
El Dom 14 Sep 2003 12:20, Lincoln Yeoh escribió:
> >At 07:16 PM 9/13/2003 -0400, Lamar Owen wrote:
> >'migration' server.  And I really don't want to think about dump/restore
> >of 100TB (if PostgreSQL actually stores the image files, which it might).
>
> Hmm. Just curious, do people generally backup 100TB of data, or once most
> reach this point they have to hope that it's just hardware failures they'll
> deal with and not software/other issues?

Normally you would have a RAID with mirroring and CRC, so that if one of the
disks in the array fails, the system keeps working. You can even
have hot-pluggable disks, so you can change the disk that is broken without
rebooting.

You can also have a hot backup using eRServ (Replicate your DB server on a
backup server, just in case).

> 100TB sounds like a lot of backup media and time... Not to mention ensuring
> that the backups will work with available and functioning backup hardware.

I don't know, but there may be backup systems for that amount of space. We
have just got some 200GB tape devices, and they are about 2 years old. With a
5-tape robot, you have 1TB of backup.
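The robot arithmetic above, as a quick check:

```shell
# 5 cartridges at 200 GB each:
TOTAL_GB=$((5 * 200))
echo "${TOTAL_GB} GB"   # prints "1000 GB", i.e. roughly 1 TB
```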

--
Porqué usar una base de datos relacional cualquiera,
si podés usar PostgreSQL?
-----------------------------------------------------------------
Martín Marqués                  |        mmarques@unl.edu.ar
Programador, Administrador, DBA |       Centro de Telematica
                       Universidad Nacional
                            del Litoral
-----------------------------------------------------------------


Re: need for in-place upgrades (was Re: State of Beta 2)

From
Christopher Browne
Date:
After a long battle with technology, martin@bugs.unl.edu.ar (Martin Marques), an earthling, wrote:
> El Dom 14 Sep 2003 12:20, Lincoln Yeoh escribió:
>> >At 07:16 PM 9/13/2003 -0400, Lamar Owen wrote:
>> >'migration' server.  And I really don't want to think about dump/restore
>> >of 100TB (if PostgreSQL actually stores the image files, which it might).
>>
>> Hmm. Just curious, do people generally backup 100TB of data, or once most
>> reach this point they have to hope that it's just hardware failures they'll
>> deal with and not software/other issues?
>
> Normally you would have a RAID with mirroring and CRC, so that if one of the
> disks in the array of disks falls, the system keeps working. You can even
> have hot-pluggable disks, so you can change the disk that is broken without
> rebooting.
>
> You can also have a hot backup using eRServ (Replicate your DB server on a
> backup server, just in case).

In a High Availability situation, there is little choice but to create
some form of "hot backup."  And if you can't afford that, then reality
is that you can't afford to pretend to have "High Availability."

>> 100TB sounds like a lot of backup media and time... Not to mention
>> ensuring that the backups will work with available and functioning
>> backup hardware.
>
> I don't know, but there may be backup systems for that amount of
> space. We have just got some 200Gb tape devices, and they are about
> 2 years old. With a 5 tape robot, you have 1TB of backup.

Certainly there are backup systems designed to cope with those sorts
of quantities of data.  With 8 tape drives, and a rack system that
holds 200 cartridges, you not only can store a HUGE pile of data, but
you can push it onto tape about as quickly as you can generate it.

<http://spectralogic.com> discusses how to use their hardware and
software products to do terabytes of backups in an hour.  They sell a
software product called "Alexandria" that knows how to (at least
somewhat) intelligently backup SAP R/3, Oracle, Informix, and Sybase
systems.  (When I was at American Airlines, that was the software in
use.)

Generally, this involves having a bunch of tape drives that are
simultaneously streaming different parts of the backup.

When it's Oracle that's in use, a common strategy involves
periodically doing a "hot" backup (so you can quickly get back to a
known database state), and then having a robot tape drive assigned to
regularly push archive logs to tape as they are produced.

That would more or less resemble taking a "consistent filesystem
backup" of a PG database, and then saving the sequence of WAL files.
(The disanalogies are considerable; that should improve at least a
_little_ once PITR comes along for PostgreSQL...)

None of this is particularly cheap or easy; need I remind gentle
readers that if you can't afford that, then you essentially can't
afford to claim "High Availability?"
--
select 'cbbrowne' || '@' || 'cbbrowne.com';
http://www.ntlug.org/~cbbrowne/nonrdbms.html
Who's afraid of ARPA?

Re: need for in-place upgrades (was Re: State of Beta 2)

From
Ron Johnson
Date:
On Sun, 2003-09-14 at 14:17, Christopher Browne wrote:
> After a long battle with technology,martin@bugs.unl.edu.ar (Martin Marques), an earthling, wrote:
> > El Dom 14 Sep 2003 12:20, Lincoln Yeoh escribió:
> >> >At 07:16 PM 9/13/2003 -0400, Lamar Owen wrote:
[snip]
> Certainly there are backup systems designed to cope with those sorts
> of quantities of data.  With 8 tape drives, and a rack system that
> holds 200 cartridges, you not only can store a HUGE pile of data, but
> you can push it onto tape about as quickly as you can generate it.
>
> <http://spectralogic.com> discusses how to use their hardware and
> software products to do terabytes of backups in an hour.  They sell a
> software product called "Alexandria" that knows how to (at least
> somewhat) intelligently backup SAP R/3, Oracle, Informix, and Sybase
> systems.  (When I was at American Airlines, that was the software in
> use._

HP, Hitachi, and a number of other vendors make similar hardware.

You mean the database vendors don't build that parallelism into
their backup procedures?

> Generally, this involves having a bunch of tape drives that are
> simultaneously streaming different parts of the backup.
>
> When it's Oracle that's in use, a common strategy involves
> periodically doing a "hot" backup (so you can quickly get back to a
> known database state), and then having a robot tape drive assigned to
> regularly push archive logs to tape as they are produced.

Rdb does the same thing.  You mean DB/2 can't/doesn't do that?

[snip]
> None of this is particularly cheap or easy; need I remind gentle
> readers that if you can't afford that, then you essentially can't
> afford to claim "High Availability?"

--
-----------------------------------------------------------------
Ron Johnson, Jr. ron.l.johnson@cox.net
Jefferson, LA USA

"(Women are) like compilers. They take simple statements and
make them into big productions."
Pitr Dubovitch


Re: State of Beta 2

From
Network Administrator
Date:
Quoting Tom Lane <tgl@sss.pgh.pa.us>:

> Network Administrator <netadmin@vcsn.com> writes:
> > The abstraction I am talking about would be a logical layer that would
> handle
> > disk I/O including the format of that data (lets call this the ADH).
>
> This sounds good in the abstract, but I don't see how you would define
> such a layer in a way that was both thin and able to cope with large
> changes in representation.  In a very real sense, "handle disk I/O
> including the format of the data" describes the entire backend.  To
> create an abstraction layer that will actually give any traction for
> maintenance, you'd have to find a way to slice it much more narrowly
> than that.

*nod* I thought that would probably be the case.  The "thickness" of that
layer would be directly related to how the backend is sliced.  However, it
seems to me that this might not be possible right now, while the backend is
changing between major releases.  Perhaps once it doesn't fluctuate as much,
it might be feasible to create this layer without making it too fat.

Maybe the goal is too aggressive.  To ask (hopefully) a simpler question:
would it be possible to choose the on-disk representation at compile time?
I'm not sure, but I think that might reduce the complexity, since the
abstraction would only exist before the application is built.  Once
compiled, there would be no ambiguity about which representation is chosen.

> Even if the approach can be made to work, defining such a layer and then
> revising all the existing code to go through it would be a huge amount
> of work.
>
> Ultimately there's no substitute for hard work :-(
>
>             regards, tom lane

True, which is why I've never been bothered about going through a process to
maintain my database's integrity and performance.  However, across my entire
client base, I will eventually reach a point where I need to do an "in
place" upgrade, or at least limit database downtime to a 60-minute window,
or less.



--
Keith C. Perry
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com

____________________________________
This email account is being hosted by:
VCSN, Inc : http://vcsn.com

Re: State of Beta 2

From
Tom Lane
Date:
Network Administrator <netadmin@vcsn.com> writes:
> ...  However, it seems to me that this might not be
> possible right now, while the backend is changing between major releases.
> Perhaps once it doesn't fluctuate as much, it might be feasible to
> create this layer without making it too fat.

Yeah, that's been in the back of my mind also.  Once we have tablespaces
and a couple of the other basic features we're still missing, it might
be a more reasonable proposition to freeze the on-disk representation.

At the very least we could quantize it a little more --- say, group
changes that affect user table representation into every third or fourth
release.

But until we have a production-quality "pg_upgrade" this is all moot.

            regards, tom lane

Table spaces (was Re: State of Beta 2)

From
Ron Johnson
Date:
On Sun, 2003-09-14 at 23:08, Tom Lane wrote:
> Network Administrator <netadmin@vcsn.com> writes:
> > ...  However, it seems to me that this might not be
> > possible right now, while the backend is changing between major releases.
> > Perhaps once it doesn't fluctuate as much, it might be feasible to
> > create this layer without making it too fat.
>
> Yeah, that's been in the back of my mind also.  Once we have tablespaces
> and a couple of the other basic features we're still missing, it might
> be a more reasonable proposition to freeze the on-disk representation.

I think that every effort should be made so that the on-disk structure
(ODS) doesn't have to change when tablespaces is implemented.
I.e., oid-based files live side-by-side with tablespaces.

At a minimum, it should be "ok, you don't *have* to do a dump/restore
to migrate to v7.7, but if you want the tablespaces that are brand
new in v7.7, you must dump data, and recreate the schema with
tablespaces before restoring".

--
-----------------------------------------------------------------
Ron Johnson, Jr. ron.l.johnson@cox.net
Jefferson, LA USA

"(Women are) like compilers. They take simple statements and
make them into big productions."
Pitr Dubovitch


Re: need for in-place upgrades (was Re: State of

From
Christopher Browne
Date:
In the last exciting episode, ron.l.johnson@cox.net (Ron Johnson) wrote:
> On Sun, 2003-09-14 at 14:17, Christopher Browne wrote:
>> <http://spectralogic.com> discusses how to use their hardware and
>> software products to do terabytes of backups in an hour.  They sell a
>> software product called "Alexandria" that knows how to (at least
>> somewhat) intelligently backup SAP R/3, Oracle, Informix, and Sybase
>> systems.  (When I was at American Airlines, that was the software in
>> use.)
>
> HP, Hitachi, and a number of other vendors make similar hardware.
>
> You mean the database vendors don't build that parallelism into
> their backup procedures?

They don't necessarily build every conceivable bit of possible
functionality into the backup procedures they provide, if that's what
you mean.

Of the systems mentioned, I'm most familiar with SAP's backup
regimen; if you're using it with Oracle, you'll use tools called
"brbackup" and "brarchive", which provide a _moderately_ sophisticated
scheme for backing things up.

But if you need to do something wild, involving two nearby servers, each
with 8 tape drives, that are used to manage backups for a whole cluster
of systems, including a combination of OS backups, DB backups, and
application backups, it's _not_ reasonable to expect one DB vendor's
backup tools to be totally adequate for that.

Alexandria (and similar software) certainly needs tool support from DB
makers to allow them to intelligently handle streaming the data out of
the databases.

At present, this unfortunately _isn't_ something PostgreSQL does, from
two perspectives:

 1.  You can't simply keep the WALs and reapply them in order to bring
     a second database up to date;

 2.  A pg_dump doesn't provide a way of streaming parts of the
     database in parallel, at least not if all the data is in
     one database.  (There's some nifty stuff in eRServ that
     might eventually be relevant, but probably not yet...)

There are partial answers:

 - If there are multiple databases, starting multiple pg_dump
   sessions provides some useful parallelism;

 - A suitable logical volume manager may allow splitting off
   a copy atomically, and then you can grab the resulting data
   in "strips" to pull it in parallel.

Life isn't always perfect.
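The first partial answer above can be sketched in a few lines: one pg_dump
per database, run concurrently so several dump streams move at once.  This
is only a sketch; the database names and output directory are hypothetical.

```python
# Sketch of the "multiple pg_dump sessions" workaround above.
# Database names and the output directory are made up for illustration.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def dump_command(db, outdir="/backup"):
    # Custom-format dump (-Fc) of a single database to its own file.
    return ["pg_dump", "-Fc", "-f", f"{outdir}/{db}.dump", db]

def parallel_dump(dbs, workers=4):
    # One dump per database, several running at once; collect exit codes.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        codes = pool.map(
            lambda db: subprocess.run(dump_command(db)).returncode, dbs)
    return list(codes)

# parallel_dump(["sales", "archive", "reporting"])  # needs a live cluster
```

The parallelism only helps, of course, when the data is spread across
multiple databases; a single large database still dumps as one stream.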

>> Generally, this involves having a bunch of tape drives that are
>> simultaneously streaming different parts of the backup.
>>
>> When it's Oracle that's in use, a common strategy involves
>> periodically doing a "hot" backup (so you can quickly get back to a
>> known database state), and then having a robot tape drive assigned
>> to regularly push archive logs to tape as they are produced.
>
> Rdb does the same thing.  You mean DB/2 can't/doesn't do that?

I haven't the foggiest idea, although I would be somewhat surprised if
it doesn't have something of the sort.
--
(reverse (concatenate 'string "moc.enworbbc" "@" "enworbbc"))
http://www.ntlug.org/~cbbrowne/wp.html
Rules of  the Evil Overlord #139. "If  I'm sitting in my  camp, hear a
twig  snap, start  to  investigate, then  encounter  a small  woodland
creature, I  will send out some scouts  anyway just to be  on the safe
side. (If they disappear into the foliage, I will not send out another
patrol; I will break out napalm and Agent Orange.)"
<http://www.eviloverlord.com/>

Re: Table spaces (was Re: State of Beta 2)

From
Tom Lane
Date:
Ron Johnson <ron.l.johnson@cox.net> writes:
> On Sun, 2003-09-14 at 23:08, Tom Lane wrote:
>> Yeah, that's been in the back of my mind also.  Once we have tablespaces
>> and a couple of the other basic features we're still missing, it might
>> be a more reasonable proposition to freeze the on-disk representation.

> I think that every effort should be made so that the on-disk structure
> (ODS) doesn't have to change when tablespaces is implemented.

That's not going to happen --- tablespaces will be complex enough
without trying to support a backwards-compatible special case.

If we have a workable pg_upgrade by the time tablespaces happen, it
would be reasonable to expect it to be able to rearrange the user data
files of an existing installation into the new directory layout.  If
we don't, the issue is moot anyway.

            regards, tom lane

Re: need for in-place upgrades (was Re: State of

From
Lamar Owen
Date:
Martin Marques wrote:
> El Dom 14 Sep 2003 12:20, Lincoln Yeoh escribió:
>>>At 07:16 PM 9/13/2003 -0400, Lamar Owen wrote:
>>>'migration' server.  And I really don't want to think about dump/restore
>>>of 100TB (if PostgreSQL actually stores the image files, which it might).
>>Hmm. Just curious, do people generally backup 100TB of data, or once most
>>reach this point they have to hope that it's just hardware failures they'll
>>deal with and not software/other issues?
> Normally you would have a RAID with mirroring and CRC, so that if one of the
> disks in the array of disks falls, the system keeps working. You can even
> have hot-pluggable disks, so you can change the disk that is broken without
> rebooting.

I did mention a SAN running Fibre Channel.  I would have a portion of
the array in one building, and a portion of the array in another
building 1500 feet away.  I have lots of fiber between buildings, a
portion of which I am currently using.  So I can and will be doing RAID
over FC in a SAN, with spatial separation between portions of the array.
Now whether it is geographically separate _enough_, well, that's a
different question.  But I have thought through those issues already.

Using FC as a SAN in this way will complement my HA solution, which may
just be a hot failover server connected to the same SAN.  I am still
investigating the failover mechanism; having two separate database data
stores has its advantages (software errors can render a RAID worse than
useless, since the RAID will distribute file corruption very
effectively).  But I am not sure how it will work at present.

The buildings in question are somewhat unique, being that the portions
of the buildings I would be using were constructed by the US Army Corps
of Engineers.  See www.pari.edu for more information.
--
Lamar Owen
Director of Information Technology
Pisgah Astronomical Research Institute



Re: need for in-place upgrades (was Re: State of Beta 2)

From
Lamar Owen
Date:
Marc G. Fournier wrote:
> On Sat, 13 Sep 2003, Lamar Owen wrote:
>>Can eRserver replicate a 7.3.x to a 7.2.x?  Or 7.4.x to 7.3.x?
> I thought we were talking about upgrades here?

If eRserver can be used as a funnel for upgrading, then by definition it
must be able to replicate an older version to a newer one.  I was just
asking to see whether eRserver indeed has that capability.  If so, then it
may be useful for those who can deal with a fully replicated datastore,
which might be an issue for various reasons.
--
Lamar Owen
Director of Information Technology
Pisgah Astronomical Research Institute



Re: need for in-place upgrades (was Re: State of

From
"Joshua D. Drake"
Date:
>
> 100TB sounds like a lot of backup media and time... Not to mention
> ensuring that the backups will work with available and functioning
> backup hardware.

It is a lot, but it is not a lot for something like an insurance company
or a bank.  Also, 100TB is probably non-compressed, although 30TB is still
large.


>
> Head hurts just to think about it,
>
> Link.
>


--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
The most reliable support for the most reliable Open Source database.



Re: State of Beta 2

From
"Joshua D. Drake"
Date:
> Strawmen.  If we provide a good upgrade capability, we would just
> simply have to think about upgrades before changing features like
> that.  The upgrade code could be cognizant of these sorts of things;
> and shoud be, in fact.

Sure, but IMHO it would be more important to fix bugs like the parser not
correctly using indexes on bigint unless the value is quoted...

I think everyone would agree that not having to use initdb would be nice,
but I think there are much more important things to focus on.

Besides, if you are upgrading PostgreSQL in a production environment, I
would assume there is an extremely valid reason.  If the reason is big
enough to do a major version upgrade, then an initdb shouldn't be all
that bad of a requirement.

J



--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
The most reliable support for the most reliable Open Source database.



Re: need for in-place upgrades (was Re: State of

From
Lamar Owen
Date:
Joshua D. Drake wrote:
> It is alot but is is not a lot for something like an Insurance company
> or a bank. Also 100TB is probably non-compressed although 30TB is still
> large.

Our requirements are such that this figure is our best guess after
compression.  The amount of data prior to compression is much larger,
and consists of highly compressible astronomical observations in FITS
format.
--
Lamar Owen
Director of Information Technology
Pisgah Astronomical Research Institute



Re: State of Beta 2

From
Lamar Owen
Date:
Joshua D. Drake wrote:
> Sure but IMHO it would be more important to fix bugs like the parser not
> correctly using indexes on bigint unless the value is quoted...

> I think everyone would agree that not having to use initdb would be nice
> but I think there is much more important things to focus on.

Important is relative.

> Besides if you are upgrading PostgreSQL in a production environment I
> would assume there would be an extremely valid reason. If the reason is
> big enough to do a major version upgrade then an initdb shouldn't be all
> that bad of a requirement.

I'm not going to rehash the arguments I have made before; they are all
archived.  Suffice to say you are simply wrong.  The number of
complaints over the years shows that there IS a need.
--
Lamar Owen
Director of Information Technology
Pisgah Astronomical Research Institute



Re: State of Beta 2

From
Andrew Rawnsley
Date:
When I started this thread I commented on the fact that this initdb
issue is treated somewhat "casually" on the lists.  I'm not trying to
flame, or be an ass or anything, but this is kind of what I meant.

Yes, I know there are many important issues the developers (bless their
overworked fingers) want/need to address that affect many people, and
I'm not going to presume to fault their choices.  Some things matter
more to some of us than others (the bigint indexing issue means little
to me, for example), so we point these things out in the hope that
someone may pick up on them, or that the discussion may bear fruitful
solutions no one had considered.

The initdb situation is a significant problem/obstacle for many people.
Avoiding it would be far more than 'nice' for us.

On Monday, September 15, 2003, at 02:24 PM, Joshua D. Drake wrote:

>
>> Strawmen.  If we provide a good upgrade capability, we would just
>> simply have to think about upgrades before changing features like
>> that.  The upgrade code could be cognizant of these sorts of things;
>> and shoud be, in fact.
>
> Sure but IMHO it would be more important to fix bugs like the parser
> not correctly using indexes on bigint unless the value is quoted...
>
> I think everyone would agree that not having to use initdb would be
> nice but I think there is much more important things to focus on.
>
> Besides if you are upgrading PostgreSQL in a production environment I
> would assume there would be an extremely valid reason. If the reason
> is big enough to do a major version upgrade then an initdb shouldn't
> be all that bad of a requirement.
>
> J
>
>
>
> --
> Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
> Postgresql support, programming shared hosting and dedicated hosting.
> +1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
> The most reliable support for the most reliable Open Source database.
>
>
>
--------------------

Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com


Re: State of Beta 2

From
"Joshua D. Drake"
Date:
> I'm not going to rehash the arguments I have made before; they are all
> archived.  Suffice to say you are simply wrong.  The number of
> complaints over the years shows that there IS a need.


I at no point suggested that there was not a need.  I only suggest that
the need may not be as great as some suspect or feel.  To be honest -- if
your arguments were the "need" that everyone had, it would have been
implemented somehow.  It hasn't been yet, which suggests that the number
of people who have the "need" at your level is not as great as the
number of people who have different "needs" from PostgreSQL.

Sincerely,

Joshua Drake




--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
The most reliable support for the most reliable Open Source database.



Re: need for in-place upgrades

From
Vivek Khera
Date:
>>>>> "MGF" == Marc G Fournier <scrappy@postgresql.org> writes:

MGF> On Sat, 13 Sep 2003, Lamar Owen wrote:

>> Can eRserver replicate a 7.3.x to a 7.2.x?  Or 7.4.x to 7.3.x?

MGF> I thought we were talking about upgrades here?


I'm *really* interested in how eRServer works on migrating from 7.2 to
7.4 (either eRServer 1.2 or 1.3 :-) )  I have hopes of doing this once
7.4 goes gold.  More testing for me, I guess.


--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.                Khera Communications, Inc.
Internet: khera@kciLink.com       Rockville, MD       +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/

Re: State of Beta 2

From
Vivek Khera
Date:
>>>>> "JDD" == Joshua D Drake <jd@commandprompt.com> writes:

JDD> Besides if you are upgrading PostgreSQL in a production environment I
JDD> would assume there would be an extremely valid reason. If the reason
JDD> is big enough to do a major version upgrade then an initdb shouldn't
JDD> be all that bad of a requirement.

One of my major reasons for wanting to move from 7.2 to 7.4 is that I
suffer from incredible index bloat.  Reindexing one of my tables takes
about 45 minutes for each of its 3 indexes, during which time part of my
system is blocked.

Granted, the one-time cost of the migration to 7.4 will probably be about
5 hours of dump/restore, but at least with the reindexing I can do one
45-minute block at a time, stretched over a few days, early in the
morning.
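That one-index-per-window approach can be sketched as a tiny driver that
emits one psql invocation per index, to be run one per maintenance window.
The index and database names here are hypothetical.

```python
# Hypothetical sketch: spread REINDEX of one bloated table's indexes
# across separate early-morning windows instead of a single long outage.
# Index names are made up for illustration.
INDEXES = ["big_table_pkey", "big_table_created_idx", "big_table_owner_idx"]

def reindex_commands(indexes, db="production"):
    # One psql invocation per index; run one per window.
    return [["psql", "-d", db, "-c", f'REINDEX INDEX "{ix}";']
            for ix in indexes]

for cmd in reindex_commands(INDEXES):
    print(" ".join(cmd))
```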

I think some sort of scripted migration/upgrade tool that used
eRServer would be way cool.

--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.                Khera Communications, Inc.
Internet: khera@kciLink.com       Rockville, MD       +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/

Re: State of Beta 2

From
"Marc G. Fournier"
Date:

On Mon, 15 Sep 2003, Joshua D. Drake wrote:

>
> > I'm not going to rehash the arguments I have made before; they are all
> > archived.  Suffice to say you are simply wrong.  The number of
> > complaints over the years shows that there IS a need.
>
>
> I at no point suggested that there was not a need. I only suggest that
> the need may not be as great as some suspect or feel. To be honest -- if
> your arguments were the "need" that everyone had... it would have been
> implemented some how. It hasn't yet which would suggest that the number
> of people that have the "need" at your level is not as great as the
> number of people who have different "needs" from PostgreSQL.

Just to add to this ... Bruce *did* start pg_upgrade, but I don't recall
anyone else looking at extending it ... if the *need* was so great,
someone would have step'd up and looked into adding to what was already
there ...


Re: need for in-place upgrades (was Re: State of

From
Ron Johnson
Date:
On Mon, 2003-09-15 at 14:40, Lamar Owen wrote:
> Joshua D. Drake wrote:
> > It is alot but is is not a lot for something like an Insurance company
> > or a bank. Also 100TB is probably non-compressed although 30TB is still
> > large.
>
> Our requirements are such that this figure is our best guess after
> compression.  The amount of data prior to compression is much larger,
> and consists of highly compressible astronomical observations in FITS
> format.

Just MHO, but I'd think about keeping the images outside of the
database (or in a separate database), since pg_dump is single-
threaded, and thus 1 CPU will be hammered trying to compress the
FITS files, while the other CPU(s) sit idle.

Of course, you could compress the images on the front end, saving
disk space and do uncompressed pg_dumps.  The pg_dump would be IO
bound, then.  But I'm sure you thought of that already...

The images would have to be uncompressed at view time, but that
could happen on the client, thus saving bandwidth, and distributing
CPU needs.
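A rough sketch of that front-end compression idea, assuming plain zlib on
the client (the actual FITS handling and the bytea storage step are
omitted):

```python
# Compress each image on the client before it is stored, so the server
# and pg_dump handle already-compressed bytes and stay I/O-bound.
import zlib

def compress_image(raw: bytes) -> bytes:
    # Done on the client; the result would go into a bytea column.
    return zlib.compress(raw, 6)

def decompress_image(stored: bytes) -> bytes:
    # Done on the client at view time, distributing the CPU cost.
    return zlib.decompress(stored)

sample = b"FITS-like, highly repetitive pixel data " * 1000
stored = compress_image(sample)
assert decompress_image(stored) == sample
assert len(stored) < len(sample)  # repetitive data compresses well
```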

http://h18006.www1.hp.com/products/storageworks/esl9000/index.html
This box is pretty spiffy: "up to 119 TB of native capacity",
"Multi-unit scalability supporting up to 64 drives and 2278
cartridges".
Too bad it doesn't mention Linux.

--
-----------------------------------------------------------------
Ron Johnson, Jr. ron.l.johnson@cox.net
Jefferson, LA USA

Great Inventors of our time:
Al Gore -> Internet
Sun Microsystems -> Clusters


Re: need for in-place upgrades (was Re: State of

From
Ron Johnson
Date:
On Mon, 2003-09-15 at 14:40, Lamar Owen wrote:
> Joshua D. Drake wrote:
> > It is alot but is is not a lot for something like an Insurance company
> > or a bank. Also 100TB is probably non-compressed although 30TB is still
> > large.
>
> Our requirements are such that this figure is our best guess after
> compression.  The amount of data prior to compression is much larger,
> and consists of highly compressible astronomical observations in FITS
> format.

Wow, it just occurred to me: if you partition the data correctly,
you won't need to back it *all* up on a daily/weekly/monthly basis.

Once you back up a chunk of compressed images ("Orion, between 2001-
01-01 and 2001-01-31") a few times, no more need to back that data
up.

Thus, you don't need monster archival h/w like some of us do.
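That partitioning idea reduces to a tiny selection step at backup time;
the month-named chunks and the archived set below are hypothetical:

```python
# Once a chunk ("Orion, 2001-01") has been archived a few times, later
# backup runs skip it.  Chunk names and the archive log are made up.
def chunks_to_backup(all_chunks, already_archived):
    # Only chunks not yet safely on tape need to go in this run.
    return [c for c in all_chunks if c not in already_archived]

archived = {"2001-01", "2001-02"}
todo = chunks_to_backup(["2001-01", "2001-02", "2001-03"], archived)
assert todo == ["2001-03"]
```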

--
-----------------------------------------------------------------
Ron Johnson, Jr. ron.l.johnson@cox.net
Jefferson, LA USA

484,246 sq mi are needed for 6 billion people to live, 4 persons
per lot, in lots that are 60'x150'.
That is ~ California, Texas and Missouri.
Alternatively, France, Spain and The United Kingdom.


Re: State of Beta 2

From
Ron Johnson
Date:
On Mon, 2003-09-15 at 13:24, Joshua D. Drake wrote:
> > Strawmen.  If we provide a good upgrade capability, we would just
> > simply have to think about upgrades before changing features like
> > that.  The upgrade code could be cognizant of these sorts of things;
> > and shoud be, in fact.
>
> Sure but IMHO it would be more important to fix bugs like the parser not
> correctly using indexes on bigint unless the value is quoted...
>
> I think everyone would agree that not having to use initdb would be nice
> but I think there is much more important things to focus on.
>
> Besides if you are upgrading PostgreSQL in a production environment I
> would assume there would be an extremely valid reason. If the reason is
> big enough to do a major version upgrade then an initdb shouldn't be all
> that bad of a requirement.

Hmmm.  A (US-oriented) hypothetical:
BOSS: The app works now.  Why rock the boat?
DBA: The new version has features that will save 20% disk space,
     and speed up certain operations by 75% every day.
BOSS: Fantastic!  How long will it take to upgrade?
DBA: 18 hours.
BOSS: 18 hours!!  We can only take that much downtime on Thanks-
      giving weekend, or 3-day July 4th, Christmas or New Year's
      weekends.

--
-----------------------------------------------------------------
Ron Johnson, Jr. ron.l.johnson@cox.net
Jefferson, LA USA

"(Women are) like compilers. They take simple statements and
make them into big productions."
Pitr Dubovitch


Re: State of Beta 2

From
Ron Johnson
Date:
On Mon, 2003-09-15 at 15:23, Joshua D. Drake wrote:
> > I'm not going to rehash the arguments I have made before; they are all
> > archived.  Suffice to say you are simply wrong.  The number of
> > complaints over the years shows that there IS a need.
>
>
> I at no point suggested that there was not a need. I only suggest that
> the need may not be as great as some suspect or feel. To be honest -- if
> your arguments were the "need" that everyone had... it would have been
> implemented some how. It hasn't yet which would suggest that the number
> of people that have the "need" at your level is not as great as the
> number of people who have different "needs" from PostgreSQL.

But the problem is that as more and more people put larger and larger
mission-critical datasets into PostgreSQL, the need will grow larger
and larger.

Of course, we understand the "finite resources" issue, and are not
badgering/complaining.  Simply, we are trying to make our case that
this is something that should go on the TODO list, and be kept in
the back of developers' minds.

--
-----------------------------------------------------------------
Ron Johnson, Jr. ron.l.johnson@cox.net
Jefferson, LA USA

"You ask us the same question every day, and we give you the
same answer every day. Someday, we hope that you will believe us..."
U.S. Secretary of Defense Donald Rumsfeld, to a reporter


Re: State of Beta 2

From
"Joshua D. Drake"
Date:
>Hmmm.  A (US-oriented) hypothetical:
>BOSS: The app works now.  Why rock the boat?
>DBA: The new version has features that will save 20% disk space,
>     and speed up certain operations by 75% every day.
>BOSS: Fantastic!  How long will it take to upgrade?
>DBA: 18 hours.
>BOSS: 18 hours!!  We can only take that much downtime on Thanks-
>      giving weekend, or 3-day July 4th, Christmas or New Year's
>      weekends.
>
>

Sounds like you just found several weekends a year when you can do the
upgrade ;).  Yes, that was a joke.

Sincerely,

Joshua Drake





Re: State of Beta 2

From
Lamar Owen
Date:
Marc G. Fournier wrote:
> On Mon, 15 Sep 2003, Joshua D. Drake wrote:
>>>I'm not going to rehash the arguments I have made before;

>>I at no point suggested that there was not a need. I only suggest that
>>the need may not be as great as some suspect or feel. To be honest -- if
>>your arguments were the "need" that everyone had... it would have been
>>implemented some how. It hasn't yet which would suggest that the number

> Just to add to this ... Bruce *did* start pg_upgrade, but I don't recall
> anyone else looking at extending it ... if the *need* was so great,
> someone would have step'd up and looked into adding to what was already
> there ...

You'ns are going to make a liar out of me yet; I said I wasn't going to
rehash the arguments.  But I am going to answer Marc's statement.  Need
of the users != developer interest in implementing it.  This is the
ugly fact of open source software -- it is developer-driven, not
user-driven.  If it were user-driven, seamless upgrading would have
already happened in this case.  But the sad fact is that the people who
have the necessary knowledge of the codebase in question are so
complacent and comfortable with the current dump/reload cycle that they
really don't seem to care about the upgrade issue.  That is quite a
harsh statement to make, yes, and I know it is kind of uncharacteristic
of me.  But, Marc, your statement thoroughly ignores the archived
history of this issue on the lists.

While pg_upgrade was a good first step (and I applaud Bruce for working
on it), it was promptly broken because the developers who changed the
on-disk format felt it wasn't important to make it continue working.

Stepping up to the plate on this issue will require an intimate
knowledge of the storage manager subsystem, a thorough knowledge of the
system catalogs, etc.  This has been discussed at length; I'll not
repeat it.  Just any old developer can't do this -- it needs the
long-term focused attention of Tom, Jan, or Bruce.  And that isn't going
to happen.  We know Tom's take on it; it's archived.  Maybe there's
someone out there with the deep knowledge of the backend to make this
happen who cares enough about it to make it happen, and who has the time
to do it.  I care enough to do the work; but I have neither the deep
knowledge necessary nor the time to make it happen.  There are many in
my position.  But those who could make it happen don't seem to have the
care level to do so.

And that has nothing to do with user need as a whole, since the care
level I mentioned is predicated by the developer interest level.  While
I know, Marc, how the whole project got started (I have read the first
posts), and I appreciate that you, Bruce, Thomas, and Vadim started the
original core team because you were and are users of PostgreSQL, I
sincerely believe that in this instance you are out of touch with this
need of many of today's userbase. And I say that with full knowledge of
PostgreSQL Inc.'s support role.  If given the choice between upgrading
capability, PITR, and Win32 support, my vote would go to upgrading.
Then migrating to PITR won't be a PITN.

What good are great features if it's a PITN to get upgraded to them?
--
Lamar Owen
Director of Information Technology
Pisgah Astronomical Research Institute



Re: State of Beta 2

From
Kaare Rasmussen
Date:
> repeat it.  Just any old developer can't do this -- it needs the
> long-term focused attention of Tom, Jan, or Bruce.  And that isn't going

I believe that none of these people was born with the knowledge of how
PostgreSQL works.  An experienced developer with the time and the money
would be able to solve your problem.

> to do it.  I care enough to do the work; but I have neither the deep
> knowledge necessary nor the time to make it happen.  There are many in

You and others state that this is a very important issue.  But it's really
only an issue if you can't ever have a service window.  People who can't
have service windows are running very expensive solutions, and ought to be
able to afford the expense of a developer.  They get so much for free; if
this is the only problem they have, they should pool their money and hire
a programmer.

> my position.  But those who could make it happen don't seem to have the
> care level to do so.

They're occupied with other matters.  And yes - they often choose based on
personal interest - and an upgrade tool will never make the top 5 for any
normal developer ;-)

> What good are great features if it's a PITN to get upgraded to them?

I still believe that users who can't upgrade are few and far between.  If
there are so many, why won't they sponsor the cost of an upgrade utility?

--
Kaare Rasmussen            --Linux, spil,--        Tlf:        3816 2582
Kaki Data                tshirts, merchandize      Fax:        3816 2501
Howitzvej 75               Åben 12.00-18.00        Email: kar@kakidata.dk
2000 Frederiksberg        Lørdag 12.00-16.00       Web:      www.suse.dk

Re: State of Beta 2

From
Lamar Owen
Date:
Kaare Rasmussen wrote:
>>repeat it.  Just any old developer can't do this -- it needs the
>>long-term focused attention of Tom, Jan, or Bruce.  And that isn't going

> I believe that neither of these people was born with the knowledge of how
> PostgreSQL is working. An experienced developer with the time and the money
> would be able to solve your problem.

There is a typo in my post; the indefinite article should be prepended
to the list of names; to solve this problem, we need _a_ Tom, Jan, or
Bruce, meaning a core-grade developer with substantial experience in
this codebase.

> You and others state that this is a very important issue. But it's really only
> an issue if you can't ever have a service window. If people don't have
> service windows, they have very expensive solutions and ought to be able to
> afford the expense of a developer. They get so much for free, so if this is
> the only problem they have, they should collect and hire a programmer.

This is an issue for more than those you state.  I have had numerous
complaints as RPM maintainer for the surprise people have when they find
out that PostgreSQL just has to be different from every other package
that they upgrade.  But again, the issues are well documented in the
archives, and my patience for people who want to rehash these well
documented things is wearing thin.  Tom has said that I have eloquently
stated my side of the argument, which, incidentally, I took as a massive
compliment (many thanks Tom), even though I don't personally feel it was
very eloquent.  So read the archives, it is very thoroughly stated
there.  But if I must continue restating what I have seen and heard,
then I guess I must.

And there are times dump/restore fails.  Read the archives for those times.

It is ludicrous to require a dump/restore.  I'm sorry, but that is my
studied opinion of the matter, formed over a period of 5 years.  And I don't
care if Oracle or anybody else in the RDBMS field also does this; it is
still ludicrous.

> They're occupied with other matters. And yes - they often choose from personal
> interest - and an upgrade tool will never make top 5 for any normal developer
> ;-)

And that's the root of the problem, as I already stated.

> I still believe that users who can't upgrade are few and far between. If they
> are so many, why won't they sponsor the cost for an upgrade utility ?

Read the archives, and read Red Hat's bugzilla for PostgreSQL before
making blanket unsubstantiated statements like that.
--
Lamar Owen
Director of Information Technology
Pisgah Astronomical Research Institute



Re: State of Beta 2

From
Andrew Rawnsley
Date:
On Tuesday, September 16, 2003, at 10:18 AM, Kaare Rasmussen wrote:

>> repeat it.  Just any old developer can't do this -- it needs the
>> long-term focused attention of Tom, Jan, or Bruce.  And that isn't
>> going
>
> I believe that neither of these people was born with the knowledge of
> how
> PostgreSQL is working. An experienced developer with the time and the
> money
> would be able to solve your problem.
>

I'll take Tom's word for it that it wouldn't be trivial, so I don't
think it's quite as casual as that. I wouldn't mind stepping up, but
doing so would negate the need, as my business would fail.

>> to do it.  I care enough to do the work; but I have neither the deep
>> knowledge necessary nor the time to make it happen.  There are many in
>
> You and others state that this is a very important issue. But it's
> really only
> an issue if you can't ever have a service window. If people don't have
> service windows, they have very expensive solutions and ought to be
> able to
> afford the expense of a developer. They get so much for free, so if
> this is
> the only problem they have, they should collect and hire a programmer.
>

Having a service window and wanting or being able to use it for
something that is bound to make people nervous are two different things.
And the idea that people without service windows have enough money to
hire developers is complete fantasy, I'm sorry.

Look, I'm not pissing and moaning about the developers' lack of
attention or anything. They do
an amazing job. I understand the 'scratch the itch' nature of the
development, and the great amount
of progress they've made with what was a huge pile of garbled code. At
the same time,
everyone wants to advocate Postgres as an enterprise-ready system. If
you're going to do that,
you have to acknowledge the stumbling blocks. This is one of them. I
can pretty much guarantee
that I will not be allowed to upgrade several clients' systems unless
there's a real show-stopper,
because once I tell them what I have to do they'll tell me to get lost.

I've already given up on Oracle and DB2, and I'm not going back, so
I'll deal with this situation
as best I can. That and I run a small shop, so I don't need to bow to
politics or services/brand
requirements. A lot of people are not so fortunate, and anything that
can be said against
Postgres (or MySQL, FreeBSD, Linux, JBoss, whatever) becomes a hurdle
when trying to push for it.


>> my position.  But those who could make it happen don't seem to have
>> the
>> care level to do so.
>
> They're occupied with other matters. And yes - they often choose from
> personal
> interest - and an upgrade tool will never make top 5 for any normal
> developer
> ;-)
>
>> What good are great features if it's a PITN to get upgraded to them?
>
> I still believe that users who can't upgrade are few and far between.
> If they
> are so many, why won't they sponsor the cost for an upgrade utility ?
>

We don't exactly meet for beers every Thursday. In the end, I imagine
it will still take the attention
of one or more of the core developers. If any of them want to be
involved, I don't mind considering the
possibility.


> --
> Kaare Rasmussen            --Linux, spil,--        Tlf:        3816
> 2582
> Kaki Data                tshirts, merchandize      Fax:        3816
> 2501
> Howitzvej 75               Åben 12.00-18.00        Email:
> kar@kakidata.dk
> 2000 Frederiksberg        Lørdag 12.00-16.00       Web:
> www.suse.dk
>
> ---------------------------(end of
> broadcast)---------------------------
> TIP 3: if posting/reading through Usenet, please send an appropriate
>       subscribe-nomail command to majordomo@postgresql.org so that your
>       message can get through to the mailing list cleanly
>
--------------------

Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com


Re: State of Beta 2

From
Vivek Khera
Date:
>>>>> "MGF" == Marc G Fournier <scrappy@postgresql.org> writes:

MGF> On Mon, 15 Sep 2003, Joshua D. Drake wrote:


MGF> Just to add to this ... Bruce *did* start pg_upgrade, but I don't recall
MGF> anyone else looking at extending it ... if the *need* was so great,
MGF> someone would have step'd up and looked into adding to what was already
MGF> there ...

Hmmm, this is the math I just did in my head:

 time to implement pg_upgrade = X
 time to dump/restore once per year = Y

If X > Y*2 then why bother expending the effort?  Now, if the X was
distributed over a bunch of people, perhaps it would make sense to me.
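Vivek's back-of-the-envelope comparison can be sketched with shell arithmetic. The hour figures and site count below are illustrative assumptions, not numbers from this thread:

```shell
# All figures are hypothetical, for illustration only.
X=500      # guessed hours to build and maintain an in-place upgrade tool
Y=16       # guessed hours for one site's yearly dump/restore
SITES=100  # installations that would share the benefit

# For a single site over two years, the tool costs more than it saves.
if [ "$X" -gt $((Y * 2)) ]; then
  echo "single site: dump/restore is cheaper"
fi

# Amortized across many sites, the same effort pays for itself.
if [ "$X" -lt $((Y * SITES)) ]; then
  echo "across many sites: the tool pays for itself"
fi
```

The second test is the point Vivek makes: the effort only makes sense if its cost is distributed over a bunch of people.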

--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.                Khera Communications, Inc.
Internet: khera@kciLink.com       Rockville, MD       +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/

Re: State of Beta 2

From
Vivek Khera
Date:
>>>>> "JDD" == Joshua D Drake <jd@commandprompt.com> writes:

>> BOSS: 18 hours!!  We can only take that much downtime on Thanks-
>> giving weekend, or 3-day July 4th, Christmas or New Year's
>> weekends.
>>

JDD> Sounds like you just found several weekends a year that you
JDD> can do the upgrade with ;).  Yes that was a joke.

it's not a joke around here!

every major long weekend, late saturday night or early saturday
morning, i log in and run major maintenance such as reindexing.  so
far i've only had to do the dump/restore once from 7.1 to 7.2, and
that too happened on a long weekend.
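A long-weekend maintenance window like the one Vivek describes can be scripted as a dry run that prints the SQL first; the table and database names here are placeholders, not from the thread, and you would pipe the output to psql to actually run it:

```shell
# Dry run: print the maintenance SQL for review.
# To execute for real (names are hypothetical): sh maint.sh | psql mydb
for t in users orders invoices; do
  echo "REINDEX TABLE $t;"
done
echo "VACUUM ANALYZE;"
```

Printing before executing keeps the fat-finger risk down during an off-hours window.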

--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.                Khera Communications, Inc.
Internet: khera@kciLink.com       Rockville, MD       +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/

Re: need for in-place upgrades

From
Christopher Browne
Date:
khera@kcilink.com (Vivek Khera) writes:
>>>>>> "MGF" == Marc G Fournier <scrappy@postgresql.org> writes:
> MGF> On Sat, 13 Sep 2003, Lamar Owen wrote:
>
>>> Can eRserver replicate a 7.3.x to a 7.2.x?  Or 7.4.x to 7.3.x?
>
> MGF> I thought we were talking about upgrades here?
>
> I'm *really* interested in how eRServer works on migrating from 7.2 to
> 7.4 (either eRServer 1.2 or 1.3 :-) )  I have hopes of doing this once
> 7.4 goes gold.  More testing for me, I guess.

I know that 7.2 to 7.3 is being actively looked at, but you're
presumably not getting straight answers on this because nobody has
FINISHED testing the process.

In any case, if your data is a "big deal" to you, there's no question
of doing some sort of blind "Download it, double click the icon;
accept the license agreement, and convert it all."

eRServer is a complex enough critter that you would doubtless want to do
a "dry run" on a pair of test databases in order to make sure you know
what things need to be fiddled with in order to get it right.

There's going to be at least a _little_ bit of an outage involved in
switching the direction of replication between the databases, and you
surely want to do a dry run to let you know _all_ the details so that
you can build a checklist suitable to make sure that Going Live goes
as quickly and smoothly as possible, and to keep that outage as short
as possible.

Unfortunately, there are no "infinite" shortcuts to be had.  (Not that
there aren't vendors out there willing to try to sell them... :-))
--
(reverse (concatenate 'string "ofni.smrytrebil" "@" "enworbbc"))
<http://dev6.int.libertyrms.com/>
Christopher Browne
(416) 646 3304 x124 (land)

Re: State of Beta 2

From
Karsten Hilbert
Date:
> If X > Y*2 then why bother expending the effort?  Now, if the X was
> distributed over a bunch of people, perhaps it would make sense to me.
Since

 affordability(amount(X)) != amount(X)

Karsten
--
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

Re: State of Beta 2

From
"Joshua D. Drake"
Date:
>
> And that has nothing to do with user need as a whole, since the care
> level I mentioned is predicated by the developer interest level.
> While I know, Marc, how the whole project got started (I have read the
> first posts), and I appreciate that you, Bruce, Thomas, and Vadim
> started the original core team because you were and are users of
> PostgreSQL, I sincerely believe that in this instance you are out of
> touch with this need of many of today's userbase. And I say that with
> full knowledge of PostgreSQL Inc.'s support role.  If given the choice
> between upgrading capability, PITR, and Win32 support, my vote would
> go to upgrading. Then migrating to PITR won't be a PITN.

If someone is willing to pony up $2000.00 per month for a period of at
least 6 months, I will dedicate one of my programmers to the task. So
if you want it badly enough, there it is. I will donate all changes,
patches etc. to the project and I will cover the additional costs that
are over and above the $12,000. If we get it done quicker, all the better.

Sincerely,

Joshua Drake

--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
PostgreSQL support, programming, shared hosting and dedicated hosting.
+1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
The most reliable support for the most reliable Open Source database.



Re: State of Beta 2

From
Network Administrator
Date:
Hmmm, ok, I can't retask any of my people or reallocate funds for this year,
but I can personally do 5 to 10% of that monthly cost.

A few more people and a project plan - then the ball could roll  :)

Quoting "Joshua D. Drake" <jd@commandprompt.com>:

>
> >
> > And that has nothing to do with user need as a whole, since the care
> > level I mentioned is predicated by the developer interest level.
> > While I know, Marc, how the whole project got started (I have read the
> > first posts), and I appreciate that you, Bruce, Thomas, and Vadim
> > started the original core team because you were and are users of
> > PostgreSQL, I sincerely believe that in this instance you are out of
> > touch with this need of many of today's userbase. And I say that with
> > full knowledge of PostgreSQL Inc.'s support role.  If given the choice
> > between upgrading capability, PITR, and Win32 support, my vote would
> > go to upgrading. Then migrating to PITR won't be a PITN.
>
> If someone is willing to pony up 2000.00 per month for a period of at
> least 6 months, I will dedicated one of my programmers to the task. So
> if you want it bad enough there it is. I will donate all changes,
> patches etc.. to the project and I will cover the additional costs that
> are over and above the 12,000. If we get it done quicker, all the better.
>
> Sincerely,
>
> Joshua Drake
>
> --
> Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
> Postgresql support, programming shared hosting and dedicated hosting.
> +1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
> The most reliable support for the most reliable Open Source database.
>
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 8: explain analyze is your friend
>


--
Keith C. Perry
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com

____________________________________
This email account is being hosted by:
VCSN, Inc : http://vcsn.com

Re: State of Beta 2

From
Andrew Rawnsley
Date:
Let me run some numbers. I'm interested in the idea, and I think I can
push one of my clients on it.

Do the core folk (Tom/Bruce/Jan/etc) think this is doable with that
sort of time commitment? Is it maintainable over time? Or are we
pissing in the wind?

On Tuesday, September 16, 2003, at 03:59 PM, Joshua D. Drake wrote:

>
>>
>> And that has nothing to do with user need as a whole, since the care
>> level I mentioned is predicated by the developer interest level.
>> While I know, Marc, how the whole project got started (I have read
>> the first posts), and I appreciate that you, Bruce, Thomas, and Vadim
>> started the original core team because you were and are users of
>> PostgreSQL, I sincerely believe that in this instance you are out of
>> touch with this need of many of today's userbase. And I say that with
>> full knowledge of PostgreSQL Inc.'s support role.  If given the
>> choice between upgrading capability, PITR, and Win32 support, my vote
>> would go to upgrading. Then migrating to PITR won't be a PITN.
>
> If someone is willing to pony up 2000.00 per month for a period of at
> least 6 months, I will dedicated one of my programmers to the task. So
> if you want it bad enough there it is. I will donate all changes,
> patches etc.. to the project and I will cover the additional costs
> that are over and above the 12,000. If we get it done quicker, all the
> better.
>
> Sincerely,
>
> Joshua Drake
>
> --
> Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
> Postgresql support, programming shared hosting and dedicated hosting.
> +1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
> The most reliable support for the most reliable Open Source database.
>
>
>
> ---------------------------(end of
> broadcast)---------------------------
> TIP 8: explain analyze is your friend
>
--------------------

Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com


Re: State of Beta 2

From
"Marc G. Fournier"
Date:

> > And that has nothing to do with user need as a whole, since the care
> > level I mentioned is predicated by the developer interest level.
> > While I know, Marc, how the whole project got started (I have read the
> > first posts), and I appreciate that you, Bruce, Thomas, and Vadim
> > started the original core team because you were and are users of
> > PostgreSQL, I sincerely believe that in this instance you are out of
> > touch with this need of many of today's userbase.

Huh?  I have no disagreement that upgrading is a key feature that we are
lacking ... but, if there are any *on disk* changes between releases, how
do you propose 'in place upgrades'?  Granted, if it's just changes to the
system catalogs and such, pg_upgrade should be able to be taught to handle
it ... I haven't seen anyone step up to do so, and for someone spending so
much time pushing for an upgrade path, I haven't seen you pony up the time
...

Just curious here ... but, with all the time you've spent pushing for an
"easy upgrade path", have you looked at the other RDBMSs and how they deal
with upgrades?  I think it's going to be a sort of apples-to-oranges thing,
since I imagine that most of the 'big ones' don't change their disk
formats anymore ...

What I'd be curious about is how badly we compare as far as major releases
are concerned ... I don't believe we've had an x.y.z release yet that
required a dump/reload (and if so, it was a very very special
circumstance), but what about x.y releases?  In Oracle's case, I don't
think they do x.y.z releases, do they?  Only X and x.y?

K, looking back through that it almost sounds like a ramble ... hopefully
you understand what I'm asking ...

I know when I was at the University, and they dealt with Oracle upgrades,
the guys plan'd for a weekend ...

Re: State of Beta 2

From
Mike Mascari
Date:
Lamar Owen wrote:

> And that has nothing to do with user need as a whole, since the care
> level I mentioned is predicated by the developer interest level.  While
> I know, Marc, how the whole project got started (I have read the first
> posts), and I appreciate that you, Bruce, Thomas, and Vadim started the
> original core team because you were and are users of PostgreSQL, I
> sincerely believe that in this instance you are out of touch with this
> need of many of today's userbase. And I say that with full knowledge of
> PostgreSQL Inc.'s support role.  If given the choice between upgrading
> capability, PITR, and Win32 support, my vote would go to upgrading. Then
> migrating to PITR won't be a PITN.

Ouch. I'd like to see an easy upgrade path, but I'd rather have a 7.5
with PITR than an in-place upgrade. Perhaps the demand for either is
associated with the size of the db vs. the fear associated with an
inability to restore to a point-in-time. My fear of an accidental:

DELETE FROM foo;

is greater than my loathing of the upgrade process.

> What good are great features if it's a PITN to get upgraded to them?

What good is an in-place upgrade without new features?

(I'm kinda joking here) ;-)

Mike Mascari
mascarm@mascari.com




Re: State of Beta 2

From
Dennis Gearon
Date:
It'd be EXTREMELY cool if there was some relationship between the code for:

    PITR and
    In-place upgrades

Any possibility of overlap?

Mike Mascari wrote:

>Lamar Owen wrote:
>
>
>
>>And that has nothing to do with user need as a whole, since the care
>>level I mentioned is predicated by the developer interest level.  While
>>I know, Marc, how the whole project got started (I have read the first
>>posts), and I appreciate that you, Bruce, Thomas, and Vadim started the
>>original core team because you were and are users of PostgreSQL, I
>>sincerely believe that in this instance you are out of touch with this
>>need of many of today's userbase. And I say that with full knowledge of
>>PostgreSQL Inc.'s support role.  If given the choice between upgrading
>>capability, PITR, and Win32 support, my vote would go to upgrading. Then
>>migrating to PITR won't be a PITN.
>>
>>
>
>Ouch. I'd like to see an easy upgrade path, but I'd rather have a 7.5
>with PITR then an in-place upgrade. Perhaps the demand for either is
>associated with the size of the db vs. the fear associated with an
>inability to restore to a point-in-time. My fear of an accidental:
>
>DELETE FROM foo;
>
>is greater than my loathing of the upgrade process.
>
>
>
>>What good are great features if it's a PITN to get upgraded to them?
>>
>>
>
>What good is an in-place upgrade without new features?
>
>(I'm kinda joking here) ;-)
>
>Mike Mascari
>mascarm@mascari.com
>
>
>
>
>
>


Re: State of Beta 2

From
"scott.marlowe"
Date:
As I understand it, changes that require the dump/restore fall into two
categories: catalog changes, and on-disk format changes.  If that's the
case (I'm as likely wrong as right here, I know) then it could be that
most upgrades (say 7.4 to 7.5) could be accomplished more easily than the
occasional ones that require actual disk format changes (i.e. 7.5 to 8.0).

If that's the case, I'd imagine that as PostgreSQL gets more mature, in-place
upgrades should become easier to implement, and dump/restore would
only be required for major version upgrades at some point.

Is that about right, and if so, would it make maintaining this kind of
program simpler if it only had to handle catalog changes?

On Tue, 16 Sep 2003, Andrew Rawnsley wrote:

>
> Let me run some numbers. I'm interested in the idea, and I think I can
> push one of my clients on it.
>
> Do the core folk (Tom/Bruce/Jan/etc) think this is doable with that
> sort of time commitment? Is it maintainable over time? Or are we
> pissing in the wind?
>
> On Tuesday, September 16, 2003, at 03:59 PM, Joshua D. Drake wrote:
>
> >
> >>
> >> And that has nothing to do with user need as a whole, since the care
> >> level I mentioned is predicated by the developer interest level.
> >> While I know, Marc, how the whole project got started (I have read
> >> the first posts), and I appreciate that you, Bruce, Thomas, and Vadim
> >> started the original core team because you were and are users of
> >> PostgreSQL, I sincerely believe that in this instance you are out of
> >> touch with this need of many of today's userbase. And I say that with
> >> full knowledge of PostgreSQL Inc.'s support role.  If given the
> >> choice between upgrading capability, PITR, and Win32 support, my vote
> >> would go to upgrading. Then migrating to PITR won't be a PITN.
> >
> > If someone is willing to pony up 2000.00 per month for a period of at
> > least 6 months, I will dedicated one of my programmers to the task. So
> > if you want it bad enough there it is. I will donate all changes,
> > patches etc.. to the project and I will cover the additional costs
> > that are over and above the 12,000. If we get it done quicker, all the
> > better.
> >
> > Sincerely,
> >
> > Joshua Drake
> >
> > --
> > Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
> > Postgresql support, programming shared hosting and dedicated hosting.
> > +1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
> > The most reliable support for the most reliable Open Source database.
> >
> >
> >
> > ---------------------------(end of
> > broadcast)---------------------------
> > TIP 8: explain analyze is your friend
> >
> --------------------
>
> Andrew Rawnsley
> President
> The Ravensfield Digital Resource Group, Ltd.
> (740) 587-0114
> www.ravensfield.com
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 2: you can get off all lists at once with the unregister command
>     (send "unregister YourEmailAddressHere" to majordomo@postgresql.org)
>


Re: State of Beta 2

From
Andrew Rawnsley
Date:
On Tuesday, September 16, 2003, at 04:51 PM, Marc G. Fournier wrote:
>
> Just curious here ... but, with all the time you've spent pushing for
> an
> "easy upgrade path", have you looked at the other RDBMSs and how they
> deal
> with upgrades?  I think its going to be a sort of apples-to-oranges
> thing,
> since I imagine that most of the 'big ones' don't change their disk
> formats anymore ...
>

That's probably the thing - they've written the on-disk stuff in stone
by now. DB2 has a lot of function rebinding to do, but that's probably a
different issue.

Tying this to my last post, concerning Joshua's offer to put up the labor
if we can put up the dough: given the fact that Postgres is still in flux,
do you think it's even possible to do some sort of in-place upgrade, not
knowing what may come up when you're writing 7.6?

In other words, if we pony up and get something written now, will it
need further development every time an x.y release comes up?

> What I'd be curious about is how badly we compare as far as major
> releases
> are concerned ... I don't believe we've had a x.y.z release yet that
> required a dump/reload (and if so, it was a very very special
> circumstance), but what about x.y releases?  In Oracle's case, i don't
> think they do x.y.z releases, do they?  Only X and x.y?
>

Lord, who knows what they're up to. They do (or did) x.y.z releases
(I'm using 8.1.6), but publicly they're
calling everything 8i,9i,10g yahdah yahdah yahdah.

I certainly will concede that (to me), upgrading Postgres is easier
than Oracle, as I can configure, compile, install,
do an initdb, and generate an entire large DDL in the time it takes the
abysmal Oracle installer to even start. Then try
to install/upgrade it on an 'unsupported' linux, like Slack...but I
don't have to do anything with the data.

To a PHB/PHC (pointy-haired-client), saying 'Oracle' is like giving
them a box of Depends, even though it doesn't save them
from a fire hose. They feel safe.

> K, looking back through that it almost sounds like a ramble ...
> hopefully
> you understand what I'm asking ...
>
> I know when I was at the University, and they dealt with Oracle
> upgrades,
> the guys plan'd for a weekend ...
>
> ---------------------------(end of
> broadcast)---------------------------
> TIP 4: Don't 'kill -9' the postmaster
>
--------------------

Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com


Re: State of Beta 2

From
"Joshua D. Drake"
Date:
Hello,

  I would imagine that it would be maintainable, but it would be
something that would have to be constantly maintained from release to
release. It would have to become part of the actual project or it would
die.

  The reason I chose six months is that I figure it will be 30 days of
full time just dinking around to make sure we have a solid handle on how
things are done for this part of the code. Then we would know what we
think it would take. It's a gut theory, but I believe it can be done, or
at least a huge jump made on it.


Sincerely,

Joshua Drake


Andrew Rawnsley wrote:

>
> Let me run some numbers. I'm interested in the idea, and I think I can
> push one of my clients on it.
>
> Do the core folk (Tom/Bruce/Jan/etc) think this is doable with that
> sort of time commitment? Is it maintainable over time? Or are we
> pissing in the wind?
>
> On Tuesday, September 16, 2003, at 03:59 PM, Joshua D. Drake wrote:
>
>>
>>>
>>> And that has nothing to do with user need as a whole, since the care
>>> level I mentioned is predicated by the developer interest level.
>>> While I know, Marc, how the whole project got started (I have read
>>> the first posts), and I appreciate that you, Bruce, Thomas, and
>>> Vadim started the original core team because you were and are users
>>> of PostgreSQL, I sincerely believe that in this instance you are out
>>> of touch with this need of many of today's userbase. And I say that
>>> with full knowledge of PostgreSQL Inc.'s support role.  If given the
>>> choice between upgrading capability, PITR, and Win32 support, my
>>> vote would go to upgrading. Then migrating to PITR won't be a PITN.
>>
>>
>> If someone is willing to pony up 2000.00 per month for a period of at
>> least 6 months, I will dedicated one of my programmers to the task.
>> So if you want it bad enough there it is. I will donate all changes,
>> patches etc.. to the project and I will cover the additional costs
>> that are over and above the 12,000. If we get it done quicker, all
>> the better.
>>
>> Sincerely,
>>
>> Joshua Drake
>>
>> --
>> Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
>> Postgresql support, programming shared hosting and dedicated hosting.
>> +1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
>> The most reliable support for the most reliable Open Source database.
>>
>>
>>
>> ---------------------------(end of broadcast)---------------------------
>> TIP 8: explain analyze is your friend
>>
> --------------------
>
> Andrew Rawnsley
> President
> The Ravensfield Digital Resource Group, Ltd.
> (740) 587-0114
> www.ravensfield.com


--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
The most reliable support for the most reliable Open Source database.



Re: State of Beta 2

From
"Joshua D. Drake"
Date:
> Tying to my last post, concerning Joshua's offer to put up the labor
> if we can put up the dough, given the
> fact that Postgres is still in flux, do you think its even possible to
> do some sort of in-place upgrade, not knowing
> what may come up when you're writing 7.6?
>
> In other words, if we pony up and get something written now, will it
> need further development every time an x.y release comes up.

There is probably no question that it will need further development.
However, I would imagine that once the initial grunt work is done it
would be much easier to migrate the code (especially if it is
continually maintained) to newer releases.

My thought process is that we would start with 7.4 codebase and as it
migrates to 7.5 move the work directly to 7.5 and if possible release
for 7.5 (although that really may be pushing it).

J




>
>> What I'd be curious about is how badly we compare as far as major
>> releases
>> are concerned ... I don't believe we've had a x.y.z release yet that
>> required a dump/reload (and if so, it was a very very special
>> circumstance), but what about x.y releases?  In Oracle's case, i don't
>> think they do x.y.z releases, do they?  Only X and x.y?
>>
>
> Lord, who knows what they're up to. They do (or did) x.y.z releases
> (I'm using 8.1.6), but publicly they're
> calling everything 8i,9i,10g yahdah yahdah yahdah.
>
> I certainly will concede that (to me), upgrading Postgres is easier
> than Oracle, as I can configure, compile, install,
> do an initdb, and generate an entire large DDL in the time it takes
> the abysmal Oracle installer to even start. Then try
> to install/upgrade it on an 'unsupported' linux, like Slack...but I
> don't have to do anything with the data.
>
> To a PHB/PHC (pointy-haired-client), saying 'Oracle' is like giving
> them a box of Depends, even though it doesn't save them
> from a fire hose. They feel safe.
>
>> K, looking back through that it almost sounds like a ramble ...
>> hopefully
>> you understand what I'm asking ...
>>
>> I know when I was at the University, and they dealt with Oracle
>> upgrades,
>> the guys plan'd for a weekend ...
>>
>> ---------------------------(end of broadcast)---------------------------
>> TIP 4: Don't 'kill -9' the postmaster
>>
> --------------------
>
> Andrew Rawnsley
> President
> The Ravensfield Digital Resource Group, Ltd.
> (740) 587-0114
> www.ravensfield.com


--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
The most reliable support for the most reliable Open Source database.



Re: State of Beta 2

From
Robert Creager
Date:
Once upon a time (Tue, 16 Sep 2003 12:59:37 -0700)
"Joshua D. Drake" <jd@commandprompt.com> uttered something amazingly similar to:

> If someone is willing to pony up 2000.00 per month for a period of at
> least 6 months, I will dedicate one of my programmers to the task. So
> if you want it bad enough, there it is. I will donate all changes,
> patches etc. to the project, and I will cover the additional costs that
> are over and above the 12,000. If we get it done quicker, all the better.
>

Well, if you're willing to set up some sort of escrow, I'll put in $100.  I
don't do db's except for play, but I hate the dump/restore part.  I've lost data
two times fat-fingering the upgrade, trying to use two running installations on
the same machine.  I'm that good...

Cheers,
Rob

--
 21:28:34 up 46 days, 14:03,  4 users,  load average: 2.00, 2.00, 2.00


Re: State of Beta 2

From
Dennis Gearon
Date:
Robert Creager wrote:

>Once upon a time (Tue, 16 Sep 2003 12:59:37 -0700)
>"Joshua D. Drake" <jd@commandprompt.com> uttered something amazingly similar to:
>
>
>
>>If someone is willing to pony up 2000.00 per month for a period of at
>>least 6 months, I will dedicated one of my programmers to the task. So
>>if you want it bad enough there it is. I will donate all changes,
>>patches etc.. to the project and I will cover the additional costs that
>>are over and above the 12,000. If we get it done quicker, all the better.
>>
>>
>>
>
>Well, if you're willing to set up some sort of escrow, I'll put in $100.  I
>don't do db's except for play, but I hate the dump/restore part.  I've lost data
>two times fat-fingering the upgrade, trying to use two running installations on
>the same machine.  I'm that good...
>
>Cheers,
>Rob
>
>
>
Is that $100 once, or $100 x 6 months of anticipated development time?


Re: State of Beta 2

From
Tom Lane
Date:
Andrew Rawnsley <ronz@ravensfield.com> writes:
> On Tuesday, September 16, 2003, at 03:59 PM, Joshua D. Drake wrote:
>> If someone is willing to pony up 2000.00 per month for a period of at
>> least 6 months, I will dedicated one of my programmers to the task.

> Do the core folk (Tom/Bruce/Jan/etc) think this is doable with that
> sort of time commitment?

While I dislike staring gift horses in the mouth, I have to say that
the people I think could do it (a) are getting paid more than $24K/yr,
and (b) are names already seen regularly in the PG commit logs.  If
there's anyone in category (b) who works for Command Prompt, I missed
the connection.

I have no doubt that a competent programmer could learn the Postgres
innards well enough to do the job; as someone pointed out earlier in
this thread, none of the core committee was born knowing Postgres.
I do, however, doubt that it can be done in six months if one has
any significant learning curve to climb up first.

            regards, tom lane

Re: State of Beta 2

From
"Mark Cave-Ayland"
Date:
> Date: Tue, 16 Sep 2003 14:39:47 -0700
> From: "Joshua D. Drake" <jd@commandprompt.com>
> To: Andrew Rawnsley <ronz@ravensfield.com>
> Cc: "Marc G. Fournier" <scrappy@postgresql.org>,
>    PgSQL General ML <pgsql-general@postgresql.org>
> Subject: Re: State of Beta 2
> Message-ID: <3F678323.7000708@commandprompt.com>
>
> >
> > Tying to my last post, concerning Joshua's offer to put up the labor

> > if we can put up the dough, given the
> > fact that Postgres is still in flux, do you think its even possible
to
> > do some sort of in-place upgrade, not knowing
> > what may come up when you're writing 7.6?
> >
> > In other words, if we pony up and get something written now, will it

> > need further development every time an x.y release comes up.
>
> There is probably no question that it will need further development.
> However, I would imagine that once the initial grunt work is done it
> would be much easier to migrate the code (especially if it is
> continually maintained) to newer releases.
>
> My thought process is that we would start with 7.4 codebase and as it
> migrates to 7.5 move the work directly to 7.5 and if possible release
> for 7.5 (although that really may be pushing it).
>
> J

While everyone is throwing around ideas on this one.....

Would it not be possible to reserve the first few pages of each file
that stores tuples to store some metadata that describes the on-disk
structure and the DB version? If the DB version in the existing files
doesn't match the current version of the postmaster then it
automatically launches pg_upgrade on startup.

Hopefully this would minimise the work that would need to be done to
pg_upgrade between versions, since the only changes between versions
would be to provide the mappings between the on-disk structures of the
existing files (which could easily be determined by parsing the metadata
from the existing files) and the modified on-disk structure required by
the new version. (Ok I know this doesn't deal with the catalog issues
but hopefully it would be a step in the right direction).
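Mark's version-stamp idea can be sketched in a few lines. In this Python illustration the header layout, magic value, and version numbers are all invented (PostgreSQL's real page format differs); it only shows the check that would trigger pg_upgrade on a mismatch:

```python
import struct

# Hypothetical 8-byte file header: a 4-byte magic plus a 4-byte on-disk
# format version.  PostgreSQL's real page layout differs; this only
# illustrates the version-stamp idea from the message above.
MAGIC = 0x50475550          # "PGUP", an invented marker
HEADER = struct.Struct(">II")

def write_header(version):
    """Produce the metadata block a data file would start with."""
    return HEADER.pack(MAGIC, version)

def needs_upgrade(header_bytes, current_version):
    """True if the file's stamped format differs from the running server's.

    A postmaster doing this check would launch pg_upgrade here rather
    than merely reporting the mismatch.
    """
    magic, on_disk = HEADER.unpack(header_bytes[:HEADER.size])
    if magic != MAGIC:
        raise ValueError("not a recognised data file")
    return on_disk != current_version
```

The point of the sketch is that the mapping work moves into whatever interprets the stamped version, while the detection itself stays trivial.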


Cheers,

Mark.

---

Mark Cave-Ayland
Webbased Ltd.
Tamar Science Park
Derriford
Plymouth
PL6 8BX
England

Tel: +44 (0)1752 764445
Fax: +44 (0)1752 764446


This email and any attachments are confidential to the intended
recipient and may also be privileged. If you are not the intended
recipient please delete it from your system and notify the sender. You
should not copy it or use it for any purpose nor disclose or distribute
its contents to any other person.



Re: State of Beta 2

From
Kaare Rasmussen
Date:
>> If someone is willing to pony up 2000.00 per month for a period of at
>> least 6 months, I will dedicated one of my programmers to the task.

I raised the "how much will it cost" question, but I'm beginning to think that
it's the wrong approach. From the answers in this thread I do believe that it
will be an eternal chase, with a near-certainty of errors.

Some people have claimed that the big commercial databases don't change their
on-disk representation anymore. Maybe PostgreSQL could try to aim for this
goal. At least try to get the on-disk changes ready for 7.5 - with or without
the functionality to use it. I think that any pg_* table changes could be
done with a small and efficient pg_upgrade.

Big items that will change the way PostgreSQL stores its data would be:
Tablespaces
PITR
...
More?

I know it's not possible to tell the future, but if Oracle is steady,
shouldn't it be possible?

How do other Open Source systems do ? MySQL (or maybe better: InnoDB),
FireBird ??

--
Kaare Rasmussen            --Linux, spil,--        Tlf:        3816 2582
Kaki Data                tshirts, merchandize      Fax:        3816 2501
Howitzvej 75               Åben 12.00-18.00        Email: kar@kakidata.dk
2000 Frederiksberg        Lørdag 12.00-16.00       Web:      www.suse.dk

Re: State of Beta 2

From
Peter Childs
Date:
On Wed, 17 Sep 2003, Mark Cave-Ayland wrote:

> > Date: Tue, 16 Sep 2003 14:39:47 -0700
> > From: "Joshua D. Drake" <jd@commandprompt.com>
> > To: Andrew Rawnsley <ronz@ravensfield.com>
> > Cc: "Marc G. Fournier" <scrappy@postgresql.org>,
> >    PgSQL General ML <pgsql-general@postgresql.org>
> > Subject: Re: State of Beta 2
> > Message-ID: <3F678323.7000708@commandprompt.com>
> >
> > >
> > > Tying to my last post, concerning Joshua's offer to put up the labor
>
> > > if we can put up the dough, given the
> > > fact that Postgres is still in flux, do you think its even possible
> to
> > > do some sort of in-place upgrade, not knowing
> > > what may come up when you're writing 7.6?
> > >
> > > In other words, if we pony up and get something written now, will it
>
> > > need further development every time an x.y release comes up.
> >
> > There is probably no question that it will need further development.
> > However, I would imagine that once the intial grunt work is done it
> > would be much easier to migrate the code (especially if it is
> > continually maintained) to newer releases.
> >
> > My thought process is that we would start with 7.4 codebase and as it
> > migrates to 7.5 move the work directly to 7.5 and if possible release
> > for 7.5 (although that really may be pushing it).
> >
> > J
>
> While everyone is throwing around ideas on this one.....
>
> Would it not be possible to reserve the first few pages of each file
> that stores tuples to store some metadata that describes the on-disk
> structure and the DB version? If the DB version in the existing files
> doesn't match the current version of the postmaster then it
> automatically launches pg_upgrade on startup.
>
> Hopefully this would minimise the work that would need to be done to
> pg_upgrade between versions, since the only changes between versions
> would be to provide the mappings between the on-disk structures of the
> existing files (which could easily be determined by parsing the metadata
> from the existing files) and the modified on-disk structure required by
> the new version. (Ok I know this doesn't deal with the catalog issues
> but hopefully it would be a step in the right direction).
>
>
    Silly point I know. But...

    I don't really mind having to rebuild the database from backup to
gain new features. What I really can't stand is that every new version
breaks half the clients. A client that worked with 7.1 should still work
with 7.4.
    This is because much of the meta-data about the database is only
available from the system catalogs.
    I know there is a standard for SQL meta-data, but nobody follows it.
    If the clients did not break (like they did with 7.3) you could
then write the upgrade program as a client:

1. Initialise its own database root, with the old database still running.
2. Read the old database into the new data root (backup, like pg_dump...).
3. Shut down the old database.
4. Open the new database properly.
5. Delete the old database.

    This way the database *should* only be down for a few seconds
while we actually swap postmasters.
    This should even be faster (or seem faster) than modifying on-disk
structures, because the database is only down for the time it takes to
stop one postmaster and start the new one.
    The only problem I can see is the need for lots of free disk
space to store two databases.....
    Replication would also help this, if it ever gets finished!
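The five numbered steps amount to a dump-and-restore orchestrated from outside the server. A rough sketch of the sequence, where the paths, ports, and commands are only illustrative (not an official procedure):

```python
def upgrade_plan(old_port=5432, new_port=5433):
    """Ordered shell steps for the client-driven upgrade described above.

    Paths, ports and flags are illustrative assumptions, not an official
    PostgreSQL procedure.  The old server keeps serving until step 3, so
    perceived downtime is only the stop/restart swap.
    """
    return [
        "initdb -D /srv/pg-new",                              # 1. new data root
        f"pg_ctl -D /srv/pg-new -o '-p {new_port}' start",
        f"pg_dumpall -p {old_port} | psql -p {new_port}",     # 2. copy while old runs
        "pg_ctl -D /srv/pg-old stop",                         # 3. old database down
        f"pg_ctl -D /srv/pg-new -o '-p {old_port}' restart",  # 4. new one takes over
        "rm -r /srv/pg-old",                                  # 5. remove once verified
    ]
```

The ordering is the whole trick: the expensive dump/restore happens before the old postmaster stops, so only steps 3-4 sit inside the outage window.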

Peter Childs


Re: State of Beta 2

From
Peter Childs
Date:
On Wed, 17 Sep 2003, Kaare Rasmussen wrote:

>
> How do other Open Source systems do ? MySQL (or maybe better: InnoDB),
> FireBird ??
>
>
    Well, MySQL for one has more than one on-disk format....
One that supports transactions and one that does not. Looks like they do
it by writing different tables in different ways. A very stupid thing to
do, if you ask me.

Peter Childs


Re: State of Beta 2

From
"Marc G. Fournier"
Date:

On Wed, 17 Sep 2003, Kaare Rasmussen wrote:

> I know it's not possible to tell the future, but if Oracle is steady,
> shouldn't it be possible?

Also consider that Oracle has 'the big bucks' to dedicate a group of staff
to keep on top of the upgrade issues ...


Re: State of Beta 2

From
Ron Johnson
Date:
On Wed, 2003-09-17 at 03:45, Kaare Rasmussen wrote:
[snip]
> Some people have claimed that the big commercial databases don't change their
> on-disk represantation anymore. Maybe PostgreSQL could try to aim for this
> goal. At least try to get the on-disk changes ready for 7.5 - with or without
> the functionality to use it. I think that any pg_* table changes could be
> done with a small and efficient pg_upgrade.
[snip]

I think changes in the system catalog should be separated from
changes in the physical on-disk structures (i.e. how tables and
indexes are stored).

Maybe I'm totally wrong, but ALTERing the pg_* tables during each
version upgrade should be relatively easy to script, when the physical
on-disk structures have been solidified.
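As a toy illustration of such scripting, here is a minimal sketch that collects per-release catalog ALTERs from a hand-maintained table; the statements and version keys are invented examples, not real 7.x catalog migrations:

```python
# Hypothetical per-release catalog changes, keyed by target version.
# The DDL shown is invented for illustration, not actual PostgreSQL
# system-catalog migrations.
CATALOG_MIGRATIONS = {
    "7.4": ["ALTER TABLE pg_example ADD COLUMN relkind2 char"],
    "7.5": ["ALTER TABLE pg_example ALTER COLUMN relname TYPE name"],
}

def migrations_between(from_ver, to_ver):
    """Collect, in order, the ALTER statements needed to step the
    catalogs forward from one version to another."""
    order = sorted(CATALOG_MIGRATIONS)
    return [sql
            for ver in order
            if from_ver < ver <= to_ver
            for sql in CATALOG_MIGRATIONS[ver]]
```

A pg_upgrade built this way only grows by one dictionary entry per release, which is what makes the "easy to script" claim plausible once the physical format stops moving.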

--
-----------------------------------------------------------------
Ron Johnson, Jr. ron.l.johnson@cox.net
Jefferson, LA USA

The difference between drunken sailors and Congressmen is that
drunken sailors spend their own money.


Re: State of Beta 2

From
Robert Creager
Date:
Once upon a time (Tue, 16 Sep 2003 21:26:05 -0700)
Dennis Gearon <gearond@fireserve.net> uttered something amazingly similar to:

> Robert Creager wrote:
>
> >Once upon a time (Tue, 16 Sep 2003 12:59:37 -0700)
> >"Joshua D. Drake" <jd@commandprompt.com> uttered something amazingly similar
> >to:
> >
> >
> >
> >>If someone is willing to pony up 2000.00 per month for a period of at
> >
> >Well, if you're willing to set up some sort of escrow, I'll put in $100.  I
>
> Is that $100 times once, or $100 X 6mos anticiapated develop time.

That's $100 once.  And last I looked, there are well over 1800 subscribers on
this list alone.  On the astronomically small chance every one of them did
what I'm doing, it would cover more than 6 months of development time ;-)  This
strikes me as being like supporting public radio.  The individuals do some, and
the corporations do a bunch.

I'm just putting my money toward a great product, rather than complaining that
it's not done.  Just like Joshua is doing.  You cannot hire a competent
programmer for $24k a year, so he is putting up some money on this also.

There have been a couple of other bites from small businesses, so who knows!

You game?

Cheers,
Rob

--
 07:47:48 up 47 days, 22 min,  4 users,  load average: 2.04, 2.07, 2.02


Re: State of Beta 2

From
Network Administrator
Date:
This is along the lines of what I was talking about.  If at compile time a user
could choose their on-disk representation by version, within a reasonable
history (say two major versions back), then that would give people a choice for
a certain amount of time.

Backward compatibility is nice, but at a certain point it becomes "backward"
(or better yet awkward, or maybe just damn near impossible) to support certain
past features.

This is a user reality: upgrades are part of owning and using any system.  It's
just that we don't want to be seen as forcing people to upgrade.  I don't think
that is hard for someone running a 24/7 shop with very large databases to
understand.
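One way to picture "two major versions back" support is a registry of format readers keyed by on-disk version, where retired formats simply drop out. A minimal sketch, with invented version strings and tuple "layouts":

```python
# A reader registry keyed by on-disk format version: the server keeps
# readers for the last couple of major formats and drops older ones.
# The version strings and returned layouts here are invented.
READERS = {}

def reader(version):
    """Register a tuple reader for one on-disk format version."""
    def register(fn):
        READERS[version] = fn
        return fn
    return register

@reader("7.3")
def read_v73(raw):
    return {"format": "7.3", "payload": raw}

@reader("7.4")
def read_v74(raw):
    return {"format": "7.4", "payload": raw}

def read_tuple(version, raw):
    """Dispatch to the matching reader, or fail for retired formats."""
    try:
        return READERS[version](raw)
    except KeyError:
        raise ValueError("on-disk format %s is no longer supported" % version)
```

The compile-time choice Keith describes would amount to deciding which entries get built into this table.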

Quoting Ron Johnson <ron.l.johnson@cox.net>:

> On Wed, 2003-09-17 at 03:45, Kaare Rasmussen wrote:
> [snip]
> > Some people have claimed that the big commercial databases don't change
> their
> > on-disk represantation anymore. Maybe PostgreSQL could try to aim for this
>
> > goal. At least try to get the on-disk changes ready for 7.5 - with or
> without
> > the functionality to use it. I think that any pg_* table changes could be
> > done with a small and efficient pg_upgrade.
> [snip]
>
> I think changes in the system catalog should be separated from
> changes in the physical on-disk structures (i.e. how tables and
> indexes are stored).
>
> Maybe I'm totally wrong, but ALTERing the pg_* tables during each
> version upgrade should be relatively easy to script, when the phys-
> ical on-disk structures have been solidified.
>
> --
> -----------------------------------------------------------------
> Ron Johnson, Jr. ron.l.johnson@cox.net
> Jefferson, LA USA
>
> The difference between drunken sailors and Congressmen is that
> drunken sailors spend their own money.
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 4: Don't 'kill -9' the postmaster
>


--
Keith C. Perry
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com

____________________________________
This email account is being host by:
VCSN, Inc : http://vcsn.com

Re: State of Beta 2

From
Tom Lane
Date:
Ron Johnson <ron.l.johnson@cox.net> writes:
> I think changes in the system catalog should be separated from
> changes in the physical on-disk structures (i.e. how tables and
> indexes are stored).

We already know how to cope with changes in the system catalogs ---
pg_upgrade has pretty much proved out how to do that.  The original
shell-script implementation wasn't bulletproof enough for production use
(IMHO anyway), but that's because it was an experimental prototype, not
because there was anything fundamentally wrong with the concept.

The hard part is dealing with mostly-unforeseeable future changes in
our needs for representation of user data.  We can and already have done
some simple things like include version numbers in page headers, but it
would be a fatal mistake to suppose that that means the problem is
solved, or that actually doing in-place upgrades won't require a
tremendous amount of additional work.

            regards, tom lane

Re: State of Beta 2

From
Tom Lane
Date:
Kaare Rasmussen <kar@kakidata.dk> writes:
> Some people have claimed that the big commercial databases don't change their
> on-disk represantation anymore. Maybe PostgreSQL could try to aim for this
> goal.

At the very least we could try to quantize changes --- say, allow
on-disk changes only every third or fourth major release, and batch up
work requiring such changes.  Avoiding on-disk changes actually was a
design consideration for awhile, but we sort of stopped worrying about
it when the prototype version of pg_upgrade stopped working (which IIRC
was because it couldn't get at what it would need to get at without
being rewritten in C, and no one wanted to tackle that project).

> How do other Open Source systems do ? MySQL (or maybe better: InnoDB),
> FireBird ??

Dunno about MySQL.  I'm pretty sure I remember Ann Harrison stating that
FireBird's disk structures haven't changed since the beginning of
Interbase.  Which you might take as saying that they were a lot smarter
than we are, but I suspect what it really means is that
FireBird/Interbase hasn't undergone the kind of metamorphosis of purpose
that the Postgres code base has.  Keep in mind that it started as an
experimental academic prototype (representing some successful ideas and
some not-so-successful ones), and the current developers have been
laboring to convert it into an industrial-strength production tool ---
keeping the good experimental ideas, but weeding out the bad ones, and
adding production-oriented features that weren't in the original design.
The entire argument that version-to-version stability should be a
critical goal would have been foreign to the original developers of
Postgres.

            regards, tom lane

Re: State of Beta 2

From
"Joshua D. Drake"
Date:
>I have no doubt that a competent programmer could learn the Postgres
>innards well enough to do the job; as someone pointed out earlier in
>this thread, none of the core committee was born knowing Postgres.
>I do, however, doubt that it can be done in six months if one has
>any significant learning curve to climb up first.
>
>
Hello,

  This is a completely reasonable statement. However, we have
three full-time programmers right now who are fairly familiar with
the internals of PostgreSQL. They are the programmers who
are currently coding our transactional replication engine (which
is going beta in about 3 weeks), plPHP, and who also did the work on
S/ODBC, S/JDBC and PgManage.

  I am not going to say that we are necessarily Tom Lane material ;)
but my programmers are quite good and learning more every day. They
have been in the guts of PostgreSQL for 9 months straight, 40 hours
a week now.

Sincerely,

Joshua Drake




>            regards, tom lane
>
>

--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
The most reliable support for the most reliable Open Source database.



Re: State of Beta 2

From
Dennis Gearon
Date:
I had already committed $50/mo.

Robert Creager wrote:

>Once upon a time (Tue, 16 Sep 2003 21:26:05 -0700)
>Dennis Gearon <gearond@fireserve.net> uttered something amazingly similar to:
>
>
>
>>Robert Creager wrote:
>>
>>
>>
>>>Once upon a time (Tue, 16 Sep 2003 12:59:37 -0700)
>>>"Joshua D. Drake" <jd@commandprompt.com> uttered something amazingly similar
>>>to:
>>>
>>>
>>>
>>>
>>>
>>>>If someone is willing to pony up 2000.00 per month for a period of at
>>>>
>>>>
>>>Well, if you're willing to set up some sort of escrow, I'll put in $100.  I
>>>
>>>
>>Is that $100 times once, or $100 X 6mos anticiapated develop time.
>>
>>
>
>That's $100 once.  And last I looked, there are well over 1800 subscribers on
>this list alone.  On the astronomically small chance everyone one of them did
>what I'm doing, it would cover more than 6 months of development time ;-)  This
>strikes me as like supporting public radio.  The individuals do some, and the
>corporations do a bunch.
>
>I'm just putting my money toward a great product, rather than complaining that
>it's not done.  Just like Joshua is doing.  You cannot hire a competent
>programmer for $24k a year, so he is putting up some money on this also.
>
>There have been a couple of other bytes from small businesses, so who knows!
>
>You game?
>
>Cheers,
>Rob
>
>
>


State of Beta (2)

From
"Joshua D. Drake"
Date:
Hello,

  O.k. here are my thoughts on how this could work:

  Command Prompt will set up an escrow account online at www.escrow.com.
When the escrow account totals 2000.00 and is released, Command Prompt
will dedicate a programmer for one month to debugging, documenting,
reviewing, digging, crying, screaming, begging and bleeding with the
code. At the end of the month (and probably during, depending on how
everything goes) Command Prompt will release its findings. The findings
will include a project plan for moving forward over the next 5 months
(if that is what it takes) to produce the first functional pg_upgrade.

  If the project is deemed to be moving in the right direction by the
community members, and specifically the core members, we will set up
milestone payments for the project.

   What does everyone think?

   Sincerely,

   Joshua D. Drake


Dennis Gearon wrote:

> I had already committed $50/mo.
>
> Robert Creager wrote:
>
>> Once upon a time (Tue, 16 Sep 2003 21:26:05 -0700)
>> Dennis Gearon <gearond@fireserve.net> uttered something amazingly
>> similar to:
>>
>>
>>
>>> Robert Creager wrote:
>>>
>>>
>>>
>>>> Once upon a time (Tue, 16 Sep 2003 12:59:37 -0700)
>>>> "Joshua D. Drake" <jd@commandprompt.com> uttered something
>>>> amazingly similar
>>>> to:
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>> If someone is willing to pony up 2000.00 per month for a period of
>>>>> at
>>>>
>>>> Well, if you're willing to set up some sort of escrow, I'll put in
>>>> $100.  I
>>>>
>>>
>>> Is that $100 times once, or $100 X 6mos anticiapated develop time.
>>>
>>
>>
>> That's $100 once.  And last I looked, there are well over 1800
>> subscribers on
>> this list alone.  On the astronomically small chance everyone one of
>> them did
>> what I'm doing, it would cover more than 6 months of development time
>> ;-)  This
>> strikes me as like supporting public radio.  The individuals do some,
>> and the
>> corporations do a bunch.
>>
>> I'm just putting my money toward a great product, rather than
>> complaining that
>> it's not done.  Just like Joshua is doing.  You cannot hire a competent
>> programmer for $24k a year, so he is putting up some money on this also.
>>
>> There have been a couple of other bytes from small businesses, so who
>> knows!
>>
>> You game?
>>
>> Cheers,
>> Rob
>>
>>
>>

--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
The most reliable support for the most reliable Open Source database.



Re: State of Beta (2)

From
"Joshua D. Drake"
Date:
Hello,

  Yes, that would be expected. I was figuring the first 2k would be the
diagnostics/development of that plan, so that we would have a real idea
of what the programmers think it would take. Thus the statement about
the next 5 months, etc.

J


Network Administrator wrote:

>That sounds good, save two things.  We need to state what the project run
>dates are and what happens at or around the due date.  That is to say: we have
>the deliverable for testing (beta ready); more time is needed to complete core
>features (alpha ready) and therefore more funds are needed; the project is on
>hold due to features needed outside the scope of the project; etc, etc, etc...
>
>You get the idea.
>
>Quoting "Joshua D. Drake" <jd@commandprompt.com>:
>
>
>
>>Hello,
>>
>>  O.k. here are my thoughts on how this could work:
>>
>>  Command Prompt will set up an escrow account online at www.escrow.com.
>>  When the Escrow account totals 2000.00 and is released, Command Prompt
>>will dedicate a
>>  programmer for one month to debugging, documenting, reviewing,
>>digging, crying,
>>  screaming, begging and bleeding with the code. At the end of the month
>>and probably during
>>  depending on how everything goes Command Prompt will release its
>>findings.  The findings
>>  will include a project plan on moving forward over the next 5 months
>>(if that is what it takes) to
>>  produce the first functional pg_upgrade.
>>
>>  If the project is deemed as moving in the right direction by the
>>community members and specifically
>>  the core members we will setup milestone payments for the project.
>>
>>   What does everyone think?
>>
>>   Sincerely,
>>
>>   Joshua D. Drake
>>
>>
>>Dennis Gearon wrote:
>>
>>
>>
>>>I had already committed $50/mo.
>>>
>>>Robert Creager wrote:
>>>
>>>
>>>
>>>>Once upon a time (Tue, 16 Sep 2003 21:26:05 -0700)
>>>>Dennis Gearon <gearond@fireserve.net> uttered something amazingly
>>>>similar to:
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>>Robert Creager wrote:
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>>Once upon a time (Tue, 16 Sep 2003 12:59:37 -0700)
>>>>>>"Joshua D. Drake" <jd@commandprompt.com> uttered something
>>>>>>amazingly similar
>>>>>>to:
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>>If someone is willing to pony up 2000.00 per month for a period of
>>>>>>>at
>>>>>>>
>>>>>>>
>>>>>>Well, if you're willing to set up some sort of escrow, I'll put in
>>>>>>$100.  I
>>>>>>
>>>>>>
>>>>>>
>>>>>Is that $100 times once, or $100 X 6mos anticiapated develop time.
>>>>>
>>>>>
>>>>>
>>>>That's $100 once.  And last I looked, there are well over 1800
>>>>subscribers on
>>>>this list alone.  On the astronomically small chance everyone one of
>>>>them did
>>>>what I'm doing, it would cover more than 6 months of development time
>>>>;-)  This
>>>>strikes me as like supporting public radio.  The individuals do some,
>>>>and the
>>>>corporations do a bunch.
>>>>
>>>>I'm just putting my money toward a great product, rather than
>>>>complaining that
>>>>it's not done.  Just like Joshua is doing.  You cannot hire a competent
>>>>programmer for $24k a year, so he is putting up some money on this also.
>>>>
>>>>There have been a couple of other bytes from small businesses, so who
>>>>knows!
>>>>
>>>>You game?
>>>>
>>>>Cheers,
>>>>Rob
>>>>
>>>>
>>>>
>>>>
>>>>
>>--
>>Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
>>Postgresql support, programming shared hosting and dedicated hosting.
>>+1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
>>The most reliable support for the most reliable Open Source database.
>>
>>
>>
>>---------------------------(end of broadcast)---------------------------
>>TIP 5: Have you checked our extensive FAQ?
>>
>>               http://www.postgresql.org/docs/faqs/FAQ.html
>>
>>
>>
>
>
>
>

--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
The most reliable support for the most reliable Open Source database.



Re: State of Beta (2)

From
"Sander Steffann"
Date:
Hi,

> Command Prompt will set up an escrow account online at www.escrow.com.
> When the Escrow account totals 2000.00 and is released, Command Prompt
> will dedicate a programmer for one month to debugging, documenting,
> reviewing, digging, crying, screaming, begging and bleeding with the
> code. At the end of the month and probably during depending on how
> everything goes Command Prompt will release its findings.  The findings
> will include a project plan on moving forward over the next 5 months
> (if that is what it takes) to produce the first functional pg_upgrade.
>
> If the project is deemed as moving in the right direction by the
> community members and specifically the core members we will setup
> milestone payments for the project.
>
> What does everyone think?

Sounds good. It provides a safe way for people to fund this development. I
can't promise anything yet on behalf of my company, but I'll donate at least
$50,- personally.

Sander.


Re: State of Beta (2)

From
Andrew Rawnsley
Date:
Sounds good to me. I can throw in $500 to start.

On Wednesday, September 17, 2003, at 12:06 PM, Joshua D. Drake wrote:

> Hello,
>
>  O.k. here are my thoughts on how this could work:
>
>  Command Prompt will set up an escrow account online at www.escrow.com.
>  When the Escrow account totals 2000.00 and is released, Command
> Prompt will dedicate a
>  programmer for one month to debugging, documenting, reviewing,
> digging, crying,
>  screaming, begging and bleeding with the code. At the end of the
> month and probably during
>  depending on how everything goes Command Prompt will release its
> findings.  The findings
>  will include a project plan on moving forward over the next 5 months
> (if that is what it takes) to
>  produce the first functional pg_upgrade.
>
>  If the project is deemed as moving in the right direction by the
> community members and specifically
>  the core members we will setup milestone payments for the project.
>
>   What does everyone think?
>
>   Sincerely,
>
>   Joshua D. Drake
>
> Dennis Gearon wrote:
>
>> I had already committed $50/mo.
>>
>> Robert Creager wrote:
>>
>>> Once upon a time (Tue, 16 Sep 2003 21:26:05 -0700)
>>> Dennis Gearon <gearond@fireserve.net> uttered something amazingly
>>> similar to:
>>>
>>>
>>>> Robert Creager wrote:
>>>>
>>>>
>>>>> Once upon a time (Tue, 16 Sep 2003 12:59:37 -0700)
>>>>> "Joshua D. Drake" <jd@commandprompt.com> uttered something
>>>>> amazingly similar
>>>>> to:
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>> If someone is willing to pony up 2000.00 per month for a period
>>>>>> of at
>>>>>
>>>>> Well, if you're willing to set up some sort of escrow, I'll put in
>>>>> $100.  I
>>>>>
>>>>
>>>> Is that $100 times once, or $100 X 6mos anticiapated develop time.
>>>>
>>>
>>>
>>> That's $100 once.  And last I looked, there are well over 1800
>>> subscribers on
>>> this list alone.  On the astronomically small chance everyone one of
>>> them did
>>> what I'm doing, it would cover more than 6 months of development
>>> time ;-)  This
>>> strikes me as like supporting public radio.  The individuals do
>>> some, and the
>>> corporations do a bunch.
>>>
>>> I'm just putting my money toward a great product, rather than
>>> complaining that
>>> it's not done.  Just like Joshua is doing.  You cannot hire a
>>> competent
>>> programmer for $24k a year, so he is putting up some money on this
>>> also.
>>>
>>> There have been a couple of other bytes from small businesses, so
>>> who knows!
>>>
>>> You game?
>>>
>>> Cheers,
>>> Rob
>>>
>>>
>
> --
> Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
> Postgresql support, programming shared hosting and dedicated hosting.
> +1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
> The most reliable support for the most reliable Open Source database.
>
>
>
> ---------------------------(end of
> broadcast)---------------------------
> TIP 5: Have you checked our extensive FAQ?
>
>               http://www.postgresql.org/docs/faqs/FAQ.html
>
--------------------

Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com


Re: State of Beta 2

From
Lamar Owen
Date:
Marc G. Fournier wrote:
>>>And that has nothing to do with user need as a whole, since the care
>>>level I mentioned is predicated by the developer interest level.
>>>While I know, Marc, how the whole project got started (I have read the
>>>first posts), and I appreciate that you, Bruce, Thomas, and Vadim
>>>started the original core team because you were and are users of
>>>PostgreSQL, I sincerely believe that in this instance you are out of
>>>touch with this need of many of today's userbase.

> Huh?  I have no disagreement that upgrading is a key feature that we are
> lacking ... but, if there are any *on disk* changes between releases, how
> do you propose 'in place upgrades'?

RTA.  It's been hashed, rehashed, and hashed again.  I've asked twice if
eRserver can replicate a 7.3 database onto a 7.4 server (or a 7.2 onto a
7.3); that question has yet to be answered.  If it can do this, then I
would be a much happier camper.  I would be happy for a migration tool
that could read the old format _without_a_running_old_backend_ and
convert it to the new format _without_a_running_backend_.  That's always
been my beef, that the new backend is powerless to recover the old data.
OS upgrades where PostgreSQL is part of the OS, FreeBSD ports upgrades
(according to a user report on the lists a few months back), and RPM
upgrades are absolutely horrid at this point. *You* may be able to stand
it; some cannot.

> Granted, if it's just changes to the
> system catalogs and such, pg_upgrade should be able to be taught to handle
> it .. I haven't seen anyone step up to do so, and for someone spending so
> much time pushing for an upgrade path, I haven't seen you pony up the time

I believe I pony up quite a bit of time already, Marc.  Not as much as
some, by any means, but I am not making one red cent doing what I do for
the project.  And one time I was supposed to have gotten paid for a
related project, I didn't.  I did get paid by Great Bridge for RPM work
as a one-shot deal, though.

The time I've already spent on this is too much.  I've probably put
several hundred hours of my time into this issue in one form or another;
what I don't have time to do is climb the steep slope Tom mentioned
earlier.  I actually need to feed my family, and my employer has more
for me to do than something that should have already been done.

> Just curious here ... but, with all the time you've spent pushing for an
> "easy upgrade path", have you looked at the other RDBMSs and how they deal
> with upgrades?  I think its going to be a sort of apples-to-oranges thing,
> since I imagine that most of the 'big ones' don't change their disk
> formats anymore ...

I don't use the others; thus I don't care how they do it; only how we do
it.  But even MySQL has a better system than we -- they allow you to
migrate table by table, gaining the new features of the new format when
you migrate.  Tom and I pretty much reached consensus that the reason we
have a problem with this is the integration of features in the system
catalogs, and the lack of separation between 'system' information in the
catalogs and 'feature' or 'user' information in the catalogs.  It's all
in the archives that nobody seems willing to read over again.  Why do we
even have archives if they're not going to be used?

If bugfixes were consistently backported, and support was provided for
older versions running on newer OSes, then this wouldn't be as much of a
problem.  But we orphan our code after one version cycle; 7.0.x is
completely unsupported, for instance, while even 7.2.x is virtually
unsupported.  My hat's off to Red Hat for backporting the buffer
overflow fixes to all their supported versions; we certainly wouldn't
have done it.  And 7.3.x will be unsupported once we get past the 7.4
release, right?  So in order to get critical bug fixes, users must
upgrade to a later codebase, and go through the pain of upgrading their
data.

> K, looking back through that it almost sounds like a ramble ... hopefully
> you understand what I'm asking ...

*I* should complain about a ramble? :-)
--
Lamar Owen
Director of Information Technology
Pisgah Astronomical Research Institute
Formerly of WGCR Internet Radio, and the PostgreSQL RPM maintainer since
1999.



Re: State of Beta 2

From
"Marc G. Fournier"
Date:
On Thu, 18 Sep 2003, Lamar Owen wrote:

> > Huh?  I have no disagreement that upgrading is a key feature that we are
> > lacking ... but, if there are any *on disk* changes between releases, how
> > do you propose 'in place upgrades'?
>
> RTA.  It's been hashed, rehashed, and hashed again.  I've asked twice if
> eRserver can replicate a 7.3 database onto a 7.4 server (or a 7.2 onto a
> 7.3); that question has yet to be answered.

'K, I had already answered it as part of this thread when I suggested
doing exactly that ... in response to which several ppl questioned the
feasibility of setting up a duplicate system with >1TB of disk space to do
the replication over to ...

See: http://archives.postgresql.org/pgsql-general/2003-09/msg00886.php

Re: State of Beta 2

From
Andrew Rawnsley
Date:
On Thursday, September 18, 2003, at 12:11 PM, Lamar Owen wrote:

>
> RTA.  It's been hashed, rehashed, and hashed again.  I've asked twice
> if eRserver can replicate a 7.3 database onto a 7.4 server (or a 7.2
> onto a 7.3); that question has yet to be answered.  If it can do this,
> then I would be a much happier camper.  I would be happy for a
> migration tool that could read the old format
> _without_a_running_old_backend_ and convert it to the new format
> _without_a_running_backend_.  That's always been my beef, that the new
> backend is powerless to recover the old data.  OS upgrades where
> PostgreSQL is part of the OS, FreeBSD ports upgrades (according to a
> user report on the lists a few months back), and RPM upgrades are
> absolutely horrid at this point. *You* may be able to stand it; some cannot.
>

eRserver should be able to migrate the data. If you make heavy use of
sequences, schemas and other such things it won't help you with those.

It's not a bad idea to do it that way, if you aren't dealing with large
or very complex databases. The first thing it's going to do when you add
a slave is a dump/restore to create the replication target. If you can
afford the disk space and time, that will migrate the data. By itself
that isn't any different from doing it by hand. Where eRserver may help
is keeping the data in sync while you work the other things out.

Sequences and schemas are the two things it doesn't handle at the
moment. I've created a patch and some new client apps to manage the
schema part, but I haven't had the chance to send them off to someone
to see if they'll fit in. Sequences are on my list of things to do
next. Time time time time.....

Using eRserver may help you work around the problem, given certain
conditions. It doesn't solve it. I think if we can get Mr. Drake's
initiative off the ground we may at least figure out if there is a
solution.


--------------------

Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com


Re: State of Beta 2

From
Dennis Gearon
Date:
Andrew Rawnsley wrote:

>
> eRserver should be able to migrate the data. If you make heavy use of
> sequences, schemas and other such things it won't help you for those.
>
> <snip>

> Using eRserver may help you work around the problem, given certain
> conditions. It doesn't solve it. I think if we can get Mr. Drake's
> initiative off the ground we may at least figure out if there is a
> solution.


So a replication application IS a method to migrate, OR CAN BE MADE to
do it somewhat, AND is a RELATED project to the migration tool.

Again, I wonder what on the TODO list or any other roadmap is related
and should be part of a comprehensive plan to drain the swamp, rather
than just clubbing alligators over the head.


Re: State of Beta 2

From
"Joshua D. Drake"
Date:
> If bugfixes were consistently backported, and support was provided for
> older versions running on newer OS's, then this wouldn't be as much of
> a problem.  But we orphan our code after one version cycle; 7.0.x is
> completely unsupported, for instance, while even 7.2.x is virtually
> unsupported.  My hat's off to Red Hat for backporting the buffer
> overflow fixes to all their supported versions; we certainly wouldn't
> have done it.  And 7.3.x will be unsupported once we get past 7.4
> release, right? So in order to get critical bug fixes, users must
> upgrade to a later codebase, and go through the pain of upgrading
> their data.


Command Prompt is supporting the 7.3 series until 2005 and that includes
backporting certain features and bug fixes. The reality is that most
(with the exception of the Linux kernel and maybe Apache) open source
projects don't support back releases. That is the point of commercial
releases such as RedHat DB and Mammoth. We will support the older
releases for some time.

If you want to have continued support for an older rev, purchase a
commercial version. I am not trying to push my product here, but frankly
I think your argument is weak. There is zero reason for the community to
support previous versions of the code. Maybe until 7.4 reaches 7.4.1 or
something, but longer? Why? The community should be focusing on
generating new, better, faster, cleaner code.

That is just my .02.

Joshua Drake




>
>> K, looking back through that it almost sounds like a ramble ...
>> hopefully
>> you understand what I'm asking ...
>
>
> *I* should complain about a ramble? :-)


--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
The most reliable support for the most reliable Open Source database.



Re: State of Beta 2

From
Tom Lane
Date:
"Joshua D. Drake" <jd@commandprompt.com> writes:
> If you want to have continued support for an older rev, purchase a
> commercial version. I am not trying to push my product here, but frankly
> I think your argument is weak. There is zero reason for the community to
> support previous version of code. Maybe until 7.4 reaches 7.4.1 or
> something but longer? Why? The community should be focusing on
> generating new, better, faster, cleaner code.

I tend to agree on this point.  Red Hat is also in the business of
supporting back-releases of PG, and I believe PG Inc, SRA, and others
will happily do it too.  I don't think it's the development community's
job to do that.

[ This does not, however, really bear on the primary issue, which is how
can we make upgrading less unpleasant for people with large databases.
We do need to address that somehow. ]

            regards, tom lane

Re: need for in-place upgrades (was Re: State of Beta 2)

From
Andrew Sullivan
Date:
On Sat, Sep 13, 2003 at 10:52:45AM -0500, Ron Johnson wrote:

> So instead of 1TB of 15K fiber channel disks (and the requisite
> controllers, shelves, RAID overhead, etc), we'd need *two* TB of
> 15K fiber channel disks (and the requisite controllers, shelves,
> RAID overhead, etc) just for the 1 time per year when we'd upgrade
> PostgreSQL?

Nope.  You also need it for the time when your vendor sells
controllers or chips or whatever with known flaws, and you end up
having hardware that falls over 8 or 9 times in a row.

A

--
----
Andrew Sullivan                         204-4141 Yonge Street
Liberty RMS                           Toronto, Ontario Canada
<andrew@libertyrms.info>                              M2P 2A8
                                         +1 416 646 3304 x110


Re: need for in-place upgrades (was Re: State of Beta 2)

From
Andrew Sullivan
Date:
On Sat, Sep 13, 2003 at 07:16:28PM -0400, Lamar Owen wrote:
>
> Can eRserver replicate a 7.3.x to a 7.2.x?  Or 7.4.x to 7.3.x?

Yes.  Well, 7.3 to 7.2, anyway: we just tested it (my colleague,
Tariq Muhammad did it).

A

----
Andrew Sullivan                         204-4141 Yonge Street
Liberty RMS                           Toronto, Ontario Canada
<andrew@libertyrms.info>                              M2P 2A8
                                         +1 416 646 3304 x110


Re: need for in-place upgrades (was Re: State of Beta 2)

From
Andrew Sullivan
Date:
On Sat, Sep 13, 2003 at 10:27:59PM -0300, Marc G. Fournier wrote:
>
> I thought we were talking about upgrades here?

You do upgrades without being able to roll back?

A

--
----
Andrew Sullivan                         204-4141 Yonge Street
Liberty RMS                           Toronto, Ontario Canada
<andrew@libertyrms.info>                              M2P 2A8
                                         +1 416 646 3304 x110


Re: State of Beta 2

From
Andrew Sullivan
Date:
On Thu, Sep 18, 2003 at 12:11:18PM -0400, Lamar Owen wrote:
> RTA.  It's been hashed, rehashed, and hashed again.  I've asked twice if
> eRserver can replicate a 7.3 database onto a 7.4 server (or a 7.2 onto a
> 7.3); that question has yet to be answered.  If it can do this, then I

Sorry, I've been swamped, and not reading mail as much as I'd like.
But I just answered this for 7.2/7.3.

A

--
----
Andrew Sullivan                         204-4141 Yonge Street
Liberty RMS                           Toronto, Ontario Canada
<andrew@libertyrms.info>                              M2P 2A8
                                         +1 416 646 3304 x110


Re: need for in-place upgrades (was Re: State of Beta 2)

From
"Marc G. Fournier"
Date:

On Thu, 18 Sep 2003, Andrew Sullivan wrote:

> On Sat, Sep 13, 2003 at 10:27:59PM -0300, Marc G. Fournier wrote:
> >
> > I thought we were talking about upgrades here?
>
> You do upgrades without being able to roll back?

Hadn't thought of it that way ... but, what would prompt someone to
upgrade, then use something like erserver to roll back?  All I can think
of is that the upgrade caused a lot of problems with the application
itself, but in a case like that, would you have the time to be able to
're-replicate' back to the old version?



Re: need for in-place upgrades (was Re: State of Beta 2)

From
Ron Johnson
Date:
On Thu, 2003-09-18 at 16:29, Andrew Sullivan wrote:
> On Sat, Sep 13, 2003 at 10:52:45AM -0500, Ron Johnson wrote:
>
> > So instead of 1TB of 15K fiber channel disks (and the requisite
> > controllers, shelves, RAID overhead, etc), we'd need *two* TB of
> > 15K fiber channel disks (and the requisite controllers, shelves,
> > RAID overhead, etc) just for the 1 time per year when we'd upgrade
> > PostgreSQL?
>
> Nope.  You also need it for the time when your vendor sells
> controllers or chips or whatever with known flaws, and you end up
> having hardware that falls over 8 or 9 times in a row.

????

--
-----------------------------------------------------------------
Ron Johnson, Jr. ron.l.johnson@cox.net
Jefferson, LA USA

"A C program is like a fast dance on a newly waxed dance floor
by people carrying razors."
Waldi Ravens


Re: State of Beta 2

From
"Marc G. Fournier"
Date:

On Thu, 18 Sep 2003, Lamar Owen wrote:

> Marc G. Fournier wrote:
> > 'K, I had already answered it as part of this thread when I suggested
> > doing exactly that ... in response to which several ppl questioned the
> > feasibility of setting up a duplicate system with >1TB of disk space to do
> > the replication over to ...
>
> The quote mentioned is a question, not an answer.  You said:
> > 'k, but is it out of the question to pick up a duplicate server, and use
> > something like eRServer to replicate the databases between the two
> > systems, with the new system having the upgraded database version running
> > on it, and then cutting over once its all in sync?
>
> 'Something like eRserver' doesn't give me enough detail; so I asked if
> eRserver could do this, mentioning specific version numbers.  A straight
> answer -- yes it can, or no it can't -- would be nice.  So you're saying
> that eRserver can do this, right?  Now if there just wasn't that java
> dependency....  Although the contrib rserv might suffice for data
> migration capabilities.

Sorry, but I hadn't actually seen your question about it ... but, yes,
eRserver can do this ... as far as I know, going from, say, v7.2 -> v7.4
shouldn't be an issue either, but I only know of a few doing v7.2->v7.3
migrations with it so far ...


Re: State of Beta (2)

From
Network Administrator
Date:
That sounds good, save two things.  We need to state what the project run
dates are and what happens at or around the due date.  That is to say: we
have the deliverable ready for testing (beta ready); more time is needed
to complete core features (alpha ready) and therefore more funds are
needed; the project is on hold due to features needed outside the scope
of the project; etc., etc., etc.

You get the idea.

Quoting "Joshua D. Drake" <jd@commandprompt.com>:

> Hello,
>
>   O.k. here are my thoughts on how this could work:
>
>   Command Prompt will set up an escrow account online at www.escrow.com.
>   When the Escrow account totals 2000.00 and is released, Command Prompt
> will dedicate a
>   programmer for one month to debugging, documenting, reviewing,
> digging, crying,
>   screaming, begging and bleeding with the code. At the end of the month
> and probably during
>   depending on how everything goes Command Prompt will release its
> findings.  The findings
>   will include a project plan on moving forward over the next 5 months
> (if that is what it takes) to
>   produce the first functional pg_upgrade.
>
>   If the project is deemed as moving in the right direction by the
> community members and specifically
>   the core members we will setup milestone payments for the project.
>
>    What does everyone think?
>
>    Sincerely,
>
>    Joshua D. Drake
>
>
> Dennis Gearon wrote:
>
> > I had already committed $50/mo.
> >
> > Robert Creager wrote:
> >
> >> Once upon a time (Tue, 16 Sep 2003 21:26:05 -0700)
> >> Dennis Gearon <gearond@fireserve.net> uttered something amazingly
> >> similar to:
> >>
> >>
> >>
> >>> Robert Creager wrote:
> >>>
> >>>
> >>>
> >>>> Once upon a time (Tue, 16 Sep 2003 12:59:37 -0700)
> >>>> "Joshua D. Drake" <jd@commandprompt.com> uttered something
> >>>> amazingly similar
> >>>> to:
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>> If someone is willing to pony up 2000.00 per month for a period of
> >>>>> at
> >>>>
> >>>> Well, if you're willing to set up some sort of escrow, I'll put in
> >>>> $100.  I
> >>>>
> >>>
> >>> Is that $100 once, or $100 x 6 months of anticipated development time?
> >>>
> >>
> >>
> >> That's $100 once.  And last I looked, there are well over 1800
> >> subscribers on
> >> this list alone.  On the astronomically small chance every one of
> >> them did
> >> what I'm doing, it would cover more than 6 months of development time
> >> ;-)  This
> >> strikes me as like supporting public radio.  The individuals do some,
> >> and the
> >> corporations do a bunch.
> >>
> >> I'm just putting my money toward a great product, rather than
> >> complaining that
> >> it's not done.  Just like Joshua is doing.  You cannot hire a competent
> >> programmer for $24k a year, so he is putting up some money on this also.
> >>
> >> There have been a couple of other bites from small businesses, so who
> >> knows!
> >>
> >> You game?
> >>
> >> Cheers,
> >> Rob
> >>
> >>
> >>
>
> --
> Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
> Postgresql support, programming shared hosting and dedicated hosting.
> +1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
> The most reliable support for the most reliable Open Source database.
>
>
>


--
Keith C. Perry
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com

____________________________________
This email account is being hosted by:
VCSN, Inc : http://vcsn.com

Re: State of Beta 2

From
Manfred Koizar
Date:
On Thu, 18 Sep 2003 12:11:18 -0400, Lamar Owen <lowen@pari.edu> wrote:
>Marc G. Fournier wrote:
>> [...] upgrading is a key feature [...]
> a migration tool
>that could read the old format _without_a_running_old_backend_ [...]
> the new backend is powerless to recover the old data.
>  OS upgrades [...], FreeBSD ports upgrades, and RPM
>upgrades are absolutely horrid at this point. [...]
>[censored] has a better system than we
>[...] the pain of upgrading [...]
>*I* should complain about a ramble? :-)

Lamar, I *STRONGLY* agree with almost everything you say here and in
other posts, except perhaps ...

You et al. seem to think that system catalog changes wouldn't be a
problem if only we could avoid page format changes.  This is not
necessarily so.  Page format changes can be handled without much
effort, if

. the changes are local to each page (the introduction of a level
indicator in btree pages is a counter-example),

. we can tell page type and version for every page,

. the new format does not need more space than the old one.

You wrote earlier:
| the developers who changed the on-disk format ...

Oh, that's me, I think.  I am to blame for the heap tuple header
changes between 7.2 and 7.3;  Tom did some cleanup work behind me but
cannot be held responsible for the on-disk-format incompatibilities.
I'm not aware of any other changes falling into this category for 7.3.
So you might as well have used the singular form ;-)

| ... felt it wasn't important to make it continue working.

This is simply not true.  Seamless upgrade is *very* important, IMHO.
See http://archives.postgresql.org/pgsql-hackers/2002-06/msg00136.php
for example, and please keep in mind that I was still very "fresh" at
that time.  Nobody demanded that I keep my promise and I got the
impression that a page format conversion tool was not needed because
there wouldn't be a pg_upgrade anyway.

Later, in your "Upgrading rant" thread, I even posted some code
(http://archives.postgresql.org/pgsql-hackers/2003-01/msg00294.php).
Unfortunately this went absolutely unnoticed, probably because it
looked so long; I had fat-fingered the mail and included the code
twice.  :-(

>It's all
>in the archives that nobdy seems willing to read over again.  Why do we
>even have archives if they're not going to be used?

Sic!

While I'm at it, here are some comments not directly addressed to
Lamar:

Elsewhere in this current thread it has been suggested that the
on-disk format will stabilize at some time in the future and should
then be frozen to ensure painless upgrades.  IMHO, at the moment when
data structures are declared stable and immutable the project is dead.

And I don't believe the myth that commercial database vendors have
reached a stable on-disk representation.  Whoever said this is kindly
asked to reveal his source of insight.

A working pg_upgrade is *not* the first thing we need.  What we need
first is willingness to not break backwards compatibility.  When
Postgres adopts a strategy of not letting in any change unless it is
fully compatible with the previous format or accompanied by an upgrade
script/program/whatever, that would be a huge step forward.  First
breaking things for six months or more and then, when the release date
comes nearer, trying to build an upgrade tool is not the right
approach.

A - hopefully not too unrealistic - vision:  _At_any_time_ during a
development cycle for release n+1 it is possible to take a cvs
snapshot, build it, take any release n database cluster, run a
conversion script over it (or not), and start the new postmaster with
-D myOldDataDir ...

Granted, this slows down development, primarily while developers are
not yet used to it.  But once the infrastructure is in place, things
should get easier.  While a developer is working on a new feature he
knows the old data structures as well as the new ones;  this is the
best moment to design and implement an upgrade path, which is almost
hopeless if tried several months later by someone else.

And who says that keeping compatibility in mind while developing new
features cannot be fun?  I assure you, it is!

Servus
 Manfred

Re: State of Beta 2

From
Tom Lane
Date:
"Marc G. Fournier" <scrappy@postgresql.org> writes:
> hmmm ... k, is it feasible to go a release or two at a time without on
> disk changes?  if so, pg_upgrade might not be as difficult to maintain,
> since, unless someone can figure out a way of doing it, 'on disk change
> releases' could still require dump/reloads, with a period of stability in
> between?

Yeah, for the purposes of this discussion I'm just taking "pg_upgrade"
to mean something that does what Bruce's old script does, namely
transfer the schema into the new installation using "pg_dump -s" and
then push the user tables and indexes physically into place.  We could
imagine that pg_upgrade would later get some warts added to it to handle
some transformations of the user data, but that might or might not ever
need to happen.

I think we could definitely adopt a policy of "on-disk changes not
oftener than every X releases" if we had a working pg_upgrade, even
without doing any extra work to allow updates.  People who didn't
want to wait for the next incompatible release could have their change
sooner if they were willing to do the work to provide an update path.

> *Or* ... as we've seen more with this dev cycle then previous ones, how
> much could be easily back-patched to the previous version(s) relatively
> easily, without requiring on-disk changes?

It's very difficult to back-port anything beyond localized bug fixes.
We change the code too much --- for instance, almost no 7.4 patch will
apply exactly to 7.3 or before because of the elog-to-ereport changes.

But the real problem IMHO is we don't have the manpower to do adequate
testing of back-branch changes that would need to be substantially
different from what gets applied to HEAD.  I think it's best to leave
that activity to commercial support outfits, rather than put community
resources into it.

(Some might say I have a conflict of interest here, since I work for Red
Hat which is one of said commercial support outfits.  But I really do
think it's more reasonable to let those companies do this kind of
gruntwork than to expect the community hackers to do it.)

            regards, tom lane

Re: State of Beta 2

From
Manfred Koizar
Date:
On Fri, 19 Sep 2003 17:38:13 -0400, Tom Lane <tgl@sss.pgh.pa.us>
wrote:
>> A working pg_upgrade is *not* the first thing we need.
>
>Yes it is.

At the risk of being called a stubborn hairsplitter, I continue to say
that pg_upgrade is not the *first* thing we need.  Maybe the second
...

>  As you say later,
>
>> ... But once the infrastructure is in place, things
>> should get easier.

Yes, at some point in time we need an infrastructure/upgrade
process/tool/pg_upgrade, whatever we call it.  What I tried to say is
that *first* developers must change their point of view and give
backwards compatibility a higher priority.  As long as I don't write
page conversion functions because you changed the system catalogs and
you see no need for pg_upgrade because I broke the page format,
seamless upgrade cannot become a reality.

>Until we have a working pg_upgrade, every little catalog change will
>break backwards compatibility.  And I do not feel that the appropriate
>way to handle catalog changes is to insist on one-off solutions for each
>one.

I tend to believe that every code change or new feature that gets
implemented is unique by its nature, and if it involves catalog
changes it requires a unique upgrade script/tool.  How should a
generic tool guess the contents of a new catalog relation?

Rod's adddepend is a good example.  It is a one-off upgrade solution,
which is perfectly adequate because Rod's dependency patch was a
singular work, too.  Somebody had to sit down and code some logic into
a script.

>  Any quick look at the CVS logs will show that minor and major
>catalog revisions occur *far* more frequently than changes that would
>affect on-disk representation of user data.

Some catalog changes can be done by scripts executed by a standalone
backend, others might require more invasive surgery.  Do you have any
feeling which kind is the majority?

I've tried to produce a prototype for seamless upgrade with the patch
announced in
http://archives.postgresql.org/pgsql-hackers/2003-08/msg00937.php.  It
implements new backend functionality (index scan cost estimation using
index correlation) and needs a new system table (pg_indexstat) to
work.  I wouldn't call it perfect (for example, I still don't know how
to insert the new table into template0), but at least it shows that
there is a class of problems that require catalog changes and *can* be
solved without initdb.

Servus
 Manfred

Re: State of Beta 2

From
Manfred Koizar
Date:
On Fri, 19 Sep 2003 18:51:00 -0400, Tom Lane <tgl@sss.pgh.pa.us>
wrote:
>transfer the schema into the new installation using "pg_dump -s" and
>then push the user tables and indexes physically into place.

I'm more in favour of in-place upgrade.  This might seem risky, but I
think we can expect users to backup their PGDATA directory before they
start the upgrade.

I don't trust pg_dump because

. it doesn't help when the old postmaster binaries are no longer
available

. it does not always produce scripts that can be loaded without manual
intervention.  Sometimes you create a dump and cannot restore it with
the same Postmaster version.  RTA.

Servus
 Manfred

Re: State of Beta 2

From
Manfred Koizar
Date:
On Fri, 19 Sep 2003 20:06:39 -0400, Tom Lane <tgl@sss.pgh.pa.us>
wrote:
>Perhaps you should go back and study what
>pg_upgrade actually did.

Thanks for the friendly invitation.  I did that.

>  It needed only minimal assumptions about the
>format of either old or new catalogs.  The reason is that it mostly
>relied on portability work done elsewhere (in pg_dump, for example).

I was hoping that you had a more abstract concept in mind when you
said pg_upgrade; not that particular implementation.  I should have
been more explicit that I'm not a friend of that pg_dump approach, cf.
my other mail.

>> Rod's adddepend is a good example.
>I don't think it's representative.

>> ... I wouldn't call it perfect
>... in other words, it doesn't work and can't be made to work.

Hmm, "not perfect" == "can't be made to work".  Ok.  If you want to
see it this way ...

Servus
 Manfred

Re: State of Beta 2

From
"Marc G. Fournier"
Date:

On Fri, 19 Sep 2003, Tom Lane wrote:

> I think we could definitely adopt a policy of "on-disk changes not
> oftener than every X releases" if we had a working pg_upgrade, even
> without doing any extra work to allow updates.  People who didn't want
> to wait for the next incompatible release could have their change sooner
> if they were willing to do the work to provide an update path.

'K, but let's put the horse in front of the cart ... adopt the policy so
that the work on a working pg_upgrade has a chance of succeeding ... if we
said no on disk changes for, let's say, the next release, then that would
provide an incentive (I think!) for someone(s) to pick up the ball and
make sure that pg_upgrade would provide a non-dump/reload upgrade for it
...

> But the real problem IMHO is we don't have the manpower to do adequate
> testing of back-branch changes that would need to be substantially
> different from what gets applied to HEAD.  I think it's best to leave
> that activity to commercial support outfits, rather than put community
> resources into it.

What would be nice is if we could create a small QA group ...
representative of the various supported platforms, who could be called
upon for testing purposes ... any bugs reported get fixed; it's finding
the bugs that's the hard part ...

Re: State of Beta 2

From
Tom Lane
Date:
"Marc G. Fournier" <scrappy@postgresql.org> writes:
> On Fri, 19 Sep 2003, Tom Lane wrote:
>> I think we could definitely adopt a policy of "on-disk changes not
>> oftener than every X releases" if we had a working pg_upgrade,

> 'K, but let's put the horse in front of the cart ... adopt the policy so
> that the work on a working pg_upgrade has a chance of succeeding ... if we
> said no on disk changes for, let's say, the next release, then that would
> provide an incentive (I think!) for someone(s) to pick up the ball and

No can do, unless your intent is to force people to work on pg_upgrade
and nothing else (a position I for one would ignore ;-)).  With such a
policy and no pg_upgrade we'd be unable to apply any catalog changes at
all, which would pretty much mean that 7.5 would look exactly like 7.4.

If someone wants to work on pg_upgrade, great.  But I'm not in favor of
putting all other development on hold until it happens.

            regards, tom lane

Re: State of Beta 2

From
Tom Lane
Date:
Manfred Koizar <mkoi-pg@aon.at> writes:
> I'm more in favour of in-place upgrade.  This might seem risky, but I
> think we can expect users to backup their PGDATA directory before they
> start the upgrade.

> I don't trust pg_dump because

You don't trust pg_dump, but you do trust in-place upgrade?  I think
that's backwards.

The good thing about the pg_upgrade process is that if it's gonna fail,
it will fail before any damage has been done to the old installation.
(If we multiply-link user data files instead of moving them, we could
even promise that the old installation is still fully valid at the
completion of the process.)  The failure scenarios for in-place upgrade
are way nastier.
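The multiply-link idea can be sketched in a few lines (a toy Python
illustration with made-up file names, not actual pg_upgrade code): hard
links let the new cluster share the user data files while the old
installation stays fully valid until you commit to the upgrade.

```python
import os
import tempfile

def link_user_files(old_dir, new_dir, filenames):
    """Hard-link user data files from the old cluster into the new one.

    Unlike a move, the old installation's files remain fully valid;
    on failure, the new directory can simply be removed.
    """
    os.makedirs(new_dir, exist_ok=True)
    for name in filenames:
        os.link(os.path.join(old_dir, name), os.path.join(new_dir, name))

# Toy demonstration with throwaway directories and an invented OID name.
base = tempfile.mkdtemp()
old = os.path.join(base, "old_cluster")
new = os.path.join(base, "new_cluster")
os.makedirs(old)
with open(os.path.join(old, "16384"), "wb") as f:
    f.write(b"heap data")

link_user_files(old, new, ["16384"])

# Both paths now name the same inode: the old cluster is untouched.
assert os.path.samefile(os.path.join(old, "16384"),
                        os.path.join(new, "16384"))
```

(Requires a filesystem that supports hard links, which any Unix PGDATA
volume would.)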

As for "expect users to back up in case of trouble", I thought the whole
point here was to make life simpler for people who couldn't afford the
downtime needed for a complete backup.  To have a useful backup for an
in-place-upgrade failure, you'd have to run that full backup after
stopping the old postmaster, so you are still looking at long downtime
for an update.

> it doesn't help when the old postmaster binaries are no longer
> available

[shrug] This is a matter of design engineering for pg_upgrade.  The fact
that we've packaged it in the past as a script that depends on having
the old postmaster executable available is not an indication of how it
ought to be built when we redesign it.  Perhaps it should include
back-version executables in it.  Or not; but clearly it has to be built
with an understanding of what the total upgrade process would look like.

            regards, tom lane

Re: State of Beta 2

From
"Marc G. Fournier"
Date:

On Sat, 20 Sep 2003, Tom Lane wrote:

> "Marc G. Fournier" <scrappy@postgresql.org> writes:
> > On Fri, 19 Sep 2003, Tom Lane wrote:
> >> I think we could definitely adopt a policy of "on-disk changes not
> >> oftener than every X releases" if we had a working pg_upgrade,
>
> > 'K, but let's put the horse in front of the cart ... adopt the policy so
> > that the work on a working pg_upgrade has a chance of succeeding ... if we
> > said no on disk changes for, let's say, the next release, then that would
> > provide an incentive (I think!) for someone(s) to pick up the ball and
>
> No can do, unless your intent is to force people to work on pg_upgrade
> and nothing else (a position I for one would ignore ;-)).  With such a
> policy and no pg_upgrade we'd be unable to apply any catalog changes at
> all, which would pretty much mean that 7.5 would look exactly like 7.4.

No, I'm not suggesting no catalog changes ... wait, I might be wording
this wrong ... there are two kinds of changes that right now require a
dump/reload, changes to the catalogs and changes to the data structures,
no?  Or are these effectively inter-related?

If they aren't inter-related, what I'm proposing is to hold off on any
data structure changes, but still make catalog changes ... *if*, between
v7.4 and v7.5, nobody can bring pg_upgrade up to speed to be able to
handle the catalog changes without a dump/reload, then v7.5 will require
one ... but, at least it would give a single 'moving target' for the
pg_upgrade development to work on, instead of two ...

Make better sense?


Re: State of Beta 2

From
"Joshua D. Drake"
Date:
>>I don't trust pg_dump because
>>
>>
>
>You don't trust pg_dump, but you do trust in-place upgrade?  I think
>that's backwards.
>
Well, to be honest, I have personally had nightmare problems with
pg_dump.  In fact I have a large production database right now that I
can't restore with it because of the way pg_dump handles large objects.
So I can kind of see his point here.  I had to move to an rsync-based
backup/restore system.

The reality of pg_dump is not a good one.  It is buggy and not very
reliable.  I am hoping this changes in 7.4, as we moved to a pure "C"
implementation.

But I do not argue any of the other points you make below.

Sincerely,

Joshua Drake

>The good thing about the pg_upgrade process is that if it's gonna fail,
>it will fail before any damage has been done to the old installation.
>(If we multiply-link user data files instead of moving them, we could
>even promise that the old installation is still fully valid at the
>completion of the process.)  The failure scenarios for in-place upgrade
>are way nastier.
>
>As for "expect users to back up in case of trouble", I thought the whole
>point here was to make life simpler for people who couldn't afford the
>downtime needed for a complete backup.  To have a useful backup for an
>in-place-upgrade failure, you'd have to run that full backup after
>stopping the old postmaster, so you are still looking at long downtime
>for an update.
>
>
>
>>it doesn't help when the old postmaster binaries are no longer
>>available
>>
>>
>
>[shrug] This is a matter of design engineering for pg_upgrade.  The fact
>that we've packaged it in the past as a script that depends on having
>the old postmaster executable available is not an indication of how it
>ought to be built when we redesign it.  Perhaps it should include
>back-version executables in it.  Or not; but clearly it has to be built
>with an understanding of what the total upgrade process would look like.
>
>            regards, tom lane
>
>



Re: State of Beta 2

From
Tom Lane
Date:
"Marc G. Fournier" <scrappy@postgresql.org> writes:
> No, I'm not suggesting no catalog changes ... wait, I might be wording
> this wrong ... there are two changes that right now requires a
> dump/reload, changes to the catalogs and changes to the data structures,
> no?  Or are these effectively inter-related?

Oh, what you're saying is no changes in user table format.  Yeah, we
could probably commit to that now.  Offhand the only thing I think it
would hold up is the one idea about converting "interval" into a
three-component value, and I'm not sure if anyone had really committed
to work on that anyway ...

            regards, tom lane

Re: State of Beta 2

From
Dennis Gearon
Date:
You know, I can't help thinking that there are a NUMBER of major
items on the TODO list, this one, and several others that are related.
The point made that future clients and backends can't talk to old tables
is a good one. I used to rant and rave about Microslop doing that every
third or fourth version, and Postgres does it every minor revision. Hmmmm.

Is there a ROADMAP of integrated todo's somewhere?

Marc G. Fournier wrote:

>On Thu, 18 Sep 2003, Lamar Owen wrote:
>
>
>
>>>Huh?  I have no disagreement that upgrading is a key feature that we are
>>>lacking ... but, if there are any *on disk* changes between releases, how
>>>do you propose 'in place upgrades'?
>>>
>>>
>>RTA.  It's been hashed, rehashed, and hashed again.  I've asked twice if
>>eRserver can replicate a 7.3 database onto a 7.4 server (or a 7.2 onto a
>>7.3); that question has yet to be answered.
>>
>>
>
>'K, I had already answered it as part of this thread when I suggested
>doing exactly that ... in response to which several ppl questioned the
>feasibility of setting up a duplicate system with >1TB of disk space to do
>the replication over to ...
>
>See: http://archives.postgresql.org/pgsql-general/2003-09/msg00886.php
>
>
>


Re: State of Beta 2

From
Lamar Owen
Date:
Marc G. Fournier wrote:
> 'K, I had already answered it as part of this thread when I suggested
> doing exactly that ... in response to which several ppl questioned the
> feasibility of setting up a duplicate system with >1TB of disk space to do
> the replication over to ...

The quote mentioned is a question, not an answer.  You said:
> 'k, but is it out of the question to pick up a duplicate server, and use
> something like eRServer to replicate the databases between the two
> systems, with the new system having the upgraded database version running
> on it, and then cutting over once its all in sync?

'Something like eRserver' doesn't give me enough detail; so I asked if
eRserver could do this, mentioning specific version numbers.  A straight
answer -- yes it can, or no it can't -- would be nice.  So you're saying
that eRserver can do this, right?  Now if there just wasn't that java
dependency....  Although the contrib rserv might suffice for data
migration capabilities.
--
Lamar Owen
Director of Information Technology
Pisgah Astronomical Research Institute



Re: State of Beta (2)

From
"Joshua D. Drake"
Date:
Hello,

  Sure, this is all reasonable, but it would come after the initial 30
days. The first 30 days is specifically a proof-of-concept,
validity-study, review-of-findings type of thing, which is why I
stated that we would produce the findings and our project
plan to proceed after the initial 30 days.

Sincerely,

Joshua Drake



Network Administrator wrote:

>That sounds good save two things.  We need to state what the project run
>dates are and what happens at or around the due date.  That is to say: we
>have the deliverable for testing (beta ready); more time is needed to
>complete core features (alpha ready) and therefore more funds are needed;
>the project is on hold due to features needed outside the scope of the
>project; etc, etc, etc...
>
>You get the idea.
>
>Quoting "Joshua D. Drake" <jd@commandprompt.com>:
>
>
>
>>Hello,
>>
>>  O.k. here are my thoughts on how this could work:
>>
>>  Command Prompt will set up an escrow account online at www.escrow.com.
>>  When the Escrow account totals 2000.00 and is released, Command Prompt
>>will dedicate a
>>  programmer for one month to debugging, documenting, reviewing,
>>digging, crying,
>>  screaming, begging and bleeding with the code. At the end of the month
>>and probably during
>>  depending on how everything goes Command Prompt will release its
>>findings.  The findings
>>  will include a project plan on moving forward over the next 5 months
>>(if that is what it takes) to
>>  produce the first functional pg_upgrade.
>>
>>  If the project is deemed as moving in the right direction by the
>>community members and specifically
>>  the core members we will setup milestone payments for the project.
>>
>>   What does everyone think?
>>
>>   Sincerely,
>>
>>   Joshua D. Drake
>>
>>
>>Dennis Gearon wrote:
>>
>>
>>
>>>I had already committed $50/mo.
>>>
>>>Robert Creager wrote:
>>>
>>>
>>>
>>>>Once upon a time (Tue, 16 Sep 2003 21:26:05 -0700)
>>>>Dennis Gearon <gearond@fireserve.net> uttered something amazingly
>>>>similar to:
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>>Robert Creager wrote:
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>>Once upon a time (Tue, 16 Sep 2003 12:59:37 -0700)
>>>>>>"Joshua D. Drake" <jd@commandprompt.com> uttered something
>>>>>>amazingly similar
>>>>>>to:
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>>If someone is willing to pony up 2000.00 per month for a period of
>>>>>>>at
>>>>>>>
>>>>>>>
>>>>>>Well, if you're willing to set up some sort of escrow, I'll put in
>>>>>>$100.  I
>>>>>>
>>>>>>
>>>>>>
>>>>>Is that $100 times once, or $100 X 6mos anticipated development time?
>>>>>
>>>>>
>>>>>
>>>>That's $100 once.  And last I looked, there are well over 1800
>>>>subscribers on
>>>>this list alone.  On the astronomically small chance that every one of
>>>>them did
>>>>what I'm doing, it would cover more than 6 months of development time
>>>>;-)  This
>>>>strikes me as like supporting public radio.  The individuals do some,
>>>>and the
>>>>corporations do a bunch.
>>>>
>>>>I'm just putting my money toward a great product, rather than
>>>>complaining that
>>>>it's not done.  Just like Joshua is doing.  You cannot hire a competent
>>>>programmer for $24k a year, so he is putting up some money on this also.
>>>>
>>>>There have been a couple of other bites from small businesses, so who
>>>>knows!
>>>>
>>>>You game?
>>>>
>>>>Cheers,
>>>>Rob
>>>>
>>>>
>>>>
>>>>
>>>>
>>--
>>Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
>>Postgresql support, programming shared hosting and dedicated hosting.
>>+1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
>>The most reliable support for the most reliable Open Source database.
>>
>>
>>
>>---------------------------(end of broadcast)---------------------------
>>TIP 5: Have you checked our extensive FAQ?
>>
>>               http://www.postgresql.org/docs/faqs/FAQ.html
>>
>>
>>
>
>
>
>

--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
The most reliable support for the most reliable Open Source database.



Re: State of Beta 2

From
Tom Lane
Date:
Manfred Koizar <mkoi-pg@aon.at> writes:
> I tend to believe that every code change or new feature that gets
> implemented is unique by its nature, and if it involves catalog
> changes it requires a unique upgrade script/tool.  How should a
> generic tool guess the contents of a new catalog relation?

*It does not have to*.  Perhaps you should go back and study what
pg_upgrade actually did.  It needed only minimal assumptions about the
format of either old or new catalogs.  The reason is that it mostly
relied on portability work done elsewhere (in pg_dump, for example).

> Rod's adddepend is a good example.

adddepend was needed because it was inserting knowledge not formerly
present.  I don't think it's representative.  Things we do more commonly
involve refactoring information --- for example, changing the division
of labor between pg_aggregate and pg_proc, or adding pg_cast to replace
what had been some hard-wired parser behavior.

> ... I wouldn't call it perfect (for example, I still don't know how
> to insert the new table into template0),

... in other words, it doesn't work and can't be made to work.

pg_upgrade would be a one-time solution for a fairly wide range of
upgrade problems.  I don't want to get into developing custom solutions
for each kind of catalog change we might want to make.  That's not a
productive use of time.

            regards, tom lane

Re: need for in-place upgrades

From
Christopher Browne
Date:
ron.l.johnson@cox.net (Ron Johnson) wrote:
> On Thu, 2003-09-18 at 16:29, Andrew Sullivan wrote:
>> On Sat, Sep 13, 2003 at 10:52:45AM -0500, Ron Johnson wrote:
>>
>> > So instead of 1TB of 15K fiber channel disks (and the requisite
>> > controllers, shelves, RAID overhead, etc), we'd need *two* TB of
>> > 15K fiber channel disks (and the requisite controllers, shelves,
>> > RAID overhead, etc) just for the 1 time per year when we'd upgrade
>> > PostgreSQL?
>>
>> Nope.  You also need it for the time when your vendor sells
>> controllers or chips or whatever with known flaws, and you end up
>> having hardware that falls over 8 or 9 times in a row.
>
> ????

This of course never happens in real life; expensive hardware is
_always_ UTTERLY reliable.

And the hardware vendors all have the same high standards as, well,
certain database vendors we might think of.

After all, Oracle and MySQL AB would surely never mislead their
customers about the merits of their database products any more than
HP, Sun, or IBM would about the possibility of their hardware having
tiny flaws.

And I would /never/ claim to have lost sleep as a result of flakey
hardware.  Particularly not when it's a HA fibrechannel array.  I'm
/sure/ that has never happened to anyone.  [The irony here should be
causing people to say "ow!"]
--
"cbbrowne","@","cbbrowne.com"
http://www3.sympatico.ca/cbbrowne/finances.html
"XML combines all the inefficiency of text-based formats with most of the
unreadability of binary formats :-) " -- Oren Tirosh

Re: State of Beta 2

From
Tom Lane
Date:
Manfred Koizar <mkoi-pg@aon.at> writes:
> Elsewhere in this current thread it has been suggested that the
> on-disk format will stabilize at some time in the future and should
> then be frozen to ensure painless upgrades.  IMHO, at the moment when
> data structures are declared stable and immutable the project is dead.

This is something that concerns me also.

> A working pg_upgrade is *not* the first thing we need.

Yes it is.  As you say later,

> ... But once the infrastructure is in place, things
> should get easier.

Until we have a working pg_upgrade, every little catalog change will
break backwards compatibility.  And I do not feel that the appropriate
way to handle catalog changes is to insist on one-off solutions for each
one.  Any quick look at the CVS logs will show that minor and major
catalog revisions occur *far* more frequently than changes that would
affect on-disk representation of user data.  If we had a working
pg_upgrade then I'd be willing to think about committing to "no user
data changes without an upgrade path" as project policy.  Without it,
any such policy would simply stop development in its tracks.

            regards, tom lane

Re: State of Beta 2

From
"Marc G. Fournier"
Date:

On Fri, 19 Sep 2003, Tom Lane wrote:

> Manfred Koizar <mkoi-pg@aon.at> writes:
> > Elsewhere in this current thread it has been suggested that the
> > on-disk format will stabilize at some time in the future and should
> > then be frozen to ensure painless upgrades.  IMHO, at the moment when
> > data structures are declared stable and immutable the project is dead.
>
> This is something that concerns me also.

But, is there anything wrong with striving for something you mentioned
earlier ... "spooling" data structure changes so that they don't happen
every release, but every other one, maybe?

> > ... But once the infrastructure is in place, things
> > should get easier.
>
> Until we have a working pg_upgrade, every little catalog change will
> break backwards compatibility.  And I do not feel that the appropriate
> way to handle catalog changes is to insist on one-off solutions for each
> one.  Any quick look at the CVS logs will show that minor and major
> catalog revisions occur *far* more frequently than changes that would
> affect on-disk representation of user data.  If we had a working
> pg_upgrade then I'd be willing to think about committing to "no user
> data changes without an upgrade path" as project policy.  Without it,
> any such policy would simply stop development in its tracks.

hmmm ... k, is it feasible to go a release or two at a time without on
disk changes?  if so, pg_upgrade might not be as difficult to maintain,
since, unless someone can figure out a way of doing it, 'on disk change
releases' could still require dump/reloads, with a period of stability in
between?

*Or* ... as we've seen more with this dev cycle then previous ones, how
much could be easily back-patched to the previous version(s) relatively
easily, without requiring on-disk changes?

Re: need for in-place upgrades

From
Christopher Browne
Date:
Centuries ago, Nostradamus foresaw when ron.l.johnson@cox.net (Ron Johnson) would write:
> On Thu, 2003-09-18 at 16:29, Andrew Sullivan wrote:
>> On Sat, Sep 13, 2003 at 10:52:45AM -0500, Ron Johnson wrote:
>>
>> > So instead of 1TB of 15K fiber channel disks (and the requisite
>> > controllers, shelves, RAID overhead, etc), we'd need *two* TB of
>> > 15K fiber channel disks (and the requisite controllers, shelves,
>> > RAID overhead, etc) just for the 1 time per year when we'd upgrade
>> > PostgreSQL?
>>
>> Nope.  You also need it for the time when your vendor sells
>> controllers or chips or whatever with known flaws, and you end up
>> having hardware that falls over 8 or 9 times in a row.
>
> ????

This of course never happens in real life; expensive hardware is
_always_ UTTERLY reliable.

And the hardware vendors all have the same high standards as, well,
certain database vendors we might think of.

After all, Oracle and MySQL AB would surely never mislead their
customers about the merits of their database products any more than
HP, Sun, or IBM would about the possibility of their hardware having
tiny flaws.

And I would never mislead anyone, either.  I'm sure I got a full 8
hours sleep last night.  I'm sure of it...
--
"cbbrowne","@","cbbrowne.com"
http://www3.sympatico.ca/cbbrowne/finances.html
"XML combines all the inefficiency of text-based formats with most of the
unreadability of binary formats :-) " -- Oren Tirosh

Re: need for in-place upgrades

From
Christopher Browne
Date:
scrappy@postgresql.org ("Marc G. Fournier") writes:
> On Thu, 18 Sep 2003, Andrew Sullivan wrote:
>
>> On Sat, Sep 13, 2003 at 10:27:59PM -0300, Marc G. Fournier wrote:
>> >
>> > I thought we were talking about upgrades here?
>>
>> You do upgrades without being able to roll back?
>
> Hadn't thought of it that way ... but, what would prompt someone to
> upgrade, then use something like erserver to roll back?  All I can
> think of is that the upgrade caused a lot of problems with the
> application itself, but in a case like that, would you have the time
> to be able to 're-replicate' back to the old version?

Suppose we have two dbs:

  db_a - Old version
  db_b - New version

Start by replicating db_a to db_b.

The approach would presumably be that at the time of the upgrade, you
shut off the applications hitting db_a (injecting changes into the
source), and let the final set of changes flow thru to db_b.

That brings db_a and db_b to having the same set of data.

Then reverse the flow, so that db_b becomes master, flowing changes to
db_a.  Restart the applications, configuring them to hit db_b.

db_a should then be just a little bit behind db_b, and be a "recovery
plan" in case the new version played out badly.

That's surely not what you'd _expect_; the point of the exercise was
for the upgrade to be an improvement.  But if something Truly Evil
happened, you might have to.  And when people are talking about "risk
management," and ask what you do if Evil Occurs, this is the way the
answer works.

It ought to be pretty cheap, performance-wise, to do things this way,
certainly not _more_ expensive than the replication was to keep db_b
up to date.
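The sequence above can be modeled as a toy sketch (plain Python dicts
standing in for databases, and a copy standing in for replication; this
illustrates only the order of the steps, not eRserver or any real
replication tool):

```python
import copy

def replicate(master, replica):
    # "Replication" here is just copying the master's state over;
    # a real system would ship incremental changes.
    for key, value in master.items():
        replica[key] = copy.deepcopy(value)

db_a = {"accounts": ["alice", "bob"]}   # old version
db_b = {}                                # new version

# 1. Replicate db_a to db_b while applications still write to db_a.
replicate(db_a, db_b)

# 2. Shut off the applications, let the final set of changes flow through.
db_a["accounts"].append("carol")        # last write before cutover
replicate(db_a, db_b)

# 3. Reverse the flow: db_b is now master, db_a trails as the recovery plan.
db_b["orders"] = ["order-1"]            # new writes hit db_b
replicate(db_b, db_a)

assert db_b["accounts"] == ["alice", "bob", "carol"]
assert db_a == db_b                     # db_a stays (nearly) in sync
```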
--
(reverse (concatenate 'string "gro.mca" "@" "enworbbc"))
http://www.ntlug.org/~cbbrowne/oses.html
Rules of  the Evil Overlord  #149. "Ropes supporting  various fixtures
will not be  tied next to open windows  or staircases, and chandeliers
will be hung way at the top of the ceiling."
<http://www.eviloverlord.com/>

Re: State of Beta 2

From
Tom Lane
Date:
"Joshua D. Drake" <jd@commandprompt.com> writes:
> The reality of pg_dump is not a good one. It is buggy and not very
> reliable.

I think everyone acknowledges that we have more work to do on pg_dump.
But we have to do that work anyway.  Spreading ourselves thinner by
creating a whole new batch of code for in-place upgrade isn't going to
improve the situation.  The thing I like about the pg_upgrade approach
is that it leverages a lot of code we already have and will need to
continue to maintain in any case.

Also, to be blunt: if pg_dump still has problems after all the years
we've put into it, what makes you think that in-place upgrade will
magically work reliably?

> This I am hoping
> changes in 7.4 as we moved to a pure "c" implementation.

Eh?  AFAIR, pg_dump has always been in C.

            regards, tom lane

Catalog vs. user table format (was Re: State of Beta 2)

From
Ron Johnson
Date:
On Sat, 2003-09-20 at 11:17, Tom Lane wrote:
> "Marc G. Fournier" <scrappy@postgresql.org> writes:
> > No, I'm not suggesting no catalog changes ... wait, I might be wording
> > this wrong ... there are two changes that right now requires a
> > dump/reload, changes to the catalogs and changes to the data structures,
> > no?  Or are these effectively inter-related?
>
> Oh, what you're saying is no changes in user table format.  Yeah, we

Whew, we're finally on the same page!

So, some definitions we can agree on?
"catalog change": CREATE or ALTER a pg_* table.
"on-disk structure", a.k.a. "user table format": the way that the
tables/fields are actually stored on disk.

So, a catalog change should *not* require a dump/restore, but an
ODS/UTF change should.

Agreed?

--
-----------------------------------------------------------------
Ron Johnson, Jr. ron.l.johnson@cox.net
Jefferson, LA USA

"they love our milk and honey, but preach about another way of living"
Merle Haggard, "The Fighting Side Of Me"


Re: Catalog vs. user table format (was Re: State of Beta

From
"Marc G. Fournier"
Date:

On Sat, 20 Sep 2003, Ron Johnson wrote:

> On Sat, 2003-09-20 at 11:17, Tom Lane wrote:
> > "Marc G. Fournier" <scrappy@postgresql.org> writes:
> > > No, I'm not suggesting no catalog changes ... wait, I might be wording
> > > this wrong ... there are two changes that right now requires a
> > > dump/reload, changes to the catalogs and changes to the data structures,
> > > no?  Or are these effectively inter-related?
> >
> > Oh, what you're saying is no changes in user table format.  Yeah, we
>
> Whew, we're finally on the same page!
>
> So, some definitions we can agree on?
> "catalog change": CREATE or ALTER a pg_* table.
> "on-disk structure", a.k.a. "user table format": the way that the
> tables/fields are actually stored on disk.
>
> So, a catalog change should *not* require a dump/restore, but an
> ODS/UTF change should.

As long as pg_upgrade is updated/tested for this, yes, that is what the
thought is ... but, that still requires someone(s) to step up and work
on/maintain pg_upgrade for this to happen ... all we are agreeing to right
now is implement a policy whereby maintaining pg_upgrade is *possible*,
not one where maintaining pg_upgrade is *done* ...


Re: need for in-place upgrades

From
Ron Johnson
Date:
On Fri, 2003-09-19 at 06:37, Christopher Browne wrote:
> ron.l.johnson@cox.net (Ron Johnson) wrote:
> > On Thu, 2003-09-18 at 16:29, Andrew Sullivan wrote:
> >> On Sat, Sep 13, 2003 at 10:52:45AM -0500, Ron Johnson wrote:
> >>
> >> > So instead of 1TB of 15K fiber channel disks (and the requisite
> >> > controllers, shelves, RAID overhead, etc), we'd need *two* TB of
> >> > 15K fiber channel disks (and the requisite controllers, shelves,
> >> > RAID overhead, etc) just for the 1 time per year when we'd upgrade
> >> > PostgreSQL?
> >>
> >> Nope.  You also need it for the time when your vendor sells
> >> controllers or chips or whatever with known flaws, and you end up
> >> having hardware that falls over 8 or 9 times in a row.
> >
> > ????
>
> This of course never happens in real life; expensive hardware is
> _always_ UTTERLY reliable.
>
> And the hardware vendors all have the same high standards as, well,
> certain database vendors we might think of.
>
> After all, Oracle and MySQL AB would surely never mislead their
> customers about the merits of their database products any more than
> HP, Sun, or IBM would about the possibility of their hardware having
> tiny flaws.

Well, I use Rdb, so I wouldn't know about that!

(But then, it's an Oracle product, and runs on HPaq h/w...)

> And I would /never/ claim to have lost sleep as a result of flakey
> hardware.  Particularly not when it's a HA fibrechannel array.  I'm
> /sure/ that has never happened to anyone.  [The irony herre should be
> causing people to say "ow!"]

Sure, I've seen expensive h/w flake out.  It was the "8 or 9 times
in a row" that confused me.

--
-----------------------------------------------------------------
Ron Johnson, Jr. ron.l.johnson@cox.net
Jefferson, LA USA

The difference between drunken sailors and Congressmen is that
drunken sailors spend their own money.


Re: State of Beta 2

From
Kaare Rasmussen
Date:
> No can do, unless your intent is to force people to work on pg_upgrade
> and nothing else (a position I for one would ignore ;-)).  With such a
> policy and no pg_upgrade we'd be unable to apply any catalog changes at
> all, which would pretty much mean that 7.5 would look exactly like 7.4.

Not sure about your position here. You claimed that it would be a good idea to
freeze the on disk format for at least a couple of versions. Do you argue
here that this cycle shouldn't start with the next version, or did you
reverse your thought?

If the former, I think you're right. There are some rather big changes close
to being made, if I have read this list correctly. Tablespaces and PITR would
certainly change it.

But if the freeze could start after 7.5 and last two-three years, it might
help things.

--
Kaare Rasmussen            --Linux, spil,--        Tlf:        3816 2582
Kaki Data                tshirts, merchandize      Fax:        3816 2501
Howitzvej 75               Åben 12.00-18.00        Email: kar@kakidata.dk
2000 Frederiksberg        Lørdag 12.00-16.00       Web:      www.suse.dk

Re: State of Beta 2

From
Tom Lane
Date:
Kaare Rasmussen <kar@kakidata.dk> writes:
> Not sure about your position here. You claimed that it would be a good idea to
> freeze the on disk format for at least a couple of versions.

I said it would be a good idea to freeze the format of user tables (and
indexes) across multiple releases.  That's distinct from the layout and
contents of system catalogs, which are things that we revise constantly.
We could not freeze the system catalogs without blocking development
work, and we should not make every individual catalog change responsible
for devising its own in-place-upgrade scheme either.  We need some
comprehensive tool for handling catalog upgrades automatically.  I think
pg_upgrade points the way to one fairly good solution, though I'd not
rule out other approaches if someone has a bright idea.

Clear now?

> Do you argue here that this cycle shouldn't start with the next
> version,

I have not said anything about that in this thread.  Now that you
mention it, I do think it'd be easier to start with the freeze cycle
after tablespaces are in place.  On the other hand, tablespaces might
not appear in 7.5 (they already missed the boat for 7.4).  And
tablespaces are something that we could expect pg_upgrade to handle
without a huge amount more work.  pg_upgrade would already need to
contain logic to determine the mapping from old-installation user table
file names to new-installation ones, because the table OIDs would
normally be different.  Migrating to tablespaces simply complicates that
mapping somewhat.  (I am assuming that tablespaces won't affect the
contents of user table files, only their placement in the Unix directory
tree.)

I think a reasonable development plan is to work on pg_upgrade assuming
the current physical database layout (no tablespaces), and concurrently
work on tablespaces.  The eventual merge would require teaching
pg_upgrade about mapping old to new filenames in a tablespace world.
It should only be a small additional amount of work to teach it how to
map no-tablespaces to tablespaces.
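That mapping could be sketched roughly like this (hypothetical Python
with invented OIDs and directory paths; real pg_upgrade would derive the
OID mapping and tablespace locations from the catalogs):

```python
import os

def map_table_file(old_oid, oid_map, data_dir, tablespace_map=None):
    """Return (old_path, new_path) for one user table file.

    User table files are named by OID, and old and new installations
    assign different OIDs.  A tablespace, if any, changes only the
    directory the file lives in, not its contents.
    """
    new_oid = oid_map[old_oid]                  # OIDs differ across installs
    old_path = os.path.join(data_dir, "old", "base", "1", str(old_oid))
    ts_dir = (tablespace_map or {}).get(new_oid)
    if ts_dir:                                  # tablespace: placement only
        new_path = os.path.join(ts_dir, str(new_oid))
    else:
        new_path = os.path.join(data_dir, "new", "base", "1", str(new_oid))
    return old_path, new_path

# No-tablespace case: the file moves between base/ directories.
old_p, new_p = map_table_file(16384, {16384: 24576}, "/pgdata")

# Tablespace case: same file, different placement in the directory tree.
old2, new2 = map_table_file(16384, {16384: 24576}, "/pgdata",
                            {24576: "/tblspc/fast"})
```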

In short, if people actually are ready to work on pg_upgrade now,
I don't see any big reason not to let them ...

            regards, tom lane

Re: State of Beta 2

From
Lamar Owen
Date:
Manfred Koizar wrote:
> On Thu, 18 Sep 2003 12:11:18 -0400, Lamar Owen <lowen@pari.edu> wrote:
>>Marc G. Fournier wrote:
>>>[...] upgrading is a key feature [...]

>>a migration tool
>>that could read the old format _without_a_running_old_backend_ [...]
>>the new backend is powerless to recover the old data.
>> OS upgrades [...], FreeBSD ports upgrades, and RPM
>>upgrades are absolutely horrid at this point. [...]
>>[censored] has a better system than we
>>[...] the pain of upgrading [...]
>>*I* should complain about a ramble? :-)

> Lamar, I *STRONGLY* agree with almost everything you say here and in
> other posts, except perhaps ...

> You et al. seem to think that system catalog changes wouldn't be a
> problem if only we could avoid page format changes.  This is not
> necessarily so.  Page format changes can be handled without much
> effort, if

No, I'm aware of the difference, and I understand the issues with
catalog changes.  Tom and I, among others, have discussed this.  We
talked about reorganizing the system catalog to separate the data that
typically changes with a release from the data that describes the user's
tables.  It is a hard thing to do, separating this data.

> Oh, that's me, I think.  I am to blame for the heap tuple header
> changes between 7.2 and 7.3;

It has happened at more than one version change, not just 7.2->7.3.  I
actually was thinking about a previous flag day.  So the plural still
stands.

> Later, in your "Upgrading rant" thread, I even posted some code
> (http://archives.postgresql.org/pgsql-hackers/2003-01/msg00294.php).
> Unfortunately this went absolutely unnoticed, probably because it
> looked so long because I fat-fingered the mail and included the code
> twice.  :-(

I don't recall that, but I believe you.  My antivirus software may have
flagged it if it had more than one . in the file name.  But I may go
back and look at it.  Again, I wasn't fingering 7.2->7.3 -- it has
happened more than once prior to that.

> A working pg_upgrade is *not* the first thing we need.  What we need
> first is willingness to not break backwards compatibility.

To this I agree.  But it must be done in stages, as Tom, Marc, and
others have already said (I read the rest of the thread before replying
to this message).  We can't simply declare a catalog freeze (which you
didn't do, I know), nor can we declare an on-disk format change freeze.
We need to think about what is required to make upgrades easy, not
what is required to write a one-off upgrade tool (which each version of
pg_upgrade ends up being).  Can the system catalog be made more
friendly?  Is upgrading by necessity a one-step process (that is, can we
stepwise migrate tables as they are used/upgraded individually)?  Can we
decouple the portions of the system catalogs that change from the
portions that give basic access to the user's data? That is, what would
be required to allow a backend to read old data tables?  An upgrade tool
is redundant if the backend is version agnostic and version aware.

Look, my requirements are simple.  I should be able to upgrade the
binaries and not lose access to my data.  That's the bottom line.
--
Lamar Owen
Director of Information Technology
Pisgah Astronomical Research Institute



Re: need for in-place upgrades (was Re: State of Beta 2)

From
Andrew Sullivan
Date:
On Thu, Sep 18, 2003 at 06:49:56PM -0300, Marc G. Fournier wrote:
>
> Hadn't thought of it that way ... but, what would prompt someone to
> upgrade, then use something like erserver to roll back?  All I can think
> of is that the upgrade caused alot of problems with the application
> itself, but in a case like that, would you have the time to be able to
> 're-replicate' back to the old version?

The trick is to have your former master set up as slave before you
turn your application back on.

The lack of a rollback strategy in PostgreSQL upgrades is a major
barrier for corporate use.  One can only do so much testing, and it's
always possible you've missed something.  You need to be able to go
back to some known-working state.

A

--
----
Andrew Sullivan                         204-4141 Yonge Street
Liberty RMS                           Toronto, Ontario Canada
<andrew@libertyrms.info>                              M2P 2A8
                                         +1 416 646 3304 x110


Re: need for in-place upgrades

From
Andrew Sullivan
Date:
On Sat, Sep 20, 2003 at 04:54:30PM -0500, Ron Johnson wrote:
> Sure, I've seen expensive h/e flake out.  It was the "8 or 9 times
> in a row" that confused me.

You need to talk to people who've had Sun Ex500s with the UltraSPARC
II built with the IBM e-cache modules.  Ask 'em about the reliability
of replacement parts.

A

--
----
Andrew Sullivan                         204-4141 Yonge Street
Liberty RMS                           Toronto, Ontario Canada
<andrew@libertyrms.info>                              M2P 2A8
                                         +1 416 646 3304 x110


Re: State of Beta 2

From
Bruce Momjian
Date:
Marc G. Fournier wrote:
>
>
> On Mon, 15 Sep 2003, Joshua D. Drake wrote:
>
> >
> > > I'm not going to rehash the arguments I have made before; they are all
> > > archived.  Suffice to say you are simply wrong.  The number of
> > > complaints over the years shows that there IS a need.
> >
> >
> > I at no point suggested that there was not a need. I only suggest that
> > the need may not be as great as some suspect or feel. To be honest -- if
> > your arguments were the "need" that everyone had... it would have been
> > implemented some how. It hasn't yet which would suggest that the number
> > of people that have the "need" at your level is not as great as the
> > number of people who have different "needs" from PostgreSQL.
>
> Just to add to this ... Bruce *did* start pg_upgrade, but I don't recall
> anyone else looking at extending it ... if the *need* was so great,
> someone would have step'd up and looked into adding to what was already
> there ...

I was thinking of working on pg_upgrade for 7.4, but other things seemed
more important.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073

Re: State of Beta 2

From
"Joshua D. Drake"
Date:

>Also, to be blunt: if pg_dump still has problems after all the years
>we've put into it, what makes you think that in-place upgrade will
>magically work reliably?
>
>

Fair enough. On another front then... would all this energy we are
talking about with pg_upgrade
be better spent on pg_dump/pg_dumpall/pg_restore?


>>This I am hoping
>>changes in 7.4 as we moved to a pure "c" implementation.
>>
>>

You're right, that was a mistype. I was very tired and reading three
different threads at the same time.

Sincerely,

Joshua Drake


>Eh?  AFAIR, pg_dump has always been in C.
>
>            regards, tom lane
>
>

--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
The most reliable support for the most reliable Open Source database.



Re: State of Beta 2

From
Joseph Shraibman
Date:
Tom Lane wrote:
> Kaare Rasmussen <kar@kakidata.dk> writes:
>
>>Not sure about your position here. You claimed that it would be a good idea to
>>freeze the on disk format for at least a couple of versions.
>
>
> I said it would be a good idea to freeze the format of user tables (and
> indexes) across multiple releases.

Indexes aren't as big a deal.  Reindexing is less painful than dump/restore.  It could
still lead to significant downtime for very large databases (at least for the tables
that are being reindexed), but not nearly as much.


Re: State of Beta 2

From
Tom Lane
Date:
"Joshua D. Drake" <jd@commandprompt.com> writes:
> Fair enough. On another front then... would all this energy we are
> talking about with pg_upgrade
> be better spent on pg_dump/pg_dumpall/pg_restore?

Well, we need to work on pg_dump too.  But I don't foresee it ever
getting fast enough to satisfy the folks who want zero-downtime
upgrades.  So pg_upgrade is also an important project.

            regards, tom lane

Re: State of Beta 2

From
Ron Johnson
Date:
On Mon, 2003-09-22 at 18:30, Tom Lane wrote:
> "Joshua D. Drake" <jd@commandprompt.com> writes:
> > Fair enough. On another front then... would all this energy we are
> > talking about with pg_upgrade
> > be better spent on pg_dump/pg_dumpall/pg_restore?
>
> Well, we need to work on pg_dump too.  But I don't foresee it ever
> getting fast enough to satisfy the folks who want zero-downtime

Multi-threaded pg_dump.

"It'll choke the IO system!!!" you say?  Well, heck, get a better
IO system!!!!

Or... use fewer threads.

No, it won't eliminate down-time, but it is necessary for big databases.
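pg_dump itself has no built-in parallelism in this era, so the multi-threading suggested above would have to be driven from outside, e.g. one dump process per table. A rough sketch follows; the orchestration function is invented for illustration, and the actual dump commands (e.g. `["pg_dump", "-t", tbl, "mydb"]`) are left to the caller so nothing here depends on a particular pg_dump invocation:

```python
# Sketch of running several per-table dump commands concurrently.
# Fewer workers means less I/O contention, per the point above.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_parallel(commands, max_workers=4):
    """Run each command (an argv list); return stdout of each, in order."""
    def run(cmd):
        return subprocess.run(cmd, capture_output=True, text=True).stdout
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run, commands))

# Harmless stand-in commands in place of real pg_dump invocations:
outputs = run_parallel([["echo", "table_a"], ["echo", "table_b"]])
```

Note that dumping tables in separate processes sacrifices the single-snapshot consistency a lone pg_dump run provides, which is part of why this is not a complete answer.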

--
-----------------------------------------------------------------
Ron Johnson, Jr. ron.l.johnson@cox.net
Jefferson, LA USA

"You ask us the same question every day, and we give you the
same answer every day. Someday, we hope that you will believe us..."
U.S. Secretary of Defense Donald Rumsfeld, to a reporter


Re: State of Beta 2

From
Vivek Khera
Date:
>>>>> "JS" == Joseph Shraibman <jks@selectacast.net> writes:

JS> Indexes aren't as big a deal.  Reindexing is less painful than
JS> dump/restore.  It could still lead to significant downtime for very
JS> large databases (at least the for the tables that are being
JS> reindexed), but not nearly as much.

Well, for me the create index part of the restore is what takes about
3x the time for the data load.  Total about 4 hours.  The dump takes 1
hour.

--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.                Khera Communications, Inc.
Internet: khera@kciLink.com       Rockville, MD       +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/

Re: State of Beta 2

From
Tom Lane
Date:
Vivek Khera <khera@kcilink.com> writes:
> Well, for me the create index part of the restore is what takes about
> 3x the time for the data load.  Total about 4 hours.  The dump takes 1
> hour.

What sort_mem do you use for the restore?  Have you tried increasing it?

            regards, tom lane
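For anyone trying this, sort_mem (in kilobytes) can be raised just for the restore session rather than globally; one form of the experiment Tom is suggesting:

```sql
-- raise per-session sort memory before loading a dump,
-- so index builds get a larger in-memory sort area
SET sort_mem = 65536;
```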

Re: State of Beta 2

From
"Marc G. Fournier"
Date:

On Tue, 23 Sep 2003, Tom Lane wrote:

> Vivek Khera <khera@kcilink.com> writes:
> > Well, for me the create index part of the restore is what takes about
> > 3x the time for the data load.  Total about 4 hours.  The dump takes 1
> > hour.
>
> What sort_mem do you use for the restore?  Have you tried increasing it?

I've tried restoring a >5gig database with sort_mem up to 100Meg in size,
and didn't find that it sped up the index creation enough to make a
difference ... shaved off a couple of minutes over the whole reload, so
seconds off of each index ... and that was with the WAL logs also disabled
:(


Re: State of Beta 2

From
Vivek Khera
Date:
>>>>> "TL" == Tom Lane <tgl@sss.pgh.pa.us> writes:

TL> Vivek Khera <khera@kcilink.com> writes:
>> Well, for me the create index part of the restore is what takes about
>> 3x the time for the data load.  Total about 4 hours.  The dump takes 1
>> hour.

TL> What sort_mem do you use for the restore?  Have you tried increasing it?

All tests have these non-default settings:
vacuum_mem = 131702
max_fsm_pages = 1000000
random_page_cost = 2
effective_cache_size = 12760    # `sysctl -n vfs.hibufspace` / BLKSZ
16k pages
30000 shared buffers


The four tests I've run so far are:

checkpoint_segments default
sort_mem 8192
restore time: 15344.57 seconds

checkpoint_segments default
sort_mem 131702
restore time:  15042.00 seconds

checkpoint_segments 50
sort_mem 8192
restore time: 11511.24 seconds

checkpoint_segments 50
sort_mem 131702
restore time:  11287.94 seconds
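The two settings varied above are ordinary postgresql.conf knobs; a fragment matching the fastest combination, with the values taken directly from the runs above:

```conf
# postgresql.conf fragment for the fastest run above
checkpoint_segments = 50
sort_mem = 131702       # in KB, as reported in these tests
```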


I have also enabled the extra query/timing logging you requested last
week, and just need to sanitize the table names to prevent any
confidential information from leaking out of the office.  I'll send
those along shortly to you directly, as they are pretty large.  I
wasn't able to do much work since the 'storm' hit on Thursday,
knocking out power to the house until Sunday...

Right now I'm running the same above tests with fsync=false to see if
that improves anything.  Next test will be Marc's test to disable the
WAL entirely.

--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.                Khera Communications, Inc.
Internet: khera@kciLink.com       Rockville, MD       +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/

Re: State of Beta 2

From
Vivek Khera
Date:
>>>>> "MGF" == Marc G Fournier <scrappy@postgresql.org> writes:

MGF> I've tried restoring a >5gig database with sort_mem up to 100Meg in size,
MGF> and didn't find that it sped up the index creation enough to make a
MGF> difference ... shaved off a couple of minutes over the whole reload, so
MGF> seconds off of each index ... and that was with the WAL logs also disabled
MGF> :(

Ditto for me.  Can you reproduce my results by increasing
checkpoint_buffers to some large value (I use 50)?  This shaved
something like 60 minutes off of my restore.

Re: State of Beta 2

From
Tom Lane
Date:
Vivek Khera <khera@kcilink.com> writes:
> Ditto for me.  Can you reproduce my results by increasing
> checkpoint_buffers to some large value (I use 50)?

You meant checkpoint_segments, right?  It might be interesting to
experiment with wal_buffers, too, though I'm not convinced that will
have a big impact.

            regards, tom lane

Re: State of Beta 2

From
"Marc G. Fournier"
Date:
actually, I didn't get near that kind of benefit ... with wal disabled,
and sort_mem/checkpoint_segments at default, I got:

  import start: 22:31:38
           end: 23:21:42 (~50min)
       buffers: 64
      sort_mem: 1024
  wal disabled: yes

with checkpoint_segments and sort_mem raised, I shaved about 8min:

       import start: 15:56:07
                end: 16:38:56 (~42min)
            buffers: 640
           sort_mem: 102400
checkpoint_segments: 64
       wal disabled: yes
     fsync disabled: yes

As a side note, a default install with 64 shared memory buffers came in
around 56min ... then again, if looking at percentages, that is about a
25% improvement ... it just doesn't look to be that big looking at the
straight #s :)

 On Tue, 23 Sep 2003, Vivek Khera wrote:

> >>>>> "MGF" == Marc G Fournier <scrappy@postgresql.org> writes:
>
> MGF> I've tried restoring a >5gig database with sort_mem up to 100Meg in size,
> MGF> and didn't find that it sped up the index creation enough to make a
> MGF> difference ... shaved off a couple of minutes over the whole reload, so
> MGF> seconds off of each index ... and that was with the WAL logs also disabled
> MGF> :(
>
> Ditto for me.  Can you reproduce my results by increasing
> checkpoint_buffers to some large value (I use 50)?  this shaved
> something like 60 minutes off of my restore.
>

Re: State of Beta 2

From
Bruce Momjian
Date:
Ron Johnson wrote:
> On Mon, 2003-09-15 at 15:23, Joshua D. Drake wrote:
> > > I'm not going to rehash the arguments I have made before; they are all
> > > archived.  Suffice to say you are simply wrong.  The number of
> > > complaints over the years shows that there IS a need.
> >
> >
> > I at no point suggested that there was not a need. I only suggest that
> > the need may not be as great as some suspect or feel. To be honest -- if
> > your arguments were the "need" that everyone had... it would have been
> > implemented some how. It hasn't yet which would suggest that the number
> > of people that have the "need" at your level is not as great as the
> > number of people who have different "needs" from PostgreSQL.
>
> But the problem is that as more and more people put larger and larger
> datasets, that are mission-critical, into PostgreSQL, the need will
> grow larger and larger.
>
> Of course, we understand the "finite resources" issue, and are not
> badgering/complaining.  Simply, we are trying to make our case that
> this is something that should go on the TODO list, and be kept in
> the back of developers' minds.

Added to TODO:

    * Allow major upgrades without dump/reload, perhaps using
      pg_upgrade

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073

Re: need for in-place upgrades (was Re: State of Beta 2)

From
Guy Fraser
Date:
Under circumstances where I have had critical upgrades, I have usually
used a new machine to build the upgrade. This allows me to use
revitalized equipment and a "clean" install for the upgraded server. If
something goes sideways, you just switch back to the old machine. This
is usually the quickest and most reliable method of upgrading a server.

Under a few circumstances I have not had a second machine, so I put in
a new drive and installed fresh, mounting the original drive to copy
the old data to the new drive before any modification. Then if the
upgrade goes sideways, just switch drives. This takes longer to
recover.

When I have upgraded under the most stringent economic restraints, I
have backed up the original data and configuration files before making
any changes. This is the most error-prone method of upgrading a
server, and takes the longest time to recover.

Using mirrored drives and splitting the mirror so that you have two
identical data sets can also be feasible. I did this once successfully,
but it requires having a spare drive or two to rebuild the mirror
without losing the old data.

Andrew Sullivan wrote:

>On Thu, Sep 18, 2003 at 06:49:56PM -0300, Marc G. Fournier wrote:
>
>
>>Hadn't thought of it that way ... but, what would prompt someone to
>>upgrade, then use something like erserver to roll back?  All I can think
>>of is that the upgrade caused alot of problems with the application
>>itself, but in a case like that, would you have the time to be able to
>>'re-replicate' back to the old version?
>>
>>
>
>The trick is to have your former master set up as slave before you
>turn your application back on.
>
>The lack of a rollback strategy in PostgreSQL upgrades is a major
>barrier for corporate use.  One can only do so much testing, and it's
>always possible you've missed something.  You need to be able to go
>back to some known-working state.
>
>A
>
>
>

--
Guy Fraser
Network Administrator
The Internet Centre
780-450-6787 , 1-888-450-6787

There is a fine line between genius and lunacy, fear not, walk the
line with pride. Not all things will end up as you wanted, but you
will certainly discover things the meek and timid will miss out on.





Re: State of Beta 2

From
Vivek Khera
Date:
>>>>> "TL" == Tom Lane <tgl@sss.pgh.pa.us> writes:

TL> Vivek Khera <khera@kcilink.com> writes:
>> Ditto for me.  Can you reproduce my results by increasing
>> checkpoint_buffers to some large value (I use 50)?

TL> You meant checkpoint_segments, right?  It might be interesting to
TL> experiment with wal_buffers, too, though I'm not convinced that will
TL> have a big impact.

The difference on restore with fsync=false was 2 seconds.  I'm
rebuilding PG with Marc's WAL-disabling patch and will see the change
there.

--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.                Khera Communications, Inc.
Internet: khera@kciLink.com       Rockville, MD       +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/

Re: State of Beta 2

From
Bruce Momjian
Date:
Tom Lane wrote:
> Dunno about MySQL.  I'm pretty sure I remember Ann Harrison stating that
> FireBird's disk structures haven't changed since the beginning of
> Interbase.  Which you might take as saying that they were a lot smarter
> than we are, but I suspect what it really means is that
> FireBird/Interbase hasn't undergone the kind of metamorphosis of purpose
> that the Postgres code base has.  Keep in mind that it started as an
> experimental academic prototype (representing some successful ideas and
> some not-so-successful ones), and the current developers have been
> laboring to convert it into an industrial-strength production tool ---
> keeping the good experimental ideas, but weeding out the bad ones, and
> adding production-oriented features that weren't in the original design.
> The entire argument that version-to-version stability should be a
> critical goal would have been foreign to the original developers of
> Postgres.

Though the fact that PostgreSQL came from the academic world is part of
it, the big reason we change on-disk format so often is that we are
improving faster than any other database on the planet.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073

Re: State of Beta 2

From
Bruce Momjian
Date:
With all the discussion about pg_upgrade, I saw no one offer to work on
it.

Does someone want to convert it to Perl?  I think that would be a better
language than shell script for this purpose, and C is too low-level.

---------------------------------------------------------------------------

Lamar Owen wrote:
> Marc G. Fournier wrote:
> > On Mon, 15 Sep 2003, Joshua D. Drake wrote:
> >>>I'm not going to rehash the arguments I have made before;
>
> >>I at no point suggested that there was not a need. I only suggest that
> >>the need may not be as great as some suspect or feel. To be honest -- if
> >>your arguments were the "need" that everyone had... it would have been
> >>implemented some how. It hasn't yet which would suggest that the number
>
> > Just to add to this ... Bruce *did* start pg_upgrade, but I don't recall
> > anyone else looking at extending it ... if the *need* was so great,
> > someone would have step'd up and looked into adding to what was already
> > there ...
>
> You'ns are going to make a liar out of me yet; I said I wasn't going to
> rehash the arguments.  But I am going to answer Marc's statement.  Need
> of the users != developer interest in implementing those.  This is the
> ugly fact of open source software -- it is developer-driven, not
> user-driven.  If it were user-driven in this case seamless upgrading
> would have already happened.  But the sad fact is that the people who
> have the necessary knowledge of the codebase in question are so
> complacent and comfortable with the current dump/reload cycle that they
> really don't seem to care about the upgrade issue.  That is quite a
> harsh statement to make, yes, and I know that is kind of
> uncharacteristic for me.  But, Marc, your statement thoroughly ignores
> the archived history of this issue on the lists.
>
> While pg_upgrade was a good first step (and I applaud Bruce for working
> on it), it was promptly broken because the developers who changed the
> on-disk format felt it wasn't important to make it continue working.
>
> Stepping up to the plate on this issue will require an intimate
> knowledge of the storage manager subsystem, a thorough knowledge of the
> system catalogs, etc.  This has been discussed at length; I'll not
> repeat it.  Just any old developer can't do this -- it needs the
> long-term focused attention of Tom, Jan, or Bruce.  And that isn't going
> to happen.  We know Tom's take on it; it's archived.  Maybe there's
> someone out there with the deep knowledge of the backend to make this
> happen who cares enough about it to make it happen, and who has the time
> to do it.  I care enough to do the work; but I have neither the deep
> knowledge necessary nor the time to make it happen.  There are many in
> my position.  But those who could make it happen don't seem to have the
> care level to do so.
>
> And that has nothing to do with user need as a whole, since the care
> level I mentioned is predicated by the developer interest level.  While
> I know, Marc, how the whole project got started (I have read the first
> posts), and I appreciate that you, Bruce, Thomas, and Vadim started the
> original core team because you were and are users of PostgreSQL, I
> sincerely believe that in this instance you are out of touch with this
> need of many of today's userbase. And I say that with full knowledge of
> PostgreSQL Inc.'s support role.  If given the choice between upgrading
> capability, PITR, and Win32 support, my vote would go to upgrading.
> Then migrating to PITR won't be a PITN.
>
> What good are great features if it's a PITN to get upgraded to them?
> --
> Lamar Owen
> Director of Information Technology
> Pisgah Astronomical Research Institute
>
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 6: Have you searched our list archives?
>
>                http://archives.postgresql.org
>

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073

Re: State of Beta 2

From
Lamar Owen
Date:
On Saturday 27 September 2003 04:50 pm, Bruce Momjian wrote:
> With all the discussion and pg_upgrade, I saw no one offer to work on
> it.

> Does someone want to convert it to Perl?  I think that would be a better
> language than shell script for this purpose, and C is too low-level.

Wow, Bruce, are you _that_ behind on your e-mail?  Hmmm, a Perl conversion
might be interesting....  BTW, thanks for the TODO item.
--
Lamar Owen
Director of Information Technology
Pisgah Astronomical Research Institute
1 PARI Drive
Rosman, NC  28772
(828)862-5554
www.pari.edu


Re: State of Beta 2

From
Tom Lane
Date:
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> With all the discussion and pg_upgrade, I saw no one offer to work on
> it.
> Does someone want to convert it to Perl?  I think that would be a better
> language than shell script for this purpose, and C is too low-level.

The reason that it needs to be rewritten in C is that it needs access to
internal stuff that the backend doesn't expose.  (For example, the
transaction counter, end-of-WAL pointer, etc.)  I don't think Perl would
offer anything except creating an entirely new dependency for Postgres.
Also, C code would be easier to keep in sync with the backend code that
accesses the same stuff.

            regards, tom lane

Re: State of Beta 2

From
Bruce Momjian
Date:
Lamar Owen wrote:
> On Saturday 27 September 2003 04:50 pm, Bruce Momjian wrote:
> > With all the discussion and pg_upgrade, I saw no one offer to work on
> > it.
>
> > Does someone want to convert it to Perl?  I think that would be a better
> > language than shell script for this purpose, and C is too low-level.
>
> Wow, Bruce, are you _that_ behind on your e-mail?  Hmmm, a perl conversion
> might be interesting....  BTW, thanks for the TODO item.

This is more the sweep-up of items that need some extra attention.  I am
not reading this for the first time.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073

Re: State of Beta 2

From
Bruce Momjian
Date:
Tom Lane wrote:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > With all the discussion and pg_upgrade, I saw no one offer to work on
> > it.
> > Does someone want to convert it to Perl?  I think that would be a better
> > language than shell script for this purpose, and C is too low-level.
>
> The reason that it needs to be rewritten in C is that it needs access to
> internal stuff that the backend doesn't expose.  (For example, the
> transaction counter, end-of-WAL pointer, etc.)  I don't think Perl would
> offer anything except creating an entirely new dependency for Postgres.
> Also, C code would be easier to keep in sync with the backend code that
> accesses the same stuff.

True, but doing all that text manipulation in C is going to be very hard
to do and maintain.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073

Re: State of Beta 2

From
"Nigel J. Andrews"
Date:
On Sat, 27 Sep 2003, Bruce Momjian wrote:

> Tom Lane wrote:
> > Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > > With all the discussion and pg_upgrade, I saw no one offer to work on
> > > it.
> > > Does someone want to convert it to Perl?  I think that would be a better
> > > language than shell script for this purpose, and C is too low-level.
> >
> > The reason that it needs to be rewritten in C is that it needs access to
> > internal stuff that the backend doesn't expose.  (For example, the
> > transaction counter, end-of-WAL pointer, etc.)  I don't think Perl would
> > offer anything except creating an entirely new dependency for Postgres.
> > Also, C code would be easier to keep in sync with the backend code that
> > accesses the same stuff.
>
> True, but doing all that text manipulation is C is going to be very hard
> to do and maintain.

What about using embedded Perl? I've never done it before, but the mention of it
in manpages has flashed past my eyes a couple of times, so I know it's possible.

Did the discussion settle on what is required for this? The last thing I noticed
was a distinction being made between system and user tables, but I don't
recall seeing a 'requirements' summary.


Nigel



Re: State of Beta 2

From
Bruce Momjian
Date:
Nigel J. Andrews wrote:
> > > The reason that it needs to be rewritten in C is that it needs access to
> > > internal stuff that the backend doesn't expose.  (For example, the
> > > transaction counter, end-of-WAL pointer, etc.)  I don't think Perl would
> > > offer anything except creating an entirely new dependency for Postgres.
> > > Also, C code would be easier to keep in sync with the backend code that
> > > accesses the same stuff.
> >
> > True, but doing all that text manipulation is C is going to be very hard
> > to do and maintain.
>
> What about using embedded perl? I've never done it before but the mention of it
> in manpages has flashed past my eyes a couple of times so I know it's possible.
>
> Did the discuss decide on what was required for this. Last I noticed was that
> there was a distinction being made between system and user tables but I don't
> recall seeing a 'requirements' summary.

My guess is that we could do it in Perl, and call some C programs as
required.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073

Re: State of Beta 2

From
Tom Lane
Date:
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> Tom Lane wrote:
>> The reason that it needs to be rewritten in C is that it needs access to
>> internal stuff that the backend doesn't expose.

> True, but doing all that text manipulation is C is going to be very hard
> to do and maintain.

Text manipulation?  I don't think that pg_upgrade has to do much text
manipulation.  (The shell-script version might do so, but that's only
because it has to cast the problem in terms of program I/O.)

            regards, tom lane

Re: State of Beta 2

From
Bruce Momjian
Date:
Tom Lane wrote:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > Tom Lane wrote:
> >> The reason that it needs to be rewritten in C is that it needs access to
> >> internal stuff that the backend doesn't expose.
>
> > True, but doing all that text manipulation is C is going to be very hard
> > to do and maintain.
>
> Text manipulation?  I don't think that pg_upgrade has to do much text
> manipulation.  (The shell-script version might do so, but that's only
> because it has to cast the problem in terms of program I/O.)

Uh, it seems to have to push a lot of data around, filename/relname
mapping, etc.  It almost wants a database.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073

Re: State of Beta 2

From
Ron Johnson
Date:
On Sat, 2003-09-27 at 17:13, Tom Lane wrote:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > Tom Lane wrote:
> >> The reason that it needs to be rewritten in C is that it needs access to
> >> internal stuff that the backend doesn't expose.
>
> > True, but doing all that text manipulation is C is going to be very hard
> > to do and maintain.
>
> Text manipulation?  I don't think that pg_upgrade has to do much text
> manipulation.  (The shell-script version might do so, but that's only
> because it has to cast the problem in terms of program I/O.)

There's always the general point that C has more pitfalls (mainly
from pointers/free()/malloc()), while HLLs do more for you, so you
have to write less code and, consequently, there are fewer bugs.

--
-----------------------------------------------------------------
Ron Johnson, Jr. ron.l.johnson@cox.net
Jefferson, LA USA

484,246 sq mi are needed for 6 billion people to live, 4 persons
per lot, in lots that are 60'x150'.
That is ~ California, Texas and Missouri.
Alternatively, France, Spain and The United Kingdom.


Re: State of Beta 2

From
"Joshua D. Drake"
Date:
Bruce Momjian wrote:

>With all the discussion and pg_upgrade, I saw no one offer to work on
>it.
>
>Does someone want to convert it to Perl?  I think that would be a better
>language than shell script for this purpose, and C is too low-level.
>
>
Actually, I offered to put a full-time programmer on it for 6 months. If
you review some of my earlier posts you will see my proposal.

Sincerely,

Joshua Drake








Re: State of Beta 2

From
Bruce Momjian
Date:
Joshua D. Drake wrote:
> Bruce Momjian wrote:
>
> >With all the discussion and pg_upgrade, I saw no one offer to work on
> >it.
> >
> >Does someone want to convert it to Perl?  I think that would be a better
> >language than shell script for this purpose, and C is too low-level.
> >
> >
> Actually I offered to put a full time programmer on it for 6 months. If
> you review some of my
> earlier posts you will see my proposal.

$$$ -- I wasn't looking to purchase a programmer.  :-)

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073

Rewriting pg_upgrade (was Re: State of Beta 2)

From
Ron Johnson
Date:
On Sat, 2003-09-27 at 16:50, Nigel J. Andrews wrote:
> On Sat, 27 Sep 2003, Bruce Momjian wrote:
>
> > Tom Lane wrote:
> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > > > With all the discussion and pg_upgrade, I saw no one offer to work on
> > > > it.
> > > > Does someone want to convert it to Perl?  I think that would be a better
> > > > language than shell script for this purpose, and C is too low-level.
> > >
> > > The reason that it needs to be rewritten in C is that it needs access to
> > > internal stuff that the backend doesn't expose.  (For example, the
> > > transaction counter, end-of-WAL pointer, etc.)  I don't think Perl would
> > > offer anything except creating an entirely new dependency for Postgres.
> > > Also, C code would be easier to keep in sync with the backend code that
> > > accesses the same stuff.

Isn't Perl pretty ubiquitous on "Unix" now, though?  Except maybe
Unixware....

> > True, but doing all that text manipulation is C is going to be very hard
> > to do and maintain.
>
> What about using embedded perl? I've never done it before but the mention of it
> in manpages has flashed past my eyes a couple of times so I know it's possible.
>
> Did the discuss decide on what was required for this. Last I noticed was that
> there was a distinction being made between system and user tables but I don't
> recall seeing a 'requirements' summary.

What about Perl w/ C modules?  Of course, there's my favorite: Python.
It's got a good facility for writing C modules, and I think it's
better for writing s/w that needs to be constantly updated.

(I swear, it's just circumstance that this particular .signature
came up at this time, but it is apropos.)

--
-----------------------------------------------------------------
Ron Johnson, Jr. ron.l.johnson@cox.net
Jefferson, LA USA

YODA: Code! Yes. A programmer's strength flows from code
maintainability. But beware of Perl. Terse syntax... more
than one way to do it...default variables. The dark side of code
maintainability are they. Easily they flow, quick to join you
when code you write. If once you start down the dark path,
forever will it dominate your destiny, consume you it will.


Re: Rewriting pg_upgrade (was Re: State of Beta 2)

From
"Marc G. Fournier"
Date:

On Sat, 27 Sep 2003, Ron Johnson wrote:

> Isn't Perl pretty ubiquitous on "Unix" now, though?  Except maybe
> Unixware....

I know that Solaris now has it included by default ...


Re: Rewriting pg_upgrade (was Re: State of Beta 2)

From
Larry Rosenman
Date:
perl ships on UnixWare (5.005, but that will change in UP3).

LER


--On Saturday, September 27, 2003 22:42:02 -0300 "Marc G. Fournier"
<scrappy@postgresql.org> wrote:

>
>
> On Sat, 27 Sep 2003, Ron Johnson wrote:
>
>> Isn't Perl pretty ubiquitous on "Unix" now, though?  Except maybe
>> Unixware....
>
> I know that Solaris now has it included by default ...
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 8: explain analyze is your friend
>



--
Larry Rosenman                     http://www.lerctr.org/~ler
Phone: +1 972-414-9812                 E-Mail: ler@lerctr.org
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749


Re: State of Beta 2

From
"Joshua D. Drake"
Date:
>
>$$$ -- I wasn't looking to purchase a programmer.  :-)
>
>

Well, sometimes it takes money to get things done. Personally I don't see
a big need for pg_upgrade, but there were enough people making noise about
it that it made sense to make the proposal. Several people did come back
and offer to cough up a little bit, but not enough to get the project done.

My preference is to see all that work going into pg_dump, pg_dumpall and
pg_restore.

Sincerely,

Joshua Drake





Re: State of Beta 2

From
Lamar Owen
Date:
On Saturday 27 September 2003 09:45 pm, Joshua D. Drake wrote:
> >$$$ -- I wasn't looking to purchase a programmer.  :-)

> Well sometimes it takes money to get things done. Personally I don't see
> a big need
> for pg_upgrade but there was enough people making noise about it that it
> made sense
> to make the proposal. Several people did come back and offer to cough up
> a little bit
> but not enough to get the project done.

I could always forward you my fan mail. (Context for the following message:
I was extolling the group of people who help me build the various RPM sets
as an example of how backports of Fedora Core packages could be done for
'Fedora Legacy' stuff. Many thanks to those who help me, BTW.)

===================
Re: I volunteer
From: Chuck Wolber <chuckw@quantumlinux.com>
To: fedora-devel-list@redhat.com

> I as PostgreSQL RPM maintainer for the PostgreSQL Global Development
> Group do something similar to this using a loose group of volunteers.

<TROLL>
Ahhh, so you're the one. Perhaps you could write a postgreSQL RPM with
upgrade functionality that actually works?
</TROLL>

-Chuck

--
Quantum Linux Laboratories - ACCELERATING Business with Open Technology
   * Education                  | -=^ Ad Astra Per Aspera ^=-
   * Integration                | http://www.quantumlinux.com
   * Support                    | chuckw@quantumlinux.com
=====================
You know, I don't mind owning up to my own bugs.  But this bug ain't mine.
--
Lamar Owen
Director of Information Technology
Pisgah Astronomical Research Institute
1 PARI Drive
Rosman, NC  28772
(828)862-5554
www.pari.edu


Re: Rewriting pg_upgrade (was Re: State of Beta 2)

From
"Marc G. Fournier"
Date:

On Sat, 27 Sep 2003, Larry Rosenman wrote:

> perl ships on UnixWare (5.005, but that will change in UP3).

In what way? :)  It won't ship anymore ... or upgraded?
>
> LER
>
>
> --On Saturday, September 27, 2003 22:42:02 -0300 "Marc G. Fournier"
> <scrappy@postgresql.org> wrote:
>
> >
> >
> > On Sat, 27 Sep 2003, Ron Johnson wrote:
> >
> >> Isn't Perl pretty ubiquitous on "Unix" now, though?  Except maybe
> >> Unixware....
> >
> > I know that Solaris now has it included by default ...
> >
> >
> > ---------------------------(end of broadcast)---------------------------
> > TIP 8: explain analyze is your friend
> >
>
>
>
> --
> Larry Rosenman                     http://www.lerctr.org/~ler
> Phone: +1 972-414-9812                 E-Mail: ler@lerctr.org
> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749
>

Re: Rewriting pg_upgrade (was Re: State of Beta 2)

From
Larry Rosenman
Date:

--On Sunday, September 28, 2003 00:14:18 -0300 "Marc G. Fournier"
<scrappy@postgresql.org> wrote:

>
>
> On Sat, 27 Sep 2003, Larry Rosenman wrote:
>
>> perl ships on UnixWare (5.005, but that will change in UP3).
>
> In what way? :)  It won't ship anymore ... or upgraded?
upgraded to 5.8.0

(sorry, should have been more clear :-))

>>
>> LER
>>
>>
>> --On Saturday, September 27, 2003 22:42:02 -0300 "Marc G. Fournier"
>> <scrappy@postgresql.org> wrote:
>>
>> >
>> >
>> > On Sat, 27 Sep 2003, Ron Johnson wrote:
>> >
>> >> Isn't Perl pretty ubiquitous on "Unix" now, though?  Except maybe
>> >> Unixware....
>> >
>> > I know that Solaris now has it included by default ...
>> >
>> >
>> > ---------------------------(end of
>> > broadcast)--------------------------- TIP 8: explain analyze is your
>> > friend
>> >
>>
>>
>>
>> --
>> Larry Rosenman                     http://www.lerctr.org/~ler
>> Phone: +1 972-414-9812                 E-Mail: ler@lerctr.org
>> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749
>>



--
Larry Rosenman                     http://www.lerctr.org/~ler
Phone: +1 972-414-9812                 E-Mail: ler@lerctr.org
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749


Re: State of Beta 2

From
Dennis Gearon
Date:
Ron Johnson wrote:

>There's always the general point that C has more pitfalls (mainly
>from pointers/free()/malloc(), and HLLs do more for you, thus you
>have to code less, and, consequently, there are fewer bugs.
>
Someday, they're going to make a language called:

    CBC, "C Bounds Checked"

No buffer overflows; all memory allocations create a memory object that
self-expands or contracts as necessary, or raises an exception if it tries
to go past a limit you pass as an argument to malloc().

With gigabytes of real memory and 100-plus gigabytes of virtual memory,
the programmer should not handle memory management any more. Consumers
and software users expect programmers to give up their pride and let go
of total control of the memory model (like they have it now). The only
exception might be hardware drivers.
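The bounds-checked allocation idea above can be sketched in a few lines (pure illustration; no such C dialect exists, so Python stands in for it here, and the BoundedBuffer class is hypothetical):

```python
# Sketch of "C Bounds Checked": an allocation that grows on demand but
# raises instead of overflowing once it hits a caller-supplied limit.

class BoundedBuffer:
    def __init__(self, limit):
        self.limit = limit          # hard cap, like the limit argument to malloc()
        self.data = bytearray()     # storage expands as needed

    def append(self, chunk):
        if len(self.data) + len(chunk) > self.limit:
            raise MemoryError(
                f"write of {len(chunk)} bytes would exceed limit {self.limit}")
        self.data.extend(chunk)     # grows automatically, never overflows

buf = BoundedBuffer(limit=8)
buf.append(b"hello")                # fits: 5 bytes
try:
    buf.append(b"world")            # 5 more would exceed the 8-byte cap
except MemoryError as e:
    print("caught:", e)
```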

Nobody say C#, OK? An Msoft imposed solution that integrates all their
products, mistakes, football stadium sized APIs, and private backdoors
is not the answer.



Re: State of Beta 2

From
Ron Johnson
Date:
On Sat, 2003-09-27 at 22:19, Dennis Gearon wrote:
> Ron Johnson wrote:
>
> >There's always the general point that C has more pitfalls (mainly
> >from pointers/free()/malloc(), and HLLs do more for you, thus you
> >have to code less, and, consequently, there are fewer bugs.
> >
> Someday, they're going to make a langauge called:
>
>     CBC, "C Bounds Checked"
>
> No buffer overflows, all memory allocs and mallocs create a memory
> object that self expands or contracts as necessary, or issues an
> exception if it tries to go past a limit you put as an argumen to a malloc.
>
> With gigabytes of real memory and 100 gigibytes plus of virtual memory,
> the programmer should not handle memory management any more. The
> consumers and software users expect programmers to give up their pride
> and let go of total control of the memory model, (like they have it now
> ). The only excetion might be hardware drivers.

Some would say that that's what Java and C++ are for.  I'd do more
Java programming if it didn't have an API the size of Montana; no,
make that Alaska and a good chunk of Siberia.

But still, multiple pointers being able to point to the same chunk
of the heap will doom any solution to inefficiency.

IMNSHO, only the kernel and *high-performance* products should be
written in C.  Everything else should be written in HLLs.  Anything
from COBOL (still a useful language), FORTRAN, modern BASICs, to
pointer-less Pascal, Java, Smalltalk, Lisp, and scripting languages.

Note that I did *not* mention C++.

> Nobody say C#, OK? An Msoft imposed solution that integrates all their
> products, mistakes, football stadium sized APIs, and private backdoors
> is not the answer.

natch!

--
-----------------------------------------------------------------
Ron Johnson, Jr. ron.l.johnson@cox.net
Jefferson, LA USA

"they love our milk and honey, but preach about another way of living"
Merle Haggard, "The Fighting Side Of Me"


Re: Rewriting pg_upgrade (was Re: State of Beta 2)

From
Greg Stark
Date:
Ron Johnson <ron.l.johnson@cox.net> writes:

> > > Tom Lane wrote:
> > > > The reason that it needs to be rewritten in C is that it needs access to
> > > > internal stuff that the backend doesn't expose.  (For example, the
> > > > transaction counter, end-of-WAL pointer, etc.)  I don't think Perl would
> > > > offer anything except creating an entirely new dependency for Postgres.
> > > > Also, C code would be easier to keep in sync with the backend code that
> > > > accesses the same stuff.

> What about Perl w/ C modules?  Of course, there's my favorite: Python.

Fwiw, it's pretty easy to call out to C functions from perl code these days.

bash-2.05b$ perl -e 'use Inline C => "int a(int i,int j) { return i+j;}"; print(a(1,2),"\n")'
3

That said, I don't know if this is really such a good approach. I don't see
why you would need much string manipulation at all. The C code can just
construct directly whatever data structures it needs and call directly
whatever functions it needs. Doing string manipulation to construct dynamic
SQL and then hoping it gets interpreted and executed the way you expect
seems a roundabout way to go about getting things done.
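Greg's point can be shown in miniature (a Python stand-in, not anything from pg_upgrade; both rename functions and the toy "RENAME" command syntax are made up for illustration):

```python
# Serializing intent into text and re-parsing it is fragile; passing the
# actual values directly has nothing to break.

def rename_via_string(catalog, sql):
    # fragile: round-trips the request through a string, then re-parses it
    verb, old, _, new = sql.split()
    assert verb == "RENAME"
    catalog[new] = catalog.pop(old)

def rename_direct(catalog, old, new):
    # robust: the caller hands over the actual values, nothing to re-parse
    catalog[new] = catalog.pop(old)

cat = {"user table": 16384}
rename_direct(cat, "user table", "users")   # works even with a space in the name
print(cat)

try:
    rename_via_string({"user table": 16384}, "RENAME user table TO users")
except ValueError:
    # split() yields five tokens, so the re-parse falls apart
    print("string round-trip broke on a name with a space")
```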

--
greg

C/C++/Java [was Re: State of Beta 2]

From
Shridhar Daithankar
Date:
On Sunday 28 September 2003 09:36, Ron Johnson wrote:
> On Sat, 2003-09-27 at 22:19, Dennis Gearon wrote:
> > Ron Johnson wrote:
> > >There's always the general point that C has more pitfalls (mainly
> > >from pointers/free()/malloc(), and HLLs do more for you, thus you
> > >have to code less, and, consequently, there are fewer bugs.
> >
> > Someday, they're going to make a langauge called:
> >
> >     CBC, "C Bounds Checked"
> >
> > No buffer overflows, all memory allocs and mallocs create a memory
> > object that self expands or contracts as necessary, or issues an
> > exception if it tries to go past a limit you put as an argumen to a
> > malloc.
> >
> > With gigabytes of real memory and 100 gigibytes plus of virtual memory,
> > the programmer should not handle memory management any more. The
> > consumers and software users expect programmers to give up their pride
> > and let go of total control of the memory model, (like they have it now
> > ). The only excetion might be hardware drivers.
>
> Some would say that that's what Java and C++ are for.  I'd do more
> Java programming if it didn't have an API the size of Montana, no
> make that Alaska and a good chunk of Siberia.
>
> But still, multiple pointers being able to point to the same chunk
> of the heap will doom any solution to inefficiency.
>
> IMNSHO, only the kernel and *high-performance* products should be
> written in C.  Everything else should be written in HLLs.  Anything
> from COBOL (still a useful language), FORTRAN, modern BASICs, to
> pointer-less Pascal, Java, Smalltalk, Lisp, and scripting languages.
>
> Note that I did *not* mention C++.

Duh. I would say smart pointers in C++ take care of memory errors without
adding the inefficiencies and latency of garbage collection. There are
plenty of examples floating around the net.

It's not about C's ability to provide built-in bounds checking. It's about
programmers following discipline, abstraction and design. It's just that C
makes those errors apparent in a very rude and blunt way..:-)

I hate Java except for the unified APIs it provides. Compensating for
programmer mistakes by throwing additional resources at the problem is not
my idea of a good product. But unfortunately most people are more concerned
with getting a product out the door than with giving it the attention needed
to make a robust product (like the one I work on.. 10 years old and still
going strong..:-))

The business of software development has commoditized itself.. this is just
a sad side effect of that..

 Shridhar


Re: Rewriting pg_upgrade (was Re: State of Beta 2)

From
"Jim C. Nasby"
Date:
On Sat, Sep 27, 2003 at 10:42:02PM -0300, Marc G. Fournier wrote:
> On Sat, 27 Sep 2003, Ron Johnson wrote:
>
> > Isn't Perl pretty ubiquitous on "Unix" now, though?  Except maybe
> > Unixware....
>
> I know that Solaris now has it included by default ...

FWIW, FreeBSD just removed it (in the 5.x versions). Of course you can
still easily install it from ports.
--
Jim C. Nasby, Database Consultant                  jim@nasby.net
Member: Triangle Fraternity, Sports Car Club of America
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

Re: Rewriting pg_upgrade (was Re: State of Beta 2)

From
Bruce Momjian
Date:
Jim C. Nasby wrote:
> On Sat, Sep 27, 2003 at 10:42:02PM -0300, Marc G. Fournier wrote:
> > On Sat, 27 Sep 2003, Ron Johnson wrote:
> >
> > > Isn't Perl pretty ubiquitous on "Unix" now, though?  Except maybe
> > > Unixware....
> >
> > I know that Solaris now has it included by default ...
>
> FWIW, FreeBSD just removed it (in the 5.x versions). Of course you can
> still easily install it from ports.

Interesting.  Why would they remove it?

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073

Re: Rewriting pg_upgrade (was Re: State of Beta 2)

From
"Marc G. Fournier"
Date:

On Sun, 28 Sep 2003, Bruce Momjian wrote:

> Jim C. Nasby wrote:
> > On Sat, Sep 27, 2003 at 10:42:02PM -0300, Marc G. Fournier wrote:
> > > On Sat, 27 Sep 2003, Ron Johnson wrote:
> > >
> > > > Isn't Perl pretty ubiquitous on "Unix" now, though?  Except maybe
> > > > Unixware....
> > >
> > > I know that Solaris now has it included by default ...
> >
> > FWIW, FreeBSD just removed it (in the 5.x versions). Of course you can
> > still easily install it from ports.
>
> Interesting.  Why would they remove it?

I can't recall the full justification, but there was a lot of work done to
remove any operating system dependencies on perl so that it could be
removed ... there are several versions of perl currently 'stable', and so
many people crying out "can't we have this one by default" that it was just
easier to let people install whichever version they want from ports when
they install the OS ...


Re: Rewriting pg_upgrade (was Re: State of Beta 2)

From
"Jim C. Nasby"
Date:
On Sun, Sep 28, 2003 at 12:38:03PM -0400, Bruce Momjian wrote:
> Jim C. Nasby wrote:
> > On Sat, Sep 27, 2003 at 10:42:02PM -0300, Marc G. Fournier wrote:
> > > On Sat, 27 Sep 2003, Ron Johnson wrote:
> > >
> > > > Isn't Perl pretty ubiquitous on "Unix" now, though?  Except maybe
> > > > Unixware....
> > >
> > > I know that Solaris now has it included by default ...
> >
> > FWIW, FreeBSD just removed it (in the 5.x versions). Of course you can
> > still easily install it from ports.
>
> Interesting.  Why would they remove it?

I believe it was essentially because it was starting to take up a good
chunk of space in the base install and it was beginning to cause trouble:
the parts of the OS that used it depended on version X, while the user
wanted version Y, etc. So they rewrote all the perl code in the OS in
another language and pulled perl from the base distro. There's more info
to be had in the mailing list archives, either in freebsd-stable or
freebsd-current.

Realistically, many systems will still end up with perl installed, but I
can see where dedicated database servers might well not. And it'd be a
bit of a pain if the PostgreSQL port required perl. But of course this
is just one OS.
--
Jim C. Nasby, Database Consultant                  jim@nasby.net
Member: Triangle Fraternity, Sports Car Club of America
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

Re: Rewriting pg_upgrade

From
Christopher Browne
Date:
pgman@candle.pha.pa.us (Bruce Momjian) writes:
> Jim C. Nasby wrote:
>> FWIW, FreeBSD just removed it (in the 5.x versions). Of course you can
>> still easily install it from ports.
>
> Interesting.  Why would they remove it?

Because it's a REALLY BIG ball of mud to include as a core dependency?

Don't get me wrong; I have several Emacs buffers presently open to
Perl programs; it's a _useful_ ball of mud.  But big ball of mud it
certainly is, and it seems unremarkable that there would be some
reluctance to be dependent on it.

There's a LOT of stuff going on with Perl (Parrot + Perl6), and for
the FreeBSD folk to be reluctant to "contract" to all that change
seems unsurprising.
--
output = reverse("ofni.smrytrebil" "@" "enworbbc")
<http://dev6.int.libertyrms.com/>
Christopher Browne
(416) 646 3304 x124 (land)

Re: Rewriting pg_upgrade

From
Doug McNaught
Date:
Christopher Browne <cbbrowne@libertyrms.info> writes:

> pgman@candle.pha.pa.us (Bruce Momjian) writes:
> > Jim C. Nasby wrote:
> >> FWIW, FreeBSD just removed it (in the 5.x versions). Of course you can
> >> still easily install it from ports.
> >
> > Interesting.  Why would they remove it?
>
> Because it's a REALLY BIG ball of mud to include as a core dependancy?

/agree

Also, it's quite probable that people installing/upgrading PG would
have anything from 5.003 onward.  Knowing what programming constructs
you can use and still be compatible with older versions requires quite
a bit of scholarship.

I know that if I were upgrading PG on, say, a Red Hat 6.2 box and was
forced to upgrade Perl to 5.8.1 as a dependency I'd be swearing pretty
hard...

-Doug