Thread: 7.2.3?

7.2.3?

From
Bruce Momjian
Date:
I have seen no discussion on whether to go ahead with a 7.2.3 to add
several serious fixes Tom has made to the code in the past few days. 

Are we too close to 7.3 for this to be worthwhile?  Certainly there will
be people distributing 7.2.X for some time as 7.3 stabilizes.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073


Re: 7.2.3?

From
Justin Clift
Date:
Bruce Momjian wrote:
> 
> I have seen no discussion on whether to go ahead with a 7.2.3 to add
> several serious fixes Tom has made to the code in the past few days.

This will allow production sites to run the 7.2 series and also do
VACUUM FULL won't it?

If so, then the idea is already pretty good.  :-)

Which other fixes would be included?

Regards and best wishes,

Justin Clift

> Are we too close to 7.3 for this to be worthwhile?  Certainly there will
> be people distributing 7.2.X for some time as 7.3 stabilizes.
> 
> --
>   Bruce Momjian                        |  http://candle.pha.pa.us
>   pgman@candle.pha.pa.us               |  (610) 359-1001
>   +  If your life is a hard drive,     |  13 Roberts Road
>   +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
> 

-- 
"My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there."  - Indira Gandhi


Re: 7.2.3?

From
Lamar Owen
Date:
On Saturday 28 September 2002 02:36 pm, Bruce Momjian wrote:
> I have seen no discussion on whether to go ahead with a 7.2.3 to add
> several serious fixes Tom has made to the code in the past few days.

> Are we too close to 7.3 for this to be worthwhile?  Certainly there will
> be people distributing 7.2.X for some time as 7.3 stabilizes.

IMHO, I believe a 7.2.3 is worthwhile.  It isn't _that_ much effort, is it?  I 
am most certainly of the school of thought that backporting serious issues 
into the last stable release is a Good Thing.  I don't think a released 7.3 
should prevent us from a 7.2.4 down the road, either -- or even a 7.1.4 if a 
serious security issue were to be found there.  Probably not a 7.0.4, though.  
And definitely not a 6.5.4.  Some people can have great difficulty migrating 
-- if we're not going to make it easy for people to migrate, we should 
support older versions with fixes.  IMHO, of course.

If it hasn't already, a fix for the Red Hat 7.3/glibc mktime(3) issue 
(workaround really) would be nice, as I understand the 7.3 branch has one.

RPMs will take me all of an hour if I'm at work when it's released.  That is, 
if my wife doesn't go into labor first (she's at 37 weeks and having 
Braxton-Hicks already) -- this will be #4.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11


Re: 7.2.3?

From
Alvaro Herrera
Date:
Justin Clift wrote: 

> Bruce Momjian wrote:
>
> > I have seen no discussion on whether to go ahead with a 7.2.3 to add
> > several serious fixes Tom has made to the code in the past few days.
> 
> This will allow production sites to run the 7.2 series and also do
> VACUUM FULL won't it?
> 
> If so, then the idea is already pretty good.  :-)
> 
> Which other fixes would be included?

At the very least, the VACUUM code should prevent VACUUM from running
inside a function.  At least one user has been bitten by it.

Memory leaks and such in the PL modules should be backported also.

-- 
Alvaro Herrera (<alvherre[a]atentus.com>)
"El sentido de las cosas no viene de las cosas, sino de
las inteligencias que las aplican a sus problemas diarios
en busca del progreso." (Ernesto Hernández-Novich)



Re: 7.2.3?

From
Tom Lane
Date:
Alvaro Herrera <alvherre@atentus.com> writes:
> Memory leaks and such in the PL modules should be backported also.

This is getting out of hand :-(

7.2 is in maintenance status at this point.  I'm willing to do backports
for bugs that cause data loss, like this VACUUM/CLOG issue.
Performance problems are not on the radar screen at all (especially
not when the putative fixes for them haven't received much of any
testing, and are barely worthy to be called beta status).

We do not have either the developer manpower or the testing resources
to do more than the most minimal maintenance on back versions.  Major
back-port efforts just aren't going to happen.  If they did, they would
significantly impact our ability to work on 7.3 and up; does that seem
like a good tradeoff to you?
        regards, tom lane


Re: 7.2.3?

From
Alvaro Herrera
Date:
Tom Lane wrote: 

> Alvaro Herrera <alvherre@atentus.com> writes:
> > Memory leaks and such in the PL modules should be backported also.
> 
> This is getting out of hand :-(

Yes, I agree with you.

> Major back-port efforts just aren't going to happen.  If they did,
> they would significantly impact our ability to work on 7.3 and up;
> does that seem like a good tradeoff to you?

I understand the issue.  I also understand that it's very nice for
PostgreSQL to advance very quickly, and that requiring backports (and
the subsequent slowdown) is not nice at all.  However, for users it's
very important to have the fixes that go into newer versions...
_without_ the burden of having to upgrade!

I agree with Lamar that upgrading is a very difficult process right now.
Requiring huge amounts of disk space and database downtime to do a
dump/restore is in some cases too high a price to pay.  So maybe the
upgrade process itself should be looked at, instead of wasting time on
people who stay behind because of the price of that process.

Maybe there is some way of making the life easier for the upgrader.
Let's see, when you upgrade there are basically two things that change:

a) system catalogs
   Going from one version to another requires a number of changes: new
   tuples, deleted tuples, new attributes, deleted attributes.  On-line
   transformation of the syscatalogs for the first three types seems easy.
   The last one may be difficult, but it also may not be; I'm not sure.
   It will require a standalone backend for shared relations and such,
   but hey, it's much cheaper than the process that's required now.

b) on-disk representation of user data
   This is not easy.  Upgrading means changing each filenode from one
   version to another; it requires a tool that understands both (and
   maybe more than two) versions.  It also requires a backend that is
   able to detect that a page is not the version it should be, and
   either abort or convert it on the fly (this last possibility seems
   very nice).

   Note that only tables should be converted: other objects (indexes)
   should just be rebuilt.
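
To make the detect-and-convert idea in (b) concrete, here is a minimal C
sketch of the kind of check a backend could make whenever it reads a page
in, under the assumption that the page header carries a layout version
number.  The 8K page size matches PostgreSQL's default, but
PageHeaderSketch, page_layout_version, the version constants and
convert_page_v1_to_v2() are made up for the illustration; they are not the
actual page header layout or API of any release.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define PAGE_SIZE            8192
    #define CURRENT_PAGE_VERSION 2      /* hypothetical "new" on-disk layout */

    /* Hypothetical page header: only the version field matters here. */
    typedef struct PageHeaderSketch
    {
        uint16_t page_layout_version;
        /* ... rest of header and tuple data follow ... */
    } PageHeaderSketch;

    /* Hypothetical in-place conversion from the old layout to the new one. */
    static void
    convert_page_v1_to_v2(char *page)
    {
        /* rearrange header and tuples as required by the format change ... */
        ((PageHeaderSketch *) page)->page_layout_version = CURRENT_PAGE_VERSION;
    }

    /*
     * Called whenever a page is read in: either convert an old page on the
     * fly, or refuse to touch it -- the two options discussed above.
     */
    static void
    check_page_version(char *page, int convert_on_the_fly)
    {
        uint16_t ver = ((PageHeaderSketch *) page)->page_layout_version;

        if (ver == CURRENT_PAGE_VERSION)
            return;
        if (convert_on_the_fly && ver == CURRENT_PAGE_VERSION - 1)
            convert_page_v1_to_v2(page);
        else
        {
            fprintf(stderr, "page is layout version %u, expected %u; aborting\n",
                    ver, CURRENT_PAGE_VERSION);
            exit(1);
        }
    }

    int
    main(void)
    {
        char page[PAGE_SIZE];

        memset(page, 0, sizeof(page));
        ((PageHeaderSketch *) page)->page_layout_version = 1;  /* simulate old page */
        check_page_version(page, 1);
        printf("page now at layout version %u\n",
               ((PageHeaderSketch *) page)->page_layout_version);
        return 0;
    }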

There are other things that change.  For example, dependencies are new
in 7.3; building them without the explicit schema construction seems
difficult, but it's certainly possible.  The implicit/explicit cast
system is also new, but it doesn't depend on user data (except for
user-defined datatypes, which the user should handle manually), so it
should just be created from scratch.

Is this at least remotely possible to do?

-- 
Alvaro Herrera (<alvherre[a]atentus.com>)
"La fuerza no está en los medios físicos
sino que reside en una voluntad indomable" (Gandhi)



Re: 7.2.3?

From
Stephan Szabo
Date:
On Sat, 28 Sep 2002, Bruce Momjian wrote:

> I have seen no discussion on whether to go ahead with a 7.2.3 to add
> several serious fixes Tom has made to the code in the past few days.
>
> Are we too close to 7.3 for this to be worthwhile?  Certainly there will
> be people distributing 7.2.X for some time as 7.3 stabilizes.

The vacuum thing is big enough that there should be a 7.2.3, since as
always people aren't going to move immediately forward with a major
version change.




Upgrade process (was Re: 7.2.3?)

From
Tom Lane
Date:
Alvaro Herrera <alvherre@atentus.com> writes:
> Maybe there is some way of making the life easier for the upgrader.
> Let's see, when you upgrade there are basically two things that change:
> a) system catalogs
> b) on-disk representation of user data
> [much snipped]

Yup.  I see nothing wrong with the pg_upgrade process that we've
previously used for updating the system catalogs, however.  Trying to
do it internally in some way will be harder and more dangerous (ie,
much less reliable) than relying on schema-only dump and restore
followed by moving the physical data.

Updates that change the on-disk representation of user data are much
harder, as you say.  But I think they can be made pretty infrequent.
We've only had two such updates that I know of in Postgres' history:
adding WAL in 7.1 forced some additions to page headers, and now in
7.3 we've changed tuple headers for space-saving reasons, and fixed
some problems with alignment in array data.

pg_upgrade could have worked for the 7.2 cycle, but it wasn't done,
mostly for lack of effort.

Going forward I think we should try to maintain compatibility of on-disk
user data and ensure that pg_upgrade works.
        regards, tom lane


Re: 7.2.3?

From
Lamar Owen
Date:
On Saturday 28 September 2002 04:14 pm, Tom Lane wrote:
> 7.2 is in maintenance status at this point.  I'm willing to do backports
> for bugs that cause data loss, like this VACUUM/CLOG issue.
> Performance problems are not on the radar screen at all (especially
> not when the putative fixes for them haven't received much of any
> testing, and are barely worthy to be called beta status).

A fix that is beta-quality for a non-serious issue (serious issues being of 
the level of the VACUUM/CLOG issue) is, in my mind at least, not for 
inclusion into a _stable_ release.  Simple fixes (the localtime versus mktime 
fix) might be doable, but might not, depending upon the particular fix, how 
difficult the backport is, etc.  But 7.2 is considered _stable_ -- and I agree 
that this means maintenance mode only.  Only the most trivial or the most 
serious problems should be tackled here.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11


Re: Upgrade process (was Re: 7.2.3?)

From
Lamar Owen
Date:
On Saturday 28 September 2002 04:57 pm, Tom Lane wrote:
> 7.3 we've changed tuple headers for space-saving reasons, and fixed
> some problems with alignment in array data.

> Going forward I think we should try to maintain compatibility of on-disk
> user data and ensure that pg_upgrade works.

This is of course a two-edged sword.

1.)    Keeping pg_upgrade working, which depends upon pg_dump working;
2.)    Maintaining security fixes for 7.2 for a good period of time to come, 
since migration from 7.2 to >7.2 isn't easy.

If pg_upgrade is going to be the cookie, then let's all try to test the 
cookie.  I'll certainly try to do my part.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11


Re: Upgrade process (was Re: 7.2.3?)

From
Tom Lane
Date:
Lamar Owen <lamar.owen@wgcr.org> writes:
> This is of course a two-edged sword.

> 1.)    Keeping pg_upgrade working, which depends upon pg_dump working;

... which we have to have anyway, of course ...

> 2.)    Maintaining security fixes for 7.2 for a good period of time to come, 
> since migration from 7.2 to >7.2 isn't easy.

True, but I think we'll have to deal with that anyway.  Even if the
physical database upgrade were trivial, people are going to find
application compatibility problems due to schemas and other 7.3 changes.
So we're going to have to expend at least some work on fixing critical
7.2.* problems.  (I just want to keep a tight rein on how much.)
        regards, tom lane


Re: 7.2.3?

From
Justin Clift
Date:
Alvaro Herrera wrote:
<snip>
> I agree with Lamar that upgrading is a very difficult process right now.
> Requiring huge amounts of disk space and database downtime to do a
> dump/restore is in some cases too high a price to pay.  So maybe the
> upgrade process itself should be looked at, instead of wasting time on
> people who stay behind because of the price of that process.

As a "simple for the user approach", would it be
too-difficult-to-bother-with to add to the postmaster an ability to
start up with the data files from the previous version, for it to
recognise an old data format automatically, then for it to do the
conversion process of the old data format to the new one before going
any further?

Sounds like a pain to create initially, but nifty in the end.

:-)

Regards and best wishes,

Justin Clift


<snip>
> --
> Alvaro Herrera (<alvherre[a]atentus.com>)
> "La fuerza no está en los medios físicos
> sino que reside en una voluntad indomable" (Gandhi)

-- 
"My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there."  - Indira Gandhi


Re: Upgrade process (was Re: 7.2.3?)

From
Giles Lean
Date:
Tom Lane wrote:

> True, but I think we'll have to deal with that anyway.  Even if the
> physical database upgrade were trivial, people are going to find
> application compatibility problems due to schemas and other 7.3 changes.

More reasons:

a) learning curve -- I want to use 7.3 and gain some experience with
   7.2.x -> 7.3 migration before rolling out 7.3 to my users.

b) change control and configuration freezes sometimes dictate when
   upgrades may be done.  A 7.2.2 -> 7.2.3 upgrade for bugfixes is
   much less intrusive than an upgrade to 7.3.

> So we're going to have to expend at least some work on fixing critical
> 7.2.* problems.  (I just want to keep a tight rein on how much.)

No argument here.  Supporting multiple versions eats resources and
eventually destabilises the earlier releases, so critical fixes only,
please.  New features and non-critical fixes, however minor, are
actually unhelpful.

Since PostgreSQL is open source, anyone who "just has" to have some
minor new feature back ported can do it, or pay for it to be done.
But this doesn't have to affect all users.

Regards,

Giles


Re: 7.2.3?

From
Bruce Momjian
Date:
Justin Clift wrote:
> Alvaro Herrera wrote:
> <snip>
> > I agree with Lamar that upgrading is a very difficult process right now.
> > Requiring huge amounts of disk space and database downtime to do a
> > dump/restore is in some cases too high a price to pay.  So maybe the
> > upgrade process itself should be looked at, instead of wasting time on
> > people who stay behind because of the price of that process.
> 
> As a "simple for the user approach", would it be
> too-difficult-to-bother-with to add to the postmaster an ability to
> start up with the data files from the previous version, for it to
> recognise an old data format automatically, then for it to do the
> conversion process of the old data format to the new one before going
> any further?
> 
> Sounds like a pain to create initially, but nifty in the end.

Yes, we could, but if we are going to do that, we may as well just
automate the dump/reload.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073


Re: 7.2.3?

From
Lamar Owen
Date:
On Saturday 28 September 2002 09:23 pm, Bruce Momjian wrote:
> Justin Clift wrote:
> > Alvaro Herrera wrote:
> > > I agree with Lamar that upgrading is a very difficult process right

> > As a "simple for the user approach", would it be
> > too-difficult-to-bother-with to add to the postmaster an ability to
> > start up with the data files from the previous version, for it to
> > recognise an old data format automatically, then for it to do the
> > conversion process of the old data format to the new one before going
> > any further?

> > Sounds like a pain to create initially, but nifty in the end.

> Yes, we could, but if we are going to do that, we may as well just
> automate the dump/reload.

Automating the dump/reload is fraught with pitfalls.  Been there; done that; 
got the t-shirt.  The dump from the old version many times requires 
hand-editing for cases where the complexity is above a certain threshold.  
The 7.2->7.3 threshold is just a little lower than normal.  

Our whole approach to the system catalog is wrong for what Justin (and many 
others) would like to see.

With MySQL, for instance, one can migrate on a table-by-table basis from one 
table type to another.  Since older table types continue to be supported, one 
can upgrade each table in turn as one needs the feature set supported by the 
new table type.

Yes, I know that doesn't fit our existing model of 'all in one' system 
catalogs.  And the solution doesn't present itself readily -- but one day 
someone will see the way to do this, and it will be good.  It _will_ involve 
refactoring the system catalog schema so that user 'system catalog' metadata 
and system 'system catalog' data aren't codependent.  A more modular data 
storage approach at a level above the existing broken storage manager 
modularity will result, and things will be different.

However, the number of messages on this subject has increased; one day it will 
become an important feature worthy of core developer attention.  That will be 
a happy day for me, as well as many others.  I have not the time to do it 
myself; but I can be a gadfly, at least.  In the meantime we have pg_upgrade 
for the future 7.3 -> 7.4 upgrade.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11


Re: 7.2.3?

From
Alvaro Herrera
Date:
Bruce Momjian wrote: 

> Justin Clift wrote:
> > Alvaro Herrera wrote:

> > As a "simple for the user approach", would it be
> > too-difficult-to-bother-with to add to the postmaster an ability to
> > start up with the data files from the previous version, for it to
> > recognise an old data format automatically, then for it to do the
> > conversion process of the old data format to the new one before going
> > any further?
> 
> Yes, we could, but if we are going to do that, we may as well just
> automate the dump/reload.

I don't think that's an acceptable solution.  It requires too much free
disk space and too much time.  On-line upgrading, meaning altering the
databases on a table-by-table (or even page-by-page) basis, solves both
problems (binary conversion surely takes less time than converting to a text
representation and parsing it back into binary).

I think a converting postmaster would be a waste, because it's unneeded
functionality 99.999% of the time.  I'm leaning towards an external
program doing the conversion, and the backend just aborting if it finds
old or in-conversion data.  The converter should be able to detect that
a previous run was aborted and resume the conversion.

What would that converter need:
- the old system catalog  (including user defined data)
- the new system catalog  (ditto, including the schema)
- the storage manager subsystem

I think that should be enough for converting table files.  I'd like to
experiment with something like this when I have some free time.  Maybe
next year...
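
As a rough sketch of the table-file part only, here is what the page loop
of such an external converter might look like in C, assuming a hypothetical
layout-version number in the first two bytes of each 8K page.  OLD_VERSION,
NEW_VERSION, the location of the version field and convert_page() are
placeholders, not any real on-disk format.  A real tool would also need the
old and new catalog knowledge listed above, would have to run with the
postmaster stopped, and would have to fsync its writes and mark the cluster
as "in conversion" so that a backend finding it half-done refuses to start.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define PAGE_SIZE   8192
    #define OLD_VERSION 1
    #define NEW_VERSION 2

    /* Hypothetical: the page layout version lives in the first two bytes. */
    static uint16_t
    page_version(const unsigned char *page)
    {
        uint16_t v;
        memcpy(&v, page, sizeof(v));
        return v;
    }

    /* Placeholder for the real old-layout-to-new-layout rewrite. */
    static void
    convert_page(unsigned char *page)
    {
        uint16_t v = NEW_VERSION;
        /* ... rewrite header and tuples for the new layout ... */
        memcpy(page, &v, sizeof(v));
    }

    /*
     * Convert one table segment file in place, page by page.  Pages already
     * stamped NEW_VERSION are skipped, so an aborted run can simply be
     * restarted and will resume where it left off.
     */
    static int
    convert_segment(const char *path)
    {
        FILE *f = fopen(path, "r+b");
        unsigned char page[PAGE_SIZE];
        long blkno = 0;

        if (f == NULL)
        {
            perror(path);
            return -1;
        }
        while (fread(page, PAGE_SIZE, 1, f) == 1)
        {
            if (page_version(page) == OLD_VERSION)
            {
                convert_page(page);
                fseek(f, blkno * PAGE_SIZE, SEEK_SET);   /* back to this page */
                fwrite(page, PAGE_SIZE, 1, f);
                fseek(f, (blkno + 1) * PAGE_SIZE, SEEK_SET);
            }
            blkno++;
        }
        fclose(f);
        return 0;
    }

    int
    main(int argc, char **argv)
    {
        int i;

        /* Arguments are table segment files, e.g. "16384" and "16384.1". */
        for (i = 1; i < argc; i++)
            if (convert_segment(argv[i]) != 0)
                return 1;
        return 0;
    }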

-- 
Alvaro Herrera (<alvherre[a]atentus.com>)
"I think my standards have lowered enough that now I think 'good design'
is when the page doesn't irritate the living fuck out of me." (JWZ)



Re: 7.2.3?

From
Tom Lane
Date:
Alvaro Herrera <alvherre@atentus.com> writes:
> What would that converter need:
> [snip]
> I think that should be enough for converting table files.  I'd like to
> experiment with something like this when I have some free time.  Maybe
> next year...

It's difficult to say anything convincing on this topic without a
specific conversion requirement in mind.

Localized conversions like 7.3's tuple header change could be done on a
page-by-page basis as you suggest.  (In fact, one reason I insisted on
putting in a page header version number was to leave the door open for
such a converter, if someone wants to do one.)

But one likely future format change for user data is combining parent
and child tables into a single physical table, per recent inheritance
thread.  (I'm not yet convinced that that's feasible or desirable,
I'm just using it as an example of a possible conversion requirement.)
You can't very well do that page-by-page; it'd require a completely
different approach.
        regards, tom lane


Re: 7.2.3?

From
Hannu Krosing
Date:
On Sun, 2002-09-29 at 07:19, Lamar Owen wrote:
> On Saturday 28 September 2002 09:23 pm, Bruce Momjian wrote:
> > Justin Clift wrote:
> > > Alvaro Herrera wrote:
> > > > I agree with Lamar that upgrading is a very difficult process right
> 
> > > As a "simple for the user approach", would it be
> > > too-difficult-to-bother-with to add to the postmaster an ability to
> > > start up with the data files from the previous version, for it to
> > > recognise an old data format automatically, then for it to do the
> > > conversion process of the old data format to the new one before going
> > > any further?
> 
> > > Sounds like a pain to create initially, but nifty in the end.
> 
> > Yes, we could, but if we are going to do that, we may as well just
> > automate the dump/reload.
> 
> Automating the dump/reload is fraught with pitfalls.  Been there; done that; 
> got the t-shirt.  The dump from the old version many times requires 
> hand-editing for cases where the complexity is above a certain threshold.  
> The 7.2->7.3 threshold is just a little lower than normal.  
> 
> Our whole approach to the system catalog is wrong for what Justin (and many
> others) would like to see.
> 
> With MySQL, for instance, one can migrate on a table-by-table basis from one
> table type to another.  Since older table types continue to be supported, one
> can upgrade each table in turn as one needs the feature set supported by the
> new table type.

The initial Postgres design had a notion of StorageManagers, which
should make this very easy indeed, if it had been kept working.

IIRC the black-box nature of the storage manager interface was broken at
the latest when WAL was added (if it had really been there in the first place).

----------------------
Hannu




Re: 7.2.3?

From
Hannu Krosing
Date:
On Sun, 2002-09-29 at 09:47, Tom Lane wrote:
> Alvaro Herrera <alvherre@atentus.com> writes:
> > What would that converter need:
> > [snip]
> > I think that should be enough for converting table files.  I'd like to
> > experiment with something like this when I have some free time.  Maybe
> > next year...
> 
> It's difficult to say anything convincing on this topic without a
> specific conversion requirement in mind.
> 
> Localized conversions like 7.3's tuple header change could be done on a
> page-by-page basis as you suggest.  (In fact, one reason I insisted on
> putting in a page header version number was to leave the door open for
> such a converter, if someone wants to do one.)
> 
> But one likely future format change for user data is combining parent
> and child tables into a single physical table, per recent inheritance
> thread.  (I'm not yet convinced that that's feasible or desirable,
> I'm just using it as an example of a possible conversion requirement.)
> You can't very well do that page-by-page; it'd require a completely
> different approach.

I started to think about a possible upgrade strategy for this scenario and
came up with a whole new approach to storage:

We could extend our current scheme of splitting files into 1G segments to
cover inheritance, so that each inherited table is in its own (set of)
physical files, which represent a (set of) 1G segment(s) in the logical
file of its parent(s).  This would even work for both single and multiple
inheritance!

In this case the indexes (which enforce uniqueness and are required for
RI) would see the thing as a single file and could use plain TIDs.  The
mapping from the TID's page number to the actual file would happen below
the level visible to the executor.  It would also naturally cluster
similar tuples.

As an extra bonus, migration could be done just by changing the system
catalogs and recreating the indexes.

It would limit the size of an inherited structure to at most 16K different
tables (max unsigned int / pagesize), but I don't think this will be a
real limit anytime soon.
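
As an illustration of the mapping that would have to sit below the level
visible to the executor, here is a minimal C sketch that resolves a logical
block number (the page number an index TID carries) to the owning child
table, the child's own segment file, and the block within that segment.
The ChildSegmentRange structure, the relation names, and the assumption
that each child owns a contiguous run of 1G segments are all made up for
the example.

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE      8192
    #define SEGMENT_BLOCKS (1024 * 1024 * 1024 / PAGE_SIZE)  /* blocks per 1G segment */

    /* Hypothetical mapping entry: which child owns which logical segments. */
    typedef struct ChildSegmentRange
    {
        const char *child_relname;   /* child table owning this range        */
        uint32_t    first_segment;   /* first logical 1G segment it occupies */
        uint32_t    num_segments;    /* how many consecutive segments it owns */
    } ChildSegmentRange;

    /*
     * Resolve a logical block number into the owning child table, the child's
     * own segment number, and the block within that segment.  Returns 0 on
     * success, -1 if the block is beyond the mapping.
     */
    static int
    resolve_block(const ChildSegmentRange *map, int map_len, uint32_t logical_block,
                  const char **child, uint32_t *segno, uint32_t *block_in_segment)
    {
        uint32_t logical_segment = logical_block / SEGMENT_BLOCKS;
        int i;

        for (i = 0; i < map_len; i++)
        {
            if (logical_segment >= map[i].first_segment &&
                logical_segment < map[i].first_segment + map[i].num_segments)
            {
                *child = map[i].child_relname;
                *segno = logical_segment - map[i].first_segment;
                *block_in_segment = logical_block % SEGMENT_BLOCKS;
                return 0;
            }
        }
        return -1;
    }

    int
    main(void)
    {
        /* Made-up layout: the parent's logical file is stitched from two children. */
        ChildSegmentRange map[] = {
            { "cities",   0, 2 },   /* logical segments 0-1 */
            { "capitals", 2, 1 },   /* logical segment 2    */
        };
        const char *child;
        uint32_t segno, blk;

        if (resolve_block(map, 2, 300000, &child, &segno, &blk) == 0)
            printf("logical block 300000 -> %s, segment %u, block %u\n",
                   child, segno, blk);
        return 0;
    }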

---------------------
Hannu




Re: 7.2.3?

From
Tom Lane
Date:
Hannu Krosing <hannu@tm.ee> writes:
> The initial Postgres design had a notion of StorageManagers, which
> should make this very easy indeed, if it had been kept working.

But the storage manager interface was never built to hide issues like
tuple representation --- storage managers just deal in raw pages.
I doubt it would have helped in the least for anything we've been
concerned about.
        regards, tom lane


Re: 7.2.3?

From
Hannu Krosing
Date:
On Sun, 2002-09-29 at 19:28, Tom Lane wrote:
> Hannu Krosing <hannu@tm.ee> writes:
> > The initial Postgres design had a notion of StorageManagers, which
> > should make this very easy indeed, if it had been kept working.
> 
> But the storage manager interface was never built to hide issues like
> tuple representation --- storage managers just deal in raw pages.

I had an impression that SM was meant to be a little higher-level. IIRC
the original Berkeley Postgres had at one point a storage manager for
write-once storage on CDWr jukeboxes.

The README in src/backend/storage/smgr still mentions the Sony jukebox
drivers.

http://www.ndim.edrc.cmu.edu/postgres95/www/pglite1.html also claims
this:

Version 3 appeared in 1991 and added support for multiple storage
managers, an improved query executor and a rewritten rewrite rule
system. For the most part, releases since then have focused on
portability and reliability. 

> I doubt it would have helped in the least for anything we've been
> concerned about.

Yes, it seems that we do not have an SM in the sense I hoped.

Still, if we could use a clean SM interface over the old page format, then
the tuple conversion could be done there.

That of course would need the storage manager to be aware of old/new
tuple structures ;(

-----------------
Hannu





Re: 7.2.3?

From
Greg Copeland
Date:
Should an advisory be issued telling production sites not to perform a
vacuum full, with a notice that a bug fix will be coming shortly?

Greg



On Sat, 2002-09-28 at 13:45, Justin Clift wrote:
> Bruce Momjian wrote:
> >
> > I have seen no discussion on whether to go ahead with a 7.2.3 to add
> > several serious fixes Tom has made to the code in the past few days.
>
> This will allow production sites to run the 7.2 series and also do
> VACUUM FULL won't it?
>
> If so, then the idea is already pretty good.  :-)
>
> Which other fixes would be included?
>
> Regards and best wishes,
>
> Justin Clift
>
>
> > Are we too close to 7.3 for this to be worthwhile?  Certainly there will
> > be people distributing 7.2.X for some time as 7.3 stabilizes.
> >
> > --
> >   Bruce Momjian                        |  http://candle.pha.pa.us
> >   pgman@candle.pha.pa.us               |  (610) 359-1001
> >   +  If your life is a hard drive,     |  13 Roberts Road
> >   +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
> >
>
> --
> "My grandfather once told me that there are two kinds of people: those
> who work and those who take the credit. He told me to try to be in the
> first group; there was less competition there."
>    - Indira Gandhi
>


Re: 7.2.3?

From
Tom Lane
Date:
Greg Copeland <greg@CopelandConsulting.Net> writes:
> Should an advisory be issued telling production sites not to perform a
> vacuum full, with a notice that a bug fix will be coming shortly?

People seem to be misunderstanding the bug.  Whether your vacuum is FULL
or not (or VERBOSE or not, or ANALYZE or not) is not relevant.  The
dangerous thing is to execute a VACUUM that's not a single-table VACUUM
*as a non-superuser*.  The options don't matter.  If you see any notices
about "skipping tables" out of VACUUM, then you are at risk.

I'm not averse to issuing an announcement, but let's be sure we have
the details straight.
        regards, tom lane