Thread: pg_upgrade problem with invalid indexes
I got a report today on the IRC channel about a pg_upgrade problem with
upgrading clusters with indexes that exist but are invalid.  For example,
if you use CREATE INDEX CONCURRENTLY, then shut down the server while it
is running, the index will be left as INVALID; from our CREATE INDEX docs:

	If a problem arises while scanning the table, such as a uniqueness
	violation in a unique index, the CREATE INDEX command will fail but
	leave behind an "invalid" index.  This index will be ignored for
	querying purposes because it might be incomplete; however it will
	still consume update overhead.  The psql \d command will report
	such an index as INVALID:

	postgres=# \d tab
	       Table "public.tab"
	 Column |  Type   | Modifiers
	--------+---------+-----------
	 col    | integer |
	Indexes:
	    "idx" btree (col) INVALID

	The recommended recovery method in such cases is to drop the index
	and try again to perform CREATE INDEX CONCURRENTLY.  (Another
	possibility is to rebuild the index with REINDEX.  However, since
	REINDEX does not support concurrent builds, this option is unlikely
	to seem attractive.)

The problem is that this invalid state is not dumped by pg_dump, meaning
pg_upgrade will restore the index as valid.

There are a few possible fixes.  The first would be to have pg_upgrade
throw an error on any invalid index in the old cluster.  Another option
would be to preserve the invalid state in pg_dump --binary-upgrade.

I also need help in how to communicate this to users, since our next
minor release is still some time away.

-- 
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + It's impossible for everything to be true. +
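[Editor's note: the invalid state discussed above is recorded in the
pg_index.indisvalid catalog column, so a query along these lines (a
sketch, not part of the thread) lists any invalid indexes before
attempting an upgrade:]

```sql
-- List invalid indexes, e.g. those left behind by a failed or
-- interrupted CREATE INDEX CONCURRENTLY; indisvalid is false for them.
SELECT n.nspname AS schema_name,
       c.relname AS index_name,
       i.indrelid::regclass AS table_name
FROM   pg_index i
JOIN   pg_class c     ON c.oid = i.indexrelid
JOIN   pg_namespace n ON n.oid = c.relnamespace
WHERE  NOT i.indisvalid;
```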
> There are a few possible fixes.  The first would be to have pg_upgrade
> throw an error on any invalid index in the old cluster.  Another option
> would be to preserve the invalid state in pg_dump --binary-upgrade.

Or to not dump invalid indexes at all in --binary-upgrade mode.

> I also need help in how to communicate this to users since our next
> minor release will be in the future.

Blog post?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
Bruce Momjian wrote:

> There are a few possible fixes.  The first would be to have pg_upgrade
> throw an error on any invalid index in the old cluster.  Another option
> would be to preserve the invalid state in pg_dump --binary-upgrade.

Yet another option would be for pg_dump --binary-upgrade to ignore
invalid indexes altogether (and probably "not ready" indexes, too, not
sure).

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On Thu, Dec 6, 2012 at 09:35:19PM -0300, Alvaro Herrera wrote:
> Bruce Momjian wrote:
> 
> > There are a few possible fixes.  The first would be to have pg_upgrade
> > throw an error on any invalid index in the old cluster.  Another option
> > would be to preserve the invalid state in pg_dump --binary-upgrade.
> 
> Yet another option would be for pg_dump --binary-upgrade to ignore
> invalid indexes altogether (and probably "not ready" indexes, too, not
> sure).

Yes, I thought of not dumping it.  The problem is that we don't delete
the index when it fails, so I assumed we didn't want to lose the index
creation information.  I need to understand why we did that.  Why do we
have pg_dump dump the index then?

-- 
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + It's impossible for everything to be true. +
Bruce Momjian <bruce@momjian.us> writes:
> Yes, I thought of not dumping it.  The problem is that we don't delete
> the index when it fails, so I assumed we didn't want to lose the index
> creation information.  I need to understand why we did that.

Because CREATE INDEX CONCURRENTLY can't drop the index if it's already
failed.  It's not because we want to do that, it's an implementation
restriction of the horrid kluge that is CREATE/DROP INDEX CONCURRENTLY.

			regards, tom lane
On Thu, 2012-12-06 at 16:31 -0800, Josh Berkus wrote:
> > There are a few possible fixes.  The first would be to have pg_upgrade
> > throw an error on any invalid index in the old cluster.  Another option
> > would be to preserve the invalid state in pg_dump --binary-upgrade.
> 
> Or to not dump invalid indexes at all in --binary-upgrade mode.

+1

Jeff Davis
On 12/06/2012 07:58 PM, Jeff Davis wrote:
> On Thu, 2012-12-06 at 16:31 -0800, Josh Berkus wrote:
>>> There are a few possible fixes.  The first would be to have pg_upgrade
>>> throw an error on any invalid index in the old cluster.  Another option
>>> would be to preserve the invalid state in pg_dump --binary-upgrade.
>> Or to not dump invalid indexes at all in --binary-upgrade mode.
> +1

I think I prefer the first suggestion.  If they are trying to upgrade
when there's an invalid index, presumably they aren't aware of the
invalidity (or they would have done something about it).  It would be
better to fail and make them fix or remove the index, ISTM.

cheers

andrew
On Thu, Dec 6, 2012 at 07:53:57PM -0500, Tom Lane wrote:
> Bruce Momjian <bruce@momjian.us> writes:
> > Yes, I thought of not dumping it.  The problem is that we don't delete
> > the index when it fails, so I assumed we didn't want to lose the index
> > creation information.  I need to understand why we did that.
> 
> Because CREATE INDEX CONCURRENTLY can't drop the index if it's already
> failed.  It's not because we want to do that, it's an implementation
> restriction of the horrid kluge that is CREATE/DROP INDEX CONCURRENTLY.

Well, what is the logic that pg_dump dumps it then, even in
non-binary-upgrade mode?

-- 
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + It's impossible for everything to be true. +
On Thu, 2012-12-06 at 20:05 -0500, Andrew Dunstan wrote:
> I think I prefer the first suggestion.  If they are trying to upgrade
> when there's an invalid index presumably they aren't aware of the
> invalidity (or they would have done something about it).  It would be
> better to fail and make them fix or remove the index, ISTM.

I'm a little concerned about introducing extra causes of failure into
upgrade when we don't have to.  They could have gone on with that
invalid index forever, and I don't see it as the job of upgrade to
alert someone to that problem.

That being said, it's a reasonable position, and I am fine with either
approach.

Regards,
	Jeff Davis
Bruce Momjian <bruce@momjian.us> writes:
> On Thu, Dec 6, 2012 at 07:53:57PM -0500, Tom Lane wrote:
>> Because CREATE INDEX CONCURRENTLY can't drop the index if it's already
>> failed.  It's not because we want to do that, it's an implementation
>> restriction of the horrid kluge that is CREATE/DROP INDEX CONCURRENTLY.

> Well, what is the logic that pg_dump dumps it then, even in
> non-binary-upgrade mode?

Actually, I was thinking about proposing exactly that.  Ideally the
system should totally ignore an invalid index (we just fixed some bugs
in that line already).  So it would be perfectly consistent for pg_dump
to ignore it too, with or without --binary-upgrade.

One possible spanner in the works for pg_upgrade is that this would mean
there can be relation files in the database directories that it should
ignore (not transfer over).  Dunno if that takes any logic changes.

			regards, tom lane
On Thu, Dec 6, 2012 at 09:10:21PM -0500, Tom Lane wrote:
> Bruce Momjian <bruce@momjian.us> writes:
> > On Thu, Dec 6, 2012 at 07:53:57PM -0500, Tom Lane wrote:
> >> Because CREATE INDEX CONCURRENTLY can't drop the index if it's already
> >> failed.  It's not because we want to do that, it's an implementation
> >> restriction of the horrid kluge that is CREATE/DROP INDEX CONCURRENTLY.
> 
> > Well, what is the logic that pg_dump dumps it then, even in
> > non-binary-upgrade mode?
> 
> Actually, I was thinking about proposing exactly that.  Ideally the
> system should totally ignore an invalid index (we just fixed some bugs
> in that line already).  So it would be perfectly consistent for pg_dump
> to ignore it too, with or without --binary-upgrade.
> 
> One possible spanner in the works for pg_upgrade is that this would mean
> there can be relation files in the database directories that it should
> ignore (not transfer over).  Dunno if that takes any logic changes.

As soon as pg_dump stopped dumping the CREATE INDEX, pg_upgrade would
stop creating it in the new cluster, and not transfer the index files.

-- 
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + It's impossible for everything to be true. +
On 12/06/2012 09:23 PM, Bruce Momjian wrote:
> On Thu, Dec 6, 2012 at 09:10:21PM -0500, Tom Lane wrote:
>> Bruce Momjian <bruce@momjian.us> writes:
>>> On Thu, Dec 6, 2012 at 07:53:57PM -0500, Tom Lane wrote:
>>>> Because CREATE INDEX CONCURRENTLY can't drop the index if it's already
>>>> failed.  It's not because we want to do that, it's an implementation
>>>> restriction of the horrid kluge that is CREATE/DROP INDEX CONCURRENTLY.
>>> Well, what is the logic that pg_dump dumps it then, even in
>>> non-binary-upgrade mode?
>> Actually, I was thinking about proposing exactly that.  Ideally the
>> system should totally ignore an invalid index (we just fixed some bugs
>> in that line already).  So it would be perfectly consistent for pg_dump
>> to ignore it too, with or without --binary-upgrade.
>>
>> One possible spanner in the works for pg_upgrade is that this would mean
>> there can be relation files in the database directories that it should
>> ignore (not transfer over).  Dunno if that takes any logic changes.
> As soon as pg_dump stopped dumping the CREATE INDEX, pg_upgrade would
> stop creating it in the new cluster, and not transfer the index files.

So we'll lose the index definition and leave some files behind?  This
sounds a bit messy, to say the least.

Making the user fix it seems much more sensible to me.  Otherwise I
suspect we'll find users who get strangely surprised when they can no
longer find any trace of an expected index in their upgraded database.

cheers

andrew
* Andrew Dunstan (andrew@dunslane.net) wrote:
> So we'll lose the index definition and leave some files behind?  This
> sounds a bit messy to say the least.

Agreed.

> Making the user fix it seems much more sensible to me.  Otherwise I
> suspect we'll find users who get strangely surprised when they can
> no longer find any trace of an expected index in their upgraded
> database.

Or preserve it as-is.  I don't really like the 'make them fix it'
option, as a user could run into that in the middle of a planned
upgrade that had been tested and never had that come up.

	Thanks,

		Stephen
Stephen Frost <sfrost@snowman.net> writes:
> * Andrew Dunstan (andrew@dunslane.net) wrote:
>> Making the user fix it seems much more sensible to me.  Otherwise I
>> suspect we'll find users who get strangely surprised when they can
>> no longer find any trace of an expected index in their upgraded
>> database.

> Or preserve it as-is.

To do that, we would have to add an option to CREATE INDEX to create it
in an invalid state.  Which is stupid...

			regards, tom lane
On Thu, Dec 6, 2012 at 09:41:00PM -0500, Andrew Dunstan wrote:
> On 12/06/2012 09:23 PM, Bruce Momjian wrote:
> > On Thu, Dec 6, 2012 at 09:10:21PM -0500, Tom Lane wrote:
> > > Bruce Momjian <bruce@momjian.us> writes:
> > > > On Thu, Dec 6, 2012 at 07:53:57PM -0500, Tom Lane wrote:
> > > > > Because CREATE INDEX CONCURRENTLY can't drop the index if it's already
> > > > > failed.  It's not because we want to do that, it's an implementation
> > > > > restriction of the horrid kluge that is CREATE/DROP INDEX CONCURRENTLY.
> > > > Well, what is the logic that pg_dump dumps it then, even in
> > > > non-binary-upgrade mode?
> > > Actually, I was thinking about proposing exactly that.  Ideally the
> > > system should totally ignore an invalid index (we just fixed some bugs
> > > in that line already).  So it would be perfectly consistent for pg_dump
> > > to ignore it too, with or without --binary-upgrade.
> > >
> > > One possible spanner in the works for pg_upgrade is that this would mean
> > > there can be relation files in the database directories that it should
> > > ignore (not transfer over).  Dunno if that takes any logic changes.
> > As soon as pg_dump stopped dumping the CREATE INDEX, pg_upgrade would
> > stop creating it in the new cluster, and not transfer the index files.
> 
> So we'll lose the index definition and leave some files behind?  This
> sounds a bit messy to say the least.

You will lose the index definition, but the new cluster will not have
the invalid index files from the old cluster.

> Making the user fix it seems much more sensible to me.  Otherwise I
> suspect we'll find users who get strangely surprised when they can
> no longer find any trace of an expected index in their upgraded
> database.

Well, the indexes weren't being used.

-- 
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + It's impossible for everything to be true. +
On Thu, Dec 6, 2012 at 09:45:11PM -0500, Stephen Frost wrote:
> * Andrew Dunstan (andrew@dunslane.net) wrote:
> > So we'll lose the index definition and leave some files behind?  This
> > sounds a bit messy to say the least.
> 
> Agreed.
> 
> > Making the user fix it seems much more sensible to me.  Otherwise I
> > suspect we'll find users who get strangely surprised when they can
> > no longer find any trace of an expected index in their upgraded
> > database.
> 
> Or preserve it as-is.  I don't really like the 'make them fix it'
> option, as a user could run into that in the middle of a planned upgrade
> that had been tested and never had that come up.

They would get the warning during pg_upgrade --check, of course.

-- 
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + It's impossible for everything to be true. +
On Thu, Dec 6, 2012 at 10:06:13PM -0500, Tom Lane wrote:
> Stephen Frost <sfrost@snowman.net> writes:
> > * Andrew Dunstan (andrew@dunslane.net) wrote:
> >> Making the user fix it seems much more sensible to me.  Otherwise I
> >> suspect we'll find users who get strangely surprised when they can
> >> no longer find any trace of an expected index in their upgraded
> >> database.
> 
> > Or preserve it as-is.
> 
> To do that, we would have to add an option to CREATE INDEX to create it
> in an invalid state.  Which is stupid...

I think we would have to have pg_dump --binary-upgrade issue an UPDATE
to the system catalogs to mark it as invalid.

-- 
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + It's impossible for everything to be true. +
* Bruce Momjian (bruce@momjian.us) wrote:
> On Thu, Dec 6, 2012 at 09:45:11PM -0500, Stephen Frost wrote:
> > Or preserve it as-is.  I don't really like the 'make them fix it'
> > option, as a user could run into that in the middle of a planned upgrade
> > that had been tested and never had that come up.
> 
> They would get the warning during pg_upgrade --check, of course.

Sure, if they happened to have a concurrent index creation going when
they ran the check...  But what if they didn't and it only happened to
happen during the actual pg_upgrade?  I'm still not thrilled with this
idea of making the user have to abort in the middle to address something
that, really, isn't a big deal to just preserve and deal with later...

	Thanks,

		Stephen
* Tom Lane (tgl@sss.pgh.pa.us) wrote:
> Stephen Frost <sfrost@snowman.net> writes:
> > Or preserve it as-is.
> 
> To do that, we would have to add an option to CREATE INDEX to create it
> in an invalid state.  Which is stupid...

Only in a binary-upgrade mode.

	Thanks,

		Stephen
* Bruce Momjian (bruce@momjian.us) wrote:
> I think we would have to have pg_dump --binary-upgrade issue an UPDATE
> to the system catalogs to mark it as invalid.

That'd work for me too -- I'm not particular on if it's done as a direct
catalog update or some undocumented feature of CREATE INDEX.

	Thanks,

		Stephen
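[Editor's note: the direct catalog update Bruce and Stephen are
discussing might look like this sketch (the index name is illustrative,
and updating pg_index directly requires superuser privileges):]

```sql
-- Hypothetical statement a --binary-upgrade dump could emit after
-- re-creating an index, to restore its invalid state in the new cluster.
UPDATE pg_index
SET    indisvalid = false
WHERE  indexrelid = 'public.idx'::regclass;
```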
Andrew Dunstan wrote:
> On 12/06/2012 09:23 PM, Bruce Momjian wrote:
> > As soon as pg_dump stopped dumping the CREATE INDEX, pg_upgrade would
> > stop creating it in the new cluster, and not transfer the index files.
> 
> So we'll lose the index definition and leave some files behind?  This
> sounds a bit messy to say the least.

I find it hard to get excited about this being a real problem.  If the
index has been kept invalid, how come the definition is so valuable?

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On 7 December 2012 04:02, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
> Andrew Dunstan wrote:
> > On 12/06/2012 09:23 PM, Bruce Momjian wrote:
> > > As soon as pg_dump stopped dumping the CREATE INDEX, pg_upgrade would
> > > stop creating it in the new cluster, and not transfer the index files.
> >
> > So we'll lose the index definition and leave some files behind?  This
> > sounds a bit messy to say the least.
>
> I find it hard to get excited about this being a real problem.  If the
> index has been kept invalid, how come the definition is so valuable?

Agreed.

I don't see the problem... just say "rebuild any invalid indexes before
you run pg_upgrade".

I don't think pg_upgrade should take the responsibility of fixing
everything broken before you run an upgrade.  Making that rod for our
backs looks pretty bad here and could get worse if other situations
come to light.

Maybe it should have a mode where it detects that it would fail if you
attempted the upgrade...

-- 
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
On Thu, Dec 6, 2012 at 09:23:14PM -0500, Bruce Momjian wrote:
> On Thu, Dec 6, 2012 at 09:10:21PM -0500, Tom Lane wrote:
> > Actually, I was thinking about proposing exactly that.  Ideally the
> > system should totally ignore an invalid index (we just fixed some bugs
> > in that line already).  So it would be perfectly consistent for pg_dump
> > to ignore it too, with or without --binary-upgrade.
> > 
> > One possible spanner in the works for pg_upgrade is that this would mean
> > there can be relation files in the database directories that it should
> > ignore (not transfer over).  Dunno if that takes any logic changes.
> 
> As soon as pg_dump stopped dumping the CREATE INDEX, pg_upgrade would
> stop creating it in the new cluster, and not transfer the index files.

Sorry, I was wrong about this.  We would need to modify pg_dump to skip
invalid indexes (perhaps only for --binary-upgrade), and pg_upgrade
would also need to be modified to skip such indexes.  This is necessary
because, as a safety check, pg_upgrade requires there to be an exact
match of relations between old and new clusters.

-- 
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + It's impossible for everything to be true. +
Bruce Momjian wrote:
> On Thu, Dec 6, 2012 at 09:23:14PM -0500, Bruce Momjian wrote:
> > As soon as pg_dump stopped dumping the CREATE INDEX, pg_upgrade would
> > stop creating it in the new cluster, and not transfer the index files.
> 
> Sorry, I was wrong about this.  We would need to modify pg_dump to skip
> invalid indexes (perhaps only for --binary-upgrade),

Invalid indexes only persist in the database because the system cannot
drop them automatically.  I see no reason to have pg_dump include them
in its output, with or without --binary-upgrade.

> and pg_upgrade would also need to be modified to skip such indexes.
> This is necessary because, as a safety check, pg_upgrade requires
> there to be an exact match of relations between old and new clusters.

Right.  This shouldn't be too hard, though; just make sure to exclude
invalid indexes in the pg_upgrade query that obtains relations from the
old cluster.  Since pg_dump would not include them in the dump, the
numbers should match.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
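[Editor's note: the exclusion Álvaro describes might look something like
this in the query that collects relations from the old cluster (a sketch
of the idea, not the actual pg_upgrade source; the relkind list is
illustrative):]

```sql
-- Sketch: collect relations while skipping invalid indexes, so the
-- old-cluster relation count matches a dump that omits them.
SELECT c.oid, n.nspname, c.relname
FROM   pg_class c
JOIN   pg_namespace n ON n.oid = c.relnamespace
WHERE  c.relkind IN ('r', 'i', 'S', 't')   -- tables, indexes, sequences, toast
  AND  NOT EXISTS (SELECT 1
                   FROM   pg_index i
                   WHERE  i.indexrelid = c.oid
                     AND  NOT i.indisvalid);
```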
On Thu, Dec 6, 2012 at 10:27:21PM -0500, Stephen Frost wrote:
> * Bruce Momjian (bruce@momjian.us) wrote:
> > On Thu, Dec 6, 2012 at 09:45:11PM -0500, Stephen Frost wrote:
> > > Or preserve it as-is.  I don't really like the 'make them fix it'
> > > option, as a user could run into that in the middle of a planned upgrade
> > > that had been tested and never had that come up.
> > 
> > They would get the warning during pg_upgrade --check, of course.
> 
> Sure, if they happened to have a concurrent index creation going when
> they ran the check...  But what if they didn't and it only happened to
> happen during the actual pg_upgrade?  I'm still not thrilled with this
> idea of making the user have to abort in the middle to address something
> that, really, isn't a big deal to just preserve and deal with later...

If a concurrent index creation was happening during the check,
pg_upgrade --check would fail.  I don't think there is any indication of
whether the index is failed or in process.  That is a good argument for
_not_ throwing an error, because index creation is more of an
intermediate state.

-- 
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + It's impossible for everything to be true. +
On Fri, Dec 7, 2012 at 09:16:56AM +0000, Simon Riggs wrote:
> I don't see the problem... just say "rebuild any invalid indexes
> before you run pg_upgrade".
> 
> I don't think pg_upgrade should take the responsibility of fixing
> everything broken before you run an upgrade.  Making that rod for our
> backs looks pretty bad here and could get worse if other situations
> come to light.
> 
> Maybe it should have a mode where it detects that it would fail if you
> attempted the upgrade...

That's what pg_upgrade --check does, but see my email about in-process
concurrent index builds also causing a failure.

-- 
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + It's impossible for everything to be true. +
On 2012-12-07 09:21:58 -0500, Bruce Momjian wrote:
> On Thu, Dec 6, 2012 at 10:27:21PM -0500, Stephen Frost wrote:
> > Sure, if they happened to have a concurrent index creation going when
> > they ran the check...  But what if they didn't and it only happened to
> > happen during the actual pg_upgrade?  I'm still not thrilled with this
> > idea of making the user have to abort in the middle to address something
> > that, really, isn't a big deal to just preserve and deal with later...
> 
> If a concurrent index creation was happening during the check,
> pg_upgrade --check would fail.  I don't think there is any indication if
> the index is failed, or in process.

There should be a lock on the table + index if the creation is in
progress.

> That is a good argument for _not_ throwing an error because index
> creation is more of an intermediate state.

Uhm.  If pg_upgrade is actually running, it's definitely not an
intermediate state anymore...

Greetings,

Andres Freund

-- 
 Andres Freund                     http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
On Fri, Dec 7, 2012 at 03:24:46PM +0100, Andres Freund wrote:
> On 2012-12-07 09:21:58 -0500, Bruce Momjian wrote:
> > If a concurrent index creation was happening during the check,
> > pg_upgrade --check would fail.  I don't think there is any indication if
> > the index is failed, or in process.
> 
> There should be a lock on the table + index if the creation is in
> progress.

Well, it is a CONCURRENT index creation, so locking would be minimal.
Do we want pg_upgrade to be groveling over the lock view to look for
locks?  I don't think so.

> > That is a good argument for _not_ throwing an error because index
> > creation is more of an intermediate state.
> 
> Uhm.  If pg_upgrade is actually running its definitely not an
> intermediate state anymore...

It would be running pg_upgrade --check, which can be run while the old
server is running.

-- 
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + It's impossible for everything to be true. +
On 2012-12-07 09:27:39 -0500, Bruce Momjian wrote:
> On Fri, Dec 7, 2012 at 03:24:46PM +0100, Andres Freund wrote:
> > There should be a lock on the table + index if the creation is in
> > progress.
> 
> Well, it is a CONCURRENT index creation, so locking would be minimal.

I wouldn't call a ShareUpdateExclusiveLock minimal...

> Do we want pg_upgrade to be groveling over the lock view to look for
> locks?  I don't think so.

ISTM that anybody who does DDL during or after pg_upgrade --check
deserves any pain.

So throwing an error in both seems perfectly fine for me.

Greetings,

Andres Freund

-- 
 Andres Freund                     http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
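[Editor's note: an in-progress CREATE INDEX CONCURRENTLY could, in
principle, be spotted from the lock view Andres mentions; a sketch (not
something pg_upgrade actually does):]

```sql
-- Sketch: sessions currently holding ShareUpdateExclusiveLock on a
-- relation -- the lock level CREATE INDEX CONCURRENTLY takes on its
-- target table.
SELECT l.pid,
       l.relation::regclass AS relation
FROM   pg_locks l
WHERE  l.locktype = 'relation'
  AND  l.mode = 'ShareUpdateExclusiveLock'
  AND  l.granted;
```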
On Fri, Dec 7, 2012 at 03:32:51PM +0100, Andres Freund wrote:
> > Well, it is a CONCURRENT index creation, so locking would be minimal.
> 
> I wouldn't call a ShareUpdateExclusiveLock minimal...
> 
> > Do we want pg_upgrade to be groveling over the lock view to look for
> > locks?  I don't think so.
> 
> ISTM that anybody who does DDL during or after pg_upgrade --check
> deserves any pain.
> 
> So throwing an error in both seems perfectly fine for me.

Well, most of the current checks relate to checks for created objects.
To fail for in-process concurrent index creation is to fail for an
intermediate state --- the index creation is in process, but it might
complete before we do the actual upgrade.  Or it might not be an
intermediate state.  I am just saying that this makes --check more
likely to report false failures than it is currently configured to do.

-- 
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + It's impossible for everything to be true. +
Bruce Momjian <bruce@momjian.us> writes:
> On Fri, Dec 7, 2012 at 03:32:51PM +0100, Andres Freund wrote:
>> ISTM that anybody who does DDL during or after pg_upgrade --check
>> deserves any pain.
>>
>> So throwing an error in both seems perfectly fine for me.

I agree with Andres on this.

> I am just saying that this makes the --check report more likely to false
> failures than currently configured.

It's not a "false" failure.  If you were to force-shutdown the system at
that instant (not allowing C.I.C. to complete) then do a pg_upgrade, it
would fail.  So what's wrong with telling the user so?

If you want, you could have the error message include some weasel
wording about how this might be only a transient state, but frankly I
think that's more likely to be confusing than helpful.  As Andres says,
nobody should expect that it's sensible to do "pg_upgrade --check"
while any DDL is actively executing, so you'd be warning about a state
that more than likely isn't reality anyway.

On balance I am coming around to support the "just throw an error if
there are any invalid indexes" position.  Adding extra complication in
pg_dump and pg_upgrade to handle ignoring them doesn't seem like a good
idea --- for one thing, it will evidently weaken the strength of the
same-number-of-relations cross-check.

			regards, tom lane
On Fri, Dec 7, 2012 at 10:29:22AM -0500, Tom Lane wrote:
> It's not a "false" failure.  If you were to force-shutdown the system at
> that instant (not allowing C.I.C. to complete) then do a pg_upgrade, it
> would fail.  So what's wrong with telling the user so?

True.

> If you want, you could have the error message include some weasel
> wording about how this might be only a transient state, but frankly
> I think that's more likely to be confusing than helpful.  As Andres
> says, nobody should expect that it's sensible to do "pg_upgrade --check"
> while any DDL is actively executing, so you'd be warning about a state
> that more than likely isn't reality anyway.

Well, if we fail, we want to give the user a solution for fixing it, and
that solution is going to be different depending on whether the index is
in-process or from long ago.  Those index names are going to be dumped
into a file for the user to review.  It will require weasel wording
because pg_upgrade will have to say that the indexes are either in the
process of being concurrently created, or are the result of a failed
concurrent index creation in the past.  The user will either have to
wait for the index creation to complete or delete the invalid index.

> On balance I am coming around to support the "just throw an error if
> there are any invalid indexes" position.  Adding extra complication in
> pg_dump and pg_upgrade to handle ignoring them doesn't seem like a good
> idea --- for one thing, it will evidently weaken the strength of the
> same-number-of-relations cross-check.

The check would remain the same --- the change would be to prevent
invalid indexes from being considered on both the old and new servers.

-- 
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + It's impossible for everything to be true. +
Bruce Momjian <bruce@momjian.us> writes: > On Fri, Dec 7, 2012 at 10:29:22AM -0500, Tom Lane wrote: >> On balance I am coming around to support the "just throw an error if >> there are any invalid indexes" position. Adding extra complication in >> pg_dump and pg_upgrade to handle ignoring them doesn't seem like a good >> idea --- for one thing, it will evidently weaken the strength of the >> same-number-of-relations cross-check. > The check would remain the same --- the change would be to prevent > invalid indexes from being considered on both the old and new servers. But that weakens the check. For instance, if you had seven invalid indexes in one cluster and eight in the other, you wouldn't notice. regards, tom lane
On Fri, Dec 7, 2012 at 11:46:51AM -0500, Tom Lane wrote: > Bruce Momjian <bruce@momjian.us> writes: > > On Fri, Dec 7, 2012 at 10:29:22AM -0500, Tom Lane wrote: > >> On balance I am coming around to support the "just throw an error if > >> there are any invalid indexes" position. Adding extra complication in > >> pg_dump and pg_upgrade to handle ignoring them doesn't seem like a good > >> idea --- for one thing, it will evidently weaken the strength of the > >> same-number-of-relations cross-check. > > > The check would remain the same --- the change would be to prevent > > invalid indexes from being considered on both the old and new servers. > > But that weakens the check. For instance, if you had seven invalid > indexes in one cluster and eight in the other, you wouldn't notice. That is true, though the assumption is that invalid indexes are insignificant. It would be a new case where actual non-system-table _files_ were not transferred. Seems most people want the error, so I will start working on a patch. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://enterprisedb.com + It's impossible for everything to be true. +
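For reference, the sort of catalog query such an error check might run — a sketch only, not the committed patch; it simply lists invalid indexes outside the system schemas:

```sql
-- Sketch: report invalid indexes in user schemas, as a pg_upgrade-style
-- check might.  Illustrative only, not the committed patch.
SELECT n.nspname AS schema_name, c.relname AS index_name
FROM pg_catalog.pg_index i
     JOIN pg_catalog.pg_class c ON c.oid = i.indexrelid
     JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE NOT i.indisvalid
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');
```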
> Yes, I thought of not dumping it. The problem is that we don't delete > the index when it fails, so I assumed we didn't want to lose the index > creation information. I need to understand why we did that. Why do we > have pg_dump dump the index then? Because pg_restore will recreate the index from scratch, which is presumably what users want most of the time. So this issue doesn't exist outside of pg_upgrade. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com
On 2012-12-07 10:22:12 -0800, Josh Berkus wrote: > > > Yes, I thought of not dumping it. The problem is that we don't delete > > the index when it fails, so I assumed we didn't want to lose the index > > creation information. I need to understand why we did that. Why do we > > have pg_dump dump the index then? > > Because pg_restore will recreate the index from scratch, which is > presumably what users want most of the time. So this issue doesn't > exist outside of pg_upgrade. I wonder though if we shouldn't ignore !indislive indexes in pg_dump (and the respective bw-compat hack). Greetings, Andres Freund --Andres Freund http://www.2ndQuadrant.com/PostgreSQL Development, 24x7 Support, Training & Services
> I wonder though if we shouldn't ignore !indislive indexes in pg_dump > (and the respective bw-compat hack). Quite likely we shouldn't. However, that is why it wasn't considered a problem. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com
On 2012-12-07 10:44:53 -0800, Josh Berkus wrote: > > > I wonder though if we shouldn't ignore !indislive indexes in pg_dump > > (and the respective bw-compat hack). Obviously I wanted to ask whether we *should* ignore them in the future. > Quite likely we shouldn't. However, that is why it wasn't considered a > problem. !indislive indexes are created during DROP INDEX CONCURRENTLY. That's a different case from CREATE INDEX CONCURRENTLY. Accessing their definition is actually problematic because it can vanish while you're examining it, which could cause errors both in the backend and in pg_dump. Greetings, Andres Freund -- Andres Freund http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
Andres Freund <andres@2ndquadrant.com> writes: > !indislive indexes are created during DROP INDEX CONCURRENTLY. That's a > different case from CREATE INDEX CONCURRENTLY. Accessing their > definition is actually problematic because it can vanish while you're > examining it, which could cause errors both in the backend and in pg_dump. That's true of any index, not just !indislive ones. If you're doing DDL during a pg_dump, and that makes it fail, you get to keep both pieces. regards, tom lane
On Fri, Dec 7, 2012 at 07:49:14PM +0100, Andres Freund wrote: > On 2012-12-07 10:44:53 -0800, Josh Berkus wrote: > > > > > I wonder though if we shouldn't ignore !indislive indexes in pg_dump > > > (and the respective bw-compat hack). > > Obviously I wanted to ask whether we *should* ignore them in the future. > > > Quite likely we shouldn't. However, that is why it wasn't considered a > > problem. > > !indislive indexes are created during DROP INDEX CONCURRENTLY. That's a > different case from CREATE INDEX CONCURRENTLY. Accessing their > definition is actually problematic because it can vanish while you're > examining it, which could cause errors both in the backend and in pg_dump. Is that something pg_upgrade needs to worry about too? Is pg_index.indisvalid the only thing pg_upgrade needs to check? -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://enterprisedb.com + It's impossible for everything to be true. +
Bruce Momjian <bruce@momjian.us> writes: > On Fri, Dec 7, 2012 at 07:49:14PM +0100, Andres Freund wrote: >> !indislive indexes are created during DROP INDEX CONCURRENTLY. That's a >> different case from CREATE INDEX CONCURRENTLY. Accessing their >> definition is actually problematic because it can vanish while you're >> examining it, which could cause errors both in the backend and in pg_dump. > Is that something pg_upgrade needs to worry about too? Is > pg_index.indisvalid the only thing pg_upgrade needs to check? indisvalid should be sufficient. If you try to test more than that you're going to make the code more version-specific, without actually buying much. regards, tom lane
On 2012-12-07 13:54:44 -0500, Tom Lane wrote: > Andres Freund <andres@2ndquadrant.com> writes: > > !indislive indexes are created during DROP INDEX CONCURRENTLY. That's a > > different case from CREATE INDEX CONCURRENTLY. Accessing their > > definition is actually problematic because it can vanish while you're > > examining it, which could cause errors both in the backend and in pg_dump. > > That's true of any index, not just !indislive ones. If you're doing DDL > during a pg_dump, and that makes it fail, you get to keep both pieces. Uhm. pg_dump is going to lock the tables at start, so I don't really see anything dangerous happening otherwise? Once it has done so, newly started concurrent CREATE/DROPs won't finish and normal ones can't even start. What I am thinking about is parts of the dead index's pg_attribute entries vanishing while pg_get_indexdef() is running. That would probably result in some scary error messages. Greetings, Andres Freund -- Andres Freund http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
On 2012-12-07 13:59:41 -0500, Tom Lane wrote: > Bruce Momjian <bruce@momjian.us> writes: > > On Fri, Dec 7, 2012 at 07:49:14PM +0100, Andres Freund wrote: > >> !indislive indexes are created during DROP INDEX CONCURRENTLY. That's a > >> different case from CREATE INDEX CONCURRENTLY. Accessing their > >> definition is actually problematic because it can vanish while you're > >> examining it, which could cause errors both in the backend and in pg_dump. > > > Is that something pg_upgrade needs to worry about too? Is > > pg_index.indisvalid the only thing pg_upgrade needs to check? > > indisvalid should be sufficient. If you try to test more than that > you're going to make the code more version-specific, without actually > buying much. Doesn't the check need to be at least indisvalid && indisready? Given that 9.2 represents !indislive as indisvalid && !indisready? Greetings, Andres Freund -- Andres Freund http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
On Fri, Dec 7, 2012 at 11:57:34AM -0500, Bruce Momjian wrote: > On Fri, Dec 7, 2012 at 11:46:51AM -0500, Tom Lane wrote: > > Bruce Momjian <bruce@momjian.us> writes: > > > On Fri, Dec 7, 2012 at 10:29:22AM -0500, Tom Lane wrote: > > >> On balance I am coming around to support the "just throw an error if > > >> there are any invalid indexes" position. Adding extra complication in > > >> pg_dump and pg_upgrade to handle ignoring them doesn't seem like a good > > >> idea --- for one thing, it will evidently weaken the strength of the > > >> same-number-of-relations cross-check. > > > > > The check would remain the same --- the change would be to prevent > > > invalid indexes from being considered on both the old and new servers. > > > > But that weakens the check. For instance, if you had seven invalid > > indexes in one cluster and eight in the other, you wouldn't notice. > > That is true, though the assumption is that invalid indexes are > insignficant. It would be a new case where actual non-system-table > _files_ were not transfered. > > Seems most people want the error so I will start working on a patch. Patch attached. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://enterprisedb.com + It's impossible for everything to be true. +
Andres Freund <andres@2ndquadrant.com> writes: > On 2012-12-07 13:59:41 -0500, Tom Lane wrote: >> indisvalid should be sufficient. If you try to test more than that >> you're going to make the code more version-specific, without actually >> buying much. > Doesn't the check need to be at least indisvalid && indisready? Given > that 9.2 represents !indislive as indisvalid && !indisready? Um, good point. It's annoying that we had to do it like that ... regards, tom lane
On Fri, Dec 7, 2012 at 04:21:48PM -0500, Tom Lane wrote: > Andres Freund <andres@2ndquadrant.com> writes: > > On 2012-12-07 13:59:41 -0500, Tom Lane wrote: > >> indisvalid should be sufficient. If you try to test more than that > >> you're going to make the code more version-specific, without actually > >> buying much. > > > Doesn't the check need to be at least indisvalid && indisready? Given > > that 9.2 represents !indislive as indisvalid && !indisready? > > Um, good point. It's annoying that we had to do it like that ... So, does this affect pg_upgrade? Which PG versions? -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://enterprisedb.com + It's impossible for everything to be true. +
Bruce Momjian <bruce@momjian.us> writes: > On Fri, Dec 7, 2012 at 04:21:48PM -0500, Tom Lane wrote: >> Andres Freund <andres@2ndquadrant.com> writes: >>> Doesn't the check need to be at least indisvalid && indisready? Given >>> that 9.2 represents !indislive as indisvalid && !indisready? >> Um, good point. It's annoying that we had to do it like that ... > So, does this affect pg_upgrade? Which PG versions? I think you can just insist on indisvalid and indisready both being true. That's okay in all releases back to 8.3. regards, tom lane
On 2012-12-07 16:30:36 -0500, Bruce Momjian wrote: > On Fri, Dec 7, 2012 at 04:21:48PM -0500, Tom Lane wrote: > > Andres Freund <andres@2ndquadrant.com> writes: > > > On 2012-12-07 13:59:41 -0500, Tom Lane wrote: > > >> indisvalid should be sufficient. If you try to test more than that > > >> you're going to make the code more version-specific, without actually > > >> buying much. > > > > > Doesn't the check need to be at least indisvalid && indisready? Given > > > that 9.2 represents !indislive as indisvalid && !indisready? > > > > Um, good point. It's annoying that we had to do it like that ... > > So, does this affect pg_upgrade? Which PG versions? Only 9.2 :(. Before that there was no DROP INDEX CONCURRENTLY and in 9.3 there's an actual indislive field and indisready is always set to false there if indislive is false. But I see no problem using !indisvalid || !indisready as the condition in all (supported) versions. Greetings, Andres Freund --Andres Freund http://www.2ndQuadrant.com/PostgreSQL Development, 24x7 Support, Training & Services
On Fri, Dec 7, 2012 at 10:38:39PM +0100, Andres Freund wrote: > On 2012-12-07 16:30:36 -0500, Bruce Momjian wrote: > > On Fri, Dec 7, 2012 at 04:21:48PM -0500, Tom Lane wrote: > > > Andres Freund <andres@2ndquadrant.com> writes: > > > > On 2012-12-07 13:59:41 -0500, Tom Lane wrote: > > > >> indisvalid should be sufficient. If you try to test more than that > > > >> you're going to make the code more version-specific, without actually > > > >> buying much. > > > > > > > Doesn't the check need to be at least indisvalid && indisready? Given > > > > that 9.2 represents !indislive as indisvalid && !indisready? > > > > > > Um, good point. It's annoying that we had to do it like that ... > > > > So, does this affect pg_upgrade? Which PG versions? > > Only 9.2 :(. Before that there was no DROP INDEX CONCURRENTLY and in 9.3 > there's an actual indislive field and indisready is always set to false > there if indislive is false. > > But I see no problem using !indisvalid || !indisready as the condition > in all (supported) versions. OK, updated patch attached. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://enterprisedb.com + It's impossible for everything to be true. +
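For the archives, the broadened test can be sketched as a catalog query — illustrative only, not the patch itself. An index is flagged when either column is false, which covers both a failed CREATE INDEX CONCURRENTLY and 9.2's DROP INDEX CONCURRENTLY in-progress state:

```sql
-- Sketch of the version-spanning condition discussed above
-- (illustrative, not the committed patch).
SELECT c.relname AS index_name
FROM pg_catalog.pg_index i
     JOIN pg_catalog.pg_class c ON c.oid = i.indexrelid
WHERE NOT i.indisvalid OR NOT i.indisready;
```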
On Fri, Dec 7, 2012 at 04:49:19PM -0500, Bruce Momjian wrote: > On Fri, Dec 7, 2012 at 10:38:39PM +0100, Andres Freund wrote: > > On 2012-12-07 16:30:36 -0500, Bruce Momjian wrote: > > > On Fri, Dec 7, 2012 at 04:21:48PM -0500, Tom Lane wrote: > > > > Andres Freund <andres@2ndquadrant.com> writes: > > > > > On 2012-12-07 13:59:41 -0500, Tom Lane wrote: > > > > >> indisvalid should be sufficient. If you try to test more than that > > > > >> you're going to make the code more version-specific, without actually > > > > >> buying much. > > > > > > > > > Doesn't the check need to be at least indisvalid && indisready? Given > > > > > that 9.2 represents !indislive as indisvalid && !indisready? > > > > > > > > Um, good point. It's annoying that we had to do it like that ... > > > > > > So, does this affect pg_upgrade? Which PG versions? > > > > Only 9.2 :(. Before that there was no DROP INDEX CONCURRENTLY and in 9.3 > > there's an actual indislive field and indisready is always set to false > > there if indislive is false. > > > > But I see no problem using !indisvalid || !indisready as the condition > > in all (supported) versions. > > OK, updated patch attached. Patch applied back to 9.0. Now that it is applied, I need to publicize this. How do I do that? Josh mentioned my blog. What would cause these invalid indexes? Just CREATE INDEX CONCURRENTLY failures? What types of failures would users have if these invalid indexes had been upgraded by pg_upgrade? Can they test their indexes in any way? I assume they can't run queries on the old cluster to check. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://enterprisedb.com + It's impossible for everything to be true. +
On 2012-12-11 15:12:37 -0500, Bruce Momjian wrote: > On Fri, Dec 7, 2012 at 04:49:19PM -0500, Bruce Momjian wrote: > > On Fri, Dec 7, 2012 at 10:38:39PM +0100, Andres Freund wrote: > > > On 2012-12-07 16:30:36 -0500, Bruce Momjian wrote: > > > > On Fri, Dec 7, 2012 at 04:21:48PM -0500, Tom Lane wrote: > > > > > Andres Freund <andres@2ndquadrant.com> writes: > > > > > > On 2012-12-07 13:59:41 -0500, Tom Lane wrote: > > > > > >> indisvalid should be sufficient. If you try to test more than that > > > > > >> you're going to make the code more version-specific, without actually > > > > > >> buying much. > > > > > > > > > > > Doesn't the check need to be at least indisvalid && indisready? Given > > > > > > that 9.2 represents !indislive as indisvalid && !indisready? > > > > > > > > > > Um, good point. It's annoying that we had to do it like that ... > > > > > > > > So, does this affect pg_upgrade? Which PG versions? > > > > > > Only 9.2 :(. Before that there was no DROP INDEX CONCURRENTLY and in 9.3 > > > there's an actual indislive field and indisready is always set to false > > > there if indislive is false. > > > > > > But I see no problem using !indisvalid || !indisready as the condition > > > in all (supported) versions. > > > > OK, updated patch attached. > > Patch applied back to 9.0. > > Now that it is applied, I need to publicize this. How do I do that? > Josh mentioned my blog. Not really sure why this needs to be so much more prominent than other fixes? > What would cause these invalid indexes? Just CREATE INDEX CONCURRENTLY > failures? What types of failures would users have if these invalid > indexes had been upgraded by pg_upgrade? Afaics it could be anything from wrong responses to queries to crashes although the former seems to be more likely. > Can they test their indexes in any way? I assume they can't run > queries on the old cluster to check. I don't see how. 
Greetings, Andres Freund --Andres Freund http://www.2ndQuadrant.com/PostgreSQL Development, 24x7 Support, Training & Services
On Tue, Dec 11, 2012 at 09:19:34PM +0100, Andres Freund wrote: > > > > Only 9.2 :(. Before that there was no DROP INDEX CONCURRENTLY and in 9.3 > > > > there's an actual indislive field and indisready is always set to false > > > > there if indislive is false. > > > > > > > > But I see no problem using !indisvalid || !indisready as the condition > > > > in all (supported) versions. > > > > > > OK, updated patch attached. > > > > Patch applied back to 9.0. > > > > Now that it is applied, I need to publicize this. How do I do that? > > Josh mentioned my blog. > > Not really sure why this needs to be so much more prominent than other > fixes? Because it creates a post-upgrade system that might return wrong results with no warning. > > What would cause these invalid indexes? Just CREATE INDEX CONCURRENTLY > > failures? What types of failures would users have if these invalid > > indexes had been upgraded by pg_upgrade? > > Afaics it could be anything from wrong responses to queries to crashes > although the former seems to be more likely. > > > Can they test their indexes in any way? I assume they can't run > > queries on the old cluster to check. > > I don't see how. That's what I thought. This is not good. :-( -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://enterprisedb.com + It's impossible for everything to be true. +
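For anyone following along, the failure mode in question is easy to reproduce by hand — a sketch, with arbitrary table and index names:

```sql
-- Sketch: leave behind an INVALID index via a failed
-- CREATE INDEX CONCURRENTLY (table and index names are arbitrary).
CREATE TABLE tab (col integer);
INSERT INTO tab VALUES (1), (1);                    -- duplicate values
CREATE UNIQUE INDEX CONCURRENTLY idx ON tab (col);  -- fails on the dup
-- \d tab now shows:  "idx" UNIQUE, btree (col) INVALID
-- and querying pg_index WHERE NOT indisvalid will report it.
```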
On Tue, Dec 11, 2012 at 03:12:37PM -0500, Bruce Momjian wrote: > On Fri, Dec 7, 2012 at 04:49:19PM -0500, Bruce Momjian wrote: > > On Fri, Dec 7, 2012 at 10:38:39PM +0100, Andres Freund wrote: > > > On 2012-12-07 16:30:36 -0500, Bruce Momjian wrote: > > > > On Fri, Dec 7, 2012 at 04:21:48PM -0500, Tom Lane wrote: > > > > > Andres Freund <andres@2ndquadrant.com> writes: > > > > > > On 2012-12-07 13:59:41 -0500, Tom Lane wrote: > > > > > >> indisvalid should be sufficient. If you try to test more than that > > > > > >> you're going to make the code more version-specific, without actually > > > > > >> buying much. > > > > > > > > > > > Doesn't the check need to be at least indisvalid && indisready? Given > > > > > > that 9.2 represents !indislive as indisvalid && !indisready? > > > > > > > > > > Um, good point. It's annoying that we had to do it like that ... > > > > > > > > So, does this affect pg_upgrade? Which PG versions? > > > > > > Only 9.2 :(. Before that there was no DROP INDEX CONCURRENTLY and in 9.3 > > > there's an actual indislive field and indisready is always set to false > > > there if indislive is false. > > > > > > But I see no problem using !indisvalid || !indisready as the condition > > > in all (supported) versions. > > > > OK, updated patch attached. > > Patch applied back to 9.0. > > Now that it is applied, I need to publicize this. How do I do that? > Josh mentioned my blog. > > What would cause these invalid indexes? Just CREATE INDEX CONCURRENTLY > failures? What types of failures would users have if these invalid > indexes had been upgraded by pg_upgrade? Can they test their indexes in > any way? I assume they can't run queries on the old cluster to check. Blog entry posted explaining the bug and fix: http://momjian.us/main/blogs/pgblog/2012.html#December_14_2012 -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://enterprisedb.com + It's impossible for everything to be true. +