Thread: now 6.4
PG_VERSION is now 6.4.  initdb everyone.  Or did we decide not to do
this if we could help it?  I think we will still need to run initdb, and
move the data files.

-- 
Bruce Momjian                     |  830 Blythe Avenue
maillist@candle.pha.pa.us         |  Drexel Hill, Pennsylvania 19026
  + If your life is a hard drive, |  (610) 353-9879(w)
  + Christ can be your backup.    |  (610) 853-3000(h)
Bruce Momjian:
> PG_VERSION is now 6.4.  initdb everyone.  Or did we decide not to do
> this if we could help it.  I think we will still need to run initdb, and
> move the data files.

I had thought we were going to avoid changing this unless there were
changes to persistent structures.  Do you know what changed to require
this?

Thanks

-dg

David Gould           dg@illustra.com          510.628.3783 or 510.305.9468
Informix Software                    300 Lakeside Drive  Oakland, CA 94612
 - A child of five could understand this!  Fetch me a child of five.
On Wed, 10 Jun 1998, David Gould wrote:

> Bruce Momjian:
> > PG_VERSION is now 6.4.  initdb everyone.  Or did we decide not to do
> > this if we could help it.  I think we will still need to run initdb, and
> > move the data files.
>
> I had thought we were going to avoid changing this unless there were
> changes to persistent structures.  Do you know what changed to require
> this?

Huh?  PG_VERSION should reflect that which we release, so that ppl
know what version they are running, for bug reports and whatnot...

Marc G. Fournier
Systems Administrator @ hub.org
primary: scrappy@hub.org          secondary: scrappy@{freebsd|postgresql}.org
> Bruce Momjian:
> > PG_VERSION is now 6.4.  initdb everyone.  Or did we decide not to do
> > this if we could help it.  I think we will still need to run initdb, and
> > move the data files.
>
> I had thought we were going to avoid changing this unless there were
> changes to persistent structures.  Do you know what changed to require
> this?
>
> Thanks

The contents of the system tables are going to change between releases,
almost for sure.  What I think we are going to do is have people pg_dump
the schema of their databases, mv /data to /data.old, run initdb, reload
the schema dump to re-create the old schema, and move the data/index
files back into place.  I will probably write the script and have people
test it.

As long as we don't change the data/index structure, we are OK.  Is that
good, or did you think we would be able to get away without system table
changes?

-- 
Bruce Momjian | maillist@candle.pha.pa.us
> > Bruce Momjian:
> > > PG_VERSION is now 6.4.  initdb everyone.  Or did we decide not to do
> > > this if we could help it.  I think we will still need to run initdb, and
> > > move the data files.
> >
> > I had thought we were going to avoid changing this unless there were
> > changes to persistent structures.  Do you know what changed to require
> > this?
> >
> > Thanks
>
> The contents of the system tables are going to change between releases,
> almost for sure.  What I think we are going to do is have people pg_dump
> the schema of their databases, mv /data to /data.old, run initdb, reload
> the schema dump to re-create the old schema, and move the data/index
> files back into place.  I will probably write the script and have people
> test it.
>
> As long as we don't change the data/index structure, we are OK.  Is that
> good, or did you think we would be able to get away without system table
> changes?

I have no problem with catalog changes and dumping the schema if we can
write a script to help them do it.  I would hope we can avoid having to
make someone dump and reload their own data.  I am thinking that it
could be pretty inconvenient to dump/load and reindex something like a
50GB table with 6 indexes.

Thanks for the clarification.

-dg

David Gould | dg@illustra.com
> On Wed, 10 Jun 1998, David Gould wrote:
>
> > Bruce Momjian:
> > > PG_VERSION is now 6.4.  initdb everyone.  Or did we decide not to do
> > > this if we could help it.  I think we will still need to run initdb, and
> > > move the data files.
> >
> > I had thought we were going to avoid changing this unless there were
> > changes to persistent structures.  Do you know what changed to require
> > this?
>
> Huh?  PG_VERSION should reflect that which we release, so that ppl
> know what version they are running, for bug reports and whatnot...

It also requires the postmaster/postgres to match that version so they
can run.  PG_VERSION gets set at initdb time, so if we update it, we
basically require them to run initdb so it matches the
backend/postmaster version.

-- 
Bruce Momjian | maillist@candle.pha.pa.us
> I have no problem with catalog changes and dumping the schema if we can
> write a script to help them do it.  I would hope we can avoid having to
> make someone dump and reload their own data.  I am thinking that it
> could be pretty inconvenient to dump/load and reindex something like a
> 50GB table with 6 indexes.
>
> Thanks for the clarification.

Yep, I think this is doable, UNLESS Vadim decides he needs to change the
structure of the data/index files.  At that point, we are lost.  In the
past, we have made such changes, and they were very much needed.  Not
sure about the 6.4 release, but no such changes have been made yet.

-- 
Bruce Momjian | maillist@candle.pha.pa.us
Bruce Momjian wrote:
>
> > I have no problem with catalog changes and dumping the schema if we
> > can write a script to help them do it.  I would hope we can avoid
> > having to make someone dump and reload their own data.  I am thinking
> > that it could be pretty inconvenient to dump/load and reindex
> > something like a 50GB table with 6 indexes.
> >
> > Thanks for the clarification.
>
> Yep, I think this is doable, UNLESS Vadim decides he needs to change
> the structure of the data/index files.  At that point, we are lost.

Unfortunately, I want to change btree!
But not HeapTuple structure...

Vadim
> Bruce Momjian:
> > PG_VERSION is now 6.4.  initdb everyone.  Or did we decide not to do
> > this if we could help it.  I think we will still need to run initdb, and
> > move the data files.
>
> I had thought we were going to avoid changing this unless there were
> changes to persistent structures.  Do you know what changed to require
> this?

Hmm... I think even if the catalogs were not changed, initdb would be
required, since we have added a new function, octet_length().

Please correct me if I'm wrong.
--
Tatsuo Ishii
t-ishii@sra.co.jp
> Bruce Momjian wrote:
> >
> > > I have no problem with catalog changes and dumping the schema if we
> > > can write a script to help them do it.  I would hope we can avoid
> > > having to make someone dump and reload their own data.  I am
> > > thinking that it could be pretty inconvenient to dump/load and
> > > reindex something like a 50GB table with 6 indexes.
> > >
> > > Thanks for the clarification.
> >
> > Yep, I think this is doable, UNLESS Vadim decides he needs to change
> > the structure of the data/index files.  At that point, we are lost.
>
> Unfortunately, I want to change btree!
> But not HeapTuple structure...

So we will just need to re-create indexes.  Sounds OK to me, but
frankly, I am not sure what the objection to dump/reload is.

Vadim, you make any changes you feel are necessary, and near release
time, we will develop the best migration script we can.

-- 
Bruce Momjian | maillist@candle.pha.pa.us
Bruce Momjian wrote:
>
> So we will just need to re-create indexes.  Sounds OK to me, but
> frankly, I am not sure what the objection to dump/reload is.

It takes too long to reload big tables...

> Vadim, you make any changes you feel are necessary, and near release
> time, we will develop the best migration script we can.

Nice.

Vadim
> Even if catalogs would not be changed, initdb is required
> since we have added a new function octet_length().
>
> Please correct me if I'm wrong.

And functions for implicit conversion between the old 1-byte "char" type
and the new 1-byte "char[1]" type, same for "name" to/from "text".

I have int8 (64-bit integers) ready to put into the backend.  Once
enough platforms figure out how to get 64-bit integers defined, then we
can consider using them for the numeric() and decimal() types also.
Alphas and ix86/Linux should already work.

I had assumed that PowerPC had 64-bit ints (along with 64-bit
addressing) but now suspect I was wrong.  If anyone volunteers info on
how to get 64-bit ints on their platform I'll include that in the first
version.  For gcc on ix86, "long long int" does the trick, and for
Alphas "long int" should be enough.

                    - Tom
> And functions for implicit conversion between the old 1-byte "char" type
> and the new 1-byte "char[1]" type, same for "name" to/from "text".
>
> I have int8 (64-bit integers) ready to put into the backend.  Once
> enough platforms figure out how to get 64-bit integers defined, then we
> can consider using them for the numeric() and decimal() types also.
> Alphas and ix86/Linux should already work.
>
> I had assumed that PowerPC had 64-bit ints (along with 64-bit
> addressing) but now suspect I was wrong.  If anyone volunteers info on
> how to get 64-bit ints on their platform I'll include that in the first
> version.  For gcc on ix86, "long long int" does the trick, and for
> Alphas "long int" should be enough.

Regarding PowerPC, I successfully compiled the test program below and
got the result "8", using gcc 2.8.0 on MkLinux (DR2.1).

#include <stdio.h>

int
main(void)
{
    long long int a;

    printf("%d\n", (int) sizeof(a));
    return 0;
}

--
Tatsuo Ishii
t-ishii@sra.co.jp
> > Even if catalogs would not be changed, initdb is required
> > since we have added a new function octet_length().
> >
> > Please correct me if I'm wrong.
>
> And functions for implicit conversion between the old 1-byte "char" type
> and the new 1-byte "char[1]" type, same for "name" to/from "text".
>
> I have int8 (64-bit integers) ready to put into the backend.  Once
> enough platforms figure out how to get 64-bit integers defined, then we
> can consider using them for the numeric() and decimal() types also.
> Alphas and ix86/Linux should already work.
>
> I had assumed that PowerPC had 64-bit ints (along with 64-bit
> addressing) but now suspect I was wrong.  If anyone volunteers info on
> how to get 64-bit ints on their platform I'll include that in the first
> version.  For gcc on ix86, "long long int" does the trick, and for
> Alphas "long int" should be enough.

I thought all the GNU sites would work.

-- 
Bruce Momjian | maillist@candle.pha.pa.us
On Thu, 11 Jun 1998, Vadim Mikheev wrote:

> Bruce Momjian wrote:
> >
> > So we will just need to re-create indexes.  Sounds OK to me, but
> > frankly, I am not sure what the objection to dump/reload is.
>
> It takes too long to reload big tables...

I have to agree here...the one application that *I* really use this for
is an accounting server...any downtime is unacceptable, because the
whole system revolves around the database backend.

Take a look at Michael Richards' application (a search engine), where it
has several *million* rows, and that isn't just one table.  Michael, how
long would it take to dump and reload that?

How many ppl *don't* upgrade because of how expensive it would be for
them to do, considering that their applications "work now"?

Now, I liked the idea that was presented about moving the database
directories out of the way and then moving them back in after an
initdb...is this not doable?  What caveats are there to doing this?
Individual databases will be missing fields added in the release
upgrade, so if you want some of the v6.4 new features, you'd have to
dump the individual database and then reload it, but if you don't care,
you'd have some optimizations associated with the new release?

Marc G. Fournier | scrappy@hub.org
On Wed, 10 Jun 1998, Bruce Momjian wrote:

> So we will just need to re-create indexes.  Sounds OK to me, but
> frankly, I am not sure what the objection to dump/reload is.

The cost associated with the downtime required in order to do the
dump/reload...how much money is a company losing while their database is
down to do the upgrade?

Marc G. Fournier | scrappy@hub.org
> Now, I liked the idea that was presented about moving the database
> directories out of the way and then moving them back in after an
> initdb...is this not doable?  What caveats are there to doing this?
> Individual databases will be missing fields added in the release
> upgrade, so if you want some of the v6.4 new features, you'd have to
> dump the individual database and then reload it, but if you don't care,
> you'd have some optimizations associated with the new release?

We will move the old data files out of the way, run initdb, reload a
pg_dump with schema-only, then move the data files back into the proper
locations, and perhaps drop/recreate all indexes.  They will have all
the features.  They have just kept their raw data files.

How long does re-indexing the tables take vs. reloading and re-indexing?

-- 
Bruce Momjian | maillist@candle.pha.pa.us
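[The procedure Bruce describes might look something like the sketch
below.  The paths, the database name "mydb", and the use of pg_dump's
-s (schema-only) flag are assumptions for illustration; the script only
prints the planned steps rather than executing them.]

```shell
#!/bin/sh
# Sketch of the migration described above: dump the schema only, set the
# old cluster aside, run initdb, recreate the schema, then move the raw
# data/index files back into place.  PGDATA and the database name are
# illustrative assumptions.
PGDATA="${PGDATA:-/usr/local/pgsql/data}"
DB="mydb"

plan="pg_dump -s $DB > /tmp/$DB.schema.sql
mv $PGDATA $PGDATA.old
initdb
createdb $DB
psql $DB -f /tmp/$DB.schema.sql
# now move each table's data files from $PGDATA.old back into place,
# then drop and re-create the indexes (the btree format is changing)"

echo "$plan"
```

[A real script would execute each step, check for errors, and know how
to map table names to their on-disk files before moving them back.]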
On Mon, 29 Jun 1998, The Hermit Hacker wrote:

> On Thu, 11 Jun 1998, Vadim Mikheev wrote:
>
> > Bruce Momjian wrote:
> > >
> > > So we will just need to re-create indexes.  Sounds OK to me, but
> > > frankly, I am not sure what the objection to dump/reload is.
> >
> > It takes too long to reload big tables...
>
> I have to agree here...the one application that *I* really use this
> for is an accounting server...any downtime is unacceptable, because
> the whole system revolves around the database backend.
>
> Take a look at Michael Richards' application (a search engine), where
> it has several *million* rows, and that isn't just one table.
> Michael, how long would it take to dump and reload that?
>
> How many ppl *don't* upgrade because of how expensive it would be
> for them to do, considering that their applications "work now"?

I cringe when it comes time to upgrade, and now with the main site
getting ~1000 hits/day I can't have the downtime (this web site is
really seasonal).  Not only is there the dump/reload to do, I also have
to make sure to recompile the cgi stuff when libpq changes.

Vince.
-- 
==========================================================================
Vince Vielhaber -- KA8CSH   email: vev@michvhf.com   flame-mail: /dev/null
       # include <std/disclaimers.h>                        TEAM-OS2
 Online Searchable Campground Listings    http://www.camping-usa.com
   "There is no outfit less entitled to lecture me about bloat than the
    federal government"  -- Tony Snow
==========================================================================
On Mon, 29 Jun 1998, Bruce Momjian wrote:

> > Now, I liked the idea that was presented about moving the database
> > directories out of the way and then moving them back in after an
> > initdb...is this not doable?  What caveats are there to doing this?
> > Individual databases will be missing fields added in the release
> > upgrade, so if you want some of the v6.4 new features, you'd have to
> > dump the individual database and then reload it, but if you don't
> > care, you'd have some optimizations associated with the new release?
>
> We will move the old data files out of the way, run initdb, reload a
> pg_dump with schema-only, then move the data files back into the proper
> locations, and perhaps drop/recreate all indexes.  They will have all
> the features.  They have just kept their raw data files.
>
> How long does re-indexing the tables take vs. reloading and
> re-indexing?

Is re-indexing required?  Will the old indexes work with a new release,
albeit slower?  Or just not work at all?

As for dropping/recreating all indices...that isn't really so bad,
anyway...once all the data is there, the database can go live...albeit
*very* slow, in some cases.  If I have 4 indices on a table, each one
built should improve the speed of queries, but each build shouldn't
limit the ability for the database to be up...

Marc G. Fournier | scrappy@hub.org
> On Mon, 29 Jun 1998, Bruce Momjian wrote:
>
> > > Now, I liked the idea that was presented about moving the database
> > > directories out of the way and then moving them back in after an
> > > initdb...is this not doable?  What caveats are there to doing this?
> > > Individual databases will be missing fields added in the release
> > > upgrade, so if you want some of the v6.4 new features, you'd have
> > > to dump the individual database and then reload it, but if you
> > > don't care, you'd have some optimizations associated with the new
> > > release?
> >
> > We will move the old data files out of the way, run initdb, reload a
> > pg_dump with schema-only, then move the data files back into the
> > proper locations, and perhaps drop/recreate all indexes.  They will
> > have all the features.  They have just kept their raw data files.
> >
> > How long does re-indexing the tables take vs. reloading and
> > re-indexing?
>
> Is re-indexing required?  Will the old indexes work with a new
> release, albeit slower?  Or just not work at all?

Vadim is changing the index format for 6.4.

> As for dropping/recreating all indices...that isn't really so bad,
> anyway...once all the data is there, the database can go live...albeit
> *very* slow, in some cases.  If I have 4 indices on a table, each one
> built should improve the speed of queries, but each build shouldn't
> limit the ability for the database to be up...

Doesn't index creation lock the table?

-- 
Bruce Momjian | maillist@candle.pha.pa.us
On Mon, 29 Jun 1998, Bruce Momjian wrote:

> > As for dropping/recreating all indices...that isn't really so bad,
> > anyway...once all the data is there, the database can go
> > live...albeit *very* slow, in some cases.  If I have 4 indices on a
> > table, each one built should improve the speed of queries, but each
> > build shouldn't limit the ability for the database to be up...
>
> Doesn't index creation lock the table?

I'm not sure why it would...creation of indices doesn't write anything
to the table itself, just reads...no?

Marc G. Fournier | scrappy@hub.org
Bruce Momjian wrote:
>
> > As for dropping/recreating all indices...that isn't really so bad,
> > anyway...once all the data is there, the database can go
> > live...albeit *very* slow, in some cases.  If I have 4 indices on a
> > table, each one built should improve the speed of queries, but each
> > build shouldn't limit the ability for the database to be up...
>
> Doesn't index creation lock the table?

Lock for read...

Vadim
> Bruce Momjian wrote:
> >
> > > As for dropping/recreating all indices...that isn't really so bad,
> > > anyway...once all the data is there, the database can go
> > > live...albeit *very* slow, in some cases.  If I have 4 indices on
> > > a table, each one built should improve the speed of queries, but
> > > each build shouldn't limit the ability for the database to be up...
> >
> > Doesn't index creation lock the table?
>
> Lock for read...

Yep, good point.  Reads are OK.

-- 
Bruce Momjian | maillist@candle.pha.pa.us