Thread: linux standard layout
Dear list,

I have about 20 postgresql databases, about 3-4 GB in total.

We are moving them from Solaris/SPARC to a Linux-based virtual machine.

I don't like the VMware environment, but it's not my choice. Assuming the CPU load is OK, will there be any benefit to putting each database on a separate partition, vs. simply using the one data directory?

Also, how is using the standard rpm, with its standard layout (/var/lib/pgsql, /usr/lib/pgsql, ...), generally regarded (vs. compiling everything)? Does anyone think using the rpm is unprofessional, or something that only beginners do?

I have someone who opposes the use of standard rpms (even via yum) for this reason. I thought I'd check how it is regarded professionally.

I ask the question because sometimes I feel uneasy mixing rpms and source compilation. If I compile something from source, sometimes I hit a boundary condition - like, if I already have DBI from a standard rpm, it expects the postgresql library at a certain location - making me wonder whether I should remove the DBI rpm and compile it from source as well, or whether I should use the standard rpms for postgresql too. (DBI may not be a good example.)

In general I haven't had any problems with standard rpms yet, and I can make the rpms work if there's a problem, but I may be missing something.

Any advice or reference to a relevant article on this issue will be appreciated.

Thanks.

Ben Kim
On Mon, Mar 8, 2010 at 10:31 PM, Ben Kim <bkim@tamu.edu> wrote:
> I have about 20 postgresql databases, about 3-4 GB in total.
>
> We are moving them from Solaris/SPARC to a Linux-based virtual machine.
>
> I don't like the VMware environment, but it's not my choice. Assuming the CPU load is OK, will there be any benefit to putting each database on a separate partition, vs. simply using the one data directory?

What reasoning was given for putting your database server in a virtualized environment?

> Also, how is using the standard rpm, with its standard layout (/var/lib/pgsql, /usr/lib/pgsql, ...), generally regarded (vs. compiling everything)? Does anyone think using the rpm is unprofessional, or something that only beginners do?
>
> I have someone who opposes the use of standard rpms (even via yum) for this reason. I thought I'd check how it is regarded professionally.

Sounds like a religious argument. I mostly use packages, unless I can't (i.e. two different versions on RH at the same time).

> I ask the question because sometimes I feel uneasy mixing rpms and source compilation.

Worry more about accidentally having two different versions of the same lib linked to various executables. It's easy to do with things like mysql and apache and php and zlib.

> If I compile something from source, sometimes I hit a boundary condition - like, if I already have DBI from a standard rpm, it expects the postgresql library at a certain location - making me wonder whether I should remove the DBI rpm and compile it from source as well, or whether I should use the standard rpms for postgresql too. (DBI may not be a good example.)
>
> In general I haven't had any problems with standard rpms yet, and I can make the rpms work if there's a problem, but I may be missing something.

My advice:

Put postgresql on its own powerful, reliable, non-virtualized server. Demand that the person who wants to virtualize it justify their decision with more than hand-wavy methodologies. Use packages unless you're on RPM and you need > 1 version of pgsql. Even if you need to compile some tarball against the packages, it's still easier to maintain than installing it all from source.
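A quick way to spot the mixed-linkage problem Scott describes is to ask the dynamic linker directly. A minimal sketch - the library paths here are only illustrative, adjust them to wherever your own builds landed:

    # which libpq does the psql on my PATH actually load?
    ldd $(which psql) | grep libpq

    # same question for a Perl DBD::Pg built from source (path assumed)
    ldd /usr/lib/perl5/vendor_perl/*/auto/DBD/Pg/Pg.so | grep libpq

If the two commands point at different libpq.so files, you have exactly the situation being warned about.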
On 03/09/2010 05:31 AM, Ben Kim wrote:
> Also, how is using the standard rpm, with its standard layout (/var/lib/pgsql, /usr/lib/pgsql, ...), generally regarded (vs. compiling everything)? Does anyone think using the rpm is unprofessional, or something that only beginners do?
>
> I have someone who opposes the use of standard rpms (even via yum) for this reason. I thought I'd check how it is regarded professionally.

I wouldn't have it any other way. (I use Ubuntu, so it's packages instead of rpm, but it's the same.) The biggest benefit I've seen is that the packages are built against known versions of supporting libraries, and those libraries are also in the repository. So an "apt-get install postgresql" gets the latest version AND all dependencies.

Your friend sounds like a snob. :) (Though he/she may have valid reasons for feeling that way, I haven't had that cause a problem in a modern Linux environment. Red Hat 6? Yeah, you might want to compile - you probably couldn't find all the dependencies anyway.)

> I ask the question because sometimes I feel uneasy mixing rpms and source compilation.

Bingo. :) When I do have to compile, I compile AND create a package (if possible), then install the package.

Daniel
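For the Debian/Ubuntu route Daniel mentions, a minimal sketch of what that looks like in practice (package names as found in the stock repositories; versions vary by release):

    # what version would the repository give me, and from where?
    apt-cache policy postgresql

    # install the server plus its dependencies from the repository
    sudo apt-get install postgresql postgresql-contrib

    # the bookkeeping that makes packages easy to track later
    dpkg -L postgresql-common      # files a package installed
    dpkg -S /etc/postgresql        # which package owns a path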
A word of caution for packages/rpms: beware of admins who apply ALL updates that are available to a system. I have seen this happen, taking Postgres from, say, version 8.3.X to 8.4.X, which, as you can imagine, caused problems.
"Daniel J. Summers" <daniel.lists@djs-consulting.com> writes: > On 03/09/2010 05:31 AM, Ben Kim wrote: >> I ask the question because sometimes I feel uneasy mixing rpms and >> source compilation. > Bingo. :) When I do have to compile, I compile AND create a package > (if possible), then install the package. +1. What's "unprofessional" is installing loose stuff into a package-managed system. The specific packages produced by your system vendor are not what you want? Fine, build your own and then install those. You'll be able to track them, remove them, etc much more easily than with an unpackaged source-code install. It's not that hard to build your own packages in any of the popular packaging systems --- especially not if there's a nearly-right package available for you to study and modify. BTW, I concur with Scott's statement that the choice to put this on a virtualized server is a much bigger deal than rpm versus raw source. At the end of the day, the installed software is the same with either of those options --- a package makes it a bit easier to manage but that's all. But a virtualization layer can kill your performance and/or reliability. Ask hard questions about why that decision is being imposed on you and what benefits it will have. regards, tom lane
On Mon, Mar 8, 2010 at 10:53 PM, Plugge, Joe R. <JRPlugge@west.com> wrote:
> A word of caution for packages/rpms: beware of admins who apply ALL updates that are available to a system. I have seen this happen, taking Postgres from, say, version 8.3.X to 8.4.X, which, as you can imagine, caused problems.

I've never used an OS that upgraded major versions of pg midstream. I.e., going from Ubuntu 8.04 to 9.10, yeah, maybe. Going from one update to the next of 8.04, no way. What OS have you seen this happen in?

Note that it's still a good idea to put things like kernels and pgsql into the exclude categories in the yum conf.
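For reference, that exclusion is a single line in yum's main config; a minimal sketch (the globs are illustrative, pick whatever matches your package set):

    # /etc/yum.conf
    [main]
    exclude=postgresql* kernel*

With that in place, a blanket "yum update" leaves the server and kernel packages alone, and you upgrade them deliberately instead.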
It has been a while ago, Scott - I don't remember exactly. If it currently is not an issue then I will not be so resistant to using packages/rpms for postgres installs. One other item, and maybe it is just that I have never done it: how would one install a package/rpm and change page size, XML, or enable SSL connections? Just curious?

Joe
On Mon, Mar 8, 2010 at 11:21 PM, Plugge, Joe R. <JRPlugge@west.com> wrote:
> It has been a while ago, Scott - I don't remember exactly. If it currently is not an issue then I will not be so resistant to using packages/rpms for postgres installs. One other item, and maybe it is just that I have never done it: how would one install a package/rpm and change page size, XML, or enable SSL connections? Just curious?

SSL just works nowadays. xml and other contrib stuff is in a package. Using a non-standard page size (anything other than the default 8k) means you need to go to source rpms and build your own packages.
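To make that concrete, the contrib and SSL parts need no rebuilding at all. A minimal sketch on a Red Hat-style system (file names follow the 8.x defaults; certificate generation is omitted):

    # contrib modules (xml2, pgcrypto, etc.) are just another package
    yum install postgresql-contrib

    # SSL: the packaged binaries are already built against OpenSSL,
    # so it is only a postgresql.conf setting plus a key pair in $PGDATA:
    #   ssl = on
    #   (server.crt and server.key in the data directory, key readable
    #    only by the postgres user)

Then restart the server (pg_ctl restart, or the distro's service script) and it picks the settings up.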
Makes sense ... thanks for the advice.
Plugge, Joe R. wrote:
> It has been a while ago, Scott - I don't remember exactly. If it currently is not an issue then I will not be so resistant to using packages/rpms for postgres installs. One other item, and maybe it is just that I have never done it: how would one install a package/rpm and change page size, XML, or enable SSL connections? Just curious?

I've literally never run into this on any version of Linux I've used (RH before they had EL and Fedora, RHEL, Fedora, Debian, Ubuntu). Certainly I'd be stunned if any of the enterprise/long-term-release oriented distros did it (RHEL, CentOS, SLES, Debian, etc).

Most packagers enable everything and the kitchen sink. I don't have a RHEL box handy, but as you can see from this Fedora 11 list:

postgresql.i586 : PostgreSQL client programs and libraries
postgresql-contrib.i586 : Contributed source and binaries distributed with PostgreSQL
postgresql-dbi-link.noarch : Partial implementation of the SQL/MED portion of the SQL:2003 specification
postgresql-devel.i586 : PostgreSQL development header files and libraries
postgresql-docs.i586 : Extra documentation for PostgreSQL
postgresql-ip4r.i586 : IPv4 and IPv4 range index types for PostgreSQL
postgresql-jdbc.i586 : JDBC driver for PostgreSQL
postgresql-libs.i586 : The shared libraries required for any PostgreSQL clients
postgresql-odbc.i586 : PostgreSQL ODBC driver
postgresql-odbcng.i586 : PostgreSQL ODBCng driver
postgresql-pgpool-II.i586 : Pgpool is a connection pooling/replication server for PostgreSQL
postgresql-pgpool-II-devel.i586 : The development files for pgpool-II
postgresql-pgpool-ha.noarch : Pgpool-HA uses heartbeat to keep pgpool from being a single point of failure
postgresql-pgpoolAdmin.noarch : PgpoolAdmin - web-based pgpool administration
postgresql-plperl.i586 : The Perl procedural language for PostgreSQL
postgresql-plpython.i586 : The Python procedural language for PostgreSQL
postgresql-plruby.i586 : PostgreSQL Ruby Procedural Language
postgresql-plruby-doc.i586 : Documentation for plruby
postgresql-pltcl.i586 : The Tcl procedural language for PostgreSQL
postgresql-python.i586 : Development module for Python code to access a PostgreSQL DB
postgresql-server.i586 : The programs needed to create and run a PostgreSQL server
postgresql-table_log.i586 : Log data changes in a PostgreSQL table
postgresql-tcl.i586 : A Tcl client library for PostgreSQL
postgresql-test.i586 : The test suite distributed with PostgreSQL
postgresql_autodoc.noarch : PostgreSQL AutoDoc Utility

...what you get is pretty much everything, and the binaries are built with pretty much every option (kerberos, openssl, languages) either on or built into installable packages. The other benefit is that I know the PostgreSQL support for everything else on that Fedora system (PHP, Ruby, etc.) will be at a matching version and tested to work with the server install.

If Fedora has built with some options you don't like, my preference is to download the SRPM, tinker with the spec file, and build a new set of RPMs, which maximizes the chances of everything just working.
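For anyone who hasn't done the SRPM dance before, it is only a few steps; a minimal sketch (the exact spec path depends on how rpmbuild is set up on your box):

    # fetch and unpack the distro's source package
    yumdownloader --source postgresql
    rpm -ivh postgresql-*.src.rpm

    # adjust the configure options in the spec file (block size, etc.),
    # then rebuild the full set of binary RPMs
    vi ~/rpmbuild/SPECS/postgresql.spec
    rpmbuild -ba ~/rpmbuild/SPECS/postgresql.spec

    # install the result like any other package
    rpm -Uvh ~/rpmbuild/RPMS/*/postgresql-*.rpm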
Scott Marlowe wrote:
> On Mon, Mar 8, 2010 at 10:31 PM, Ben Kim <bkim@tamu.edu> wrote:
>> I have someone who opposes the use of standard rpms (even via yum) for this reason. I thought I'd check how it is regarded professionally.
>
> Sounds like a religious argument. I mostly use packages, unless I can't (i.e. two different versions on RH at the same time).

Agreed. This is particularly the case once one starts thinking about security updates and so on - my experience is that hand-rolling from source tends to result in patching lagging far behind after a while.
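That is the practical payoff of staying on packages: checking whether fixes are pending is one command, instead of re-reading release notes against a hand-built tree. A minimal sketch, one line per package manager (output formats differ):

    # RPM/yum systems: list available updates without applying them
    yum check-update 'postgresql*'

    # Debian/Ubuntu: simulate an upgrade to see what would change
    apt-get -s upgrade | grep -i postgres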
Thanks all.

I cannot change the decision on VMware or the layout, but it's great to know that the rpm way is a valid one.

I appreciate all the input.

Regards,

Ben Kim
Hi Ben,

If you must use a VMware server for your databases, please run some "pull-the-power-plug" tests on your system to ensure that your data integrity is maintained. Virtual machines can sometimes cache filesystem updates in the name of performance, with disastrous consequences to your filesystems and databases.

Cheers,
Ken
On Tue, Mar 9, 2010 at 7:34 AM, Kenneth Marshall <ktm@rice.edu> wrote:
> Hi Ben,
>
> If you must use a VMware server for your databases, please run some "pull-the-power-plug" tests on your system to ensure that your data integrity is maintained. Virtual machines can sometimes cache filesystem updates in the name of performance, with disastrous consequences to your filesystems and databases.

s/can sometimes/quite often/

Also, if the OP cannot get a VM that reliably passes the power-plug-pull test, then the OP should submit in writing why he thinks the VM is a bad idea, as an ass-covering move for the future date when the database WILL become corrupted and probably lose data. You need to get the point across that you can't be held responsible for the safety of data placed into a non-reliable system.
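One simple way to run that test yourself; a minimal sketch (the database and table names are made up, and the "cut power" step is literal - not a clean shutdown):

    # terminal 1: hammer the server with committed, numbered inserts
    createdb plugtest
    psql -d plugtest -c "CREATE TABLE t (id bigserial PRIMARY KEY, ts timestamptz DEFAULT now());"
    while true; do psql -d plugtest -c "INSERT INTO t DEFAULT VALUES;" || break; done

    # ...now cut power to the VM, then bring it back up...

    # after restart: the server must recover cleanly, and every insert the
    # client saw succeed must still be present
    psql -d plugtest -c "SELECT count(*), max(id) FROM t;"

If rows that were reported committed are missing, or the cluster fails to recover, the storage stack under the VM is lying about fsync.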
Tom Lane skrev:
> But a virtualization layer can kill your performance and/or reliability. Ask hard questions about why that decision is being imposed on you and what benefits it will have.

I have been following this discussion, and I feel I have to make a comment.

Our company has moved all servers to a fully virtualized environment (VMware). Instead of having to deal with a bunch of physical servers we now have a clean setup with two 'irons' running VMware, and a pack of disks (SAN). If any of the disks should fail, we can simply exchange it without disturbing the running servers. If one of the physical machines has some trouble it automatically switches to the other one, again without disturbing the running servers. If we wish, we can move a running PostgreSQL server from one physical machine to another without stopping it or disconnecting users.

- Want more RAM? Adjust the value on the virtual machine.
- Want another CPU? Just add one.
- If I need another PostgreSQL server I just ask for a clone and it is ready in 10 minutes. You can even clone a running server, data and all.

If I need the server for testing I can download a clone to my own computer and run it there under VMware Workstation or Player. I can make a new virtual machine on my own PC and when I am done setting it up I give it to the IT guys and they start it on the big irons.

This setup is extremely flexible and saves A LOT of work. It also has a huge impact on reliability and performance. The performance of the virtual servers is awesome.

BTW: we are running PostgreSQL 8.3 and 8.4 servers on Ubuntu Linux on the virtual machines. On my own development computer I am also running virtual machines: one running development tools, another a PostgreSQL server, a Windows machine running some client application using the database server, etc. I will prefer a virtualized environment over physical machines any time.

Regarding performance, on my development machine (a Lenovo ThinkPad W700) the virtual PCs are running at about 90-95% of the speed of the same hardware without virtualization.

Regarding 'pulling the plug' on the servers: physical or virtual, always use a UPS. You can pull the plug as much as you like. When the power is about to run out it signals the server, which shuts down cleanly. Our servers have dual power supplies, connected to separate UPSes on separate power sources...

In a nutshell, I am heartily recommending virtualization. And - I do not want to start a discussion about it. Just sharing my opinion.

/Jan-Ivar
On Tue, Mar 9, 2010 at 1:18 PM, Jan-Ivar Mellingen <jan-ivar.mellingen@alreg.no> wrote:
> Regarding 'pulling the plug' on the servers: physical or virtual, always use a UPS. You can pull the plug as much as you like. When the power is about to run out it signals the server, which shuts down cleanly. Our servers have dual power supplies, connected to separate UPSes on separate power sources...

I've watched three redundant UPSes, three redundant power conditioners, and the switch for the diesel generator all fry when the perfect storm of events happened at a job 7 or 8 years ago. Every single machine in the hosting center lost power. Of the hundred or so database servers, mine was the only one that came up. The others all had to rely on off-site backups to get up and running. Not one other DBA at that company had performed a power-failure test.

> In a nutshell, I am heartily recommending virtualization.

In a nutshell, you are relying on luck that both heavy iron machines can't lose power at the same time. Sure, it's a low possibility, but it's still a real one.

> And - I do not want to start a discussion about it. Just sharing my opinion.

Well, you can't throw the post you threw out there and not expect it to start a discussion, really. I understand a lot of the reasoning for virtualization. My DB servers run at 75 to 100% capacity during midday; there'd be no real advantage to buying an even bigger piece of iron to run them on. I see the advantages of virtualization for certain load types, and for letting you easily move services as a single disk image instead of installing and configuring the service on a new machine. Where I work all the servers (except the nagios box) work hard, and there'd be no real advantage to me in putting all my eggs in the virtualization basket there.

I do use KVM to run multiple servers on my laptop for testing. It's great for that. But hope is not a method I use when installing my servers.
And Jan-Ivar, please don't think I'm saying your way is not a valid way to do things; it just requires things like ready-to-recover dbs via PITR or something like that. Without a ready-to-recover separate machine for your big db you could be facing downtime measured in hours or even days.
On Tue, 2010-03-09 at 13:35 -0700, Scott Marlowe wrote:
>> In a nutshell, I am heartily recommending virtualization.
>
> In a nutshell, you are relying on luck that both heavy iron machines can't lose power at the same time. Sure, it's a low possibility, but it's still a real one.

Not luck. Percentage of risk.

Joshua D. Drake

--
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering
Respect is earned, not gained through arbitrary and repetitive use of Mr. or Sir.
On Tue, Mar 9, 2010 at 2:06 PM, Joshua D. Drake <jd@commandprompt.com> wrote:
> On Tue, 2010-03-09 at 13:35 -0700, Scott Marlowe wrote:
>>> In a nutshell, I am heartily recommending virtualization.
>>
>> In a nutshell, you are relying on luck that both heavy iron machines can't lose power at the same time. Sure, it's a low possibility, but it's still a real one.
>
> Not luck. Percentage of risk.

They're both ways of saying you're rolling the dice. And in every situation we're rolling the dice; it's just a question of how many and how unlikely a particular outcome is. It's why we all have off-site backups, and so on.
On Tue, 2010-03-09 at 14:25 -0700, Scott Marlowe wrote:
> On Tue, Mar 9, 2010 at 2:06 PM, Joshua D. Drake <jd@commandprompt.com> wrote:
>> Not luck. Percentage of risk.
>
> They're both ways of saying you're rolling the dice. And in every situation we're rolling the dice; it's just a question of how many and

Well, my point was all about risk versus reward. For many, a 3% risk is more than appropriate. That isn't luck, it is a calculation of risk.

> how unlikely a particular outcome is. It's why we all have off-site backups, and so on.

Yes.

Joshua D. Drake
On Tue, Mar 09, 2010 at 01:28:20PM -0800, Joshua D. Drake wrote:
> Well, my point was all about risk versus reward. For many, a 3% risk is more than appropriate. That isn't luck, it is a calculation of risk.

True, but in many cases the analysis of risk/reward is flawed by not including the true cost of a protracted outage. Some of the second-order effects can be nasty if not included originally. I would also recommend that the analysis and implementation be signed off at the highest levels -- that is where the head-hunting will start.

Cheers,
Ken
On Tue, 2010-03-09 at 15:43 -0600, Kenneth Marshall wrote:
> True, but in many cases the analysis of risk/reward is flawed by not including the true cost of a protracted outage. Some of the second-order effects can be nasty if not included originally. I would also recommend that the analysis and implementation be signed off at the highest levels -- that is where the head-hunting will start.

I concur with that... Always have a CYA document.

Joshua D. Drake
* Ben Kim <bkim@tamu.edu> wrote:

Hi,

> I don't like the VMware environment, but it's not my choice. Assuming the CPU load is OK, will there be any benefit to putting each database on a separate partition, vs. simply using the one data directory?

Depending on your underlying storage, it might be advisable to put each instance on a separate spindle/enclosure, maybe even on a different bus. But there's no universal answer to this. You could start by putting each instance on a separate partition or LVM volume, record your real disk workload w/ blktrace, and try it out on different storage configurations w/ blkreplay.

> Also, how is using the standard rpm, with its standard layout (/var/lib/pgsql, /usr/lib/pgsql, ...), generally regarded?

When working w/ some mainline distribution, always try to use that distro's packages (not distro-independent binpkgs!), unless you've got a valid reason for doing otherwise. When building your own packages, use your distro's build machinery for that.

If your (virtual) box really only runs the server and you'd like to do specific optimizations (eg. processor-specific, etc), you could create your own micro-distro. I've got some tools to ease that - just mail me directly if you're interested.

> I have someone who opposes the use of standard rpms (even via yum) for this reason. I thought I'd check how it is regarded professionally.

Well, that depends on his specific reasons. For example, several distros tend to do strange things you might not want in your environment. Perhaps you need certain patches your distro doesn't provide. But even in this case you should try to use your distro's build machinery for creating your own package. Please ask your colleague for his concrete reasons and report back :)

> If I compile something from source, sometimes I hit a boundary condition - like, if I already have DBI from a standard rpm, it expects the postgresql library at a certain location - making me wonder whether I should remove the DBI rpm and compile it from source as well, or whether I should use the standard rpms for postgresql too. (DBI may not be a good example.)

Paths should be configurable, or you could use symlinks or bind mounts.

cu
--
---------------------------------------------------------------------
Enrico Weigelt == metux IT service - http://www.metux.de/
---------------------------------------------------------------------
Please visit the OpenSource QM Taskforce:
	http://wiki.metux.de/public/OpenSource_QM_Taskforce
Patches / Fixes for a lot dozens of packages in dozens of versions:
	http://patches.metux.de/
---------------------------------------------------------------------
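On the per-database-per-partition part of the question: within one PostgreSQL cluster that is done with tablespaces rather than separate data directories. A minimal sketch, assuming the partitions are already mounted; the mount points and database names here are made up:

    # one mount point per database, owned by the postgres OS user
    mkdir -p /mnt/pgvol1/tsdb1 && chown postgres:postgres /mnt/pgvol1/tsdb1

    # create a tablespace on it and put a database there
    psql -U postgres -c "CREATE TABLESPACE db1_space LOCATION '/mnt/pgvol1/tsdb1';"
    psql -U postgres -c "CREATE DATABASE db1 WITH TABLESPACE = db1_space;"

Whether this buys anything over a single data directory depends on whether the "partitions" map to genuinely separate spindles underneath the VM; carved out of one virtual disk, it mostly adds administration.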
* Rodger Donaldson <rodgerd@diaspora.gen.nz> wrote:
> Agreed. This is particularly the case once one starts thinking about security updates and so on - my experience is that hand-rolling from source tends to result in patching lagging far behind after a while.

That depends on whether you have a proper QM and build machinery in the back. I often work in embedded or specific appliance environments and have developed several tools of my own which make this easier. (Feel free to contact me personally if you're interested.)

cu
--
Enrico Weigelt == metux IT service - http://www.metux.de/
Hi,

Postgres crashes with "FATAL: could not reattach to shared memory (key=5432001, addr=02100000): Invalid argument". The version is 8.2.4, the platform is win32.

Does anyone know the reason or a workaround?

Thanks,

Yuval Sofer
BMC Software
CTM&D Business Unit
DBA Team
972-52-4286-282
yuval_sofer@bmc.com