Thread: Where do we need servers?
Folks,

Aside from pgFoundry, where are our greatest weak points in current web support? I've had offers of a couple of new hosts for the project over the last couple of weeks. Where do we need hardware/bandwidth the most?

--
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco
Josh Berkus <josh@agliodbs.com> writes:
> Aside from pgFoundry, where are our greatest weak points in current web
> support? I've had offers of a couple of new hosts for the project over
> the last couple of weeks. Where do we need hardware/bandwidth the most?

The cvsweb service has really *sucked* for about the last six months --- response times of 30 seconds are not unusual, where before it was just a couple of seconds. Maybe no one else cares, but I rely on it (I've spent most of the past hour waiting for the darn thing, in fact).

Try poking around at http://developer.postgresql.org/cvsweb.cgi/ and you'll see what I mean.

			regards, tom lane
On Tue, 8 Feb 2005, Tom Lane wrote:

> Josh Berkus <josh@agliodbs.com> writes:
>> Aside from pgFoundry, where are our greatest weak points in current web
>> support? I've had offers of a couple of new hosts for the project over
>> the last couple of weeks. Where do we need hardware/bandwidth the most?
>
> The cvsweb service has really *sucked* for about the last six months ---
> response times of 30 seconds are not unusual, where before it was just a
> couple of seconds. Maybe no one else cares, but I rely on it (I've spent
> most of the past hour waiting for the darn thing, in fact).
>
> Try poking around at http://developer.postgresql.org/cvsweb.cgi/ and
> you'll see what I mean.

That's already being worked on ... I'm waiting on the riser card for the new server so that I can ship it down.

----
Marc G. Fournier           Hub.Org Networking Services (http://www.hub.org)
Email: scrappy@hub.org           Yahoo!: yscrappy              ICQ: 7615664
> Folks,
>
> Aside from pgFoundry, where are our greatest weak points in current web
> support? I've had offers of a couple of new hosts for the project over
> the last couple of weeks. Where do we need hardware/bandwidth the most?

Assuming my script for DNS-based failover mirrors works out (hey, almost done, but I've said that for a week or so), the main web site will need at least three machines for static web; a sketch of the DNS side follows this mail. Eventually, two for dynamic, but that's later.

These should be well distributed (not all on the same provider, or even on the same continent, probably). They need almost no processing power and not a lot of disk space, but plenty of bandwidth. Since they should IMHO not share hardware resources with the dynamic servers (and especially not with each other), I think we have only one of these today. But perhaps one or more of the existing mirrors can be moved to do this.

Once the dynamic servers are distributed, they're going to need processing power but almost no bandwidth.

//Magnus
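For illustration, the DNS side of the failover scheme above might look something like this BIND-style zone fragment. All names and addresses are invented (the IPs are from the documentation ranges), and the short TTL is an assumption so a dropped record stops being served quickly:

    ; postgresql.org zone fragment: three distributed static mirrors
    $TTL 300                       ; 5-minute TTL so failover propagates fast
    www    IN  A  192.0.2.10       ; static mirror 1 (provider A, North America)
    www    IN  A  198.51.100.20    ; static mirror 2 (provider B, Europe)
    www    IN  A  203.0.113.30     ; static mirror 3 (provider C, elsewhere)

With round-robin A records like these, clients spread across the mirrors in normal operation, and the failover script simply removes a dead mirror's record until it responds again.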
Magnus,

> These should be well distributed (not all on the same provider, or even
> on the same continent, probably). They need almost no processing power
> and not a lot of disk space, but plenty of bandwidth. Since they should
> IMHO not share hardware resources with the dynamic servers (and
> especially not with each other), I think we have only one of these
> today. But perhaps one or more of the existing mirrors can be moved to
> do this.

Well, one of the donated servers will be in Frankfurt, so that sounds perfect.

--
Josh Berkus
Aglio Database Solutions
San Francisco
> -----Original Message-----
> From: pgsql-www-owner@postgresql.org
> [mailto:pgsql-www-owner@postgresql.org] On Behalf Of Josh Berkus
> Sent: 09 February 2005 17:24
> To: Magnus Hagander
> Cc: pgsql-www@postgresql.org
> Subject: Re: [pgsql-www] Where do we need servers?
>
> Well, one of the donated servers will be in Frankfurt, so that sounds
> perfect.

We can set up static primaries easily, so if you can get us the appropriate details we can get it online fairly quickly. What OS is it, or can we install whatever we like? (Not that it really matters for a static server.)

Regards, Dave.
Dave,

> We can set up static primaries easily, so if you can get us the
> appropriate details we can get it online fairly quickly. What OS is it,
> or can we install whatever we like? (Not that it really matters for a
> static server.)

Do you have a really strong preference? The folks who are hosting the server in Frankfurt suggested SuSE Linux, because their people (who will need to do the physical admin) have no experience with BSD.

Anyway, the delay on that one is 2 weeks.

--Josh

--
__Aglio Database Solutions_______________
Josh Berkus                    Consultant
josh@agliodbs.com        www.agliodbs.com
Ph: 415-752-2500        Fax: 415-752-2387
2166 Hayes Suite 200   San Francisco, CA
> -----Original Message-----
> From: Josh Berkus [mailto:josh@agliodbs.com]
> Sent: 09 February 2005 22:46
> To: Dave Page
> Cc: Magnus Hagander; pgsql-www@postgresql.org
> Subject: Re: [pgsql-www] Where do we need servers?
>
> Dave,
>
>> We can set up static primaries easily, so if you can get us the
>> appropriate details we can get it online fairly quickly. What OS is it,
>> or can we install whatever we like? (Not that it really matters for a
>> static server.)
>
> Do you have a really strong preference? The folks who are hosting the
> server in Frankfurt suggested SuSE Linux, because their people (who will
> need to do the physical admin) have no experience with BSD.
>
> Anyway, the delay on that one is 2 weeks.

No; for the static servers it doesn't matter, as it's quicker just to install Apache than to muck around with FreeBSD VMs. SuSE will be just fine :-)

What sort of bandwidth does it have, btw?

Regards, Dave
Dave,

> What sort of bandwidth does it have, btw?

100 Mbit. Which makes it a good FTP/HTTP mirror. And BitTorrent seeder. And ...

Magnus, is there any way for us to back up the mailing lists?

--
Josh Berkus
Aglio Database Solutions
San Francisco
>> What sort of bandwidth does it have, btw?
>
> 100 Mbit. Which makes it a good FTP/HTTP mirror. And BitTorrent seeder.

Definitely. Is it single IP only, or multiple IPs if necessary?

> And ...
>
> Magnus, is there any way for us to back up the mailing lists?

I'm sure that should be possible. I've done it in several other cases ;-)

Assuming majordomo2 works the same way majordomo does, it just keeps the list config and subscription options in plain text files. The way to handle that is to simply rsync those files over to a second machine.

Then you set up a second MX record pointing to the secondary machine *with a different priority value*. That way mail is only delivered to the secondary machine if the first fails (a rough sketch follows this mail). Notes on this:

*) You won't get failover for subscription services, only for list delivery. Not sure if that is a problem.

*) You can get some weirdness with digests in the event of a failover - digests may be sent from both systems. Shouldn't be a problem during normal operations. Make sure digest files are *not* synced between the servers, or two copies of all digests will be sent.

*) In order not to affect other mail services, these things are a whole lot easier to deal with if the lists are in their own domain, meaning we'd have pgsql-hackers@lists.postgresql.org instead of directly in postgresql.org. Not sure if that's interesting? You could always have the @postgresql.org addresses redirect. If not, then you'd have to have the secondary machine handle mail for all @postgresql.org addresses and forward those back as necessary. How are these things set up now? I see svr1, 2 and 4 all handle mail for postgresql.org, but do they do anything more than just queue it up until the primary (svr1) is back up?

*) If it's a problem to sync the files, there are other list managers that can use PostgreSQL to store their subscription info. Then you could use Slony replication. Not sure if it pays off, though, and changing list managers is always a lot of work. And AFAIK, they will still not give you failover on the subscription services, since that would require multi-master replication.

Something to work off?

//Magnus
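For illustration, the MX and rsync arrangement above might look something like this. The standby hostname, paths, and schedule are invented, and the rsync half assumes the plain-text majordomo files described above (see the Berkeley DB correction in the next mail):

    ; zone file: the lower MX value is preferred, so mail only goes to
    ; the standby when svr1 stops answering on port 25
    lists.postgresql.org.  IN  MX  10  svr1.postgresql.org.
    lists.postgresql.org.  IN  MX  20  standby.example.org.

    # crontab on svr1: push list config/subscriber files to the standby
    # every 15 minutes; digest spools are excluded so a failover doesn't
    # send every digest twice
    */15 * * * * rsync -az --delete --exclude 'digest*' /usr/local/majordomo/lists/ standby.example.org:/usr/local/majordomo/lists/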
On Fri, 11 Feb 2005, Magnus Hagander wrote:

> Assuming majordomo2 works the same way majordomo does, it just keeps the
> list config and subscription options in plain text files.

Binary Berkeley DB files, for just about everything ...

> Then you set up a second MX record pointing to the secondary machine
> *with a different priority value*. That way mail is only delivered to
> the secondary machine if the first fails. Notes on this:

It wouldn't be a backup in the sense of failover if the main server reboots, only if it totally goes up in smoke, at which point we'd need to fail over the whole VM ... and this needs to go onto a non-US server, which I've got lined up in the EU, just haven't had time to move forward with ...

> How are these things set up now? I see svr1, 2 and 4 all handle mail for
> postgresql.org, but do they do anything more than just queue it up until
> the primary (svr1) is back up?

Purely queuing ...

----
Marc G. Fournier           Hub.Org Networking Services (http://www.hub.org)
Email: scrappy@hub.org           Yahoo!: yscrappy              ICQ: 7615664
>> Assuming majordomo2 works the same way majordomo does, it just keeps
>> the list config and subscription options in plain text files.
>
> Binary Berkeley DB files, for just about everything ...

Yikes. That makes things significantly harder, but I'm sure you could do something along the lines of a dump on one and a restore on the other (a sketch follows this mail).

>> Then you set up a second MX record pointing to the secondary machine
>> *with a different priority value*. That way mail is only delivered to
>> the secondary machine if the first fails. Notes on this:
>
> It wouldn't be a backup in the sense of failover if the main server
> reboots, only if it totally goes up in smoke, at which point we'd need
> to fail over the whole VM ... and this needs to go onto a non-US server,
> which I've got lined up in the EU, just haven't had time to move forward
> with ...

Uh, yes it would, wouldn't it? When the server reboots, it stops responding on port 25. At which point a sending mail server trying to deliver mail to the list will switch to the secondary MX machine (the one with the higher MX value) and deliver through that one. It would not handle new subscriptions etc., but it should handle delivery.

>> How are these things set up now? I see svr1, 2 and 4 all handle mail
>> for postgresql.org, but do they do anything more than just queue it up
>> until the primary (svr1) is back up?
>
> Purely queuing ...

Ok. That's what I thought. With that solution, mails will not be lost, but we have no delivery during the downtime.

//Magnus
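For illustration, the dump-and-restore idea could use the stock Berkeley DB utilities piped over ssh; the database path and standby name are invented, and this assumes db_dump writing its portable text format to stdout and db_load reading it from stdin (their defaults):

    # copy one list's subscriber database to the standby in portable form
    db_dump /usr/local/majordomo2/lists/pgsql-hackers.db |
        ssh standby.example.org db_load /usr/local/majordomo2/lists/pgsql-hackers.db

This would have to loop over each list's files, e.g. from cron, and per the notes earlier in the thread it still only buys failover for delivery, not for subscription changes.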
On Fri, 11 Feb 2005, Magnus Hagander wrote:

> Ok. That's what I thought. With that solution, mails will not be lost,
> but we have no delivery during the downtime.

It's rare that we have more downtime than a reboot of the server, and we're taking steps to reduce the odd times that it is longer ... the nice thing is that the "big fix" for fsck that was made to FreeBSD a couple of months ago has, to date, resulted in no more than 60-minute downtimes, and that is time spent doing the fsck itself and then it's back up again ...

----
Marc G. Fournier           Hub.Org Networking Services (http://www.hub.org)
Email: scrappy@hub.org           Yahoo!: yscrappy              ICQ: 7615664
Marc,

> It's rare that we have more downtime than a reboot of the server, and
> we're taking steps to reduce the odd times that it is longer ... the
> nice thing is that the "big fix" for fsck that was made to FreeBSD a
> couple of months ago has, to date, resulted in no more than 60-minute
> downtimes, and that is time spent doing the fsck itself and then it's
> back up again ...

Um, no offense, but we've had 12hr+ downtimes at least 3 times in the past 2 years -- one of them 3 days. I think we need to plan for such downtimes to happen again.

--
Josh Berkus
Aglio Database Solutions
San Francisco
On Fri, 11 Feb 2005, Josh Berkus wrote:

> Marc,
>
>> It's rare that we have more downtime than a reboot of the server, and
>> we're taking steps to reduce the odd times that it is longer ... the
>> nice thing is that the "big fix" for fsck that was made to FreeBSD a
>> couple of months ago has, to date, resulted in no more than 60-minute
>> downtimes, and that is time spent doing the fsck itself and then it's
>> back up again ...
>
> Um, no offense, but we've had 12hr+ downtimes at least 3 times in the
> past 2 years -- one of them 3 days. I think we need to plan for such
> downtimes to happen again.

Agreed, but the cause of the 12hr+ downtimes was fsck taking as long as it did, and that has been fixed. It's the '3 days downtime' that we need to avoid in the future, where it's not a hardware but a provider issue ...

----
Marc G. Fournier           Hub.Org Networking Services (http://www.hub.org)
Email: scrappy@hub.org           Yahoo!: yscrappy              ICQ: 7615664
Marc,

> Agreed, but the cause of the 12hr+ downtimes was fsck taking as long as
> it did, and that has been fixed. It's the '3 days downtime' that we need
> to avoid in the future, where it's not a hardware but a provider
> issue ...

And these sorts of things will occur again in the future. Our (ethosmedia.com) servers have had better uptime than postgresql.org, but even we've been offline for a half-day twice in the last 18 months due to DDoS attacks downing the firewall box. Outages *will* happen, and you're doing the whole community a disservice to pretend that they won't.

--
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco
On Fri, 11 Feb 2005, Josh Berkus wrote:

> Marc,
>
>> Agreed, but the cause of the 12hr+ downtimes was fsck taking as long as
>> it did, and that has been fixed. It's the '3 days downtime' that we
>> need to avoid in the future, where it's not a hardware but a provider
>> issue ...
>
> And these sorts of things will occur again in the future. Our
> (ethosmedia.com) servers have had better uptime than postgresql.org, but
> even we've been offline for a half-day twice in the last 18 months due
> to DDoS attacks downing the firewall box. Outages *will* happen, and
> you're doing the whole community a disservice to pretend that they
> won't.

Ummmm, I finished off my post above stating that we do need to avoid such downtimes in the future ... *scratch head*

In fact, I even sent Magnus an email off list, as I think the person I was talking to on IRC about the redundancy in the EU and Magnus are the same person, and I wasn't putting the two together ;(

----
Marc G. Fournier           Hub.Org Networking Services (http://www.hub.org)
Email: scrappy@hub.org           Yahoo!: yscrappy              ICQ: 7615664
Marc,

> Ummmm, I finished off my post above stating that we do need to avoid
> such downtimes in the future ... *scratch head*

Oh, ok, sorry, I read your e-mail differently. So, we'll move ahead on list redundancy then, as soon as we have a machine?

> In fact, I even sent Magnus an email off list, as I think the person I
> was talking to on IRC about the redundancy in the EU and Magnus are the
> same person, and I wasn't putting the two together ;(

I can understand that, I get people's IRC handles confused a lot ...

--
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco
On Fri, 11 Feb 2005, Josh Berkus wrote:

> Marc,
>
>> Ummmm, I finished off my post above stating that we do need to avoid
>> such downtimes in the future ... *scratch head*
>
> Oh, ok, sorry, I read your e-mail differently. So, we'll move ahead on
> list redundancy then, as soon as we have a machine?

Which I am working on, yes ... hopefully I'm right in that Magnus == mastermind ...

----
Marc G. Fournier           Hub.Org Networking Services (http://www.hub.org)
Email: scrappy@hub.org           Yahoo!: yscrappy              ICQ: 7615664
> -----Original Message-----
> From: pgsql-www-owner@postgresql.org
> [mailto:pgsql-www-owner@postgresql.org] On Behalf Of Marc G. Fournier
> Sent: 11 February 2005 19:08
> To: Josh Berkus
> Cc: pgsql-www@postgresql.org
> Subject: Re: [pgsql-www] Where do we need servers?
>
> Which I am working on, yes ... hopefully I'm right in that Magnus ==
> mastermind ...

Nope. I forget who mastermind is, but Magnus == mha.

/D
On Fri, 11 Feb 2005, Dave Page wrote:

>> Which I am working on, yes ... hopefully I'm right in that Magnus ==
>> mastermind ...
>
> Nope. I forget who mastermind is, but Magnus == mha.

'k, will chat with Magnus when he gets back online, since I just realized he is in the EU ...

----
Marc G. Fournier           Hub.Org Networking Services (http://www.hub.org)
Email: scrappy@hub.org           Yahoo!: yscrappy              ICQ: 7615664
> -----Original Message-----
> From: Marc G. Fournier [mailto:scrappy@postgresql.org]
> Sent: 11 February 2005 20:36
> To: Dave Page
> Cc: pgsql-www@postgresql.org
> Subject: RE: [pgsql-www] Where do we need servers?
>
> 'k, will chat with Magnus when he gets back online, since I just
> realized he is in the EU ...

Yup, Sweden. Land of ridiculously cheap LAN-speed net connections.

/D
>> Which I am working on, yes ... hopefully I'm right in that Magnus ==
>> mastermind ...
>
> Nope. I forget who mastermind is, but Magnus == mha.

Not quite. Someone registered that nick on the network before I did, so on IRC I'm magnush now. Used to be mha, though, and it's mha more or less everywhere else.

//Magnus