Thread: Re: [NOVICE] Re: re : PHP and persistent connections
Note: CC'd to Hackers, as this has wandered into deeper feature issues.

Tom Lane wrote:
> GH <grasshacker@over-yonder.net> writes:
> > Do the "persistent-connected" Postgres backends ever timeout or die?
> No. A backend will sit patiently for the client to send it another
> query or close the connection.

This does have an unfortunate denial-of-service implication, where an
attack can effectively suck up all available backends, and there's no
throttle, no timeout, no way of automatically dropping these....

However, the more likely possibility is similar to the problem that we
see in PHP's persistent connections.... a normally benign connection is
inactive, and yet it isn't dropped. If you have two of these created
every day, and you only have 16 backends, after 8 days you have a
lockout. On a busy web site or another busy application, you can, of
course, exhaust 64 backends in a matter of minutes.

> > Is it possible to set something like a timeout for persistent connections?
> > (Er, would that be something that someone would want
> > to do? A Bad Thing?)
> This has been suggested before, but I don't think any of the core
> developers consider it a good idea. Having the backend arbitrarily
> disconnect on an active client would be a Bad Thing for sure.

Right.... but I don't think anybody has suggested disconnecting an
*active* client, just inactive ones.

> Hence, any workable timeout would have to be quite large (order of an
> hour, maybe? not milliseconds anyway).

The MySQL disconnect starts at around 24 hours. It prevents a slow
accumulation of unused backends, but does nothing for a rapid
accumulation. It can be cranked down to a few minutes AFAIK.

> And that means that it's not an effective solution for the problem.
> Under load, a webserver that wastes backend connections will run out
> of available backends long before a safe timeout would start to clean
> up after it.

Depends on how it's set up... you see, this isn't uncharted territory;
other web/db solutions have already fought with this issue. Much like
the number of backends set up for pgsql must be static, a timeout may
wind up being the same way. The critical thing to realize is that you
are timing out _inactive_ connections, not connections in general. So
provided that a connection recorded when it was last used, or usage set
a counter somewhere, it could easily be checked.

> To my mind, a client app that wants to use persistent connections has
> got to implement some form of connection pooling, so that it recycles
> idle connections back to a "pool" for allocation to task threads that
> want to make a new query. And the threads have to release connections
> back to the pool as soon as they're done with a transaction. Actively
> releasing an idle connection is essential, rather than depending on a
> timeout.
>
> I haven't studied PHP at all, but from this conversation I gather that
> it's only halfway there...

Well...... This is exactly how Apache and PHP serve pages. The problem
is that Apache children aren't threads; they are separate copies of the
application itself. So a single Apache child will re-use the same
connection, over and over again, and hand that connection over to the
other requests served by that child... so in your above model, it's not
really one client application in the first place.

It's a dynamic number of client applications, between one and hundreds
or so.

So to turn the feature request the other way 'round:
"I have all sorts of client apps, connecting in different ways, to
my server. Some of the clients are leaving their connections open,
but unused. How can I prevent running out of backends, and boot
the inactive users off?"

-Ronabop

--
Brought to you from iBop the iMac, a MacOS, Win95, Win98, LinuxPPC machine,
which is currently in MacOS land. Your bopping may vary.
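To ground the PHP mechanics being described here: a persistent connection is
just pg_pconnect() in place of pg_connect(), and the handle lives as long as
the Apache child that opened it, not as long as the page. A minimal sketch of
the pattern that leaves one idle backend parked behind every child that has
ever served a database page; the dbname/user values and the MaxClients figure
are placeholders, not anything from this thread:

<?php
// A typical PHP page using a persistent connection. pg_pconnect() looks
// for an existing connection with the same connection string inside this
// Apache child and reuses it; only if none exists does it open a new
// backend. The dbname/user values below are placeholders.
$db = pg_pconnect('dbname=web user=www');
if (!$db) {
    die("connection failed\n");
}

$res = pg_exec($db, 'SELECT version()');
$row = pg_fetch_row($res, 0);
echo $row[0];

// Nothing here ends the backend: when the script finishes, the connection
// stays attached to this Apache child, idle, until the child itself exits.
// With, say, MaxClients 150 and at least one database-backed hit per child,
// up to 150 idle backends can pile up.
?>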
At 05:26 PM 11/25/00 -0700, Ron Chmara wrote:
>Note: CC'd to Hackers, as this has wandered into deeper feature issues.
>
>Tom Lane wrote:
>> GH <grasshacker@over-yonder.net> writes:
>> > Do the "persistent-connected" Postgres backends ever timeout or die?
>> No. A backend will sit patiently for the client to send it another
>> query or close the connection.
>
>This does have an unfortunate denial-of-service implication, where
>an attack can effectively suck up all available backends, and there's
>no throttle, no timeout, no way of automatically dropping these....
>
>However, the more likely possibility is similar to the problem that
>we see in PHP's persistent connections.... a normally benign connection
>is inactive, and yet it isn't dropped. If you have two of these created
>every day, and you only have 16 backends, after 8 days you have a lockout.
>
>On a busy web site or another busy application, you can, of course,
>exhaust 64 backends in a matter of minutes.

Ugh... the more I read stuff like this, the more I appreciate AOLserver's
built-in database API, which protects the application from any such
problems altogether. The particular problem being described simply can't
occur in this environment.

- Don Baccus, Portland OR <dhogaza@pacifier.com>
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.
I've tried quite a bit to use persistent connections with PHP (for over
a year) and the scripts that I try to use them with always behave
strangely... The last time I tried, there were problems all over the place
with PHP: variables getting overwritten, certain functions just totally
breaking (date() to name one), and so on. I know I'm not being specific,
but my point is that I think there are some other outstanding PHP issues
that play into this problem, as the behavior I've seen isn't directly
related to PostgreSQL but only happens when I use persistent connections.

I've been trying to corner the problem for quite some time; it's an
elusive one for sure. I spoke with the PHP developers 9 or so months ago
about the problems and they didn't seem to pay any attention to it; the
thread on the mailing list was short, with the bug report collecting dust
at the bottom of the to-do list, I'm sure (that was back before PHP 4 was
even released, and obviously the problem remains).

Just my $0.02 worth.

-Mitch
> "I have all sorts of client apps, connecting in different ways, to > my server. Some of the clients are leaving their connections open, > but unused. How can I prevent running out of backends, and boot > the inactive users off?" how about having a middle man between apache (or aolserver or any other clients...) and PosgreSQL ?? that middleman could be configured to have 16 persistant connections,every clients would deal with the middleman instead of going direct to the database,this would be an advantage where multiple PostgreSQL server are used... 240 apache process are running on a box and there's 60 PostgreSQL instance running on the machine or another machine: 240 apache process --> middleman --> 60 PostgreSQL process now if there's multiple Database server: 240 apache process --> middleman --> 12 PostgreSQL for each server (5 servers in this case) in this case,the middleman could be a shared library which the clients link to.. what do you think about that ?? Alain Toussaint
At 12:07 AM 11/26/00 -0500, Alain Toussaint wrote:
>how about having a middleman between Apache (or AOLserver or any other
>clients...) and PostgreSQL??
>
>that middleman could be configured to have 16 persistent connections;
>every client would deal with the middleman instead of going direct to
>the database. This would be an advantage where multiple PostgreSQL
>servers are used...

Well, this is sort of what AOLserver does for you without any need for
middlemen.

Again, reading stuff like this makes me think "ugh!"

This stuff is really pretty easy; it's amazing to me that the Apache/db
world talks about such kludges when they're clearly not necessary.

My first experience running a website (donb.photo.net) was with Apache on
Linux on an old P100 system in 1996, when few folks had personal photo
sites with >1000 photos on them getting thousands of hits a day. I have
fond memories of those days, and Apache served me (or more properly,
webserved my website) well. This site is largely responsible for my
reputation that lets me freelance nature photography to the national
media market pretty much at will. Thus my fondness.

But ... for database stuff, the release of AOLserver as first Free Beer,
and now Free Speech, software has caused me to abandon Apache, and
suggestions like the above just make me cringe.

It shouldn't be that hard, folks.

- Don Baccus, Portland OR <dhogaza@pacifier.com>
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.
At 10:00 PM 11/25/00 -0800, Mitch Vincent wrote:
> I've tried quite a bit to use persistent connections with PHP (for over
>a year) and the scripts that I try to use them with always behave
>strangely... The last time I tried, there were problems all over the place
>with PHP: variables getting overwritten, certain functions just totally
>breaking (date() to name one), and so on. I know I'm not being specific,
>but my point is that I think there are some other outstanding PHP issues
>that play into this problem, as the behavior I've seen isn't directly
>related to PostgreSQL but only happens when I use persistent connections.

I've heard rumors that PHP isn't thoroughly threadsafe; could this be a
source of your problems?

- Don Baccus, Portland OR <dhogaza@pacifier.com>
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.
I'm sure that this, if true, could certainly be the source of the problems
I've seen... I can't comment on whether PHP is completely threadsafe; I
know that some of the modules (for lack of a better word) aren't, possibly
the ClibPDF library I'm using. I'll check into it.

Thanks!

-Mitch
> Well, this is sort of what AOLserver does for you without any need for
> middlemen.

I agree that AOLserver is good karma; I've been reading various docs on
AOLserver since Philip Greenspun talked about it on LinuxWorld, and I'm
glad that there's some Java support being coded for it (in my opinion,
that's the only advantage Apache had over AOLserver for me).

> Again, reading stuff like this makes me think "ugh!"
>
> This stuff is really pretty easy, it's amazing to me that the Apache/db
> world talks about such kludges when they're clearly not necessary.

Well... I was using Apache as an example due to its DB model, but the
stuff I was talking about would work quite well in the case of multiple
DB servers hosting different tables where you want to maintain location
independence. Here's an example: you have 7 database servers, 5 online
and the other 2 for maintenance and/or development purposes. For
simplicity, we'll name the servers database1.example.net through
database7.example.net. database4.example.net is currently doing a dump
and database6.example.net is loading the dump from database4; then you
reconfigure the middleman so it redirects all requests from database4 to
database6:

    vim /etc/middleman.conf

and then send a SIGHUP to the middleman so it rereads its config file:

    killall -HUP middleman

This would update the middleman's shared lib with the new configuration
info (and BTW, I just extended my idea from a single shared lib to a
daemon/shared lib combo).

Now I'm off to take the dog out for a walk and then take a nap. See ya!!

Alain Toussaint
On Sun, 26 Nov 2000, Alain Toussaint wrote:
> > "I have all sorts of client apps, connecting in different ways, to
> > my server. Some of the clients are leaving their connections open,
> > but unused. How can I prevent running out of backends, and boot
> > the inactive users off?"
>
> how about having a middleman between Apache (or AOLserver or any other
> clients...) and PostgreSQL??

I don't see it solving anything. You just move the connection management
problem from the database to the middleman (in the industry such a thing
would be called a query multiplexor). Multiplexors have often been used
in the past to solve this problem, because the database could not be
extended or protected.

Besides, if you are an n-tier developer, this isn't a problem, as your
middle tier handles not just connection management but some logic as
well. At the end of the day, PHP/Apache is just not suitable for complex
applications.

Tom
Don Baccus wrote:
> At 12:07 AM 11/26/00 -0500, Alain Toussaint wrote:
> >how about having a middleman between Apache (or AOLserver or any other
> >clients...) and PostgreSQL??
> >that middleman could be configured to have 16 persistent connections;
> >every client would deal with the middleman instead of going direct to
> >the database. This would be an advantage where multiple PostgreSQL
> >servers are used...
> Well, this is sort of what AOLserver does for you without any need for
> middlemen.

What if you have a server farm of 8 AOL servers, and 12 Perl clients, and
3 MS Access connections, leaving things open? Is AOLserver parsing the
Perl DBD/DBI connects, too? So you're using AOLserver as (cough) a
middleman? <g>

> Again, reading stuff like this makes me think "ugh!"
> This stuff is really pretty easy, it's amazing to me that the Apache/db
> world talks about such kludges when they're clearly not necessary.

How does AOLserver time out Access clients, ODBC connections, Perl
clients? I thought it was mainly web-server stuff. Apache/PHP isn't the
only problem. The problem isn't solved by telling others to fix their
software, either... is this something that can be done _within_
postmaster?

-Bop
--
Brought to you from iBop the iMac, a MacOS, Win95, Win98, LinuxPPC machine,
which is currently in MacOS land. Your bopping may vary.
Tom Samplonius wrote:
> On Sun, 26 Nov 2000, Alain Toussaint wrote:
> > > "I have all sorts of client apps, connecting in different ways, to
> > > my server. Some of the clients are leaving their connections open,
> > > but unused. How can I prevent running out of backends, and boot
> > > the inactive users off?"
> > how about having a middleman between Apache (or AOLserver or any other
> > clients...) and PostgreSQL??
> I don't see it solving anything. You just move the connection
> management problem from the database to the middleman (in the industry
> such a thing would be called a query multiplexor). Multiplexors have
> often been used in the past to solve this problem, because the database
> could not be extended or protected.

And I'm requesting protection. Because the database isn't capable of
dynamically destroying temporary backends. (Which would be another
solution to this problem.)

> Besides, if you are an n-tier developer, this isn't a problem, as your
> middle tier handles not just connection management but some logic as
> well. At the end of the day, PHP/Apache is just not suitable for
> complex applications.

Is it dump-on-PHP day? Okay, pretend the problem is left-open Perl
connections. Slam that for a while. Move over to left-open Access
connections. Bag on that for a few posts. Errant C code for a few days.
Still have a problem. :-)

How does a db admin close connections that are idle, and unwanted,
without shutting the postmaster down?

-Bop
--
Brought to you from iBop the iMac, a MacOS, Win95, Win98, LinuxPPC machine,
which is currently in MacOS land. Your bopping may vary.
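The PostgreSQL being discussed in this thread has no answer to that question.
Purely as a hedged sketch of what such an admin-side sweep could look like on
a much newer server (one where pg_stat_activity exposes pid, state and
state_change, and pg_terminate_backend() exists, neither of which is true
here), a small maintenance script; the connection parameters and the one-hour
cutoff are placeholders:

<?php
// idle_reaper.php -- hypothetical maintenance script; it assumes a far
// newer PostgreSQL than the one in this thread. Run as a superuser,
// e.g. from cron.

$admin = pg_connect('host=localhost dbname=postgres user=postgres');
if (!$admin) {
    die("could not connect as admin\n");
}

// Find backends that have been sitting idle (outside any transaction)
// for more than an hour, skipping our own session.
$idle = pg_exec($admin,
    "SELECT pid FROM pg_stat_activity
      WHERE state = 'idle'
        AND state_change < now() - interval '1 hour'
        AND pid <> pg_backend_pid()");

// Terminate each one. An abandoned client never notices; a live but lazy
// client sees its next query fail and has to reconnect.
for ($i = 0; $i < pg_num_rows($idle); $i++) {
    $row = pg_fetch_row($idle, $i);
    pg_exec($admin, 'SELECT pg_terminate_backend(' . (int) $row[0] . ')');
}

pg_close($admin);
?>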
At 12:38 AM 11/27/00 -0700, Ron Chmara wrote:
>Don Baccus wrote:
>> At 12:07 AM 11/26/00 -0500, Alain Toussaint wrote:
>> >how about having a middleman between Apache (or AOLserver or any other
>> >clients...) and PostgreSQL??
>> >that middleman could be configured to have 16 persistent connections;
>> >every client would deal with the middleman instead of going direct to
>> >the database. This would be an advantage where multiple PostgreSQL
>> >servers are used...
>> Well, this is sort of what AOLserver does for you without any need for
>> middlemen.
>
>What if you have a server farm of 8 AOL servers, and 12 Perl clients, and
>3 MS Access connections, leaving things open? Is AOLserver parsing the
>Perl DBD/DBI connects, too? So you're using AOLserver as (cough) a
>middleman? <g>

Well, no - we'd use the built-in Tcl, Python or nsjava (still in its
infancy) modules, which interface natively to AOLserver's built-in
database API. You don't NEED the various connection implementations
buried in various languages because they're provided directly in the
server. That's the point. That's the main reason people use it.

If you're going to run CGI/Perl scripts using its database connectivity
stuff, don't use AOLserver. They'll run, since AOLserver supports CGI,
but they'll run no better than under Apache and probably worse, since no
one doing serious AOLserver work uses CGI and therefore the code which
implements it has languished; there's no motivation to improve something
that no one uses.

If you're willing to use a language module which exposes the AOLserver
API to your application, then AOLserver's a great choice.

>> Again, reading stuff like this makes me think "ugh!"
>> This stuff is really pretty easy, it's amazing to me that the Apache/db
>> world talks about such kludges when they're clearly not necessary.
>
>How does AOLserver time out Access clients, ODBC connections, Perl
>clients? I thought it was mainly web-server stuff.

Well, for starters one normally wouldn't use ODBC, since AOLserver
includes drivers for PostgreSQL, Oracle and Sybase. There's one for
Solid, too, but no one seems to use Solid since they raised their prices
drastically a couple of years ago (if you're going to spend lots of
money on a database, Oracle and Sybase are more than willing to help
you). Nor does nsjava use JDBC; it encapsulates the AOLserver API into a
database API class (or classes).

AOLserver manages the database pools in about the same way it manages
threads, i.e. if a thread can't get the handles it needs (usually only
one, sometimes two; more than that usually indicates poorly written
code) it blocks until another thread releases a handle. When a thread
ends (returns a page), any allocated handles are released. Transactions
that haven't been properly committed are rolled back as well (the lesser
of two evils; the event's logged since it indicates a bug).

For each pool you provide the name of the driver (which of course serves
to select which RDBMS that pool will use; you can use as many different
RDBMSs as you have, and have drivers for), a datasource, the maximum
number of connections to open for that pool, minimum and maximum
lifetimes for connections, etc.

- Don Baccus, Portland OR <dhogaza@pacifier.com>
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.
Uh, Don?

Not all the world's a web page, you know. That kind of thinking is _so_
mid-90's ;-) Dedicated apps that talk directly to the user seem to be
making a comeback, due to a number of factors. They can have much
cleaner user interfaces, for example.

Which brings us back around to the point of why this is on Hackers:
PostgreSQL currently has no clean method for dropping idle connections.
Yes, some apps handle this themselves, but not all. A number of people
seem to feel there is a need for this feature.

How hard would it be to implement? Probably not too hard: we've already
got an 'idle' state, during which we block on the input. Add a timeout
to that, and we're pretty much there.

<goes and looks at code for a bit>

Hmm, we're down in the bowels of libpq, doing a recv() on the socket to
the frontend, about 4 layers down from the backend's blocking call to
ReadCommand(). I seem to recall someone working on creating an async
version of the libpq API, but Tom not being happy with the approach. So,
it's not a simple change.

Ross

On Mon, Nov 27, 2000 at 07:18:48AM -0800, Don Baccus wrote:
> At 12:38 AM 11/27/00 -0700, Ron Chmara wrote:
> >Don Baccus wrote:
> >> At 12:07 AM 11/26/00 -0500, Alain Toussaint wrote:
> >> >how about having a middleman between Apache (or AOLserver or any other
> >> >clients...) and PostgreSQL??
> >> >that middleman could be configured to have 16 persistent connections;
> >> >every client would deal with the middleman instead of going direct to
> >> >the database. This would be an advantage where multiple PostgreSQL
> >> >servers are used...
> >> Well, this is sort of what AOLserver does for you without any need for
> >> middlemen.
> >
> >What if you have a server farm of 8 AOL servers, and 12 Perl clients, and
> >3 MS Access connections, leaving things open? Is AOLserver parsing the
> >Perl DBD/DBI connects, too? So you're using AOLserver as (cough) a
> >middleman? <g>

Note that only the AOL servers here are web client/servers; the rest are
dedicated apps.

<snip Don missing the point>

--
Open source code is like a natural resource, it's the result of providing
food and sunshine to programmers, and then staying out of their way. [...]
[It] is not going away because it has utility for both the developers and
users independent of economic motivations. Jim Flynn, Sunnyvale, Calif.
"Ross J. Reedstrom" <reedstrm@rice.edu> writes: > Which brings us back around to the point of why this is on Hackers: > PostgreSQL currently has no clean method for dropping idle connections. > Yes, some apps handle this themselves, but not all. A number of people > seem to feel there is a need for this feature. I'm still not following exactly what people think would happen if we did have such a "feature". OK, the backend times out after some interval of seeing no activity, and disconnects. How is the client going to react to that, exactly, and why would it not conclude that something's gone fatally wrong with the database? Seems to me that you still end up having to fix the client, and that in the last analysis this is a client issue, not something for the backend to hack around. regards, tom lane
On Mon, Nov 27, 2000 at 12:09:00PM -0500, Tom Lane wrote:
>
> I'm still not following exactly what people think would happen if we did
> have such a "feature". OK, the backend times out after some interval
> of seeing no activity, and disconnects. How is the client going to
> react to that, exactly, and why would it not conclude that something's
> gone fatally wrong with the database?

Because a lot of commercial (and other) databases have this "feature", a
lot of well-behaved apps (and middleware packages) already know how to
deal with it: i.e. try to reconnect, and continue. If that fails, throw
an error.

> Seems to me that you still end up having to fix the client, and that
> in the last analysis this is a client issue, not something for the
> backend to hack around.

It's already fixed, see above. In addition, you're assuming the same
administrative entity has control over the clients and the backend. This
is not always the case. For example, in a web hosting environment, the
DBA has the responsibility to ensure minimal interference between
different customers. As it stands, the client that causes the problem
sees no problem to fix: other clients get 'that damn PostgreSQL backend
quits accepting connections', and yell at the DBA. So the DBA wants a
way to propagate the 'problem' to the clients that cause it, by timing
out the idle connections. Then those clients _will_ fix their code, if
it doesn't already do it for them, as per above.

Basically, PostgreSQL is being too polite: it's in the client's interest
to keep the connection open, since it minimizes response time,
regardless of how this might affect other backends. It's cooperative vs.
hard multitasking, all over again. Clients and servers optimize for
different parameters: the client wants minimum response time for its
requests. The backend wants minimum _average_ response time, over all
requests.

Ross

--
Open source code is like a natural resource, it's the result of providing
food and sunshine to programmers, and then staying out of their way. [...]
[It] is not going away because it has utility for both the developers and
users independent of economic motivations. Jim Flynn, Sunnyvale, Calif.
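To make the "try to reconnect, and continue" behavior concrete on the PHP
side, here is a minimal sketch. The db_handle() helper and its fallback
policy are invented for illustration, and it assumes a PHP build new enough
to provide pg_connection_status():

<?php
// Hypothetical helper: return a usable connection, quietly reconnecting
// when the backend behind a persistent handle has gone away (timed out,
// terminated, or the server was restarted).
function db_handle($conninfo)
{
    $db = pg_pconnect($conninfo);

    // A persistent handle can be stale: the backend it points at may have
    // exited since the last request this Apache child served.
    if ($db && pg_connection_status($db) !== PGSQL_CONNECTION_OK) {
        // Fall back to a fresh, non-persistent connection for this request.
        $db = pg_connect($conninfo);
    }

    if (!$db) {
        // Reconnect also failed: now it really is an error.
        die("database unavailable\n");
    }
    return $db;
}

// Usage in a page:
$db  = db_handle('host=localhost dbname=web user=www');
$res = pg_exec($db, 'SELECT count(*) FROM pg_class');
$row = pg_fetch_row($res, 0);
echo "relations: $row[0]\n";
?>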
At 10:46 AM 11/27/00 -0600, Ross J. Reedstrom wrote:
>Uh, Don?
>Not all the world's a web page, you know. That kind of thinking is _so_
>mid-90's ;-) Dedicated apps that talk directly to the user seem to be
>making a comeback, due to a number of factors. They can have much
>cleaner user interfaces, for example.

Of course. But the question's been raised in the context of a web
server, and I've answered in context.

I've been trying to move the discussion offline to avoid clogging the
hackers list with this stuff, but some of the messages have escaped my
machine with my forgetting to remove pg_hackers from the distribution
list. I'll try to be more diligent if the discussion continues.

- Don Baccus, Portland OR <dhogaza@pacifier.com>
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.