Thread: Feature request

Feature request

From
ohp@pyrenet.fr
Date:
Hi hackers,

I know you're all very busy with 7.4 coming out next week, but I thought I
could ask for a little feature in the postgres logs.

Today, logs all go to a file, to syslog, or to both. But there is no way
at all to automatically know which database an error was thrown on.
Therefore, would it be possible/hard to prefix every error/warning message
with the name of the database on which it occurred?

It would then be easy to dispatch all errors to the right customer.

What do you think?

Regards
-- 
Olivier PRENANT                    Tel: +33-5-61-50-97-00 (Work)
6, Chemin d'Harraud Turrou           +33-5-61-50-97-01 (Fax)
31190 AUTERIVE                       +33-6-07-63-80-64 (GSM)
FRANCE                          Email: ohp@pyrenet.fr
------------------------------------------------------------------------------
Make your life a dream, make your dream a reality. (St Exupery)


Re: Feature request -- Log Database Name

From
Josh Berkus
Date:
Hackers,


> Today, logs all go to a file, to syslog, or to both. But there is no way
> at all to automatically know which database an error was thrown on.
> Therefore, would it be possible/hard to prefix every error/warning message
> with the name of the database on which it occurred?

Olivier appears to be correct ... there is no log option which logs the name
of the database generating the message.

Do we need to add this as a TODO?

--
-Josh Berkus
Aglio Database Solutions
San Francisco



Re: Feature request -- Log Database Name

From
Larry Rosenman
Date:

--On Wednesday, July 23, 2003 12:31:38 -0700 Josh Berkus 
<josh@agliodbs.com> wrote:

> Hackers,
>
>
>> Today, logs all go to a file, to syslog, or to both. But there is no
>> way at all to automatically know which database an error was thrown on.
>> Therefore, would it be possible/hard to prefix every error/warning
>> message with the name of the database on which it occurred?
>
> Olivier appears to be correct ... there is no log option which logs the
> name  of the database generating the message.
>
> Do we need to add this as a TODO?
It would be VERY nice to do that, and maybe even the table name?

LER



-- 
Larry Rosenman                     http://www.lerctr.org/~ler
Phone: +1 972-414-9812                 E-Mail: ler@lerctr.org
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749



Re: Feature request -- Log Database Name

From
Robert Treat
Date:
On Wed, 2003-07-23 at 15:38, Larry Rosenman wrote:
> 
> 
> --On Wednesday, July 23, 2003 12:31:38 -0700 Josh Berkus 
> <josh@agliodbs.com> wrote:
> 
> > Hackers,
> >
> >
> >> Today, logs all go to a file, to syslog, or to both. But there is no
> >> way at all to automatically know which database an error was thrown on.
> >> Therefore, would it be possible/hard to prefix every error/warning
> >> message with the name of the database on which it occurred?
> >
> > Olivier appears to be correct ... there is no log option which logs the
> > name  of the database generating the message.
> >
> > Do we need to add this as a TODO?
> It would be VERY nice to do that, and maybe even the table?
> 

Should it be a GUC like log_timestamp that can be applied to all log
messages?

Robert Treat
-- 
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL



Re: Feature request -- Log Database Name

From
Larry Rosenman
Date:

--On Wednesday, July 23, 2003 16:20:20 -0400 Robert Treat 
<xzilla@users.sourceforge.net> wrote:


>
> Should it be a GUC like log_timestamp that can be applied to all log
> messages?
IMHO, Yes, and it probably can be localized to elog(), although I haven't
looked at the current elog() function code since 7.0 when I futzed with the
syslog() code.

The question is:

Is this a feature change, or a bug fix given the error reporting change for 
7.4?




-- 
Larry Rosenman                     http://www.lerctr.org/~ler
Phone: +1 972-414-9812                 E-Mail: ler@lerctr.org
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749



Re: Feature request -- Log Database Name

From
Josh Berkus
Date:
Robert,

> Should it be a GUC like log_timestamp that can be applied to all log
> messages?

Yes, absolutely.

--
-Josh Berkus
Aglio Database Solutions
San Francisco



Re: Feature request -- Log Database Name

From
Josh Berkus
Date:
Tim,

> Anyway, if it doesn't already, having the username and database would both be
> helpful when troubleshooting.

Hmmm ... that would be two log TODOs.  I wonder why this has never come up
before ....

--
-Josh Berkus
Aglio Database Solutions
San Francisco



Re: Feature request -- Log Database Name

From
Bruce Momjian
Date:
Josh Berkus wrote:
> TIm,
> 
> > Anyway, if it doesn't already, having the username and database would both be
> > helpful when troubleshooting.
> 
> Hmmm ... that would be two log TODOs.  I wonder why this has never come up 
> before ....

What we recommend is to use log_pid and log_connections and link the pid
to each log message.  The big issue is that while logging user/db/etc is
nice for grep, it can fill the logs pretty quickly.

Of course, the pid can wrap around, so it gets pretty confusing.  I have
been wondering about logging pid as 1-3321 meaning the first loop
through the pid cycle, pid 3321 so they are always unique in the log
file.

My guess is that we need something flexible to say which things should
appear on each log line, if we go that direction.
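
As a rough sketch of how that pid linkage could be done with SQL instead of
grep (everything below is hypothetical: there is no such table, so assume the
text log has been parsed into one by some external loader):

-- loaded_log is an assumed table with one row per log line; dbname is filled
-- in only for the "connection authorized" lines that log_connections writes,
-- while every line carries the backend pid.
SELECT m.log_time, c.dbname, m.message
FROM   loaded_log m
JOIN   loaded_log c ON c.pid = m.pid AND c.dbname IS NOT NULL
WHERE  m.message LIKE 'ERROR:%';
-- Deliberately naive: as noted above, pid wraparound means one pid can belong
-- to several different sessions over the life of a log.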

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073


Re: Feature request -- Log Database Name

From
ohp@pyrenet.fr
Date:
On Wed, 23 Jul 2003, Larry Rosenman wrote:

> Date: Wed, 23 Jul 2003 15:22:49 -0500
> From: Larry Rosenman <ler@lerctr.org>
> To: Robert Treat <xzilla@users.sourceforge.net>
> Cc: Josh Berkus <josh@agliodbs.com>, ohp@pyrenet.fr,
>      pgsql-hackers list <pgsql-hackers@postgresql.org>
> Subject: Re: [HACKERS] Feature request -- Log Database Name
>
>
>
> --On Wednesday, July 23, 2003 16:20:20 -0400 Robert Treat
> <xzilla@users.sourceforge.net> wrote:
>
>
> >
> > Should it be a GUC like log_timestamp that can be applied to all log
> > messages?
> IMHO, Yes, and it probably can be localized to elog(), although I haven't
> looked
> at the current elog() function code since 7.0 when I futzed with the
> syslog() code.
>
> the question is:
>
> Is this a feature change, or a bug fix given the error reporting change for
> 7.4?
I hope it can go into 7.4 (could we have a backport to 7.3.4 if it's coming?)

Also, I was thinking that we could "hide" a log table in a "special"
schema, like this:

CREATE TABLE log (
    "when"  timestamp,
    "user"  text,
    "table" name,
    query   text,
    error   text
);

So that if this table exists in a database, all error reporting would
be logged in this table.

This sounds complicated, but IMHO it would be invaluable as a debugging aid.
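
Purely as a sketch of how the ISP case might then be handled (the role name is
an assumption, and the quoted column names simply match the definition above):
each customer would only need SELECT rights on the copy of the table in their
own database:

-- customer_role stands in for whatever per-customer role the ISP uses.
GRANT SELECT ON log TO customer_role;

-- Errors from the last day, in order:
SELECT "when", query, error
FROM   log
WHERE  "when" > now() - interval '1 day'
ORDER  BY "when";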

Regards
-- 
Olivier PRENANT                    Tel: +33-5-61-50-97-00 (Work)
6, Chemin d'Harraud Turrou           +33-5-61-50-97-01 (Fax)
31190 AUTERIVE                       +33-6-07-63-80-64 (GSM)
FRANCE                          Email: ohp@pyrenet.fr
------------------------------------------------------------------------------
Make your life a dream, make your dream a reality. (St Exupery)


Re: Feature request -- Log Database Name

From
ohp@pyrenet.fr
Date:
On Mon, 28 Jul 2003, Robert Treat wrote:

> Date: 28 Jul 2003 13:50:27 -0400
> From: Robert Treat <xzilla@users.sourceforge.net>
> To: ohp@pyrenet.fr
> Cc: Larry Rosenman <ler@lerctr.org>, Josh Berkus <josh@agliodbs.com>,
>      pgsql-hackers list <pgsql-hackers@postgresql.org>
> Subject: Re: [HACKERS] Feature request -- Log Database Name
>
> On Thu, 2003-07-24 at 11:23, ohp@pyrenet.fr wrote:
> > Also, I was thinking that we could "hide" a log table in a "special"
> > schema, like this:
> >
> > CREATE TABLE log (
> >     "when"  timestamp,
> >     "user"  text,
> >     "table" name,
> >     query   text,
> >     error   text
> > );
> >
> > So that if this table exists in a database, all error reporting would
> > be logged in this table.
> >
> > This sounds complicated, but IMHO it would be invaluable as a debugging aid.
> >
>
> I think better would be a GUC "log_to_table" which wrote all standard
> out/err to a pg_log table.  of course, I doubt you could make this
> foolproof (how to log startup errors in this table?) but it could be a
> start.
>
> Robert Treat
That would be great (although of course not foolproof); maybe we could do
both, just to be on the safe side.

This pg_log table should be local to each database, of course...

-- 
Olivier PRENANT                    Tel: +33-5-61-50-97-00 (Work)
6, Chemin d'Harraud Turrou           +33-5-61-50-97-01 (Fax)
31190 AUTERIVE                       +33-6-07-63-80-64 (GSM)
FRANCE                          Email: ohp@pyrenet.fr
------------------------------------------------------------------------------
Make your life a dream, make your dream a reality. (St Exupery)


Re: Feature request -- Log Database Name

From
Robert Treat
Date:
On Thu, 2003-07-24 at 11:23, ohp@pyrenet.fr wrote:
> Also, I was thinking that we could "hide" a log table in a "special"
> schema, like this:
> 
> CREATE TABLE log (
>     "when"  timestamp,
>     "user"  text,
>     "table" name,
>     query   text,
>     error   text
> );
> 
> So that if this table exists in a database, all error reporting would
> be logged in this table.
> 
> This sounds complicated, but IMHO it would be invaluable as a debugging aid.
> 

I think a better approach would be a GUC "log_to_table" which wrote all standard
out/err to a pg_log table.  Of course, I doubt you could make this
foolproof (how would you log startup errors in this table?), but it could be a
start.

Robert Treat
-- 
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL



Re: Feature request -- Log Database Name

From
Tom Lane
Date:
Robert Treat <xzilla@users.sourceforge.net> writes:
> I think better would be a GUC "log_to_table" which wrote all standard
> out/err to a pg_log table.  of course, I doubt you could make this
> foolproof (how to log startup errors in this table?) but it could be a
> start.

How would a failed transaction make any entries in such a table?  How
would you handle maintenance operations on the table that require
exclusive lock?  (vacuum full, reindex, etc)

It seems possible that you could make this work if you piped stderr to a
buffering process that was itself a database client, and issued INSERTs
to put the rows into the table, and could buffer pending data whenever
someone else had the table locked (eg for vacuum).  I'd not care to try
to get backends to do it locally.
        regards, tom lane


Re: Feature request -- Log Database Name

From
ohp@pyrenet.fr
Date:
On Mon, 28 Jul 2003, Tom Lane wrote:

> Date: Mon, 28 Jul 2003 21:39:23 -0400
> From: Tom Lane <tgl@sss.pgh.pa.us>
> To: Robert Treat <xzilla@users.sourceforge.net>
> Cc: ohp@pyrenet.fr, Larry Rosenman <ler@lerctr.org>,
>      Josh Berkus <josh@agliodbs.com>,
>      pgsql-hackers list <pgsql-hackers@postgresql.org>
> Subject: Re: [HACKERS] Feature request -- Log Database Name
>
> Robert Treat <xzilla@users.sourceforge.net> writes:
> > I think better would be a GUC "log_to_table" which wrote all standard
> > out/err to a pg_log table.  of course, I doubt you could make this
> > foolproof (how to log startup errors in this table?) but it could be a
> > start.
>
> How would a failed transaction make any entries in such a table?  How
> would you handle maintenance operations on the table that require
> exclusive lock?  (vacuum full, reindex, etc)
>
> It seems possible that you could make this work if you piped stderr to a
> buffering process that was itself a database client, and issued INSERTs
> to put the rows into the table, and could buffer pending data whenever
> someone else had the table locked (eg for vacuum).  I'd not care to try
> to get backends to do it locally.
>
>             regards, tom lane
Not quite; my goal is to have a log per database, and the stderr output doesn't
contain enough information to split it.

As an ISP, I would like each customer with one or more databases
to be able to see any errors on their databases.
I imagine having a log file per database would be too complicated...

-- 
Olivier PRENANT                    Tel: +33-5-61-50-97-00 (Work)
6, Chemin d'Harraud Turrou           +33-5-61-50-97-01 (Fax)
31190 AUTERIVE                       +33-6-07-63-80-64 (GSM)
FRANCE                          Email: ohp@pyrenet.fr
------------------------------------------------------------------------------
Make your life a dream, make your dream a reality. (St Exupery)


Re: Feature request -- Log Database Name

From
Andrew Dunstan
Date:
There seem to be 2 orthogonal issues here - in effect how to log and 
where to log. I had a brief look and providing an option to log the 
dbname where appropriate seems to be quite easy - unless someone else is 
already doing it I will look at it on the weekend. Assuming that were 
done you could split the log based on dbname.

For the reasons Tom gives, logging to a table looks much harder and 
possibly undesirable - I would normally want my log table(s) in a 
different database, possibly even on a different machine, from my 
production transactional database. However, an ISP might want to provide 
the logs for each client in their designated db. It therefore seems to 
me far more sensible to load logs into tables out of band as Tom 
suggests, possibly with some helper tools in contrib to parse the logs, 
or even to load them in more or less real time (many tools exist to do 
this sort of thing for web logs, so it is hardly rocket science - 
classic case for a perl script ;-).

cheers

andrew


ohp@pyrenet.fr wrote:

>On Mon, 28 Jul 2003, Tom Lane wrote:
>
>  
>
>>Date: Mon, 28 Jul 2003 21:39:23 -0400
>>From: Tom Lane <tgl@sss.pgh.pa.us>
>>To: Robert Treat <xzilla@users.sourceforge.net>
>>Cc: ohp@pyrenet.fr, Larry Rosenman <ler@lerctr.org>,
>>     Josh Berkus <josh@agliodbs.com>,
>>     pgsql-hackers list <pgsql-hackers@postgresql.org>
>>Subject: Re: [HACKERS] Feature request -- Log Database Name
>>
>>Robert Treat <xzilla@users.sourceforge.net> writes:
>>    
>>
>>>I think better would be a GUC "log_to_table" which wrote all standard
>>>out/err to a pg_log table.  of course, I doubt you could make this
>>>foolproof (how to log startup errors in this table?) but it could be a
>>>start.
>>>      
>>>
>>How would a failed transaction make any entries in such a table?  How
>>would you handle maintenance operations on the table that require
>>exclusive lock?  (vacuum full, reindex, etc)
>>
>>It seems possible that you could make this work if you piped stderr to a
>>buffering process that was itself a database client, and issued INSERTs
>>to put the rows into the table, and could buffer pending data whenever
>>someone else had the table locked (eg for vacuum).  I'd not care to try
>>to get backends to do it locally.
>>
>>            regards, tom lane
>>    
>>
>Not quite; my goal is to have a log per database, and the stderr output doesn't
>contain enough information to split it.
>
>As an ISP, I would like each customer with one or more databases
>to be able to see any errors on their databases.
>I imagine having a log file per database would be too complicated...
>  
>
>
>  
>




Re: Feature request -- Log Database Name

From
Bruce Momjian
Date:
One idea would be to output log information as INSERT statements, so we
could log connection/dbname/username to one table, and per-session
information to another table, and server-level info in a third table.

If you want to analyze the logs, you could load the data into a database
via inserts, and even do joins and analyze the output using SQL!

This would solve the problem of failed transactions exporting
information, would not be extra overhead for every log message, and
would handle the problem of analyzing the log tables while the system
was running and continuing to emit more log output.
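
For illustration, the emitted output might look roughly like this (every table
and column name here is invented for the sketch; the server produces nothing
of the kind today):

-- Hypothetical lines written for one session:
INSERT INTO log_connections (pid, conn_time, dbname, username)
    VALUES (3321, '2003-07-29 10:15:00', 'customer_db', 'web_user');
INSERT INTO log_messages (pid, log_time, severity, message)
    VALUES (3321, '2003-07-29 10:15:07', 'ERROR', 'relation "foo" does not exist');

Feeding such a file to psql against a scratch database would then make
per-database error reports an ordinary join on pid.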

---------------------------------------------------------------------------

Andrew Dunstan wrote:
> There seem to be 2 orthogonal issues here - in effect how to log and 
> where to log. I had a brief look and providing an option to log the 
> dbname where appropriate seems to be quite easy - unless someone else is 
> already doing it I will look at it on the weekend. Assuming that were 
> done you could split the log based on dbname.
> 
> For the reasons Tom gives, logging to a table looks much harder and 
> possibly undesirable - I would normally want my log table(s) in a 
> different database, possibly even on a different machine, from my 
> production transactional database. However, an ISP might want to provide 
> the logs for each client in their designated db. It therefore seems to 
> me far more sensible to load logs into tables out of band as Tom 
> suggests, possibly with some helper tools in contrib to parse the logs, 
> or even to load them in more or less real time (many tools exist to do 
> this sort of thing for web logs, so it is hardly rocket science - 
> classic case for a perl script ;-).
> 
> cheers
> 
> andrew
> 
> 
> ohp@pyrenet.fr wrote:
> 
> >On Mon, 28 Jul 2003, Tom Lane wrote:
> >
> >  
> >
> >>Date: Mon, 28 Jul 2003 21:39:23 -0400
> >>From: Tom Lane <tgl@sss.pgh.pa.us>
> >>To: Robert Treat <xzilla@users.sourceforge.net>
> >>Cc: ohp@pyrenet.fr, Larry Rosenman <ler@lerctr.org>,
> >>     Josh Berkus <josh@agliodbs.com>,
> >>     pgsql-hackers list <pgsql-hackers@postgresql.org>
> >>Subject: Re: [HACKERS] Feature request -- Log Database Name
> >>
> >>Robert Treat <xzilla@users.sourceforge.net> writes:
> >>    
> >>
> >>>I think better would be a GUC "log_to_table" which wrote all standard
> >>>out/err to a pg_log table.  of course, I doubt you could make this
> >>>foolproof (how to log startup errors in this table?) but it could be a
> >>>start.
> >>>      
> >>>
> >>How would a failed transaction make any entries in such a table?  How
> >>would you handle maintenance operations on the table that require
> >>exclusive lock?  (vacuum full, reindex, etc)
> >>
> >>It seems possible that you could make this work if you piped stderr to a
> >>buffering process that was itself a database client, and issued INSERTs
> >>to put the rows into the table, and could buffer pending data whenever
> >>someone else had the table locked (eg for vacuum).  I'd not care to try
> >>to get backends to do it locally.
> >>
> >>            regards, tom lane
> >>    
> >>
> >Not quite; my goal is to have a log per database, and the stderr output doesn't
> >contain enough information to split it.
> >
> >As an ISP, I would like each customer with one or more databases
> >to be able to see any errors on their databases.
> >I imagine having a log file per database would be too complicated...
> >  
> >
> >
> >  
> >
> 
> 
> 

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073


Re: Feature request -- Log Database Name

From
Andrew Dunstan
Date:
That assumes we know what the shape of the log tables will be, but this 
isn't quite clear to me - I can imagine it being different for different 
needs.  Having an external program to parse the logs into INSERT 
statements would not be hard, anyway, so I'm not sure that this would 
buy us much. I'll think about it more. In any case, it should be done in 
stages, I think, with the first stage simply being what we do now with 
the optional dbname field added.

cheers

andrew

Bruce Momjian wrote:

>One idea would be to output log information as INSERT statements, so we
>could log connection/dbname/username to one table, and per-session
>information to another table, and server-level info in a third table.
>
>If you want to analyze the logs, you could load the data into a database
>via inserts, and even do joins and analyze the output using SQL!
>
>This would solve the problem of failed transactions exporting
>information, would not be extra overhead for every log message, and
>would handle the problem of analyzing the log tables while the system
>was running and continuing to emit more log output.
>
>---------------------------------------------------------------------------
>
>Andrew Dunstan wrote:
>  
>
>>There seem to be 2 orthogonal issues here - in effect how to log and 
>>where to log. I had a brief look and providing an option to log the 
>>dbname where appropriate seems to be quite easy - unless someone else is 
>>already doing it I will look at it on the weekend. Assuming that were 
>>done you could split the log based on dbname.
>>
>>For the reasons Tom gives, logging to a table looks much harder and 
>>possibly undesirable - I would normally want my log table(s) in a 
>>different database, possibly even on a different machine, from my 
>>production transactional database. However, an ISP might want to provide 
>>the logs for each client in their designated db. It therefore seems to 
>>me far more sensible to load logs into tables out of band as Tom 
>>suggests, possibly with some helper tools in contrib to parse the logs, 
>>or even to load them in more or less real time (many tools exist to do 
>>this sort of thing for web logs, so it is hardly rocket science - 
>>classic case for a perl script ;-).
>>
>>cheers
>>
>>andrew
>>
>>
>>    
>>




Re: Feature request -- Log Database Name

From
Josh Berkus
Date:
Guys:

> That assumes we know what the shape of the log tables will be, but this
> isn't quite clear to me - I can imagine it being different for different
> needs.  Having an external program to parse the logs into INSERT
> statements would not be hard, anyway, so I'm not sure that this would
> buy us much. I'll think about it more. In any case, it should be done in

My simple suggestion would be to have the option of outputting log entries as
tab-delimited data.   Then the admin could very easily write a script to load
it into a table or tables; we could even supply a sample perl script on
techdocs or somewhere.
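
One possible shape for that workflow, sketched with plain COPY instead of a
perl script (the path, table, and columns below are all assumptions about what
the server might be configured to emit):

-- Text-format COPY already uses tab as its default delimiter.
CREATE TABLE log_import (
    log_time  timestamp,
    dbname    name,
    username  name,
    message   text
);

COPY log_import FROM '/var/log/postgresql/postgres.log';  -- or \copy from psql

SELECT log_time, message
FROM   log_import
WHERE  dbname = 'customer_db'
ORDER  BY log_time;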

--
-Josh Berkus
Aglio Database Solutions
San Francisco



Re: Feature request -- Log Database Name

From
Bruce Momjian
Date:
I was thinking of outputting CREATE TABLE statements at the start of the log file.

I see what you mean about the schemas being different, so we would
have to output the relevant fields all the time, like timestamp and
username; but because the username would be joined, you would only
output it at connection start, and not for each output line.
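
Concretely, the top of each log file might then start with something like this
(hypothetical, reusing the table names from the earlier sketch):

-- Emitted once at the start of the log file:
CREATE TABLE log_connections (pid int, conn_time timestamp, dbname name, username name);
CREATE TABLE log_messages    (pid int, log_time timestamp, severity text, message text);
-- Session-level facts go into log_connections once; each later message row
-- carries only the pid, so username and database come from a join.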

---------------------------------------------------------------------------

Andrew Dunstan wrote:
> 
> That assumes we know what the shape of the log tables will be, but this 
> isn't quite clear to me - I can imagine it being different for different 
> needs.  Having an external program to parse the logs into INSERT 
> statements would not be hard, anyway, so I'm not sure that this would 
> buy us much. I'll think about it more. In any case, it should be done in 
> stages, I think, with the first stage simply being what we do now with 
> the optional dbname field added.
> 
> cheers
> 
> andrew
> 
> Bruce Momjian wrote:
> 
> >One idea would be to output log information as INSERT statements, so we
> >could log connection/dbname/username to one table, and per-session
> >information to another table, and server-level info in a third table.
> >
> >If you want to analyze the logs, you could load the data into a database
> >via inserts, and even do joins and analyze the output using SQL!
> >
> >This would solve the problem of failed transactions exporting
> >information, would not be extra overhead for every log message, and
> >would handle the problem of analyzing the log tables while the system
> >was running and continuing to emit more log output.
> >
> >---------------------------------------------------------------------------
> >
> >Andrew Dunstan wrote:
> >  
> >
> >>There seem to be 2 orthogonal issues here - in effect how to log and 
> >>where to log. I had a brief look and providing an option to log the 
> >>dbname where appropriate seems to be quite easy - unless someone else is 
> >>already doing it I will look at it on the weekend. Assuming that were 
> >>done you could split the log based on dbname.
> >>
> >>For the reasons Tom gives, logging to a table looks much harder and 
> >>possibly undesirable - I would normally want my log table(s) in a 
> >>different database, possibly even on a different machine, from my 
> >>production transactional database. However, an ISP might want to provide 
> >>the logs for each client in their designated db. It therefore seems to 
> >>me far more sensible to load logs into tables out of band as Tom 
> >>suggests, possibly with some helper tools in contrib to parse the logs, 
> >>or even to load them in more or less real time (many tools exist to do 
> >>this sort of thing for web logs, so it is hardly rocket science - 
> >>classic case for a perl script ;-).
> >>
> >>cheers
> >>
> >>andrew
> >>
> >>
> >>    
> >>
> 
> 
> 

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073


Re: Feature request -- Log Database Name

From
Bruce Momjian
Date:
Josh Berkus wrote:
> Guys:
> 
> > That assumes we know what the shape of the log tables will be, but this 
> > isn't quite clear to me - I can imagine it being different for different 
> > needs.  Having an external program to parse the logs into INSERT 
> > statements would not be hard, anyway, so I'm not sure that this would 
> > buy us much. I'll think about it more. In any case, it should be done in 
> 
> My simple suggestion would be to have the option of outputting log entries as 
> tab-delimited data.   Then the admin could very easily write a script to load 
> it into a table or tables; we could even supply a sample perl script on 
> techdocs or somewhere.

The problem with that is needing to output to multiple tables ---
session-level, query-level, and server-level tables.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073