Thread: [PATCH] Incremental backup: add backup profile to base backup

[PATCH] Incremental backup: add backup profile to base backup

From
Marco Nenciarini
Date:
Hi Hackers,

This is the first piece of file level incremental backup support, as
described on wiki page https://wiki.postgresql.org/wiki/Incremental_backup

It is not yet complete, but I wish to share it on the list to receive
comments and suggestions.

The point of the patch is to add an option to pg_basebackup and to the
replication protocol BASE_BACKUP command to generate a backup_profile file.

When taking a full backup with pg_basebackup, the user can request
Postgres to generate a backup_profile file through the --profile option
(-B short option, which I've arbitrarily picked because both -P and
-p were already taken)

At the moment the backup profile consists of a file with one line per
data file, detailing modification time, MD5 checksum, size, tablespace and
path relative to the tablespace root (PGDATA or the tablespace directory).
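For example, a profile line might look something like this (purely
hypothetical formatting; the exact field order and separators are still
open for discussion):

2014-08-14 18:53:47 3b0e5c9f1a7d42c88e61a0f4d2b9c7a1 73728 pg_default base/12345/16384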

To calculate the md5 checksum I've used the md5 code present in pgcrypto
contrib as the code in src/include/libpq/md5.h is not suitable for large
files. Since a core feature cannot depend on a piece of contrib, I've
moved the files

contrib/pgcrypto/md5.c
contrib/pgcrypto/md5.h

to

src/backend/utils/hash/md5.c
src/include/utils/md5.h

changing the pgcrypto extension to use them.
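The libpq interface only hashes a buffer that is already entirely in
memory, which is what makes it awkward for multi-gigabyte segments; what
the profile needs is an incremental interface that is fed one chunk at a
time. A minimal sketch of that pattern follows, using OpenSSL's MD5_*
functions purely as a stand-in for the moved pgcrypto code:

/* Illustration only: stream a large file through an incremental MD5 API
 * (OpenSSL's here as a stand-in; the patch uses the moved pgcrypto code). */
#include <openssl/md5.h>
#include <stdio.h>

int
main(int argc, char **argv)
{
    unsigned char buf[65536];
    unsigned char digest[MD5_DIGEST_LENGTH];
    MD5_CTX     ctx;
    size_t      n;
    FILE       *f;

    if (argc != 2 || (f = fopen(argv[1], "rb")) == NULL)
        return 1;

    MD5_Init(&ctx);
    while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
        MD5_Update(&ctx, buf, n);       /* hash one chunk at a time */
    MD5_Final(digest, &ctx);
    fclose(f);

    for (int i = 0; i < MD5_DIGEST_LENGTH; i++)
        printf("%02x", digest[i]);
    putchar('\n');
    return 0;
}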

There are still some TODOs:

* User documentation

* Remove the pg_basebackup code duplication I've introduced with the
ReceiveAndUnpackTarFileToDir function, which is almost the same as
ReceiveAndUnpackTarFile but does not expect to handle a tablespace. It
instead simply extracts a tar stream into a destination directory. The
latter could probably be rewritten using the former as a component, but it
needs some adjustment to the "progress reporting" part, which is not
present at the moment in ReceiveAndUnpackTarFileToDir.

* Add a header section to the backup_profile file, which at the moment
contains only the body part. I'm thinking of changing the original design
and putting the whole backup label in the header, which is IMHO clearer
and well known. I would use something like:

START WAL LOCATION: 0/E000028 (file 00000001000000000000000E)
CHECKPOINT LOCATION: 0/E000060
BACKUP METHOD: streamed
BACKUP FROM: master
START TIME: 2014-08-14 18:54:01 CEST
LABEL: pg_basebackup base backup
END LABEL

I've attached the current patch based on master.

Any comment will be appreciated.

Regards,
Marco

--
Marco Nenciarini - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
marco.nenciarini@2ndQuadrant.it | www.2ndQuadrant.it


Re: [PATCH] Incremental backup: add backup profile to base backup

From
Alvaro Herrera
Date:
Marco Nenciarini wrote:

> To calculate the md5 checksum I've used the md5 code present in pgcrypto
> contrib as the code in src/include/libpq/md5.h is not suitable for large
> files. Since a core feature cannot depend on a piece of contrib, I've
> moved the files
> 
> contrib/pgcrypto/md5.c
> contrib/pgcrypto/md5.h
> 
> to
> 
> src/backend/utils/hash/md5.c
> src/include/utils/md5.h
> 
> changing the pgcrypto extension to use them.

We already have the FNV checksum implementation in the backend -- can't
we use that one for this and avoid messing with MD5?

(I don't think we're looking for a cryptographic hash here.  Am I wrong?)
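For reference, the non-cryptographic alternative really is tiny. Here is a
textbook 32-bit FNV-1a sketch, just for illustration; the backend's
page-checksum code uses its own FNV-derived variant rather than exactly
this:

/* Textbook 32-bit FNV-1a over a buffer (sketch only; the backend's
 * page-checksum code uses its own FNV-derived variant). */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

static uint32_t
fnv1a_32(const unsigned char *buf, size_t len)
{
    uint32_t    hash = 2166136261u;     /* FNV offset basis */

    for (size_t i = 0; i < len; i++)
    {
        hash ^= buf[i];
        hash *= 16777619u;              /* FNV prime */
    }
    return hash;
}

int
main(void)
{
    const char *msg = "example block contents";

    printf("fnv1a = %08x\n", fnv1a_32((const unsigned char *) msg, strlen(msg)));
    return 0;
}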

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services



Re: [PATCH] Incremental backup: add backup profile to base backup

From
Heikki Linnakangas
Date:
On 08/18/2014 03:04 AM, Marco Nenciarini wrote:
> Hi Hackers,
>
> This is the first piece of file level incremental backup support, as
> described on wiki page https://wiki.postgresql.org/wiki/Incremental_backup
>
> It is not yet complete, but I wish to share it on the list to receive
> comments and suggestions.
>
> The point of the patch is adding an option to pg_basebackup and
> replication protocol BASE_BACKUP command to generate a backup_profile file.

This is difficult to review in isolation. We would have to see the whole 
solution to tell whether this makes sense.

If we ever want to make this more granular than per-file (and I think we 
should, because otherwise you might as well just use rsync), how would 
this be extended?

- Heikki




Re: [PATCH] Incremental backup: add backup profile to base backup

From
Heikki Linnakangas
Date:
On 08/18/2014 08:05 AM, Alvaro Herrera wrote:
> Marco Nenciarini wrote:
>
>> To calculate the md5 checksum I've used the md5 code present in pgcrypto
>> contrib as the code in src/include/libpq/md5.h is not suitable for large
>> files. Since a core feature cannot depend on a piece of contrib, I've
>> moved the files
>>
>> contrib/pgcrypto/md5.c
>> contrib/pgcrypto/md5.h
>>
>> to
>>
>> src/backend/utils/hash/md5.c
>> src/include/utils/md5.h
>>
>> changing the pgcrypto extension to use them.
>
> We already have the FNV checksum implementation in the backend -- can't
> we use that one for this and avoid messing with MD5?
>
> (I don't think we're looking for a cryptographic hash here.  Am I wrong?)

Hmm. Any user that can update a table can craft such an update that its 
checksum matches an older backup. That may seem like an onerous task; to 
correctly calculate the checksum of a file in a previous backup, you need to 
know the LSNs and the exact data, including deleted data, on every block 
in the table, and then construct a suitable INSERT or UPDATE that 
modifies the table such that you get a collision. But for some tables it 
could be trivial; you might know that a table was bulk-loaded with a 
particular LSN and there are no dead tuples. Or you can simply create 
your own table and insert exactly the data you want. Messing with your 
own table might seem harmless, but it'll e.g. let you construct a case 
where an index points to a tuple that doesn't exist anymore, or there's 
a row that doesn't pass a CHECK-constraint that was added later. Even if 
there's no direct security issue with that, you don't want that kind of 
uncertainty from a backup solution.

But more to the point, I thought the consensus was to use the highest 
LSN of all the blocks in the file, no? That's essentially free to 
calculate (if you have to read all the data anyway), and isn't 
vulnerable to collisions.
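As a standalone sketch of that approach, assuming the standard 8kB page
layout in which pd_lsn occupies the first eight bytes of every page as two
32-bit words (high word first, host byte order):

/* Minimal sketch (not the patch itself): scan one relation segment and
 * report the highest page LSN.  Assumes the standard 8kB page layout in
 * which pd_lsn is stored in the first eight bytes as two 32-bit words,
 * high half first, in host byte order. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define BLCKSZ 8192

int
main(int argc, char **argv)
{
    unsigned char page[BLCKSZ];
    uint64_t    max_lsn = 0;
    FILE       *f;

    if (argc != 2)
    {
        fprintf(stderr, "usage: %s <relation segment file>\n", argv[0]);
        return 1;
    }
    if ((f = fopen(argv[1], "rb")) == NULL)
    {
        perror("fopen");
        return 1;
    }
    while (fread(page, 1, BLCKSZ, f) == BLCKSZ)
    {
        uint32_t    hi, lo;
        uint64_t    lsn;

        memcpy(&hi, page, sizeof(hi));      /* pd_lsn.xlogid  */
        memcpy(&lo, page + 4, sizeof(lo));  /* pd_lsn.xrecoff */
        lsn = ((uint64_t) hi << 32) | lo;
        if (lsn > max_lsn)
            max_lsn = lsn;
    }
    fclose(f);
    printf("highest page LSN: %X/%X\n",
           (unsigned int) (max_lsn >> 32), (unsigned int) max_lsn);
    return 0;
}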

- Heikki




Re: [PATCH] Incremental backup: add backup profile to base backup

From
Arthur Silva
Date:
On Mon, Aug 18, 2014 at 10:05 AM, Heikki Linnakangas <hlinnakangas@vmware.com> wrote:
> On 08/18/2014 08:05 AM, Alvaro Herrera wrote:
>> Marco Nenciarini wrote:
>>
>>> To calculate the md5 checksum I've used the md5 code present in pgcrypto
>>> contrib as the code in src/include/libpq/md5.h is not suitable for large
>>> files. Since a core feature cannot depend on a piece of contrib, I've
>>> moved the files
>>>
>>> contrib/pgcrypto/md5.c
>>> contrib/pgcrypto/md5.h
>>>
>>> to
>>>
>>> src/backend/utils/hash/md5.c
>>> src/include/utils/md5.h
>>>
>>> changing the pgcrypto extension to use them.
>>
>> We already have the FNV checksum implementation in the backend -- can't
>> we use that one for this and avoid messing with MD5?
>>
>> (I don't think we're looking for a cryptographic hash here.  Am I wrong?)
>
> Hmm. Any user that can update a table can craft such an update that its
> checksum matches an older backup. That may seem like an onerous task; to
> correctly calculate the checksum of a file in a previous, you need to
> know the LSNs and the exact data, including deleted data, on every block
> in the table, and then construct a suitable INSERT or UPDATE that
> modifies the table such that you get a collision. But for some tables it
> could be trivial; you might know that a table was bulk-loaded with a
> particular LSN and there are no dead tuples. Or you can simply create
> your own table and insert exactly the data you want. Messing with your
> own table might seem harmless, but it'll e.g. let you construct a case
> where an index points to a tuple that doesn't exist anymore, or there's
> a row that doesn't pass a CHECK-constraint that was added later. Even if
> there's no direct security issue with that, you don't want that kind of
> uncertainty from a backup solution.
>
> But more to the point, I thought the consensus was to use the highest
> LSN of all the blocks in the file, no? That's essentially free to
> calculate (if you have to read all the data anyway), and isn't
> vulnerable to collisions.
>
> - Heikki

We also have both crc32 and crc64 implementations in pg_crc. If the goal is
just verifying file integrity (we can't really protect against intentional
modification) crc sounds more appropriate to me.

Re: [PATCH] Incremental backup: add backup profile to base backup

From
Alvaro Herrera
Date:
Heikki Linnakangas wrote:
> On 08/18/2014 08:05 AM, Alvaro Herrera wrote:

> >We already have the FNV checksum implementation in the backend -- can't
> >we use that one for this and avoid messing with MD5?
> >
> >(I don't think we're looking for a cryptographic hash here.  Am I wrong?)
> 
> Hmm. Any user that can update a table can craft such an update that
> its checksum matches an older backup. That may seem like an onerous
> task; to correctly calculate the checksum of a file in a previous,
> you need to know the LSNs and the exact data, including deleted
> data, on every block in the table, and then construct a suitable
> INSERT or UPDATE that modifies the table such that you get a
> collision. But for some tables it could be trivial; you might know
> that a table was bulk-loaded with a particular LSN and there are no
> dead tuples.

What would anybody obtain by doing that?  The only benefit is that the
file you so carefully crafted is not included in the next incremental
backup.  How is this of any interest?

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services



Re: [PATCH] Incremental backup: add backup profile to base backup

From
Heikki Linnakangas
Date:
On 08/18/2014 07:33 PM, Alvaro Herrera wrote:
> Heikki Linnakangas wrote:
>> On 08/18/2014 08:05 AM, Alvaro Herrera wrote:
>
>>> We already have the FNV checksum implementation in the backend --
>>> can't we use that one for this and avoid messing with MD5?
>>>
>>> (I don't think we're looking for a cryptographic hash here.  Am I
>>> wrong?)
>>
>> Hmm. Any user that can update a table can craft such an update
>> that its checksum matches an older backup. That may seem like an
>> onerous task; to correctly calculate the checksum of a file in a
>> previous, you need to know the LSNs and the exact data, including
>> deleted data, on every block in the table, and then construct a
>> suitable INSERT or UPDATE that modifies the table such that you get
>> a collision. But for some tables it could be trivial; you might
>> know that a table was bulk-loaded with a particular LSN and there
>> are no dead tuples.
>
> What would anybody obtain by doing that?  The only benefit is that
> the file you so carefully crafted is not included in the next
> incremental backup.  How is this of any interest?

You're not thinking evil enough ;-). Let's say that you have a table 
that stores bank transfers. You can do a bank transfer to pay a 
merchant, get the goods delivered to you, and then a second transfer to 
yourself with a specially crafted message attached to it that makes the 
checksum match the state before the first transfer. If the backup is 
restored (e.g. by a daily batch job to a reporting system), it will 
appear as if neither transfer happened, and you get to keep your money.

Far-fetched? Well, how about this scenario: a vandal just wants to cause 
damage. Creating a situation where a restore from backup causes the 
system to be inconsistent will certainly cause headaches to the admins, 
and leave them wondering what else is corrupt.

Or how about this: you can do the trick to a system catalog, say 
pg_attribute, to make it look like a column is of type varlena, when 
it's actually since been ALTERed to be an integer. Now you can access 
arbitrary memory in the server, and take over the whole system.

I'm sure any or all of those scenarios are highly impractical when you 
actually sit down and try to do it, but you don't want to rely on that. 
You have to be able to trust your backups.

- Heikki



Re: [PATCH] Incremental backup: add backup profile to base backup

From
Amit Kapila
Date:
On Mon, Aug 18, 2014 at 6:35 PM, Heikki Linnakangas <hlinnakangas@vmware.com> wrote:
>
> On 08/18/2014 08:05 AM, Alvaro Herrera wrote:
>>
>> Marco Nenciarini wrote:
>>
>>> To calculate the md5 checksum I've used the md5 code present in pgcrypto
>>> contrib as the code in src/include/libpq/md5.h is not suitable for large
>>> files. Since a core feature cannot depend on a piece of contrib, I've
>>> moved the files
>>>
>>> contrib/pgcrypto/md5.c
>>> contrib/pgcrypto/md5.h
>>>
>>> to
>>>
>>> src/backend/utils/hash/md5.c
>>> src/include/utils/md5.h
>>>
>>> changing the pgcrypto extension to use them.
>>
>>
>> We already have the FNV checksum implementation in the backend -- can't
>> we use that one for this and avoid messing with MD5?
>>
>> (I don't think we're looking for a cryptographic hash here.  Am I wrong?)
>
>
> Hmm. Any user that can update a table can craft such an update that its checksum matches an older backup. That may seem like an onerous task; to correctly calculate the checksum of a file in a previous, you need to know the LSNs and the exact data, including deleted data, on every block in the table, and then construct a suitable INSERT or UPDATE that modifies the table such that you get a collision. But for some tables it could be trivial; you might know that a table was bulk-loaded with a particular LSN and there are no dead tuples. Or you can simply create your own table and insert exactly the data you want. Messing with your own table might seem harmless, but it'll e.g. let you construct a case where an index points to a tuple that doesn't exist anymore, or there's a row that doesn't pass a CHECK-constraint that was added later. Even if there's no direct security issue with that, you don't want that kind of uncertainty from a backup solution.
>
> But more to the point, I thought the consensus was to use the highest LSN of all the blocks in the file, no?

If we want to use the highest LSN, then maybe it would be
helpful for the author to reuse some of the code I have
written for the Compute Max LSN utility, which didn't get committed
because there wasn't enough of a use case for it.
The latest patch for it can be found at:


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

Re: [PATCH] Incremental backup: add backup profile to base backup

From
Heikki Linnakangas
Date:
I think this has had enough review for a WIP patch. I'm marking this as 
Returned with Feedback in the commitfest because:

* should use LSNs instead of an MD5 checksum
* this doesn't do anything useful on its own, hence we would need to see 
the whole solution before committing
* not clear how this would be extended if you wanted to do diffs more 
fine-grained than file-level.

But please feel free to continue discussing those items.

On 08/18/2014 03:04 AM, Marco Nenciarini wrote:
> Hi Hackers,
>
> This is the first piece of file level incremental backup support, as
> described on wiki page https://wiki.postgresql.org/wiki/Incremental_backup
>
> It is not yet complete, but I wish to share it on the list to receive
> comments and suggestions.
>
> The point of the patch is adding an option to pg_basebackup and
> replication protocol BASE_BACKUP command to generate a backup_profile file.
>
> When taking a full backup with pg_basebackup, the user can request
> Postgres to generate a backup_profile file through the --profile option
> (-B short option, which I've arbitrarily picked up because both -P and
> -p was already taken)
>
> At the moment the backup profile consists of a file with one line per
> file detailing modification time, md5, size, tablespace and path
> relative to tablespace root (PGDATA or the tablespace)
>
> To calculate the md5 checksum I've used the md5 code present in pgcrypto
> contrib as the code in src/include/libpq/md5.h is not suitable for large
> files. Since a core feature cannot depend on a piece of contrib, I've
> moved the files
>
> contrib/pgcrypto/md5.c
> contrib/pgcrypto/md5.h
>
> to
>
> src/backend/utils/hash/md5.c
> src/include/utils/md5.h
>
> changing the pgcrypto extension to use them.
>
> There are still some TODOs:
>
> * User documentation
>
> * Remove the pg_basebackup code duplication I've introduced with the
> ReceiveAndUnpackTarFileToDir function, which is almost the same of
> ReceiveAndUnpackTarFile but does not expect to handle a tablespace. It
> instead simply extract a tar stream in a destination directory. The
> latter could probably be rewritten using the former as component, but it
> needs some adjustment to the "progress reporting" part, which is not
> present at the moment in ReceiveAndUnpackTarFileToDir.
>
> * Add header section to backup_profile file which at the moment contains
> only the body part. I'm thinking to change the original design and put
> the whole backup label as header, which is IMHO clearer and well known.
> I would use something like:
>
> START WAL LOCATION: 0/E000028 (file 00000001000000000000000E)
> CHECKPOINT LOCATION: 0/E000060
> BACKUP METHOD: streamed
> BACKUP FROM: master
> START TIME: 2014-08-14 18:54:01 CEST
> LABEL: pg_basebackup base backup
> END LABEL
>
> I've attached the current patch based on master.
>
> Any comment will be appreciated.
>
> Regards,
> Marco
>


-- 
- Heikki



Re: [PATCH] Incremental backup: add backup profile to base backup

From
Bruce Momjian
Date:
On Mon, Aug 18, 2014 at 04:05:07PM +0300, Heikki Linnakangas wrote:
> But more to the point, I thought the consensus was to use the
> highest LSN of all the blocks in the file, no? That's essentially
> free to calculate (if you have to read all the data anyway), and
> isn't vulnerable to collisions.

The highest-LSN approach allows you to read only the tail part of each
8k block.  Assuming 512-byte storage sector sizes, you only have to read
1/8 of the file.

Now, the problem is that you lose kernel prefetch, but maybe
posix_fadvise() would fix that problem.
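As a sketch of that sparse-read pattern: the variant below is the same
max-LSN scan as the sketch upthread, but it pulls a single 512-byte sector
out of each 8kB block (the sector holding the page header, since pd_lsn
lives at the start of the page) and uses posix_fadvise() to hint the
kernel. Illustration only:

/* Sketch: read one 512-byte sector per 8kB block (the one containing the
 * page header, where pd_lsn lives) and track the highest page LSN, with a
 * posix_fadvise() hint to keep the kernel reading ahead. */
#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#define BLCKSZ   8192
#define SECTORSZ 512

int
main(int argc, char **argv)
{
    unsigned char sector[SECTORSZ];
    uint64_t    max_lsn = 0;
    off_t       off = 0;
    int         fd;

    if (argc != 2 || (fd = open(argv[1], O_RDONLY)) < 0)
        return 1;

    /* Hint the kernel despite the sparse access pattern. */
    (void) posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

    while (pread(fd, sector, SECTORSZ, off) == SECTORSZ)
    {
        uint32_t    hi, lo;
        uint64_t    lsn;

        memcpy(&hi, sector, sizeof(hi));        /* pd_lsn.xlogid  */
        memcpy(&lo, sector + 4, sizeof(lo));    /* pd_lsn.xrecoff */
        lsn = ((uint64_t) hi << 32) | lo;
        if (lsn > max_lsn)
            max_lsn = lsn;
        off += BLCKSZ;
    }
    close(fd);
    printf("highest page LSN: %X/%X\n",
           (unsigned int) (max_lsn >> 32), (unsigned int) max_lsn);
    return 0;
}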

-- 
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + Everyone has their own god. +



Re: [PATCH] Incremental backup: add backup profile to base backup

From
Claudio Freire
Date:
On Wed, Aug 20, 2014 at 8:24 PM, Bruce Momjian <bruce@momjian.us> wrote:
> On Mon, Aug 18, 2014 at 04:05:07PM +0300, Heikki Linnakangas wrote:
>> But more to the point, I thought the consensus was to use the
>> highest LSN of all the blocks in the file, no? That's essentially
>> free to calculate (if you have to read all the data anyway), and
>> isn't vulnerable to collisions.
>
> The highest-LSN approach allows you to read only the tail part of each
> 8k block.  Assuming 512-byte storage sector sizes, you only have to read
> 1/8 of the file.
>
> Now, the problem is that you lose kernel prefetch, but maybe
> posix_fadvise() would fix that problem.

Sequential read of 512-byte blocks or 8k blocks takes the same amount
of time in rotating media (if they're scheduled right). Maybe not in
SSD media.

Not only that, the kernel will read in 4k blocks instead of 8k (at least in Linux).

So, the benefit is dubious.



Re: [PATCH] Incremental backup: add backup profile to base backup

From
Robert Haas
Date:
On Mon, Aug 18, 2014 at 4:55 PM, Heikki Linnakangas
<hlinnakangas@vmware.com> wrote:
> You're not thinking evil enough ;-). Let's say that you have a table that
> stores bank transfers. You can do a bank transfer to pay a merchant, get the
> goods delivered to you, and then a second transfer to yourself with a
> specially crafted message attached to it that makes the checksum match the
> state before the first transfer. If the backup is restored (e.g. by a daily
> batch job to a reporting system), it will appear as if neither transfer
> happened, and you get to keep your money.
>
> Far-fetched? Well, how about this scenario: a vandal just wants to cause
> damage. Creating a situation where a restore from backup causes the system
> to be inconsistent will certainly cause headaches to the admins, and leave
> them wondering what else is corrupt.
>
> Or how about this: you can do the trick to a system catalog, say
> pg_attribute, to make it look like a column is of type varlena, when it's
> actually since been ALTERed to be an integer. Now you can access arbitrary
> memory in the server, and take over the whole system.
>
> I'm sure any or all of those scenarios are highly inpractical when you
> actually sit down and try to do it, but you don't want to rely on that. You
> have to be able to trust your backups.

Yeah.  I agree that these scenarios are far-fetched; however, they're
also preventable, so we should prevent them.

Also, with respect to checksum collisions, you figure to have an
*accidental* checksum collision every so often as well.  For block
checksums, we're using a 16-bit value, which is OK because we'll still
detect 65535/65536 = 99.998% of corruption events.  The requirements
are much higher for incremental backup, because a checksum collision
here means automatic data corruption.  If we were crazy enough to use
a 16-bit block-level checksum in this context, about one
actually-modified block would fail to get copied out of every 8kB *
64k = 512MB of modified data, which would not make anybody very happy.
A 32-bit checksum would be much safer, and a 64-bit checksum better
still, but block LSNs seem better than either.
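A back-of-the-envelope sketch of those numbers, assuming a modified block
slips past an n-bit checksum with probability 2^-n:

/* Back-of-envelope: expected volume of modified data per silently-missed
 * block, assuming a change collides with an n-bit checksum with
 * probability 2^-n, with 8kB blocks.  (A 64-bit checksum pushes this out
 * to roughly 2^57 MB.) */
#include <stdio.h>
#include <stdint.h>

int
main(void)
{
    const double blcksz = 8192.0;
    const int   bits[] = {16, 32};

    for (int i = 0; i < 2; i++)
    {
        /* one missed block per 2^bits modified blocks */
        double      blocks = (double) (1ULL << bits[i]);
        double      mb = blcksz * blocks / (1024.0 * 1024.0);

        printf("%2d-bit checksum: ~one missed block per %.0f MB of modified data\n",
               bits[i], mb);
    }
    return 0;
}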

Of course, there's one case where block LSNs aren't better, which is
where somebody goes backwards in time - i.e. back up the database, run
it for a while, take a base backup, shut down, restore from backup, do
stuff that's different from what you did the first time through, try
to take an incremental backup against your base backup.  LSNs won't
catch that; checksums will.  Do we want to allow incremental backup in
that kind of situation?  It seems like playing with fire, but it's
surely not useless.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [PATCH] Incremental backup: add backup profile to base backup

From
Robert Haas
Date:
On Wed, Aug 20, 2014 at 7:33 PM, Claudio Freire <klaussfreire@gmail.com> wrote:
> On Wed, Aug 20, 2014 at 8:24 PM, Bruce Momjian <bruce@momjian.us> wrote:
>> On Mon, Aug 18, 2014 at 04:05:07PM +0300, Heikki Linnakangas wrote:
>>> But more to the point, I thought the consensus was to use the
>>> highest LSN of all the blocks in the file, no? That's essentially
>>> free to calculate (if you have to read all the data anyway), and
>>> isn't vulnerable to collisions.
>>
>> The highest-LSN approach allows you to read only the tail part of each
>> 8k block.  Assuming 512-byte storage sector sizes, you only have to read
>> 1/8 of the file.
>>
>> Now, the problem is that you lose kernel prefetch, but maybe
>> posix_fadvise() would fix that problem.
>
> Sequential read of 512-byte blocks or 8k blocks takes the same amount
> of time in rotating media (if they're scheduled right). Maybe not in
> SSD media.
>
> Not only, the kernel will read in 4k blocks, instead of 8k (at least in linux).
>
> So, the benefit is dubious.

Agreed.  But, there could be a CPU benefit, too.  Pulling the LSN out
of a block is probably a lot cheaper than checksumming the whole
thing.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [PATCH] Incremental backup: add backup profile to base backup

From
Bruce Momjian
Date:
On Fri, Aug 22, 2014 at 12:00:19PM -0400, Robert Haas wrote:
> On Wed, Aug 20, 2014 at 7:33 PM, Claudio Freire <klaussfreire@gmail.com> wrote:
> > On Wed, Aug 20, 2014 at 8:24 PM, Bruce Momjian <bruce@momjian.us> wrote:
> >> On Mon, Aug 18, 2014 at 04:05:07PM +0300, Heikki Linnakangas wrote:
> >>> But more to the point, I thought the consensus was to use the
> >>> highest LSN of all the blocks in the file, no? That's essentially
> >>> free to calculate (if you have to read all the data anyway), and
> >>> isn't vulnerable to collisions.
> >>
> >> The highest-LSN approach allows you to read only the tail part of each
> >> 8k block.  Assuming 512-byte storage sector sizes, you only have to read
> >> 1/8 of the file.
> >>
> >> Now, the problem is that you lose kernel prefetch, but maybe
> >> posix_fadvise() would fix that problem.
> >
> > Sequential read of 512-byte blocks or 8k blocks takes the same amount
> > of time in rotating media (if they're scheduled right). Maybe not in
> > SSD media.
> >
> > Not only, the kernel will read in 4k blocks, instead of 8k (at least in linux).
> >
> > So, the benefit is dubious.
> 
> Agreed.  But, there could be a CPU benefit, too.  Pulling the LSN out
> of a block is probably a lot cheaper than checksumming the whole
> thing.

Oh, good, I hoped someone would find something useful about my idea.  :-)

-- 
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + Everyone has their own god. +