Thread: multi billion row tables: possible or insane?

multi billion row tables: possible or insane?

From: Ramon Bastiaans
Hi all,

I am doing research for a project of mine where I need to store several
billion values for a monitoring and historical tracking system for a big
computer system. My current estimate is that I have to store (somehow)
around 1 billion values each month (possibly more).

I was wondering if anyone has had any experience with this kind of data
volume in a PostgreSQL database and how this affects database
design and optimization.

What would be important issues when setting up a database this big, and
is it at all doable? Or would it be insane to think about storing up
to 5-10 billion rows in a postgres database.

The database's performance is important. There would be no use in
storing the data if a query will take ages. Queries should be quite fast
if possible.

I would really like to hear people's thoughts/suggestions or "go see a
shrink, you must be mad" statements ;)

Kind regards,

Ramon Bastiaans



Re: multi billion row tables: possible or insane?

From: Jeff
On Mar 1, 2005, at 4:34 AM, Ramon Bastiaans wrote:
>
> What would be important issues when setting up a database this big,
> and is it at all doable? Or would it be insane to think about
> storing up to 5-10 billion rows in a postgres database.
>

Buy a bunch of disks.
And then go out and buy more disks.
When you are done with that - go buy some more disks.
Then buy some ram.
Then buy more disks.

You want the fastest IO possible.

I'd also recommend the Opteron route, since you can put heaping gobs of
RAM in there as well.

> The database's performance is important. There would be no use in
> storing the data if a query will take ages. Queries should be quite
> fast if possible.
>

And make sure you tune your queries.

--
Jeff Trout <jeff@jefftrout.com>
http://www.jefftrout.com/
http://www.stuarthamm.net/


Re: multi billion row tables: possible or insane?

From: Markus Schaber
Hi, Ramon,

Ramon Bastiaans schrieb:

> The database's performance is important. There would be no use in
> storing the data if a query will take ages. Queries should be quite fast
> if possible.

Which kinds of queries do you want to run?

Queries that involve only a few rows should stay quite fast when you set
up the right indices.

However, queries that involve sequential scans over your table (like
average computation) will take years. Get faaaaaast I/O for this. Or,
better, use a multidimensional data warehouse engine. Those can
precalculate needed aggregate functions and reports. But they need loads
of storage (because of very redundant data storage), and I don't know of
any open source or cheap software.
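
As a rough, hypothetical sketch (table and column names are made up, not
from your schema):

-- A selective lookup stays fast with a matching index:
CREATE INDEX samples_host_time_idx ON samples (host_id, sample_time);

SELECT sample_time, value
  FROM samples
 WHERE host_id = 42
   AND sample_time >= '2005-02-28'
   AND sample_time <  '2005-03-01';

-- An aggregate over the whole table still has to read every row:
SELECT avg(value) FROM samples;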

Markus
--
markus schaber | dipl. informatiker
logi-track ag | rennweg 14-16 | ch 8001 zürich
phone +41-43-888 62 52 | fax +41-43-888 62 53
mailto:schabios@logi-track.com | www.logi-track.com

Re: multi billion row tables: possible or insane?

From: John Arbash Meinel
Ramon Bastiaans wrote:

> Hi all,
>
> I am doing research for a project of mine where I need to store
> several billion values for a monitoring and historical tracking system
> for a big computer system. My current estimate is that I have to store
> (somehow) around 1 billion values each month (possibly more).
>
If you have that 1 billion perfectly distributed over all hours of the
day, then you need 1e9/30/24/3600 = 385 transactions per second.

Which I'm pretty sure is possible with Postgres; you just need pretty
beefy hardware. And like Jeff said, lots of disks for lots of IO.
Like a quad Opteron with 16GB of RAM and around 14-20 very fast disks,
RAID 10 not RAID 5, etc. To improve query performance, you can do some
load balancing by adding replica machines using Slony.

Or if you can do batch processing, you could split up the work into a
few update machines, which then do bulk updates on the master database.
This lets you get more machines into the job, since you can't share a
database across multiple machines.
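
A minimal sketch of what the bulk path could look like, assuming a
hypothetical "samples" table (COPY avoids the per-row transaction
overhead):

-- Each update machine accumulates values in a flat file, then pushes it
-- to the master periodically:
COPY samples FROM '/var/spool/collector/batch-0001.dat';

-- Or from a client machine, using psql so the file doesn't have to live
-- on the server:
--   \copy samples from 'batch-0001.dat'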

> I was wondering if anyone has had any experience with this kind of
> data volume in a PostgreSQL database and how this affects
> database design and optimization.
>
Well, one of the biggest things is if you can get bulk updates, or if
clients can handle data being slightly out of date, so you can use
caching. Can you segregate your data into separate tables as much as
possible? Are your clients okay if aggregate information takes a little
while to update?

One trick is to use semi-lazy materialized views to get your updates to
be fast.
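
For example, a hand-rolled summary table refreshed from cron, something
along these lines (all names hypothetical):

CREATE TABLE hourly_summary (
    host_id    integer,
    hour_start timestamptz,
    n_samples  bigint,
    avg_value  double precision,
    PRIMARY KEY (host_id, hour_start)
);

-- Refresh only the recent window every few minutes, instead of updating
-- the aggregate on every insert:
BEGIN;
DELETE FROM hourly_summary
 WHERE hour_start >= date_trunc('hour', now()) - interval '2 hours';
INSERT INTO hourly_summary (host_id, hour_start, n_samples, avg_value)
    SELECT host_id, date_trunc('hour', sample_time), count(*), avg(value)
      FROM samples
     WHERE sample_time >= date_trunc('hour', now()) - interval '2 hours'
     GROUP BY host_id, date_trunc('hour', sample_time);
COMMIT;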

> What would be important issues when setting up a database this big,
> and is it at all doable? Or would it be insane to think about
> storing up to 5-10 billion rows in a postgres database.

I think if you can design the db properly, it is doable. But if you
have clients saying "I need up-to-the-second information on 1 billion
rows", you're never going to get it.

>
> The database's performance is important. There would be no use in
> storing the data if a query will take ages. Queries should be quite
> fast if possible.
>
Again, it depends on the queries being done.
There are some nice tricks you can use, like doing month-by-month
partitioning (if you are getting 1 billion inserts a month, you might want
week-by-week partitioning), and then, with a date column index and a UNION
ALL view, you should be able to get pretty good insert speed and still keep
fast *recent* queries. Going through 1 billion rows is always going to be
expensive.
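
Roughly like this (hypothetical names; one table per week, a view for
cross-partition reporting, recent queries aimed at the newest table):

CREATE TABLE samples_2005w09 (
    sample_time timestamptz NOT NULL,
    host_id     integer     NOT NULL,
    value       double precision
);
CREATE TABLE samples_2005w10 (LIKE samples_2005w09);

CREATE INDEX samples_2005w09_time ON samples_2005w09 (sample_time);
CREATE INDEX samples_2005w10_time ON samples_2005w10 (sample_time);

-- One name for reports that span partitions:
CREATE VIEW samples_all AS
    SELECT * FROM samples_2005w09
    UNION ALL
    SELECT * FROM samples_2005w10;

-- Recent queries go straight to the current partition and stay fast:
SELECT avg(value)
  FROM samples_2005w10
 WHERE sample_time >= now() - interval '1 day';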

> I would really like to hear people's thoughts/suggestions or "go see a
> shrink, you must be mad" statements ;)
>
> Kind regards,
>
> Ramon Bastiaans

I think it would be possible, but there are a lot of design issues with
a system like this. You can't go into it thinking that you can design a
multi-billion-row database the same way you would design a million-row db.

John
=:->



Re: multi billion row tables: possible or insane?

From: "Vig, Sandor (G/FI-2)"
385 transaction/sec?

fsync = false

risky but fast.


Re: multi billion row tables: possible or insane?

From: John Arbash Meinel
Vig, Sandor (G/FI-2) wrote:

>385 transaction/sec?
>
>fsync = false
>
>risky but fast.
>
>

I think with a dedicated RAID10 for pg_xlog (or possibly a
battery-backed ramdisk), and then a good amount of disks in a bulk RAID10 or
possibly a good partitioning of the db across multiple raids, you could
probably get a good enough tps.

But you're right, fsync=false could certainly give you the performance,
though a power outage means potential *real* corruption. Not just
missing transactions, but duplicated rows, all sorts of ugliness.

John
=:->



Re: multi billion row tables: possible or insane?

From: Andras Kadinger
What do your "values" consist of?

Would it be possible to group several hundred or thousand of them into a
single row somehow that still makes it possible for your queries to get at
them efficiently?

What kind of queries will you want to run against the data?

For example if you have a measurement of some process value each
millisecond, it might be a good performance tradeoff to pack a whole
second of measurements into a single row if your data processing only
needs to access the values sequentially. With this single step you
immediately reduce your row and transaction counts to 1/1000th.
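
For instance, with an array column (all names made up here):

-- One row per sensor per second instead of one row per millisecond:
CREATE TABLE packed_samples (
    sensor_id  integer     NOT NULL,
    sample_sec timestamptz NOT NULL,   -- timestamp truncated to the second
    readings   real[]      NOT NULL,   -- 1000 values, one per millisecond
    PRIMARY KEY (sensor_id, sample_sec)
);

-- Sequential processing just pulls the arrays and unpacks them client-side:
SELECT sample_sec, readings
  FROM packed_samples
 WHERE sensor_id = 42
   AND sample_sec >= '2005-03-01 00:00'
   AND sample_sec <  '2005-03-01 00:10'
 ORDER BY sample_sec;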

Please tell us more.

On Tue, 1 Mar 2005, Ramon Bastiaans wrote:

> Hi all,
>
> I am doing research for a project of mine where I need to store several
> billion values for a monitoring and historical tracking system for a big
> computer system. My current estimate is that I have to store (somehow)
> around 1 billion values each month (possibly more).
>
> I was wondering if anyone has had any experience with this kind of data
> volume in a PostgreSQL database and how this affects database
> design and optimization.
>
> What would be important issues when setting up a database this big, and
> is it at all doable? Or would it be insane to think about storing up
> to 5-10 billion rows in a postgres database.
>
> The database's performance is important. There would be no use in
> storing the data if a query will take ages. Queries should be quite fast
> if possible.
>
> I would really like to hear people's thoughts/suggestions or "go see a
> shrink, you must be mad" statements ;)
>
> Kind regards,
>
> Ramon Bastiaans


Re: multi billion row tables: possible or insane?

From: Alan Stange
Isn't that 385 rows/second?   Presumably one can insert more than one
row per transaction?

-- Alan

Vig, Sandor (G/FI-2) wrote:

>385 transaction/sec?
>
>fsync = false
>
>risky but fast.
>


Re: multi billion row tables: possible or insane?

From: Markus Schaber
Hi, John,

John Arbash Meinel schrieb:

>> I am doing research for a project of mine where I need to store
>> several billion values for a monitoring and historical tracking system
>> for a big computer system. My current estimate is that I have to store
>> (somehow) around 1 billion values each month (possibly more).
>>
> If you have that 1 billion perfectly distributed over all hours of the
> day, then you need 1e9/30/24/3600 = 385 transactions per second.

I hope that he does not use one transaction per inserted row.

In our in-house tests, we got a speedup factor of up to a few hundred
when bundling rows on insertion. The fastest speed was with bunches of a
few thousand rows per transaction, running about 5 processes in
parallel.
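
I.e., roughly this (hypothetical table; the point is the single
BEGIN/COMMIT around many rows):

BEGIN;
INSERT INTO samples VALUES ('2005-03-01 10:00:00.000', 42, 1.07);
INSERT INTO samples VALUES ('2005-03-01 10:00:00.001', 42, 1.09);
-- ... a few thousand rows per transaction ...
COMMIT;
-- Run a handful of such loader processes in parallel; COPY cuts the
-- per-row overhead even further.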

Follow the usual performance tips: Use a small but fast-writing RAID
for the transaction log (no RAID-5 or RAID-6 variants), possibly a mirror
of two hard-disk-backed SSDs. Use different disks for the actual data
(here, LVM2 with growing volumes could be very handy). Have enough RAM.
Use a fast file system.

BTW, as you read about the difficulties that you'll face with this
enormous amount of data: Don't think that your task will be much easier
or cheaper using any other DBMS, commercial or open source. For
all of them, you'll need "big iron" hardware, and a skilled team of
admins to set up and maintain the database.

Markus

--
markus schaber | dipl. informatiker
logi-track ag | rennweg 14-16 | ch 8001 zürich
phone +41-43-888 62 52 | fax +41-43-888 62 53
mailto:schabios@logi-track.com | www.logi-track.com

Re: multi billion row tables: possible or insane?

From: John Arbash Meinel
Markus Schaber wrote:

>Hi, John,
>
>John Arbash Meinel schrieb:
>
>
>
>>>I am doing research for a project of mine where I need to store
>>>several billion values for a monitoring and historical tracking system
>>>for a big computer system. My current estimate is that I have to store
>>>(somehow) around 1 billion values each month (possibly more).
>>>
>>>
>>>
>>If you have that 1 billion perfectly distributed over all hours of the
>>day, then you need 1e9/30/24/3600 = 385 transactions per second.
>>
>>
>
>I hope that he does not use one transaction per inserted row.
>
>In our in-house tests, we got a speedup factor of up to a few hundred
>when bundling rows on insertion. The fastest speed was with bunches of a
>few thousand rows per transaction, running about 5 processes in
>parallel.
>
>
You're right. I guess it just depends on how the data comes in, and what
you can do at the client end. That is kind of what I was getting at when I
said to put a machine in front which gathers up the information and then
does a batch update. If your client can do this directly, then you have
the same advantage.

>
>
John
=:->



Re: multi billion row tables: possible or insane?

From: Josh Berkus
Ramon,

> What would be important issues when setting up a database this big, and
> is it at all doable? Or would it be insane to think about storing up
> to 5-10 billion rows in a postgres database.

What's your budget?    You're not going to do this on a Dell 2650.   Do you
have the kind of budget necessary to purchase/build a good SAN,
Quad-opteron machine, etc.?   Or at least hire some tuning help?


--
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco

Re: multi billion row tables: possible or insane?

From: Alan Stange
Ramon Bastiaans wrote:

> I am doing research for a project of mine where I need to store
> several billion values for a monitoring and historical tracking system
> for a big computer system. My current estimate is that I have to store
> (somehow) around 1 billion values each month (possibly more).
>
> I was wondering if anyone has had any experience with this kind of
> data volume in a PostgreSQL database and how this affects
> database design and optimization.
>
> What would be important issues when setting up a database this big,
> and is it at all doable? Or would it be insane to think about
> storing up to 5-10 billion rows in a postgres database.
>
> The database's performance is important. There would be no use in
> storing the data if a query will take ages. Queries should be quite
> fast if possible.
>
> I would really like to hear people's thoughts/suggestions or "go see a
> shrink, you must be mad" statements ;)

It just dawned on me that we're doing something that, while not the
same, might be relevant.  One of our tables has ~85M rows in it
according to the output from an "explain select * from table".  I don't
plan on trying a select count(*) any time soon :)    We add and remove
about 25M rows a day to/from this table which would be about 750M
rows/month total.  Given our current usage of the database, it could
handle a larger row/day rate without too much trouble.  (The problem
isn't adding rows but deleting rows.)

    Column    |   Type   | Modifiers
--------------+----------+-----------
 timeseriesid | bigint   |
 bindata      | bytea    |
 binsize      | integer  |
 rateid       | smallint |
 ownerid      | smallint |
Indexes:
    "idx_timeseries" btree (timeseriesid)

In this case, each bytea entry is typically about  2KB of data, so the
total table size is about 150GB, plus some index overhead.

A second table has ~100M rows according to explain select *.  Again it
has about 30M rows added and removed / day.

   Column   |         Type          | Modifiers
------------+-----------------------+-----------
 uniqid     | bigint                |
 type       | character varying(50) |
 memberid   | bigint                |
 tag        | character varying(50) |
 membertype | character varying(50) |
 ownerid    | smallint              |
Indexes:
    "composite_memberid" btree (memberid)
    "composite_uniqid" btree (uniqid)

There are some additional tables that have a few million rows / day of
activity, so call it 60M rows/day added and removed.  We run a vacuum
every day.

The box is a dual Opteron 248 from Sun.   Linux 2.6, 8GB of memory.  We
use reiserfs.  We started with XFS but had several instances of file
system corruption.  Obviously, no RAID 5.  The xlog is on a 2-drive
mirror and the rest is on a separate mirrored volume.  The drives are
fiber channel but that was a mistake as the driver from IBM wasn't very
good.

So, while we don't have a billion rows we do have ~200M total rows in
all the tables and we're certainly running the daily row count that
you'd need to sustain.   But scaling this sort of thing up can be tricky
and your mileage may vary.

In a prior career I ran a "data intensive computing center" and helped
do some design work for a high energy physics experiment:  petabytes of
data, big tape robots, etc., the usual Big Science toys.   You might
take a look at ROOT and some of the activity from those folks if you
don't need transactions and all the features of a general database like
postgresql.

-- Alan

Re: multi billion row tables: possible or insane?

From: Thomas Ganss
Ramon Bastiaans schrieb:
> My current estimate is that I have to store (somehow)
> around 1 billion values each month (possibly more).

You should post the actual number or power of ten,
since "billion" is not always interpreted the same way...

rgds

thomas

Re: multi billion row tables: possible or insane?

From: "Jim C. Nasby"
On Tue, Mar 01, 2005 at 10:34:29AM +0100, Ramon Bastiaans wrote:
> Hi all,
>
> I am doing research for a project of mine where I need to store several
> billion values for a monitoring and historical tracking system for a big
> computer system. My current estimate is that I have to store (somehow)
> around 1 billion values each month (possibly more).

On a side-note, do you need to keep the actual row-level details for
history? http://rrs.decibel.org might be of some use.

Other than that, what others have said. Lots and lots of disks in
RAID10, and opterons (though I would choose opterons not for memory size
but because of memory *bandwidth*).
--
Jim C. Nasby, Database Consultant               decibel@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

Re: multi billion row tables: possible or insane?

From: Alex Turner
Not true - with fsync on I get nearly 500 tx/s; with it off I'm as
high as 1600/sec with a dual Opteron, 14xSATA drives and 4GB RAM on a
3ware Escalade.  The database has 3 million rows.

As long as queries use indexes, multi-billion-row tables shouldn't be too
bad.  Full table scans will suck, though.
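
Worth checking with EXPLAIN that the common queries really do use an
index (hypothetical example):

CREATE INDEX samples_host_time_idx ON samples (host_id, sample_time);

EXPLAIN
SELECT value
  FROM samples
 WHERE host_id = 42
   AND sample_time >= '2005-02-28';
-- Look for "Index Scan using samples_host_time_idx" in the plan; a
-- "Seq Scan" on a multi-billion-row table means trouble.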

Alex Turner
netEconomist


On Tue, 1 Mar 2005 16:40:29 +0100, Vig, Sandor (G/FI-2)
<Sandor.Vig@audi.hu> wrote:
> 385 transaction/sec?
>
> fsync = false
>
> risky but fast.