Thread: tackling full page writes

From: Robert Haas

While eating good Indian food and talking about aviation accidents on
the last night of PGCon, Greg Stark, Heikki Linnakangas, and I found
some time to brainstorm about possible ways to reduce the impact of
full_page_writes.  I'm not sure that these ideas are much good, but
for the sake of posterity:

1. Heikki suggested that instead of doing full page writes, we might
try to write only the parts of the page that have changed.  For
example, if we had 16 bits to play with in the page header (which we
don't), then we could imagine the page as being broken up into 16
512-byte chunks, one per bit.  Each time we update the page, we write
whatever subset of the 512-byte chunks we're actually modifying,
except for any that have been written since the last checkpoint.  In
more detail, when writing a WAL record, if a checkpoint has intervened
since the page LSN, then we first clear all 16 bits, reset the bits
for the chunks we're modifying, and XLOG those chunks.  If no
checkpoint has intervened, then we set the bits for any chunks that we
are modifying and for which the corresponding bits aren't yet set; and
XLOG the corresponding chunks.  As I think about it a bit more, we'd
need to XLOG not only the parts of the page we're actually modifying, but
also any parts that the WAL record needs to be correct on replay.
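
In pseudo-code, the bookkeeping might look something like this.  To be
clear, this is a schematic sketch only: pd_chunk_bits is the spare
header field we don't actually have, and XLogRegisterChunk() is a
made-up helper that attaches part of the page to the WAL record being
built.

#define CHUNK_SIZE  512
#define NUM_CHUNKS  (BLCKSZ / CHUNK_SIZE)   /* 16 on an 8K page */

static void
register_chunks(Page page, XLogRecPtr redo_ptr, uint16 chunks_needed)
{
    PageHeader  ph = (PageHeader) page;
    uint16      to_log;
    int         i;

    /*
     * chunks_needed must cover not just the chunks we overwrite but
     * also any chunk the record will read at replay time (see the
     * clarification downthread).
     */
    if (XLByteLT(PageGetLSN(page), redo_ptr))
        ph->pd_chunk_bits = 0;      /* checkpoint intervened: clear all */

    /* XLOG only the chunks not already written since the checkpoint. */
    to_log = chunks_needed & ~ph->pd_chunk_bits;

    for (i = 0; i < NUM_CHUNKS; i++)
    {
        if (to_log & (1 << i))
            XLogRegisterChunk(page + i * CHUNK_SIZE, CHUNK_SIZE);
    }

    ph->pd_chunk_bits |= chunks_needed;
}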

(It was further suggested that, in our grand tradition of bad naming,
we could name this feature "partial full page writes" and enable it
either with a setting of full_page_writes=partial, or better yet, add
a new GUC partial_full_page_writes.  The beauty of the latter is that
it's completely ambiguous what happens when full_page_writes=off and
partial_full_page_writes=on.  Actually, we could invert the sense and
call it disable_partial_full_page_writes instead, which would probably
remove all hope of understanding.  This all seemed completely
hilarious when we were talking about it, and we weren't even drunk.)

2. The other fairly obvious alternative is to adjust our existing WAL
record types to be idempotent - i.e. to not rely on the existing page
contents.  For XLOG_HEAP_INSERT, we currently store the target tid and
the tuple contents.  I'm not sure if there's anything else, but we
would obviously need the offset where the new tuple should be written,
which we currently infer from reading the existing page contents.  For
XLOG_HEAP_DELETE, we store just the TID of the target tuple; we would
certainly need to store its offset within the block, and maybe the
infomask.  For XLOG_HEAP_UPDATE, we'd need the old and new offsets and
perhaps also the old and new infomasks.  Assuming that's all we need
and I'm not missing anything (which I won't bet on), that means we'd
be adding, say, 4 bytes per insert or delete and 8 bytes per update.
So, if checkpoints are spread out widely enough that there will be
more than ~2K operations per page between checkpoints, then it makes
more sense to just do a full page write and call it good.  If not,
this idea might have legs.
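
For concreteness, the extra payload might look roughly like the
following.  These structs are hypothetical; the real xl_heap_delete
and xl_heap_update differ, and an idempotent xl_heap_insert would gain
a similar offset field.

typedef struct xl_heap_delete_idem
{
    xl_heaptid   target;        /* TID of the victim tuple, as today */
    OffsetNumber offnum;        /* line pointer to mark dead: +2 bytes */
    uint16       infomask;      /* resulting infomask bits:   +2 bytes */
} xl_heap_delete_idem;

typedef struct xl_heap_update_idem
{
    xl_heaptid   target;        /* old tuple's TID, as today */
    OffsetNumber old_offnum;    /* +2 bytes */
    OffsetNumber new_offnum;    /* +2 bytes */
    uint16       old_infomask;  /* +2 bytes, if replay needs it */
    uint16       new_infomask;  /* +2 bytes, if replay needs it */
} xl_heap_update_idem;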

3. Going a bit further, Greg proposed the idea of ripping out our
current WAL infrastructure altogether and instead just having one WAL
record that says "these byte ranges on this page changed to have these
new contents".  That's elegantly simple, but I'm afraid it would bloat
the records quite a bit.  For example, as Heikki pointed out,
XLOG_HEAP_DELETE relies on the XID in the record header to figure out
what to write, and all the heap-modification operations implicitly
specify the visibility map change when they specify the heap change.
We currently have a flag to indicate whether the visibility map
actually requires an update, but it's just one bit.  However, one
possible application of this concept is that we could add something
like this alongside our existing WAL record types.  It might be
useful, for example, for third-party index AMs, which are currently
pretty much out of luck.
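
The record format Greg had in mind might amount to little more than
this (purely illustrative):

typedef struct xl_byte_range
{
    uint16      offset;     /* start of the changed range on the page */
    uint16      length;     /* number of bytes replaced */
    /* 'length' bytes of new page contents follow */
} xl_byte_range;

typedef struct xl_page_diff
{
    RelFileNode node;       /* which relation */
    BlockNumber block;      /* which page */
    uint16      nranges;    /* number of xl_byte_range entries following */
} xl_page_diff;

Redo would then be just a loop of memcpy()s into the target page.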

That's about as far as we got.  Though I haven't convinced anyone else
yet, I still think there's some merit to the idea of just writing the
portion of the page that precedes pd_upper.  WAL records would have to
assume that the tuple data might be clobbered, but they could rely on
the early portion of the page to be correct.  AFAICT, that would be OK
for all of the existing WAL records except for XLOG_HEAP2_CLEAN (i.e.
vacuum), with the exception that - prior to the minimum recovery point
- they'd need to apply their changes unconditionally rather than
considering the page LSN.  Tom has argued that won't work, but I'm not
sure he's convinced anyone else yet...
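
As a sketch, reusing the hypothetical XLogRegisterChunk() from above:
since everything between pd_lower and pd_upper is free space, logging
the portion before pd_upper really means logging the page header plus
the line pointer array.

static void
register_page_prefix(Page page)
{
    PageHeader  ph = (PageHeader) page;

    /*
     * pd_lower marks the end of the line pointer array; the hole up to
     * pd_upper needs no protection.  Replay may rely on this prefix
     * but must treat everything at or after pd_upper as clobbered.
     */
    XLogRegisterChunk((char *) page, ph->pd_lower);
}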

Anyone else have good ideas?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: tackling full page writes

From: Jeff Davis

On Tue, 2011-05-24 at 16:34 -0400, Robert Haas wrote:
> As I think about it a bit more, we'd
> need to XLOG not only the parts of the page we're actually modifying, but
> also any parts that the WAL record needs to be correct on replay.

I don't understand that statement. Can you clarify?

Regards,
	Jeff Davis




Re: tackling full page writes

From: Bruce Momjian

Robert Haas wrote:
> 2. The other fairly obvious alternative is to adjust our existing WAL
> record types to be idempotent - i.e. to not rely on the existing page
> contents.  For XLOG_HEAP_INSERT, we currently store the target tid and
> the tuple contents.  I'm not sure if there's anything else, but we
> would obviously need the offset where the new tuple should be written,
> which we currently infer from reading the existing page contents.  For
> XLOG_HEAP_DELETE, we store just the TID of the target tuple; we would
> certainly need to store its offset within the block, and maybe the
> infomask.  For XLOG_HEAP_UPDATE, we'd need the old and new offsets and
> perhaps also the old and new infomasks.  Assuming that's all we need
> and I'm not missing anything (which I won't bet on), that means we'd
> be adding, say, 4 bytes per insert or delete and 8 bytes per update.
> So, if checkpoints are spread out widely enough that there will be
> more than ~2K operations per page between checkpoints, then it makes
> more sense to just do a full page write and call it good.  If not,
> this idea might have legs.

I vote for "wal_level = idempotent" because so few people will know what
idempotent means.  ;-)

Idempotent does seem like the most promising idea.

--
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + It's impossible for everything to be true. +


Re: tackling full page writes

From: Robert Haas

On Tue, May 24, 2011 at 10:52 PM, Jeff Davis <pgsql@j-davis.com> wrote:
> On Tue, 2011-05-24 at 16:34 -0400, Robert Haas wrote:
>> As I think about it a bit more, we'd
>> need to XLOG not only the parts of the page we're actually modifying, but
>> also any parts that the WAL record needs to be correct on replay.
>
> I don't understand that statement. Can you clarify?

I'll try.  Suppose we have two WAL records A and B, with no
intervening checkpoint, that both modify the same page.  A reads chunk
1 of that page and then modifies chunk 2.  B modifies chunk 1.  Now,
suppose we make A do a "partial page write" on chunk 2 only, and B do
the same for chunk 1.  At the point the system crashes, A and B are
both on disk, and the page has already been written to disk as well.
Replay begins from a checkpoint preceding A.

Now, when we get to the record for A, what are we to do?  If it were a
full page image, we could just restore it, and everything would be
fine after that.  But if we replay the partial page write, we've got
trouble.  A will now see the state of the chunk 1 as it existed after
the action protected by B occurred, and will presumably do the wrong
thing.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: tackling full page writes

From: Robert Haas

On Tue, May 24, 2011 at 11:52 PM, Bruce Momjian <bruce@momjian.us> wrote:
> Robert Haas wrote:
>> 2. The other fairly obvious alternative is to adjust our existing WAL
>> record types to be idempotent - i.e. to not rely on the existing page
>> contents.  For XLOG_HEAP_INSERT, we currently store the target tid and
>> the tuple contents.  I'm not sure if there's anything else, but we
>> would obviously need the offset where the new tuple should be written,
>> which we currently infer from reading the existing page contents.  For
>> XLOG_HEAP_DELETE, we store just the TID of the target tuple; we would
>> certainly need to store its offset within the block, and maybe the
>> infomask.  For XLOG_HEAP_UPDATE, we'd need the old and new offsets and
>> perhaps also the old and new infomasks.  Assuming that's all we need
>> and I'm not missing anything (which I won't bet on), that means we'd
>> be adding, say, 4 bytes per insert or delete and 8 bytes per update.
>> So, if checkpoints are spread out widely enough that there will be
>> more than ~2K operations per page between checkpoints, then it makes
>> more sense to just do a full page write and call it good.  If not,
>> this idea might have legs.
>
> I vote for "wal_level = idempotent" because so few people will know what
> idempotent means.  ;-)

That idea has the additional advantage of confusing the level of
detail of our WAL logging (minimal vs. archive vs. hot standby) with
the mechanism used to protect against torn pages (full page writes vs.
idempotent WAL records vs. prayer).  When they set it wrong and
destroy their system, we can tell them it's their own fault for not
configuring the system properly!  Bwahahahaha!

In all seriousness, I can't imagine that we'd make this
user-configurable in the first place, since that would amount to
having two sets of WAL records each of which would be even less well
tested than what we have now; and for a project this complex, we
probably shouldn't even consider changing things that seem to work now
unless the new system is clearly better than the old.

> Idempotent does seem like the most promising idea.

I tend to agree with you, but I'm worried it won't actually work out
to a win.  By the time we augment the records with enough additional
information we may have eaten up a lot of the benefit we were hoping
to get.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: tackling full page writes

From: Simon Riggs

On Tue, May 24, 2011 at 9:34 PM, Robert Haas <robertmhaas@gmail.com> wrote:

> I'm not sure that these ideas are much good, but
> for the sake of posterity:

Both (1) and (2) seem promising to me.

Heikki mentioned (2) would only be effective if we managed to change
*all* WAL records. ISTM likely that we would find that difficult
and/or time consuming and/or buggy.

I would qualify that by saying *all* WAL record types for a certain
page type, such as all btree index pages. Not that much better, since
heap and btree are the big ones, ISTM.

(1) seems like we could do it incrementally if we supported both
partial and full page writes at the same time. That way we could work on
the most frequent record types over time. Not sure if that is
possible, but seems worth considering.

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: tackling full page writes

From: Bruce Momjian

Robert Haas wrote:
> That idea has the additional advantage of confusing the level of
> detail of our WAL logging (minimal vs. archive vs. hot standby) with
> the mechanism used to protect against torn pages (full page writes vs.
> idempotent WAL records vs. prayer).  When they set it wrong and
> destroy their system, we can tell them it's their own fault for not
> configuring the system properly!  Bwahahahaha!

I love it!  Create confusing configuration options and blame the user!

> In all seriousness, I can't imagine that we'd make this
> user-configurable in the first place, since that would amount to
> having two sets of WAL records each of which would be even less well
> tested than what we have now; and for a project this complex, we
> probably shouldn't even consider changing things that seem to work now
> unless the new system is clearly better than the old.
> 
> > Idempotent does seem like the most promising idea.
> 
> I tend to agree with you, but I'm worried it won't actually work out
> to a win.  By the time we augment the records with enough additional
> information we may have eaten up a lot of the benefit we were hoping
> to get.

This is where I was confused.  Our bad case now is when someone modifies
one row on a page between checkpoints --- instead of writing 400 bytes,
we write 8400.  What portion of between-checkpoint activity writes more
than a few rows to a page?  I didn't think many, except for COPY. 
Ideally we could switch in and out of this mode per page, but that seems
super-complicated.
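
To put rough numbers on the trade-off, here is a standalone
back-of-the-envelope calculation using the figures assumed in this
thread (8K pages, a ~400-byte single-row record, ~4 extra bytes per
idempotent record):

#include <stdio.h>

int
main(void)
{
    const int page_size    = 8192;  /* BLCKSZ */
    const int base_record  = 400;   /* assumed one-row record size */
    const int extra_per_op = 4;     /* assumed idempotent overhead */

    /* Today, the first touch of a page after a checkpoint costs the
     * record plus a full page image. */
    printf("first touch today: ~%d bytes\n", base_record + page_size);

    /* The idempotent scheme stops winning once its per-record
     * overhead adds up to one page image; this is the ~2K figure
     * mentioned upthread. */
    printf("break-even: ~%d ops per page per checkpoint\n",
           page_size / extra_per_op);
    return 0;
}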

--
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + It's impossible for everything to be true. +


Re: tackling full page writes

From: Robert Haas

On Wed, May 25, 2011 at 10:13 AM, Bruce Momjian <bruce@momjian.us> wrote:
>> > Idempotent does seem like the most promising idea.
>>
>> I tend to agree with you, but I'm worried it won't actually work out
>> to a win.  By the time we augment the records with enough additional
>> information we may have eaten up a lot of the benefit we were hoping
>> to get.
>
> This is where I was confused.  Our bad case now is when someone modifies
> one row on a page between checkpoints --- instead of writing 400 bytes,
> we write 8400.  What portion of between-checkpoint activity writes more
> than a few rows to a page?  I didn't think many, except for COPY.
> Ideally we could switch in and out of this mode per page, but that seems
> super-complicated.

Well, an easy-to-understand example would be a page that gets repeated
HOT updates.  We'll do this: add a tuple, add a tuple, add a tuple,
add a tuple, HOT cleanup, add a tuple, add a tuple, add a tuple, add a
tuple, HOT cleanup... and so on.  In the worst case, that could be
done many, many times between checkpoints that might be up to an hour
apart.  The problem can also occur (with a little more difficulty)
even without HOT.  Imagine a small table with lots of inserts and
deletes.  Page fills up, some rows are deleted, vacuum frees up space,
page fills up again, some more rows are deleted, vacuum frees up space
again, and so on.

But you raise an interesting point, which is that it might also be
possible to reduce the impact of write-ahead logging in other ways.
For example, if we're doing a large COPY into a table, we could buffer
up a full block of tuples and then just emit an FPI for the page.
This would likely be cheaper than logging each tuple individually.
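
A sketch of what that might look like, with XLogInsertFullPageImage()
and copy_start_new_page() as made-up helpers:

static Page cur_page;       /* the block of tuples being buffered */

static void
copy_one_tuple(HeapTuple tup)
{
    if (PageGetFreeSpace(cur_page) < MAXALIGN(tup->t_len))
    {
        /* Page is full: one FPI covers every buffered tuple. */
        XLogRecPtr  lsn = XLogInsertFullPageImage(cur_page);

        PageSetLSN(cur_page, lsn);
        cur_page = copy_start_new_page();
    }

    /* Note that no per-tuple WAL record is emitted here. */
    PageAddItem(cur_page, (Item) tup->t_data, tup->t_len,
                InvalidOffsetNumber, false, true);
}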

In fact, you could imagine keeping a queue of pending WAL for each
block in shared buffers.  You don't really need that WAL to be
consolidated into a single stream until either (a) you need to write
the block or (b) you need to commit the transaction.  When one of
those things happens, you can decide at that point whether it's
cheaper to emit the individual records or do some sort of
consolidation.  Doing it in exactly that way is probably impractical,
because every backend that wants to commit would have to make a sweep
of every buffer it's dirtied and see if any of them still contain WAL
that needs to be shoved into the main queue, and that would probably
suck, especially for small transactions.  But maybe there's some
variant that could be made to work.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: tackling full page writes

From: Greg Smith

On 05/24/2011 04:34 PM, Robert Haas wrote:
> we could name this feature "partial full page writes" and enable it
> either with a setting of full_page_writes=partial

+1 to overloading the initial name, but only if the parameter is named 
'maybe', 'sometimes', or 'perhaps'.

I've been looking into a similar refactoring of the names here, where we 
bundle all of these speed over safety things (fsync, full_page_writes, 
etc.) into one control so they're easier to turn off at once.  Not sure 
if it should be named "web_scale" or "do_you_feel_lucky_punk".

> 3. Going a bit further, Greg proposed the idea of ripping out our
> current WAL infrastructure altogether and instead just having one WAL
> record that says "these byte ranges on this page changed to have these
> new contents".

The main thing that makes this idea particularly interesting to me, over 
the other two, is that it might translate well into the idea of using 
sync_file_range to aim for a finer fsync call on Linux than is currently 
possible.
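
For reference, sync_file_range() is a real, Linux-only syscall (how we
would wire it into the checkpointer is pure speculation on my part).
Note that it only controls writeback of dirty pages; it does not flush
the drive cache or file metadata, so it cannot simply replace fsync:

#define _GNU_SOURCE
#include <fcntl.h>

static int
flush_block_range(int fd, off_t offset, off_t nbytes)
{
    /* Initiate writeback of just this byte range and wait for it to
     * complete, rather than fsync()ing the whole file. */
    return sync_file_range(fd, offset, nbytes,
                           SYNC_FILE_RANGE_WAIT_BEFORE |
                           SYNC_FILE_RANGE_WRITE |
                           SYNC_FILE_RANGE_WAIT_AFTER);
}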

-- 
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us




Re: tackling full page writes

From: Robert Haas

On Wed, May 25, 2011 at 1:06 PM, Greg Smith <greg@2ndquadrant.com> wrote:
> On 05/24/2011 04:34 PM, Robert Haas wrote:
>>
>> we could name this feature "partial full page writes" and enable it
>> either with a setting of full_page_writes=partial
>
> +1 to overloading the initial name, but only if the parameter is named
> 'maybe', 'sometimes', or 'perhaps'.

Perfect!

> I've been looking into a similar refactoring of the names here, where we
> bundle all of these speed over safety things (fsync, full_page_writes, etc.)
> into one control so they're easier to turn off at once.  Not sure if it
> should be named "web_scale" or "do_you_feel_lucky_punk".

Actually, I suggested that same idea to you, or someone, a while back,
only I was serious.  crash_safety=off.  I never got around to fleshing
out the details, though.

>> 3. Going a bit further, Greg proposed the idea of ripping out our
>> current WAL infrastructure altogether and instead just having one WAL
>> record that says "these byte ranges on this page changed to have these
>> new contents".
>
> The main thing that makes this idea particularly interesting to me, over the
> other two, is that it might translate well into the idea of using
> sync_file_range to aim for a finer fsync call on Linux than is currently
> possible.

Hmm, maybe.  But it's possible that the dirty blocks are the first and
last ones in the file.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: tackling full page writes

From: Greg Stark

On Tue, May 24, 2011 at 1:34 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> 1. Heikki suggested that instead of doing full page writes, we might
> try to write only the parts of the page that have changed.  For
> example, if we had 16 bits to play with in the page header (which we
> don't), then we could imagine the page as being broken up into 16
> 512-byte chunks, one per bit.  Each time we update the page, we write
> whatever subset of the 512-byte chunks we're actually modifying,
>

Alternately we could have change vectors, something like
<offset,length,bytes[]>, which I think would be a lot less wasteful than
dumping 512-byte chunks. The main advantage of 512-byte chunks is that it's
easier to figure out what chunks to include in the output, and if
you're replacing the entire block it looks just like our existing
system, including not having to read in the page before writing. If we
output change vectors then we need to do some arithmetic to figure out
when it makes sense to merge records and how much of the block we need
to be replacing before we just decide to include the whole block.
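
The decision rule could be as simple as the following standalone
sketch (merging adjacent or overlapping vectors first would tighten it
further):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define BLCKSZ        8192
#define VECTOR_HEADER 4         /* offset + length fields */

typedef struct ChangeVector
{
    uint16_t offset;            /* where on the page the change starts */
    uint16_t length;            /* how many bytes changed */
    /* 'length' new bytes follow in the record payload */
} ChangeVector;

/* Fall back to logging the whole block once the vectors would cost as
 * much as the page image itself. */
static bool
use_full_block(const ChangeVector *v, int nvectors)
{
    size_t  total = 0;
    int     i;

    for (i = 0; i < nvectors; i++)
        total += VECTOR_HEADER + v[i].length;

    return total >= BLCKSZ;
}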


--
greg


Re: tackling full page writes

From: Fujii Masao

On Wed, May 25, 2011 at 9:34 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Tue, May 24, 2011 at 10:52 PM, Jeff Davis <pgsql@j-davis.com> wrote:
>> On Tue, 2011-05-24 at 16:34 -0400, Robert Haas wrote:
>>> As I think about it a bit more, we'd
>>> need to XLOG not only the parts of the page we're actually modifying, but
>>> also any parts that the WAL record needs to be correct on replay.
>>
>> I don't understand that statement. Can you clarify?
>
> I'll try.  Suppose we have two WAL records A and B, with no
> intervening checkpoint, that both modify the same page.  A reads chunk
> 1 of that page and then modifies chunk 2.  B modifies chunk 1.  Now,
> suppose we make A do a "partial page write" on chunk 2 only, and B do
> the same for chunk 1.  At the point the system crashes, A and B are
> both on disk, and the page has already been written to disk as well.
> Replay begins from a checkpoint preceding A.
>
> Now, when we get to the record for A, what are we to do?  If it were a
> full page image, we could just restore it, and everything would be
> fine after that.  But if we replay the partial page write, we've got
> trouble.  A will now see the state of the chunk 1 as it existed after
> the action protected by B occurred, and will presumably do the wrong
> thing.

If this is really true, full page writes would also cause a similar problem.
No? Imagine the case where A reads page 1, then modifies page 2, and B
modifies page 1. During recovery, A will see the state of page 1 as it existed
after the action by B.

The replay of the WAL record for A doesn't rely on the content of chunk 1
which B modified. So I don't think that "partial page writes" has such
a problem.
No?

Regards,

--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


Re: tackling full page writes

From: Robert Haas

On Wed, May 25, 2011 at 10:09 PM, Fujii Masao <masao.fujii@gmail.com> wrote:
> On Wed, May 25, 2011 at 9:34 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>> On Tue, May 24, 2011 at 10:52 PM, Jeff Davis <pgsql@j-davis.com> wrote:
>>> On Tue, 2011-05-24 at 16:34 -0400, Robert Haas wrote:
>>>> As I think about it a bit more, we'd
>>>> need to XLOG not only the parts of the page we're actually modifying, but
>>>> also any parts that the WAL record needs to be correct on replay.
>>>
>>> I don't understand that statement. Can you clarify?
>>
>> I'll try.  Suppose we have two WAL records A and B, with no
>> intervening checkpoint, that both modify the same page.  A reads chunk
>> 1 of that page and then modifies chunk 2.  B modifies chunk 1.  Now,
>> suppose we make A do a "partial page write" on chunk 2 only, and B do
>> the same for chunk 1.  At the point the system crashes, A and B are
>> both on disk, and the page has already been written to disk as well.
>> Replay begins from a checkpoint preceding A.
>>
>> Now, when we get to the record for A, what are we to do?  If it were a
>> full page image, we could just restore it, and everything would be
>> fine after that.  But if we replay the partial page write, we've got
>> trouble.  A will now see the state of the chunk 1 as it existed after
>> the action protected by B occurred, and will presumably do the wrong
>> thing.
>
> If this is really true, full page writes would also cause a similar problem.
> No? Imagine the case where A reads page 1, then modifies page 2, and B
> modifies page 1. During recovery, A will see the state of page 1 as it existed
> after the action by B.

Yeah, but it won't matter, because the LSN interlock will prevent A
from taking any action.  If you only write parts of the page, though,
the concept of "the" LSN of the page becomes a bit murky, because you
may have different parts of the page from different points in the WAL
stream.  I believe it's possible to cope with that if we design it
carefully, but it does seem rather complex and error-prone (which is
not necessarily the best design for a recovery system, but hey).
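
(For anyone following along: the interlock is the guard at the top of
each redo routine, roughly the following, with apply_change() standing
in for the record-specific work.)

    if (XLByteLE(record_lsn, PageGetLSN(page)))
        return;                 /* page already reflects this record */

    apply_change(page);
    PageSetLSN(page, record_lsn);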

Anyway, you can either have the partial page write for A restore the
older LSN, or not.  If you do, then you have the problem as I
described it.  If you don't, then the effects of A vanish into the
ether.  Either way, it doesn't work.

> The replay of the WAL record for A doesn't rely on the content of chunk 1
> which B modified. So I don't think that "partial page writes" has such
> a problem.
> No?

Sorry.  WAL records today DO rely on the prior state of the page.  If
they didn't, we wouldn't need full page writes.  They don't rely on
them terribly heavily - things like where pd_upper is pointing, and
what the page LSN is.  But they do rely on them.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: tackling full page writes

From: Fujii Masao

On Thu, May 26, 2011 at 1:18 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>> The replay of the WAL record for A doesn't rely on the content of chunk 1
>> which B modified. So I don't think that "partial page writes" has such
>> a problem.
>> No?
>
> Sorry.  WAL records today DO rely on the prior state of the page.  If
> they didn't, we wouldn't need full page writes.  They don't rely on
> them terribly heavily - things like where pd_upper is pointing, and
> what the page LSN is.  But they do rely on them.

Yeah, I'm sure that a normal WAL record (neither a full page write nor
a "partial page write") relies on the prior state of the page. But the
WAL record for A is a "partial page write"; does that also rely on the
prior state?

Regards,

--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


Re: tackling full page writes

From: Robert Haas

On Thu, May 26, 2011 at 12:38 AM, Fujii Masao <masao.fujii@gmail.com> wrote:
> On Thu, May 26, 2011 at 1:18 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>>> The replay of the WAL record for A doesn't rely on the content of chunk 1
>>> which B modified. So I don't think that "partial page writes" has such
>>> a problem.
>>> No?
>>
>> Sorry.  WAL records today DO rely on the prior state of the page.  If
>> they didn't, we wouldn't need full page writes.  They don't rely on
>> them terribly heavily - things like where pd_upper is pointing, and
>> what the page LSN is.  But they do rely on them.
>
> Yeah, I'm sure that a normal WAL record (neither a full page write nor
> a "partial page write") relies on the prior state of the page. But the
> WAL record for A is a "partial page write"; does that also rely on the
> prior state?

Yeah, that's how it shakes out.  The idea is you have to write the
parts of the page that you rely on, but not the rest - which in turn
guarantees that those parts (but not the rest) will be correct when
you read them.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: tackling full page writes

From: "Ross J. Reedstrom"

On Wed, May 25, 2011 at 01:29:05PM -0400, Robert Haas wrote:
> On Wed, May 25, 2011 at 1:06 PM, Greg Smith <greg@2ndquadrant.com> wrote:
> > On 05/24/2011 04:34 PM, Robert Haas wrote:
> 
> > I've been looking into a similar refactoring of the names here, where we
> > bundle all of these speed over safety things (fsync, full_page_writes, etc.)
> > into one control so they're easier to turn off at once.  Not sure if it
> > should be named "web_scale" or "do_you_feel_lucky_punk".
> 
> Actually, I suggested that same idea to you, or someone, a while back,
> only I was serious.  crash_safety=off.  I never got around to fleshing
> out the details, though.

clearly:
 crash_safety=running_with_scissors
-- 
Ross Reedstrom, Ph.D.                                 reedstrm@rice.edu
Systems Engineer & Admin, Research Scientist        phone: 713-348-6166
Connexions                  http://cnx.org            fax: 713-348-3665
Rice University MS-375, Houston, TX 77005
GPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E  F888 D3AE 810E 88F0 BEDE