Thread: Why not install pgstattuple by default?

Why not install pgstattuple by default?

From
Josh Berkus
Date:
Hackers,

I've run into a couple of occasions lately where I really wanted
pgstattuple on a production server in order to check table/index bloat.
However, in the production environment at a large site installing a
contrib module can involve a process which takes days or weeks.

Is there some reason why the stattuple functions aren't just available
as core functions?  Are they unsafe somehow?
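
(For context, the kind of check I mean is a couple of one-liners; this
is just a sketch, and the table and index names are made up.)

    # hypothetical bloat check using the functions from the module
    psql -c "SELECT * FROM pgstattuple('some_big_table')"
    psql -c "SELECT * FROM pgstatindex('some_big_table_pkey')"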

-- 
-- Josh Berkus
---------------------------------------------------------
Josh Berkus                       PostgreSQL Experts Inc.
CEO                               database professionals
josh.berkus@pgexperts.com         www.pgexperts.com
1-888-743-9778 x.508              San Francisco


Re: Why not install pgstattuple by default?

From
Magnus Hagander
Date:
On Fri, May 6, 2011 at 00:34, Josh Berkus <josh.berkus@pgexperts.com> wrote:
> Hackers,
>
> I've run into a couple of occasions lately where I really wanted
> pgstattuple on a production server in order to check table/index bloat.
>  However, in the production environment at a large site installing a
> contrib module can involve a process which takes days or weeks.

That can be said for a lot of things in contrib. pg_standby in 8.4 for
example. Or adminpack. Or dblink. Or hstore. There's a mix of "example
stuff" and "actually pretty darn useful in production stuff". I'm sure
you can find a couple of hundred emails in the archives on this very
topic.

From 9.1, it'll be a simple CREATE EXTENSION command - so much of the
problem goes away. Well. It doesn't go away, but it gets a lot more
neatly swept under the rug.
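
(That is, assuming the module's files are already on disk, the
in-database step in 9.1 is reduced to a one-liner along these lines:)

    # enable the module in a database once its files are installed
    psql -c "CREATE EXTENSION pgstattuple"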

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: Why not install pgstattuple by default?

From
Euler Taveira de Oliveira
Date:
Em 06-05-2011 05:06, Magnus Hagander escreveu:
> On Fri, May 6, 2011 at 00:34, Josh Berkus<josh.berkus@pgexperts.com>  wrote:
>> Hackers,
>>
>> I've run into a couple of occasions lately where I really wanted
>> pgstattuple on a production server in order to check table/index bloat.
>>   However, in the production environment at a large site installing a
>> contrib module can involve a process which takes days or weeks.
>
I already faced that problem too.

>  From 9.1, it'll be a simple CREATE EXTENSION command - so much of the
> problem goes away. Well. It doesn't go away, but it gets a lot more
> neatly swept under the rug.
>
That's only half of the story. The admin still needs to install the
postgresql-contrib package. Sometimes it takes too much time to convince
clients that some additional supplied modules are useful for them.

Now that we have extensions, why not build and package the contrib modules by 
default? 'make world' is not the answer. There is not an option for "install 
all pieces of software". Let's install pg+contrib and leave only 'CREATE 
EXTENSION foo' for the admins.


--
  Euler Taveira de Oliveira - Timbira       http://www.timbira.com.br/
  PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Why not install pgstattuple by default?

From
Magnus Hagander
Date:
On Fri, May 6, 2011 at 18:22, Euler Taveira de Oliveira
<euler@timbira.com> wrote:
> Em 06-05-2011 05:06, Magnus Hagander escreveu:
>>
>> On Fri, May 6, 2011 at 00:34, Josh Berkus<josh.berkus@pgexperts.com>
>>  wrote:
>>>
>>> Hackers,
>>>
>>> I've run into a couple of occasions lately where I really wanted
>>> pgstattuple on a production server in order to check table/index bloat.
>>>  However, in the production environment at a large site installing a
>>> contrib module can involve a process which takes days or weeks.
>>
> I already faced that problem too.
>
>>  From 9.1, it'll be a simple CREATE EXTENSION command - so much of the
>> problem goes away. Well. It doesn't go away, but it gets a lot more
>> neatly swept under the rug.
>>
> That's half of the history. Admin needs to install postgresql-contrib
> package. Sometimes it takes too much time to convince clients that some
> additional supplied modules are useful for them.
>
> Now that we have extensions, why not build and package the contrib modules
> by default? 'make world' is not the answer. There is not an option for
> "install all pieces of software". Let's install pg+contrib and leave only
> 'CREATE EXTENSION foo' for the admins.

That's mostly an issue to be solved by the packagers. Some contrib
modules add dependencies, but those that don't could easily be
packaged in the main server package.

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: Why not install pgstattuple by default?

From
Christopher Browne
Date:
On Fri, May 6, 2011 at 1:32 PM, Magnus Hagander <magnus@hagander.net> wrote:
> On Fri, May 6, 2011 at 18:22, Euler Taveira de Oliveira
> <euler@timbira.com> wrote:
>> Em 06-05-2011 05:06, Magnus Hagander escreveu:
>>>
>>> On Fri, May 6, 2011 at 00:34, Josh Berkus<josh.berkus@pgexperts.com>
>>>  wrote:
>>>>
>>>> Hackers,
>>>>
>>>> I've run into a couple of occasions lately where I really wanted
>>>> pgstattuple on a production server in order to check table/index bloat.
>>>>  However, in the production environment at a large site installing a
>>>> contrib module can involve a process which takes days or weeks.
>>>
>> I already faced that problem too.
>>
>>>  From 9.1, it'll be a simple CREATE EXTENSION command - so much of the
>>> problem goes away. Well. It doesn't go away, but it gets a lot more
>>> neatly swept under the rug.
>>>
>> That's half of the history. Admin needs to install postgresql-contrib
>> package. Sometimes it takes too much time to convince clients that some
>> additional supplied modules are useful for them.
>>
>> Now that we have extensions, why not build and package the contrib modules
>> by default? 'make world' is not the answer. There is not an option for
>> "install all pieces of software". Let's install pg+contrib and leave only
>> 'CREATE EXTENSION foo' for the admins.
>
> That's mostly an issue to be solved by the packagers. Some contrib
> modules add dependencies, but those that don't could easily be
> packaged in the main server package.

It seems to me that there's something of a "packaging policy" question to this.

A long time ago, on a pre-buildfarm planet, far, far away, it was
pretty uncertain what contrib modules could be hoped to run on what
platform.

At Afilias, we used to have to be *really* picky, because the subset
that ran on Solaris and AIX was not even close to all of them.
pgstattuple *was* one that the DBAs always wanted, but what would
compile was always hit-and-miss.

Once we got AIX running a buildfarm node, that led to getting *ALL* of
contrib working there, and I'm pretty sure that something similar
happened with other platforms at around the same time (I'm thinking
this was 7.4, but it might have been 8.0).

Be that all as it may, there has been a "sea change", where we have
moved from "sporadic usability of contrib" to it being *continually*
tested on *all* buildfarm platforms, which certainly adds to the
confidence level.

But people are evidently still setting packaging policies based on how
things were back in 7.3, even though that perhaps isn't necessary
anymore.

Certainly it's not a huge amount of code; less than 2MB these days.
-> % wc `dpkg -L postgresql-contrib-9.0` | tail -1
   15952   67555 1770987 total

I'm getting "paper cuts" quite a bit these days over the differences
between what different packaging systems decide to install.  The one
*I* get notably bit on, of late, is that I have written code that
expects to have pg_config to do some degree of self-discovery, only to
find production folk complaining that they only have "psql" available
in their environment.

I don't expect the extension system to help with any of this, since if
"production folk" try to install minimal sets of packages, they're
liable to consciously exclude extension support.  The "improvement"
would come from drawing contrib a bit closer to core, and encouraging
packagers (dpkg, rpm, ports) to fold contrib into "base" rather than
separating it.  I'm sure that would get some pushback, though.
--
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"


Re: Why not install pgstattuple by default?

From
Andrew Dunstan
Date:

On 05/06/2011 01:55 PM, Christopher Browne wrote:
>
> Once we got AIX running a buildfarm node, that led to getting *ALL* of
> contrib working there, and I'm pretty sure that similar happened with
> other platforms at around the same time (I'm thinking this was 7.4,
> but it might have been 8.0)

FYI, the buildfarm started in late 2004, near the end of the 8.0 
development cycle. It quickly led to a number of contrib fixes.

Time flies when you're having fun ...

cheers

andrew




Re: Why not install pgstattuple by default?

From
Euler Taveira de Oliveira
Date:
Em 06-05-2011 14:55, Christopher Browne escreveu:
> The "improvement"
> would come from drawing contrib a bit closer to core, and encouraging
> packagers (dpkg, rpm, ports) to fold contrib into "base" rather than
> separating it.  I'm sure that would get some pushback, though.

I'm in favor of finding out which extensions are popular and making
them part of "base"; the other ones could be moved to PGXN.


--
  Euler Taveira de Oliveira - Timbira       http://www.timbira.com.br/
  PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Why not install pgstattuple by default?

From
Greg Smith
Date:
Christopher Browne wrote:
> I'm getting "paper cuts" quite a bit these days over the differences
> between what different packaging systems decide to install.  The one
> *I* get notably bit on, of late, is that I have written code that
> expects to have pg_config to do some degree of self-discovery, only to
> find production folk complaining that they only have "psql" available
> in their environment.
>   

Given the other improvements in being able to build extensions in 9.1, 
we really should push packagers to move pg_config from the PostgreSQL 
development package into the main one starting in that version.  I've 
gotten bit by this plenty of times.

-- 
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us




Re: Why not install pgstattuple by default?

From
Christopher Browne
Date:
On Fri, May 6, 2011 at 2:32 PM, Greg Smith <greg@2ndquadrant.com> wrote:
> Christopher Browne wrote:
>>
>> I'm getting "paper cuts" quite a bit these days over the differences
>> between what different packaging systems decide to install.  The one
>> *I* get notably bit on, of late, is that I have written code that
>> expects to have pg_config to do some degree of self-discovery, only to
>> find production folk complaining that they only have "psql" available
>> in their environment.
>
> Given the other improvements in being able to build extensions in 9.1, we
> really should push packagers to move pg_config from the PostgreSQL
> development package into the main one starting in that version.  I've gotten
> bit by this plenty of times.

I'm agreeable to that, in general.

If there's a "server" package and a "client" package, it likely only
fits with the "server" package.  On a host where only the "client" is
installed, they won't be able to install extensions, so it's pretty
futile to have it there.
--
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"


Re: Why not install pgstattuple by default?

From
Andrew Dunstan
Date:

On 05/06/2011 03:14 PM, Christopher Browne wrote:
> On Fri, May 6, 2011 at 2:32 PM, Greg Smith<greg@2ndquadrant.com>  wrote:
>> Christopher Browne wrote:
>>> I'm getting "paper cuts" quite a bit these days over the differences
>>> between what different packaging systems decide to install.  The one
>>> *I* get notably bit on, of late, is that I have written code that
>>> expects to have pg_config to do some degree of self-discovery, only to
>>> find production folk complaining that they only have "psql" available
>>> in their environment.
>> Given the other improvements in being able to build extensions in 9.1, we
>> really should push packagers to move pg_config from the PostgreSQL
>> development package into the main one starting in that version.  I've gotten
>> bit by this plenty of times.
> I'm agreeable to that, in general.
>
> If there's a "server" package and a "client" package, it likely only
> fits with the "server" package.  On a host where only the "client" is
> installed, they won't be able to install extensions, so it's pretty
> futile to have it there.

I don't agree. It can be useful even there, to see how the libraries are 
configured, for example. I'd be inclined to bundle it with 
postgresql-libs or the moral equivalent.

cheers

andrew


Re: Why not install pgstattuple by default?

From
Magnus Hagander
Date:
On Fri, May 6, 2011 at 21:19, Andrew Dunstan <andrew@dunslane.net> wrote:
>
>
> On 05/06/2011 03:14 PM, Christopher Browne wrote:
>>
>> On Fri, May 6, 2011 at 2:32 PM, Greg Smith<greg@2ndquadrant.com>  wrote:
>>>
>>> Christopher Browne wrote:
>>>>
>>>> I'm getting "paper cuts" quite a bit these days over the differences
>>>> between what different packaging systems decide to install.  The one
>>>> *I* get notably bit on, of late, is that I have written code that
>>>> expects to have pg_config to do some degree of self-discovery, only to
>>>> find production folk complaining that they only have "psql" available
>>>> in their environment.
>>>
>>> Given the other improvements in being able to build extensions in 9.1, we
>>> really should push packagers to move pg_config from the PostgreSQL
>>> development package into the main one starting in that version.  I've
>>> gotten
>>> bit by this plenty of times.
>>
>> I'm agreeable to that, in general.
>>
>> If there's a "server" package and a "client" package, it likely only
>> fits with the "server" package.  On a host where only the "client" is
>> installed, they won't be able to install extensions, so it's pretty
>> futile to have it there.
>
> I don't agree. It can be useful even there, to see how the libraries are
> configured, for example. I'd be inclined to bundle it with postgresql-libs
> or the moral equivalent.

+1.

And it's not like it wastes a huge amount of space...


--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: Why not install pgstattuple by default?

From
Tom Lane
Date:
Magnus Hagander <magnus@hagander.net> writes:
> On Fri, May 6, 2011 at 21:19, Andrew Dunstan <andrew@dunslane.net> wrote:
>> On 05/06/2011 03:14 PM, Christopher Browne wrote:
>>> If there's a "server" package and a "client" package, it likely only
>>> fits with the "server" package.  On a host where only the "client" is
>>> installed, they won't be able to install extensions, so it's pretty
>>> futile to have it there.

>> I don't agree. It can be useful even there, to see how the libraries are
>> configured, for example. I'd be inclined to bundle it with postgresql-libs
>> or the moral equivalent.

> +1.

Well, actually, I think packagers have generally put it into a -devel
subpackage.  If it were in either a "server" or "client" package there
would be much less of an issue.

Bundling pg_config into a -libs package is probably not going to happen,
at least not on Red Hat systems, because it would create multilib issues
(ie, you're supposed to be able to install 32-bit and 64-bit libraries
concurrently, but there's noplace to put a /usr/bin file without causing
a conflict).

FWIW, I did move pg_config from -devel to the "main" (really client)
postgresql package in Fedora, as of 9.0.  That will ensure it's present
in either client or server installations.  Eventually that packaging
will reach RHEL ...
        regards, tom lane


Re: Why not install pgstattuple by default?

From
Andrew Dunstan
Date:

On 05/06/2011 04:06 PM, Tom Lane wrote:
> Magnus Hagander<magnus@hagander.net>  writes:
>> On Fri, May 6, 2011 at 21:19, Andrew Dunstan<andrew@dunslane.net>  wrote:
>>> On 05/06/2011 03:14 PM, Christopher Browne wrote:
>>>> If there's a "server" package and a "client" package, it likely only
>>>> fits with the "server" package.  On a host where only the "client" is
>>>> installed, they won't be able to install extensions, so it's pretty
>>>> futile to have it there.
>>> I don't agree. It can be useful even there, to see how the libraries are
>>> configured, for example. I'd be inclined to bundle it with postgresql-libs
>>> or the moral equivalent.
>> +1.
> Well, actually, I think packagers have generally put it into a -devel
> subpackage.  If it were in either a "server" or "client" package there
> would be much less of an issue.
>
> Bundling pg_config into a -libs package is probably not going to happen,
> at least not on Red Hat systems, because it would create multilib issues
> (ie, you're supposed to be able to install 32-bit and 64-bit libraries
> concurrently, but there's noplace to put a /usr/bin file without causing
> a conflict).
>
> FWIW, I did move pg_config from -devel to the "main" (really client)
> postgresql package in Fedora, as of 9.0.  That will ensure it's present
> in either client or server installations.  Eventually that packaging
> will reach RHEL ...
>
>             

That's reasonable, and certainly better than having it in -devel.

cheers

andrew


Re: Why not install pgstattuple by default?

From
Tom Lane
Date:
Christopher Browne <cbbrowne@gmail.com> writes:
> But people are evidently still setting packaging policies based on how
> things were back in 7.3, even though that perhaps isn't necessary
> anymore.

FWIW, once you get past the client versus server distinction, I think
most subpackaging decisions are based on either the idea that "only a
minority of people will want this", or a desire to limit how many
dependencies are pulled in by the main package(s).  Both of those
concerns apply to various subsets of -contrib, which means it's going
to be hard to persuade packagers to fold -contrib into the -server
package altogether.  Nor would you gain their approval by trying to
pre-empt the decision.

We might get somewhere by trying to identify a small set of particularly
popular contrib modules that don't add any extra dependencies, and then
recommending to packagers that those ones get bundled into the main
server package.

> Certainly it's not a huge amount of code; less than 2MB these days.
> -> % wc `dpkg -L postgresql-contrib-9.0` | tail -1
>   15952   67555 1770987 total

Well, to add some concrete facts rather than generalities to my own post,
here are the sizes of the built RPMs from my last build for Fedora:

-rw-r--r--. 1 tgl tgl  3839458 Apr 18 10:50 postgresql-9.0.4-1.fc13.x86_64.rpm
-rw-r--r--. 1 tgl tgl   490788 Apr 18 10:50 postgresql-contrib-9.0.4-1.fc13.x86_64.rpm
-rw-r--r--. 1 tgl tgl 27337677 Apr 18 10:51 postgresql-debuginfo-9.0.4-1.fc13.x86_64.rpm
-rw-r--r--. 1 tgl tgl   961660 Apr 18 10:50 postgresql-devel-9.0.4-1.fc13.x86_64.rpm
-rw-r--r--. 1 tgl tgl  7569048 Apr 18 10:50 postgresql-docs-9.0.4-1.fc13.x86_64.rpm
-rw-r--r--. 1 tgl tgl   246506 Apr 18 10:50 postgresql-libs-9.0.4-1.fc13.x86_64.rpm
-rw-r--r--. 1 tgl tgl    64940 Apr 18 10:50 postgresql-plperl-9.0.4-1.fc13.x86_64.rpm
-rw-r--r--. 1 tgl tgl    65776 Apr 18 10:50 postgresql-plpython-9.0.4-1.fc13.x86_64.rpm
-rw-r--r--. 1 tgl tgl    45941 Apr 18 10:50 postgresql-pltcl-9.0.4-1.fc13.x86_64.rpm
-rw-r--r--. 1 tgl tgl  5302117 Apr 18 10:50 postgresql-server-9.0.4-1.fc13.x86_64.rpm
-rw-r--r--. 1 tgl tgl  1370509 Apr 18 10:50 postgresql-test-9.0.4-1.fc13.x86_64.rpm
-rw-r--r--. 1 tgl tgl  3644113 Apr 18 10:50 postgresql-upgrade-9.0.4-1.fc13.x86_64.rpm

The separate debuginfo package is distro policy enforced by toolchain;
I couldn't do anything about that even if I wanted to.  The separate
-libs subpackage is also hard to avoid because of distro policy about
multilib installations.  Separating devel support files (such as
headers) is also standard practice.  The other subdivisions are either
my fault or those of my predecessors.  plperl, plpython, and pltcl are
split out for dependency reasons, ie to not have the -server package
require you to install those languages and their respective ecosystems.
I think the separation of the -docs, -test, and -upgrade subpackages is
also pretty easy to defend on the grounds that "they're big and not
everyone wants 'em, especially not in production".

That leaves us with these three subpackages about which there's room
for argument:

-rw-r--r--. 1 tgl tgl  3839458 Apr 18 10:50 postgresql-9.0.4-1.fc13.x86_64.rpm
-rw-r--r--. 1 tgl tgl   490788 Apr 18 10:50 postgresql-contrib-9.0.4-1.fc13.x86_64.rpm
-rw-r--r--. 1 tgl tgl  5302117 Apr 18 10:50 postgresql-server-9.0.4-1.fc13.x86_64.rpm

Merging -contrib into the server package would increase the size of the
latter by almost 10%, which is enough to bother people.  Also, a bit of
dependency extraction shows that -contrib has these dependencies beyond
the ones in the two main packages:

libcrypt.so.1
libossp-uuid.so.16
libxslt.so.1

That's not a particularly large list, I guess, but they're still the
sorts of dependencies that don't win any friends when it's time to get
the distro to fit on a DVD.
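
(For reference, that kind of dependency extraction can be reproduced
with something like the following sketch, run against the built
-contrib RPM from the listing above:)

    # list the requirements recorded in the package file itself
    rpm -qp --requires postgresql-contrib-9.0.4-1.fc13.x86_64.rpm | sort -u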

Bottom line is that I'd rather have a smaller postgresql-server package
that gets included in the shipping DVD than a complete one that gets
kicked off because it's too large and pulls in too many other non-core
dependencies.

So, again, some selective migration of contrib modules into the main
-server package might be doable, but the key word there is selective.
        regards, tom lane


Re: Why not install pgstattuple by default?

From
Josh Berkus
Date:
All,

> We might get somewhere by trying to identify a small set of particularly
> popular contrib modules that don't add any extra dependencies, and then
> recommending to packagers that those ones get bundled into the main
> server package.

Yeah, I wasn't thinking of including all of contrib.  There's a lot of
reasons not to do that.  I was asking about pgstattuple in particular,
since it's:
(a) small
(b) has no external dependencies
(c) adds no stability risk or performance overhead
(d) is usually needed on production systems when it's needed at all

It's possible that we have one or two other diagnostic utilities which
meet the above profile. pageinspect, maybe?

The reason why this is such an issue is that for big users with
high-demand production environments, installing any software, even the
postgresql-devel or postgresql-contrib packages, is a major IT deal
which requires weeks of advance scheduling.  As a result, diagnostic
tools from contrib tend not to be used, because the problem they are
needed to diagnose is much more urgent than that.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


Re: Why not install pgstattuple by default?

From
Robert Haas
Date:
On Fri, May 6, 2011 at 5:58 PM, Josh Berkus <josh@agliodbs.com> wrote:
> Yeah, I wasn't thinking of including all of contrib.  There's a lot of
> reasons not to do that.

Slightly off-topic, but I really think we would benefit from trying to
divide up contrib.  Right now it's a mixture of (a) debugging and
instrumentation tools (e.g. pgstattuple, pageinspect, pgrowlocks,
pg_freespacemap, pg_buffercache), (b) server functionality that is
generally useful but not considered worth including in core (e.g. hstore,
citext, pg_trgm), (c) deprecated modules that we keep around mostly
for hysterical reasons (tsearch2, xml2, intagg), and (d) examples and
regression test support (dummy_seclabel, spi, start-scripts).  I think
it would make things a lot easier for both packagers and actual users
if we separated these things into different directories, e.g.:

debugging and instrumentation tools -> src/debug
server functionality -> contrib
server functionality (deprecated) -> contrib/deprecated
examples & regression test support -> src/test/examples

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Why not install pgstattuple by default?

From
Greg Smith
Date:
On 05/06/2011 05:58 PM, Josh Berkus wrote:
> Yeah, I wasn't thinking of including all of contrib. There's a lot of
> reasons not to do that.  I was asking about pgstattuple in particular,
> since it's:
> (a) small
> (b) has no external dependancies
> (c) adds no stability risk or performance overhead
> (d) is usually needed on production systems when it's needed at all
>
> It's possible that we have one or two other diagnostic utilities which
> meet the above profile. pageinspect, maybe?
>    

I use pgstattuple, pageinspect, pg_freespacemap, and pg_buffercache 
regularly enough that I wish they were more common.  Throw in pgrowlocks 
and you've got the whole group Robert put into the debug set.  It makes 
me sad every time I finish a utility using one of these and realize I'll 
have to include the whole "make sure you have the contrib modules 
installed" disclaimer in its documentation again.

These are the only ones I'd care about moving into a more likely place.  
The rest of the contrib modules are the sort where if you need them, you 
realize that early and get them installed.  These are different by 
virtue of their need popping up most often during emergencies.  The fact 
that I believe they all match the low impact criteria too makes it even 
easier to consider.
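
(As one concrete illustration of the emergency use I mean, a quick
"what's occupying shared_buffers" summary with pg_buffercache might
look like the sketch below, along the lines of the sample query that
ships with the module:)

    # hypothetical ad-hoc check: which relations occupy the most buffers
    psql -c "SELECT c.relname, count(*) AS buffers
             FROM pg_buffercache b
             JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
             WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                                         WHERE datname = current_database()))
             GROUP BY c.relname
             ORDER BY buffers DESC LIMIT 10"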

-- 
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us




Re: Why not install pgstattuple by default?

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> On Fri, May 6, 2011 at 5:58 PM, Josh Berkus <josh@agliodbs.com> wrote:
>> Yeah, I wasn't thinking of including all of contrib.  There's a lot of
>> reasons not to do that.

> Slightly off-topic, but I really think we would benefit from trying to
> divide up contrib. [ snip ]
> I think
> it would make things a lot easier for both packagers and actual users
> if we separated these things into different directories, e.g.:

> debugging and instrumentation tools -> src/debug
> server functionality -> contrib
> server functionality (deprecated) -> contrib/deprecated
> examples & regression test suport -> src/test/examples

From a packager's standpoint, that would be entirely worthless.  The
source tree's just a source tree, they don't care what lives where
within it.  I was just thinking about what it'd take to actually
repackage things for Fedora, and the main problem is here:

%files contrib
...
%{_datadir}/pgsql/contrib/
...

If you're not adept at reading RPM specfiles, what that is saying
is that everything that "make install" has stuck under
${prefix}/share/pgsql/contrib/ is to be included in the contrib RPM.
To selectively move some stuff to the server RPM, I'd have to replace
that one line with a file-by-file list of *everything* in share/contrib,
and then move some of those lines to the "%files server" section, and
then look forward to having to maintain that list in future versions.
I'm already maintaining a file-by-file list of contrib's .so's, and I
can tell you it's a PITA.

As a packager, what I'd really want to see from a division into
recommended and not-so-recommended packages is that they get installed
into different subdirectories by "make install".  Then I could just
point RPM at those directories and I'd be done.

I don't know how practical this is from our development standpoint,
nor from a user's standpoint --- I doubt we want to ask people to use
different CREATE EXTENSION commands depending on the preferredness of
the extension.

A possibly workable compromise would be to provide two separate makefile
installation targets for preferred and less preferred modules.  The RPM
script could then do something like
        make install-contrib-preferred
        ls -R .../sharedir >contrib.files.for.server-package
        make install-contrib-second-class-citizens
        ls -R .../sharedir >all.contrib.files
        ... and then some magic with "comm" to separate out the contrib
        ... files not mentioned in contrib.files.for.server-package ...
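
(That "comm" magic might look roughly like this sketch, assuming both
listings are sorted first:)

        # files present only in the full listing belong to the -contrib package
        sort contrib.files.for.server-package >server.sorted
        sort all.contrib.files >all.sorted
        comm -13 server.sorted all.sorted >contrib.files.for.contrib-package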
 

Pretty grotty but it would work.  Anyway my point is that this is all
driven off the *installed* file tree.  A specfile writer doesn't know
nor want to know where "make install" is getting things from in the
source tree.
        regards, tom lane


Re: Why not install pgstattuple by default?

From
Josh Berkus
Date:
On 5/6/11 3:19 PM, Robert Haas wrote:
> Slightly off-topic, but I really think we would benefit from trying to
> divide up contrib.

I don't agree, unless by "divide up" you mean "move several things to
extensions".

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


Re: Why not install pgstattuple by default?

From
Josh Berkus
Date:
> These are the only ones I'd care about moving into a more likely place. 
> The rest of the contrib modules are the sort where if you need them, you
> realize that early and get them installed.  These are different by
> virtue of their need popping up most often during emergencies.  The fact
> that I believe they all match the low impact criteria too makes it even
> easier to consider.

Yes, precisely.  If I need intarray, I'm going to need it for a
development push, which is planned well in advance.  But if I need
pageinspect, it's almost certainly because an emergency has arisen.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


Re: Why not install pgstattuple by default?

From
Greg Stark
Date:
On Fri, May 6, 2011 at 11:32 PM, Greg Smith <greg@2ndquadrant.com> wrote:
> I use pgstattuple, pageinspect, pg_freespacemap, and pg_buffercache
> regularly enough that I wish they were more common.  Throw in pgrowlocks and
> you've got the whole group Robert put into the debug set.  It makes me sad
> every time I finish a utility using one of these and realize I'll have to
> include the whole "make sure you have the contrib modules installed"
> disclaimer in its documentation again.

Well the lightweight way to achieve what you want is to just move
these functions into core. There's a pretty good argument to be made
for debugging tools being considered an integral part of a base
system. I remember making the same argument when Sun first made the
radical move for a Unix vendor to stop shipping a working C compiler
and debugger as part of the base Solaris packages.

The only argument I see as particularly frightening on that front is
people playing the sekurity card. A naive attacker who obtains access
to the postgres account could do more damage than they might be able
to do without these modules installed. Of course an attacker with
"postgres" can do just about anything but it's not entirely baseless
--  we don't set up the database with modules like plsh installed by
default for example.

The only actual security issue I can think of is that the pageinspect
module would let users look at deleted records more easily. It would
be pretty tricky, but not impossible, to do that without it.
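
(To make that concrete: with pageinspect installed, looking at dead but
not-yet-vacuumed tuples is roughly a one-liner; the table name here is
hypothetical.)

    # raw line pointers and xmin/xmax for every tuple on page 0,
    # including tuples that have been deleted but not yet vacuumed away
    psql -c "SELECT lp, t_xmin, t_xmax, t_ctid
             FROM heap_page_items(get_raw_page('some_table', 0))"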

--
greg


Re: Why not install pgstattuple by default?

From
Robert Haas
Date:
On Fri, May 6, 2011 at 6:47 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> As a packager, what I'd really want to see from a division into
> recommended and not-so-recommended packages is that they get installed
> into different subdirectories by "make install".  Then I could just
> point RPM at those directories and I'd be done.

Well, that might be good, too.  But, right now, if someone pulls up
our documentation, or our source tree, they could easily be forgiven
for thinking that hstore and dummy_seclabel are comparable, and they
aren't.

> I don't know how practical this is from our development standpoint,
> nor from a user's standpoint --- I doubt we want to ask people to use
> different CREATE EXTENSION commands depending on the preferredness of
> the extension.

Certainly not.

> A possibly workable compromise would be to provide two separate makefile
> installation targets for preferred and less preferred modules.  The RPM
> script could then do something like
>
>        make install-contrib-preferred
>        ls -R .../sharedir >contrib.files.for.server-package
>        make install-contrib-second-class-citizens
>        ls -R .../sharedir >all.contrib.files
>        ... and then some magic with "comm" to separate out the contrib
>        ... files not mentioned in contrib.files.for.server-package ...
>
> Pretty grotty but it would work.  Anyway my point is that this is all
> driven off the *installed* file tree.  A specfile writer doesn't know
> nor want to know where "make install" is getting things from in the
> source tree.

This isn't any uglier than some other RPM hacks I've seen, and less
ugly than some, but you'd have a better sense of that than I do.  At
any rate, having the various categories separated in the source tree
can't possibly hurt the effort to make something like this work, and
might make it somewhat easier.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Why not install pgstattuple by default?

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> On Fri, May 6, 2011 at 6:47 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> As a packager, what I'd really want to see from a division into
>> recommended and not-so-recommended packages is that they get installed
>> into different subdirectories by "make install".

> Well, that might be good, too.  But, right now, if someone pulls up
> our documentation, or our source tree, they could easily be forgiven
> for thinking that hstore and dummy_seclabel are comparable, and they
> aren't.

Sure, but that's a documentation issue, which again is not going to be
helped by a source-tree rearrangement.

As somebody who spends a lot of time on back-patching, I'm not excited
in the least by suggestions to rearrange the source tree for marginal
cosmetic benefits, which is all that I see here.
        regards, tom lane


Re: Why not install pgstattuple by default?

From
Robert Haas
Date:
On Fri, May 6, 2011 at 9:17 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> On Fri, May 6, 2011 at 6:47 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> As a packager, what I'd really want to see from a division into
>>> recommended and not-so-recommended packages is that they get installed
>>> into different subdirectories by "make install".
>
>> Well, that might be good, too.  But, right now, if someone pulls up
>> our documentation, or our source tree, they could easily be forgiven
>> for thinking that hstore and dummy_seclabel are comparable, and they
>> aren't.
>
> Sure, but that's a documentation issue, which again is not going to be
> helped by a source-tree rearrangement.

I disagree - I think it would be helpful to rearrange both things.

> As somebody who spends a lot of time on back-patching, I'm not excited
> in the least by suggestions to rearrange the source tree for marginal
> cosmetic benefits, which is all that I see here.

I understand, but we have back-patched only 32 patches that touch
contrib into REL9_0_STABLE since its creation, of which 9 were done by
you, and only 4 of those would have required adjustment under the
separation criteria I proposed.  I think, therefore, that the impact
would be bearable.  Source-code rearrangement is never going to be
completely free, but that seems like a tolerable level of annoyance.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Why not install pgstattuple by default?

From
Dimitri Fontaine
Date:
Christopher Browne <cbbrowne@gmail.com> writes:
> I don't expect the extension system to help with any of this, since if
> "production folk" try to install minimal sets of packages, they're
> liable to consciously exclude extension support.  The "improvement"
> would come from drawing contrib a bit closer to core, and encouraging
> packagers (dpkg, rpm, ports) to fold contrib into "base" rather than
> separating it.  I'm sure that would get some pushback, though.

I think the next step here is to better classify contrib/ into what we
consider production-ready core extensions (adminpack, hstore, ltree,
pgstattuple, you name it — that's the trick), code examples (spi, some
more I guess) and extensibility examples (for hooks or whatnot).

We've been talking about renaming contrib for a long time, but that will
not cut it.  Classifying it and agreeing to maintain some parts of it
the same way we maintain the core is what's asked here.  Is there a will
to go there?

If there's a will to maintain some contribs the way core is maintained
itself, we have to pick a new name for that, and to pick a list of
current contribs to move in there.  Then packagers will either include
that in the main package or have the main package depend on the new one.

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr     PostgreSQL : Expertise, Formation et Support


Re: Why not install pgstattuple by default?

From
Robert Haas
Date:
On Sat, May 7, 2011 at 8:54 AM, Dimitri Fontaine <dimitri@2ndquadrant.fr> wrote:
> We've been talking about renaming contrib for a long time, but that will
> not cut it.  Classifying it and agreeing to maintain some parts of it
> the same way we maintain the core is what's asked here.  Is there a will
> to go there?

I'm game.  I doubt it'll be a lot more maintenance; we already pretty
much patch contrib when we patch everything else.  It'll just be
easier to sort the wheat from the chaff.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Why not install pgstattuple by default?

From
"Johann 'Myrkraverk' Oskarsson"
Date:
On Fri, 06 May 2011 20:06:04 -0000, Tom Lane <tgl@sss.pgh.pa.us> wrote:

> Bundling pg_config into a -libs package is probably not going to happen,
> at least not on Red Hat systems, because it would create multilib issues
> (ie, you're supposed to be able to install 32-bit and 64-bit libraries
> concurrently, but there's noplace to put a /usr/bin file without causing
> a conflict).

Is there no separate directory for 64-bit binaries, like /usr/bin and
/usr/bin/amd64 on Solaris?  And even in the absence of such a convention,
I'd still like to have both 32-bit and 64-bit binaries of the
server/client available.

I have a feeling this topic is digressing a bit.


--
  Johann Oskarsson                http://www.2ndquadrant.com/
  PostgreSQL Development, 24x7 Support, Training and Services
  Blog: http://my.opera.com/myrkraverk/blog/


Re: Why not install pgstattuple by default?

From
Tom Lane
Date:
Greg Stark <gsstark@mit.edu> writes:
> On Fri, May 6, 2011 at 11:32 PM, Greg Smith <greg@2ndquadrant.com> wrote:
>> I use pgstattuple, pageinspect, pg_freespacemap, and pg_buffercache
>> regularly enough that I wish they were more common.  Throw in pgrowlocks and
>> you've got the whole group Robert put into the debug set.  It makes me sad
>> every time I finish a utility using one of these and realize I'll have to
>> include the whole "make sure you have the contrib modules installed"
>> disclaimer in its documentation again.

> Well the lightweight way to achieve what you want is to just move
> these functions into core.

I'm completely not in favor of that.  We have spent man-years upon
man-years on making Postgres an extensible system.  If we can't actually
*use* the extension features then that was all a waste of effort.

If anything I'd rather see us looking at what parts of the current core
system could be pushed out to extensions.  The geometric types are a
pretty obvious candidate, for example.

> The only argument I see as particularly frightening on that front is
> people playing the sekurity card.

Yeah, and it's a reasonable argument.  Even if it's not a reasonable
argument, you won't win any friends from the other side of the fence
by taking away their ability to choose.
        regards, tom lane


Re: Why not install pgstattuple by default?

From
Peter Eisentraut
Date:
On fre, 2011-05-06 at 14:32 -0400, Greg Smith wrote:
> Given the other improvements in being able to build extensions in 9.1,
> we really should push packagers to move pg_config from the PostgreSQL 
> development package into the main one starting in that version.  I've 
> gotten bit by this plenty of times.

Do you need pg_config to install extensions?



Re: Why not install pgstattuple by default?

From
Euler Taveira de Oliveira
Date:
Em 07-05-2011 13:42, Peter Eisentraut escreveu:
> Do you need pg_config to install extensions?
>
No. But we need it to build other extensions.


--
  Euler Taveira de Oliveira - Timbira       http://www.timbira.com.br/
  PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Why not install pgstattuple by default?

From
Peter Eisentraut
Date:
On lör, 2011-05-07 at 17:35 -0300, Euler Taveira de Oliveira wrote:
> Em 07-05-2011 13:42, Peter Eisentraut escreveu:
> > Do you need pg_config to install extensions?
> >
> No. But we need it to build other extensions.

But for that you need the -dev[el] package anyway, so there would be no
point in moving pg_config out of it.



Re: Why not install pgstattuple by default?

From
Greg Smith
Date:
On 05/07/2011 12:42 PM, Peter Eisentraut wrote:
> On fre, 2011-05-06 at 14:32 -0400, Greg Smith wrote:
>    
>> Given the other improvements in being able to build extensions in 9.1,
>> we really should push packagers to move pg_config from the PostgreSQL
>> development package into the main one starting in that version.  I've
>> gotten bit by this plenty of times.
>>      
> Do you need pg_config to install extensions?
>    

No, but you still need it to build them.  PGXN is a source code 
distribution method, not a binary one.  It presumes users can build 
modules they download using PGXS.  No pg_config, no working PGXS, no 
working PGXN.  For such a small binary to ripple out to that impact is bad.
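
(For context, a typical out-of-tree PGXS build is driven entirely off
pg_config; a minimal sketch, assuming the module's Makefile follows the
standard PGXS pattern, with a made-up module directory and install path:)

    # build and install an extension against a specific installation
    cd some_extension/
    make PG_CONFIG=/usr/pgsql-9.1/bin/pg_config
    make install PG_CONFIG=/usr/pgsql-9.1/bin/pg_config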

The repmgr program we distribute has the same problem, so I've recently 
been getting first-hand reports of just how many people are likely to 
run into this.  You have to install postgresql-devel with RPM, and on 
Debian the very non-obvious postgresql-server-dev-$version package.

Anyway, didn't want to hijack this thread beyond pointing out that if 
there any package reshuffling that happens for contrib changes, it 
should check for and resolve this problem too.

-- 
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us




Re: Why not install pgstattuple by default?

From
Andrew Dunstan
Date:

On 05/07/2011 04:43 PM, Peter Eisentraut wrote:
> On lör, 2011-05-07 at 17:35 -0300, Euler Taveira de Oliveira wrote:
>> Em 07-05-2011 13:42, Peter Eisentraut escreveu:
>>> Do you need pg_config to install extensions?
>>>
>> No. But we need it to build other extensions.
> But for that you need the -dev[el] package anyway, so there would be no
> point in moving pg_config out of it.
>


pg_config is useful quite apart from its use in building things, as was 
discussed upthread.

cheers

andrew


Re: Why not install pgstattuple by default?

From
Peter Eisentraut
Date:
On lör, 2011-05-07 at 17:16 -0400, Andrew Dunstan wrote:
> pg_config is useful quite apart from its use in building things, as was 
> discussed upthread.

Link please.



Re: Why not install pgstattuple by default?

From
Peter Eisentraut
Date:
On lör, 2011-05-07 at 17:06 -0400, Greg Smith wrote:
> The repmgr program we distribute has the same problem, so I've been 
> getting first-hand reports of just how many people are likely to run 
> into this recently.  You have to install postgresql-devel with RPM and
> on Debian, the very non-obvious postgresql-server-dev-$version 

I'm pretty sure that for installing modules that need compilation from
CPAN or PyPI, you will need the corresponding p*-dev* package installed.
Nothing new here.  I don't think you will get distributors to abandon
the concept of -dev(el) packages for this.  Just saying.



Re: Why not install pgstattuple by default?

From
Andrew Dunstan
Date:

On 05/07/2011 05:26 PM, Peter Eisentraut wrote:
> On lör, 2011-05-07 at 17:16 -0400, Andrew Dunstan wrote:
>> pg_config is useful quite apart from its use in building things, as was
>> discussed upthread.
> Link please.
>

<http://archives.postgresql.org/pgsql-hackers/2011-05/msg00275.php>

cheers

andrew


Re: Why not install pgstattuple by default?

From
Greg Smith
Date:
On 05/06/2011 04:06 PM, Tom Lane wrote:
> FWIW, I did move pg_config from -devel to the "main" (really client)
> postgresql package in Fedora, as of 9.0.  That will ensure it's present
> in either client or server installations.  Eventually that packaging
> will reach RHEL ...
>    

We should make sure that the PGDG packages adopt that for 9.1 then, so 
it starts catching on more.  Unless Devrim has changed it to catch up 
since I last installed an RPM set, in 9.0 it's still in the same place:

$ rpm -qf /usr/pgsql-9.0/bin/pg_config
postgresql90-devel-9.0.2-2PGDG.rhel5

While Peter's question about whether it's really all that useful is 
reasonable, I'd at least like to get a better error message when you 
don't have everything needed to compile extensions.  I think the 
shortest path to that is making pg_config more likely to be installed, 
then to check whether the file "pg_config --pgxs" references exists.  
I'll see if I can turn that idea into an actual change to propose.
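
(A sketch of the kind of check I mean, assuming pg_config is on the
PATH; the error messages are just illustrative:)

    # does the PGXS makefile that pg_config points at actually exist?
    pgxs=$(pg_config --pgxs) || { echo "pg_config not found"; exit 1; }
    test -f "$pgxs" || echo "PGXS makefile $pgxs missing; install the -devel package"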

-- 
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us




Re: Why not install pgstattuple by default?

From
Greg Smith
Date:
Attached patch is a first cut at what moving one contrib module (in this
case pg_buffercache) to a new directory structure might look like.  The
idea is that src/extension could become a place for "first-class"
extensions to live.  Those are ones community is committed to providing
in core, but are just better implemented as extensions than in-database
functions, for reasons that include security.  This idea has been shared
by a lot of people for a while, only problem is that it wasn't really
practical to implement cleanly until the extensions code hit.  I think
it is now, this attempts to prove it.

Since patches involving file renaming are clunky, the changes might be
easier to see at
https://github.com/greg2ndQuadrant/postgres/commit/507923e21e963c873a84f1b850d64e895776574f
where I just pushed this change too.  The install step for the modules
looks like this now:

gsmith@grace:~/pgwork/src/move-contrib/src/extension/pg_buffercache$
make install
/bin/mkdir -p '/home/gsmith/pgwork/inst/move-contrib/lib/postgresql'
/bin/mkdir -p
'/home/gsmith/pgwork/inst/move-contrib/share/postgresql/extension'
/bin/sh ../../../config/install-sh -c -m 755  pg_buffercache.so
'/home/gsmith/pgwork/inst/move-contrib/lib/postgresql/pg_buffercache.so'
/bin/sh ../../../config/install-sh -c -m 644 ./pg_buffercache.control
'/home/gsmith/pgwork/inst/move-contrib/share/postgresql/extension/'
/bin/sh ../../../config/install-sh -c -m 644 ./pg_buffercache--1.0.sql
./pg_buffercache--unpackaged--1.0.sql
'/home/gsmith/pgwork/inst/move-contrib/share/postgresql/extension/'
$ psql -c "create extension pg_buffercache"
CREATE EXTENSION

The only clunky bit I wasn't really happy with is the amount of code
duplication that comes from having a src/extension/Makefile that looks
almost, but not quite, identical to contrib/Makefile.  The rest of the
changes don't seem too bad to me, and even that's really only 36 lines
that aren't touched often.  Yes, the paths are different, so backports
won't happen without an extra step.  But the code changes required were
easier than I was expecting, due to the general good modularity of the
extensions infrastructure.  So long as the result ends up in
share/postgresql/extension/ , whether they started in contrib/<module>
or src/extension/<module> doesn't really matter to CREATE EXTENSION.
But having them broken out this way makes it easy for the default
Makefile to build and install them all.  (I recognize I didn't do that
last step yet though)

I'll happily go convert pgstattuple and the rest of the internal
diagnostics modules to this scheme, and do the doc cleanups, this
upcoming week if it means I'll be able to use those things without
installing all of contrib one day.  Ditto for proposing RPM and Debian
packaging changes that match them.  All that work will get paid back the
first time I don't have to fill out a bunch of paperwork (again) at a
customer site justifying why they need to install the contrib [RPM|deb]
package (which has some scary stuff in it) on all their servers, just so
I can get some bloat or buffer inspection module.

--
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us



Attachment

Re: Why not install pgstattuple by default?

From
Peter Eisentraut
Date:
On lör, 2011-05-07 at 17:38 -0400, Andrew Dunstan wrote:
> 
> On 05/07/2011 05:26 PM, Peter Eisentraut wrote:
> > On lör, 2011-05-07 at 17:16 -0400, Andrew Dunstan wrote:
> >> pg_config is useful quite apart from its use in building things, as was
> >> discussed upthread.
> > Link please.
> >
> 
> <http://archives.postgresql.org/pgsql-hackers/2011-05/msg00275.php>

That thread just asserts that it might be useful, and I responded by
asking for what.



Re: Why not install pgstattuple by default?

From
Andrew Dunstan
Date:

On 05/08/2011 05:24 AM, Peter Eisentraut wrote:
> On lör, 2011-05-07 at 17:38 -0400, Andrew Dunstan wrote:
>> On 05/07/2011 05:26 PM, Peter Eisentraut wrote:
>>> On lör, 2011-05-07 at 17:16 -0400, Andrew Dunstan wrote:
>>>> pg_config is useful quite apart from its use in building things, as was
>>>> discussed upthread.
>>> Link please.
>>>
>> <http://archives.postgresql.org/pgsql-hackers/2011-05/msg00275.php>
> That thread just asserts that it might be useful, and I responded by
> asking for what.


As I said there: "to see how the libraries are configured, for example."

Just the other day I wanted to know what compilation options had been 
used for a particular installation. pg_config wasn't installed because 
the -devel package wasn't installed, and it would have saved me quite 
some time if pg_config had been available.

Another example is to find out what the installation is using for 
shares, the service directory and so on.
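
(For instance, the sort of read-only inspection meant here, using flags
pg_config actually provides:)

    pg_config --configure    # options the server was configured with
    pg_config --sharedir     # where extension SQL/control files live
    pg_config --sysconfdir   # where pg_service.conf is looked for
    pg_config --libdir       # where the client libraries are installed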

cheers

andrew


Re: Why not install pgstattuple by default?

From
Peter Eisentraut
Date:
On sön, 2011-05-08 at 07:21 -0400, Andrew Dunstan wrote:
> As I said there: "to see how the libraries are configured, for example."
> 
> Just the other day I wanted to know what compilation options had been 
> used for a particular installation. pg_config wasn't installed because 
> the -devel package wasn't installed, and it would have saved me quite 
> some time if pg_config had been available.
> 
> Another example is to find out what the installation is using for 
> shares, the service directory and so on.

Yeah, those are decent reasons.



Re: Why not install pgstattuple by default?

From
Christopher Browne
Date:
My example is of doing "self-discovery" to see if all needful database
components seem to be properly installed.

E.g. the app needs pgcrypto, intarray, and a custom data type.  The
install script can consequently inform the production folk either
"looks good" or, alternately, "seems problematic!"

Actually, I haven't coded a sample of the "look for custom SPI & types"
part, but it's a natural extension of what I have.

Of course, it only provides a legitimate test when run on the database
server, which isn't always how production folk want to do it, but
that's part of a different argument...
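
(A minimal sketch of that kind of check, assuming a 9.1-style
installation where the components are packaged as extensions; the
component names are just the ones from the example above:)

    # does the target database have the extensions the app needs?
    for ext in pgcrypto intarray; do
      n=$(psql -At -c "SELECT count(*) FROM pg_extension WHERE extname = '$ext'")
      [ "$n" = "1" ] && echo "$ext: looks good" || echo "$ext: seems problematic!"
    done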

Re: Why not install pgstattuple by default?

From
Robert Haas
Date:
On Sun, May 8, 2011 at 12:02 AM, Greg Smith <greg@2ndquadrant.com> wrote:
> Attached patch is a first cut at what moving one contrib module (in this
> case pg_buffercache) to a new directory structure might look like.  The idea
> is that src/extension could become a place for "first-class" extensions to
> live.  Those are ones community is committed to providing in core, but are
> just better implemented as extensions than in-database functions, for
> reasons that include security.  This idea has been shared by a lot of people
> for a while, only problem is that it wasn't really practical to implement
> cleanly until the extensions code hit.  I think it is now, this attempts to
> prove it.
>
> Since patches involving file renaming are clunky, the changes might be
> easier to see at
> https://github.com/greg2ndQuadrant/postgres/commit/507923e21e963c873a84f1b850d64e895776574f
> where I just pushed this change too.  The install step for the modules looks
> like this now:
>
> gsmith@grace:~/pgwork/src/move-contrib/src/extension/pg_buffercache$ make
> install
> /bin/mkdir -p '/home/gsmith/pgwork/inst/move-contrib/lib/postgresql'
> /bin/mkdir -p
> '/home/gsmith/pgwork/inst/move-contrib/share/postgresql/extension'
> /bin/sh ../../../config/install-sh -c -m 755  pg_buffercache.so
> '/home/gsmith/pgwork/inst/move-contrib/lib/postgresql/pg_buffercache.so'
> /bin/sh ../../../config/install-sh -c -m 644 ./pg_buffercache.control
> '/home/gsmith/pgwork/inst/move-contrib/share/postgresql/extension/'
> /bin/sh ../../../config/install-sh -c -m 644 ./pg_buffercache--1.0.sql
> ./pg_buffercache--unpackaged--1.0.sql
>  '/home/gsmith/pgwork/inst/move-contrib/share/postgresql/extension/'
> $ psql -c "create extension pg_buffercache"
> CREATE EXTENSION
>
> The only clunky bit I wasn't really happy with is the amount of code
> duplication that comes from having a src/extension/Makefile that looks
> almost, but not quite, identical to contrib/Makefile.  The rest of the
> changes don't seem too bad to me, and even that's really only 36 lines that
> aren't touched often.  Yes, the paths are different, so backports won't
> happen without an extra step.  But the code changes required were easier
> than I was expecting, due to the general good modularity of the extensions
> infrastructure.  So long as the result ends up in
> share/postgresql/extension/ , whether they started in contrib/<module> or
> src/extension/<module> doesn't really matter to CREATE EXTENSION.  But
> having them broke out this way makes it easy for the default Makefile to
> build and install them all.  (I recognize I didn't do that last step yet
> though)
>
> I'll happily go covert pgstattuple and the rest of the internal diagnostics
> modules to this scheme, and do the doc cleanups, this upcoming week if it
> means I'll be able to use those things without installing all of contrib one
> day.  Ditto for proposing RPM and Debian packaging changes that match them.
>  All that work will get paid back the first time I don't have to fill out a
> bunch of paperwork (again) at a customer site justifying why they need to
> install the contrib [RPM|deb] package (which has some scary stuff in it) on
> all their servers, just so I can get some bloat or buffer inspection module.

I would really like to see us try to group things by topic, and not
just by whether or not we can all agree that the extension is
important enough to be first-class (which is bound to be a bit
tendentious).  We probably can't completely avoid some bikeshedding on
that topic, but even there it strikes me that sorting by topic might
make things a bit more clear.  For example, if we could somehow group
together all the diagnostic tools, maybe something like the list
below, I think that would be a start.  Now then we might go on to
argue about which are the more useful diagnostic tools, but I think
it's easier to argue about that category than it is to argue in the
abstract about whether you'd rather have hstore or pgstattuple, to
which the answer can only be "that depends".

auto_explain
oid2name
pageinspect
pg_buffercache
pg_freespacemap
pg_stat_statements
pg_test_fsync (perhaps)
pgrowlocks
pgstattuple

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Why not install pgstattuple by default?

From
Greg Smith
Date:
On 05/09/2011 10:53 AM, Robert Haas wrote:
> I would really like to see us try to group things by topic, and not
> just by whether or not we can all agree that the extension is
> important enough to be first-class (which is bound to be a bit
> tendentious).

Having played around with the prototype, I think it doesn't actually 
matter if there's a further division below the new one I introduced.  
The main thing I think is worth pointing out is that I only feel 
extensions with no external dependencies are worth the trouble of 
re-classifying here.  If it were worth reorganizing contrib just for the 
sake of categorizing it, that would have been done years ago.  The new 
thing is that extensions make it really easy to make some tools 
available in the server's extension subdirectory, without actually 
activating them in the default install.
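
For illustration, here's a minimal sketch of what "available but not
activated" looks like from the admin's side (paths come from pg_config, and
pg_available_extensions is the standard 9.1 catalog view; nothing here
depends on this particular patch):

# The files ship with the server package and sit in the extension directory...
$ ls $(pg_config --sharedir)/extension/ | grep pgstattuple
# ...the server sees them as installable but not yet active...
$ psql -c "SELECT name, default_version, installed_version FROM pg_available_extensions WHERE name = 'pgstattuple'"
# ...and activating or removing one is a single, reversible catalog change:
$ psql -c "CREATE EXTENSION pgstattuple"
$ psql -c "DROP EXTENSION pgstattuple"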

Looking at your list:

> auto_explain
> oid2name
> pageinspect
> pg_buffercache
> pg_freespacemap
> pg_stat_statements
> pg_test_fsync (perhaps)
> pgrowlocks
> pgstattuple
>    

oid2name and pg_test_fsync would be out because those are real 
executables.  I'd rather not introduce the risk/complexity of playing 
around with moving standalone utilities of such marginal value.  Whereas 
I think it sets an excellent precedent if the server is shipping with 
some standard add-ons, built using the same extension mechanism 
available to external code, in the core server package.  I'd certainly 
be happy to add auto_explain and pg_stat_statements (also extremely 
popular things to install for me) to that list.

-- 
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us




Re: Why not install pgstattuple by default?

From
Robert Haas
Date:
On Mon, May 9, 2011 at 1:14 PM, Greg Smith <greg@2ndquadrant.com> wrote:
> On 05/09/2011 10:53 AM, Robert Haas wrote:
>>
>> I would really like to see us try to group things by topic, and not
>> just by whether or not we can all agree that the extension is
>> important enough to be first-class (which is bound to be a bit
>> tendentious).
>
> Having played around with the prototype, I think it doesn't actually matter
> if there's a further division below the new one I introduced.  The main
> thing I think is worth pointing out is that I only feel extensions with no
> external dependencies are worth the trouble of re-classifying here.  If it
> were worth reorganizing contrib just for the sake of categorizing it, that
> would have been done years ago.  The new thing is that extensions make it
> really easy to make some tools available in the server's extensions
> subdirectly, without actually activating them in the default install.
>
> Looking at your list:
>
>> auto_explain
>> oid2name
>> pageinspect
>> pg_buffercache
>> pg_freespacemap
>> pg_stat_statements
>> pg_test_fsync (perhaps)
>> pgrowlocks
>> pgstattuple
>>
>
> oid2name and pg_test_fsync would be out because those are real executables.
>  I'd rather not introduce the risk/complexity of playing around with moving
> standalone utilities of such marginal value.  Whereas I think it sets an
> excellent precedent if the server is shipping with some standard add-ons,
> built using the same extension mechanism available to external code, in the
> core server package.  I'd certainly be happy to add auto_explain and
> pg_stat_statements (also extremely popular things to install for me) to that
> list.

I'm happy enough with that set of guidelines: namely, that we'd use
src/extension only for things that don't require additional
dependencies, and not for things that build standalone executables.
If we're going to move things around, I think we should take the
trouble to categorize them along the way, and your idea of inserting
one more subdirectory under src/extension for grouping seems fine to
me.

I don't think we should be too obstinate about trying to twist the arm
of packagers who (as Tom points out) will do whatever they want in
spite of us, but the current state of contrib, with all sorts of
things of varying type, complexity, and value mixed together cannot
possibly be a good thing.  Even if the effect of all this is that some
distributions end up with postgresql-server-instrumentation and
postgresql-server-datatypes packages, rather than putting everything
in postgresql-server, I still think that'd be better than having a
monolithic lump called postgresql-contrib.  Heaven only knows what
could be in there (says the sys admin)...

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Why not install pgstattuple by default?

From
Alvaro Herrera
Date:
Excerpts from Robert Haas's message of lun may 09 14:31:33 -0400 2011:

> I'm happy enough with that set of guidelines: namely, that we'd use
> src/extension only for things that don't require additional
> dependencies, and not for things that build standalone executables.
> If we're going to move things around, I think we should take the
> trouble to categorize them along the way, and your idea of inserting
> one more subdirectory under src/extension for grouping seems fine to
> me.

For executables we already have src/bin.  Do we really need a separate
place for, say, pg_standby or pg_upgrade?

-- 
Álvaro Herrera <alvherre@commandprompt.com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support


Re: Why not install pgstattuple by default?

From
Tom Lane
Date:
Alvaro Herrera <alvherre@commandprompt.com> writes:
> Excerpts from Robert Haas's message of lun may 09 14:31:33 -0400 2011:
>> I'm happy enough with that set of guidelines: namely, that we'd use
>> src/extension only for things that don't require additional
>> dependencies, and not for things that build standalone executables.
>> If we're going to move things around, I think we should take the
>> trouble to categorize them along the way, and your idea of inserting
>> one more subdirectory under src/extension for grouping seems fine to
>> me.

> For executables we already have src/bin.  Do we really need a separate
> place for, say, pg_standby or pg_upgrade?

Putting them in there implies we think they are of core-code quality.
I'm definitely *not* ready to grant that status to pg_upgrade, for
instance.
        regards, tom lane


Re: Why not install pgstattuple by default?

From
Dimitri Fontaine
Date:
Tom Lane <tgl@sss.pgh.pa.us> writes:
> Sure, but that's a documentation issue, which again is not going to be
> helped by a source-tree rearrangement.

So we have several problems to solve here, and I agree that source code
rearrangement fixes none of them.  Maybe it would ease maintenance
down the road, though, but I'll leave that choice up to you.

Which contribs are ready (safe) for production?  We could handle that in
the version numbers, having most of contrib at version 0.9.1 (say) and
some of them at version 1.0.  We could also stop distributing the examples
in binary form and only ship them in the source package.

Then we need to include the inspection extensions in the core packaging,
but still as extensions.  That's more of a packager problem, except that
packagers need a clear and strong message about it.  Maybe have a new
Makefile that builds those extensions as part of the server build, and
installs them as part of the "normal" install.

Another mix of those ideas is to ship the inspection extensions and the
ready-for-production ones in a new package, postgresql-server-extensions,
that the main server would depend on, and to keep the ones that add more
dependencies in contrib, where not-ready-for-production extensions
would not get built.

This kind of idea would also make it quite easy to remove things from
the main server and have them as extensions, available on any install and
just a CREATE EXTENSION (WITH SCHEMA pg_catalog) command away, while still
maintained at the same quality level in the source tree.
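
To sketch that last idea concretely (purely illustrative; the module names
and the pg_catalog placement are just the examples mentioned above, not
anything this thread has agreed on):

# One command away on any install, optionally placed into pg_catalog:
$ psql -c "CREATE EXTENSION pgstattuple WITH SCHEMA pg_catalog"
$ psql -c "\dx pgstattuple"
# A module whose control file advertises 0.9.1 rather than 1.0 would signal
# "not production quality yet" while still installing the same way
# (some_demo_module is a made-up name):
$ psql -c "CREATE EXTENSION some_demo_module VERSION '0.9.1'"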

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr     PostgreSQL : Expertise, Formation et Support


Re: Why not install pgstattuple by default?

From
Greg Smith
Date:
On 05/09/2011 03:31 PM, Alvaro Herrera wrote:
> For executables we already have src/bin. Do we really need a separate
> place for, say, pg_standby or pg_upgrade?
>    

There are really no executables in contrib that I regularly find myself 
desperate for, or angry at because they're not installed as an integral part 
of the server, the way I regularly find myself cursing some things that 
are now extensions.  The only contrib executable I use often is pgbench, 
and that's normally early in a server's life--when it's still possible to 
get things installed easily most places.  And it's something that can be 
removed when finished, in cases where people are nervous about the 
contrib package.

Situations where pg_standby or pg_upgrade suddenly pop up as emergency 
needs seem unlikely too, which is also the case with oid2name, 
pg_test_fsync, pg_archivecleanup, and vacuumlo.  I've had that happen 
with pg_archivecleanup exactly once since it appeared--running out of 
space and that was the easiest way to make the problem go away 
immediately and permanently--but since it was on an 8.4 server we had to 
download the source and build anyway.

Also, my experience is that people are not that paranoid about running 
external binaries, even though they could potentially do harm to the 
database.  Can always do a backup beforehand.  But the idea of loading a 
piece of code that lives in the server all the time freaks them out.  
The way the word contrib implies (and is sometimes meant to imply) low 
quality, while stuff that ships with the main server package does not, 
has been beaten up here for years already.  It's only the few cases where 
that implication isn't fully justified, and where the program can easily become an 
extension, that I feel are really worth changing here.  There are 49 
directories in contrib/ ; at best maybe 20% of them will ever fall into 
that category.

-- 
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us




Re: Why not install pgstattuple by default?

From
Greg Smith
Date:
On 05/09/2011 02:31 PM, Robert Haas wrote:
> I don't think we should be too obstinate about trying to twist the arm
> of packagers who (as Tom points out) will do whatever they want in
> spite of us, but the current state of contrib, with all sorts of
> things of varying type, complexity, and value mixed together cannot
> possibly be a good thing.

I think the idea I'm running with for now means that packagers won't 
actually have to do anything.  I'd expect typical packaging for 9.1 to 
include share/postgresql/extension from the build results without being 
more specific.  You need to grab 3 files from there to get the plpgsql 
extension, and I can't imagine any packager listing them all by name.  
So if I dump the converted contrib extensions to put files under there, 
and remove them from the contrib build area destination, I suspect they 
will magically jump from the contrib to the extensions area of the 
server package at the next package build; no packager-level changes 
required.  The more I look at this, the less obtrusive a change it 
seems to be.  The only people who will really notice are users who discover 
more in the basic server package, and of course committers with 
backporting to do.
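
A quick way to sanity-check that theory once 9.1 packages show up would be
something like the following (package names here are the Fedora-style ones
from earlier in the thread; PGDG and Debian names will differ):

# If the theory holds, the extension files simply show up in the -server
# file list instead of the -contrib one, with no spec file edits:
$ rpm -ql postgresql-server | grep /extension/
$ rpm -ql postgresql-contrib | grep /extension/
# Debian-style equivalent:
$ dpkg -L postgresql-9.1 | grep /extension/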

Since packaged builds of 9.1 that are current with beta1 still seem to be in short 
supply, this theory is hard to prove just yet.  I'm very 
excited that it's packaging week here however (rolls eyes), so I'll 
check it myself.  I'll incorporate the suggestions made since I posted 
that test patch and do a bigger round of this next, end to end with an 
RPM set as the output.  It sounds like everyone who has a strong opinion 
on what this change might look like has sketched a similar-looking 
bikeshed.  Once a reasonable implementation is hammered out, I'd rather 
jump to the big argument between not liking change vs. the advocacy 
benefits to PostgreSQL of doing this; they are considerable in my mind.

-- 
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us




Re: Why not install pgstattuple by default?

From
Bruce Momjian
Date:
Tom Lane wrote:
> Christopher Browne <cbbrowne@gmail.com> writes:
> > But people are evidently still setting packaging policies based on how
> > things were back in 7.3, even though that perhaps isn't necessary
> > anymore.
> 
> FWIW, once you get past the client versus server distinction, I think
> most subpackaging decisions are based on either the idea that "only a
> minority of people will want this", or a desire to limit how many
> dependencies are pulled in by the main package(s).  Both of those
> concerns apply to various subsets of -contrib, which means it's going
> to be hard to persuade packagers to fold -contrib into the -server
> package altogether.  Nor would you gain their approval by trying to
> pre-empt the decision.
> 
> We might get somewhere by trying to identify a small set of particularly
> popular contrib modules that don't add any extra dependencies, and then
> recommending to packagers that those ones get bundled into the main
> server package.
> 
> > Certainly it's not a huge amount of code; less than 2MB these days.
> > -> % wc `dpkg -L postgresql-contrib-9.0` | tail -1
> >   15952   67555 1770987 total
> 
> Well, to add some concrete facts rather than generalities to my own post,
> here are the sizes of the built RPMs from my last build for Fedora:
> 
> -rw-r--r--. 1 tgl tgl  3839458 Apr 18 10:50 postgresql-9.0.4-1.fc13.x86_64.rpm
> -rw-r--r--. 1 tgl tgl   490788 Apr 18 10:50 postgresql-contrib-9.0.4-1.fc13.x86_64.rpm
> -rw-r--r--. 1 tgl tgl 27337677 Apr 18 10:51 postgresql-debuginfo-9.0.4-1.fc13.x86_64.rpm
> -rw-r--r--. 1 tgl tgl   961660 Apr 18 10:50 postgresql-devel-9.0.4-1.fc13.x86_64.rpm
> -rw-r--r--. 1 tgl tgl  7569048 Apr 18 10:50 postgresql-docs-9.0.4-1.fc13.x86_64.rpm
> -rw-r--r--. 1 tgl tgl   246506 Apr 18 10:50 postgresql-libs-9.0.4-1.fc13.x86_64.rpm
> -rw-r--r--. 1 tgl tgl    64940 Apr 18 10:50 postgresql-plperl-9.0.4-1.fc13.x86_64.rpm
> -rw-r--r--. 1 tgl tgl    65776 Apr 18 10:50 postgresql-plpython-9.0.4-1.fc13.x86_64.rpm
> -rw-r--r--. 1 tgl tgl    45941 Apr 18 10:50 postgresql-pltcl-9.0.4-1.fc13.x86_64.rpm
> -rw-r--r--. 1 tgl tgl  5302117 Apr 18 10:50 postgresql-server-9.0.4-1.fc13.x86_64.rpm
> -rw-r--r--. 1 tgl tgl  1370509 Apr 18 10:50 postgresql-test-9.0.4-1.fc13.x86_64.rpm
> -rw-r--r--. 1 tgl tgl  3644113 Apr 18 10:50 postgresql-upgrade-9.0.4-1.fc13.x86_64.rpm

Is that last one pg_upgrade?  It seems very big.

-- 
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

 + It's impossible for everything to be true. +


Re: Why not install pgstattuple by default?

From
Tom Lane
Date:
Bruce Momjian <bruce@momjian.us> writes:
> Tom Lane wrote:
>> here are the sizes of the built RPMs from my last build for Fedora:
>> 
>> -rw-r--r--. 1 tgl tgl  3839458 Apr 18 10:50 postgresql-9.0.4-1.fc13.x86_64.rpm
>> -rw-r--r--. 1 tgl tgl   490788 Apr 18 10:50 postgresql-contrib-9.0.4-1.fc13.x86_64.rpm
>> -rw-r--r--. 1 tgl tgl 27337677 Apr 18 10:51 postgresql-debuginfo-9.0.4-1.fc13.x86_64.rpm
>> -rw-r--r--. 1 tgl tgl   961660 Apr 18 10:50 postgresql-devel-9.0.4-1.fc13.x86_64.rpm
>> -rw-r--r--. 1 tgl tgl  7569048 Apr 18 10:50 postgresql-docs-9.0.4-1.fc13.x86_64.rpm
>> -rw-r--r--. 1 tgl tgl   246506 Apr 18 10:50 postgresql-libs-9.0.4-1.fc13.x86_64.rpm
>> -rw-r--r--. 1 tgl tgl    64940 Apr 18 10:50 postgresql-plperl-9.0.4-1.fc13.x86_64.rpm
>> -rw-r--r--. 1 tgl tgl    65776 Apr 18 10:50 postgresql-plpython-9.0.4-1.fc13.x86_64.rpm
>> -rw-r--r--. 1 tgl tgl    45941 Apr 18 10:50 postgresql-pltcl-9.0.4-1.fc13.x86_64.rpm
>> -rw-r--r--. 1 tgl tgl  5302117 Apr 18 10:50 postgresql-server-9.0.4-1.fc13.x86_64.rpm
>> -rw-r--r--. 1 tgl tgl  1370509 Apr 18 10:50 postgresql-test-9.0.4-1.fc13.x86_64.rpm
>> -rw-r--r--. 1 tgl tgl  3644113 Apr 18 10:50 postgresql-upgrade-9.0.4-1.fc13.x86_64.rpm

> Is that last one pg_upgrade?  It seems very big.

pg_upgrade plus a supporting set of 8.4 files ...
        regards, tom lane


Re: Why not install pgstattuple by default?

From
Bruce Momjian
Date:
Tom Lane wrote:
> Bruce Momjian <bruce@momjian.us> writes:
> > Tom Lane wrote:
> >> here are the sizes of the built RPMs from my last build for Fedora:
> >> 
> >> -rw-r--r--. 1 tgl tgl  3839458 Apr 18 10:50 postgresql-9.0.4-1.fc13.x86_64.rpm
> >> -rw-r--r--. 1 tgl tgl   490788 Apr 18 10:50 postgresql-contrib-9.0.4-1.fc13.x86_64.rpm
> >> -rw-r--r--. 1 tgl tgl 27337677 Apr 18 10:51 postgresql-debuginfo-9.0.4-1.fc13.x86_64.rpm
> >> -rw-r--r--. 1 tgl tgl   961660 Apr 18 10:50 postgresql-devel-9.0.4-1.fc13.x86_64.rpm
> >> -rw-r--r--. 1 tgl tgl  7569048 Apr 18 10:50 postgresql-docs-9.0.4-1.fc13.x86_64.rpm
> >> -rw-r--r--. 1 tgl tgl   246506 Apr 18 10:50 postgresql-libs-9.0.4-1.fc13.x86_64.rpm
> >> -rw-r--r--. 1 tgl tgl    64940 Apr 18 10:50 postgresql-plperl-9.0.4-1.fc13.x86_64.rpm
> >> -rw-r--r--. 1 tgl tgl    65776 Apr 18 10:50 postgresql-plpython-9.0.4-1.fc13.x86_64.rpm
> >> -rw-r--r--. 1 tgl tgl    45941 Apr 18 10:50 postgresql-pltcl-9.0.4-1.fc13.x86_64.rpm
> >> -rw-r--r--. 1 tgl tgl  5302117 Apr 18 10:50 postgresql-server-9.0.4-1.fc13.x86_64.rpm
> >> -rw-r--r--. 1 tgl tgl  1370509 Apr 18 10:50 postgresql-test-9.0.4-1.fc13.x86_64.rpm
> >> -rw-r--r--. 1 tgl tgl  3644113 Apr 18 10:50 postgresql-upgrade-9.0.4-1.fc13.x86_64.rpm
> 
> > Is that last one pg_upgrade?  It seems very big.
> 
> pg_upgrade plus a supporting set of 8.4 files ...

OK, where do I get to dance around that pg_upgrade is packaged in Fedora
thanks to you?  At PGCon?  LOL

-- 
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

 + It's impossible for everything to be true. +


Re: Why not install pgstattuple by default?

From
Devrim GÜNDÜZ
Date:
On Sat, 2011-05-07 at 21:47 -0400, Greg Smith wrote:
> On 05/06/2011 04:06 PM, Tom Lane wrote:
> > FWIW, I did move pg_config from -devel to the "main" (really client)
> > postgresql package in Fedora, as of 9.0.  That will ensure it's
> present
> > in either client or server installations.  Eventually that packaging
> > will reach RHEL ...
> >
>
> We should make sure that the PGDG packages adopt that for 9.1 then, so
> it starts catching on more.  Unless Devrim changed to catch up since I
> last installed an RPM set, in that 9.0 it's still in the same place:
>
> $ rpm -qf /usr/pgsql-9.0/bin/pg_config
> postgresql90-devel-9.0.2-2PGDG.rhel5

I'm not sure that I can move it to the main package in the 9.0 package set;
I need to make sure that I won't break anything.  But it is pretty doable
for 9.1.

Regards,
--
Devrim GÜNDÜZ
Principal Systems Engineer @ EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz

Re: Why not install pgstattuple by default?

From
Tom Lane
Date:
Devrim GÜNDÜZ <devrim@gunduz.org> writes:
> I'm not sure that I can move it to main package in 9.0 package set, I
> need to make sure that I won't break anything. But it is pretty doable
> for 9.1.

It should be okay to move, since the -devel subpackage requires the main
one.  Therefore there is no configuration in which pg_config would be
present before and missing after the change.
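
(A quick way to verify that dependency on an installed system, using the
Fedora package names as an example:)

$ rpm -q --requires postgresql-devel | grep -i postgresql
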
        regards, tom lane


Re: Why not install pgstattuple by default?

From
Devrim GÜNDÜZ
Date:
On Thu, 2011-05-12 at 19:37 -0400, Tom Lane wrote:
>
>
> It should be okay to move, since the -devel subpackage requires the
> main one.  Therefore there is no configuration in which pg_config
> would be present before and missing after the change.

Thanks Tom.  I can make this change in the next build set.

Regards,

--
Devrim GÜNDÜZ
Principal Systems Engineer @ EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz

Re: Why not install pgstattuple by default?

From
Greg Smith
Date:
Attached is a second patch to move a number of extensions from contrib/
to src/extension/.  Extensions there are built by the default build target,
making installation of the postgresql-XX-contrib package unnecessary for
them to be available.

This request--making some of these additions available without the
"contrib" name/package being involved--has popped up many times before,
and it turns out to be really easy to resolve with the new extensions
infrastructure.  I think it's even a reasonable change to consider
applying now, between 9.1 Beta 1 and Beta 2.  The documentation
adjustments are the only serious bit left here that I've been able to
find; the code changes are all internal to the build process and easy.

I moved the following extensions:

auto_explain pageinspect pg_buffercache pg_freespacemap pgrowlocks
pg_stat_statements pgstattuple

My criteria for picking extensions were that they:

1) Don't have any special dependencies
2) Are in contrib mainly because they don't need to be internal
functions, not because their code quality is demo/early
3) Tend to be installed on a production server for troubleshooting
problems, rather than being required by development.
4) Regularly pop up as necessary/helpful in production deployment

Some of my personal discussions of this topic have suggested that some
other popular extensions like pgcrypto and hstore get converted too.  I
think those all fail test (3), and I'm not actually sure whether pgcrypto
adds any special dependency/distribution issues were it to be moved to
the main database package.  If this general idea catches on, a wider
discussion of what else should get "promoted" to this extensions area
would be appropriate.  The ones I picked seemed the easiest to justify
by this set of criteria.

Any packager who grabs the share/postgresql/extension directory in 9.1,
which I expect to be all of them, shouldn't need any changes to pick up
this adjustment.  For example, pgstattuple installs these files:

share/postgresql/extension/pgstattuple--1.0.sql
share/postgresql/extension/pgstattuple--unpackaged--1.0.sql
share/postgresql/extension/pgstattuple.control

And these are the same locations they were already at.  The location of
the source and which target builds it are the only changes here; the result
isn't any different.  This means that this change won't even break
extensions that are already installed.
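
As a minimal sanity check of that claim (assuming a database where the
extension had already been created from the old contrib build):

# The server locates the control and script files by name under
# SHAREDIR/extension, so an extension created before the move keeps working:
$ psql -c "\dx pgstattuple"
$ psql -c "SELECT * FROM pgstattuple('pg_catalog.pg_class')"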

Once the basic directory plumbing is in place, converting a single
extension from contrib/ to src/extension/ is trivial; I did five of them
in an hour once I figured out what was needed.  The changes are easiest
to view at
https://github.com/greg2ndQuadrant/postgres/commits/move-contrib , since
the patch file is huge because of all the renames.
https://github.com/greg2ndQuadrant/postgres/commit/d647091b18c4448c5a582d423f8839ef0c717e91
shows a good example of one conversion, the one that changes pg_freespacemap.
There are more changes to the comments listing the name of the file than to
any code.  (Yes, I know there are some whitespace issues I introduced in
the new Makefile; they should be fixed by a later commit in the series.)

--
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us


diff --git a/contrib/Makefile b/contrib/Makefile
index 6967767..bcb9465 100644
--- a/contrib/Makefile
+++ b/contrib/Makefile
@@ -7,7 +7,6 @@ include $(top_builddir)/src/Makefile.global
 SUBDIRS = \
         adminpack    \
         auth_delay    \
-        auto_explain    \
         btree_gin    \
         btree_gist    \
         chkpass        \
@@ -27,21 +26,15 @@ SUBDIRS = \
         lo        \
         ltree        \
         oid2name    \
-        pageinspect    \
         passwordcheck    \
         pg_archivecleanup \
-        pg_buffercache    \
-        pg_freespacemap \
         pg_standby    \
-        pg_stat_statements \
         pg_test_fsync    \
         pg_trgm        \
         pg_upgrade    \
         pg_upgrade_support \
         pgbench        \
         pgcrypto    \
-        pgrowlocks    \
-        pgstattuple    \
         seg        \
         spi        \
         tablefunc    \
diff --git a/contrib/auto_explain/Makefile b/contrib/auto_explain/Makefile
deleted file mode 100644
index 2d1443f..0000000
--- a/contrib/auto_explain/Makefile
+++ /dev/null
@@ -1,15 +0,0 @@
-# contrib/auto_explain/Makefile
-
-MODULE_big = auto_explain
-OBJS = auto_explain.o
-
-ifdef USE_PGXS
-PG_CONFIG = pg_config
-PGXS := $(shell $(PG_CONFIG) --pgxs)
-include $(PGXS)
-else
-subdir = contrib/auto_explain
-top_builddir = ../..
-include $(top_builddir)/src/Makefile.global
-include $(top_srcdir)/contrib/contrib-global.mk
-endif
diff --git a/contrib/auto_explain/auto_explain.c b/contrib/auto_explain/auto_explain.c
deleted file mode 100644
index b320698..0000000
--- a/contrib/auto_explain/auto_explain.c
+++ /dev/null
@@ -1,304 +0,0 @@
-/*-------------------------------------------------------------------------
- *
- * auto_explain.c
- *
- *
- * Copyright (c) 2008-2011, PostgreSQL Global Development Group
- *
- * IDENTIFICATION
- *      contrib/auto_explain/auto_explain.c
- *
- *-------------------------------------------------------------------------
- */
-#include "postgres.h"
-
-#include "commands/explain.h"
-#include "executor/instrument.h"
-#include "utils/guc.h"
-
-PG_MODULE_MAGIC;
-
-/* GUC variables */
-static int    auto_explain_log_min_duration = -1; /* msec or -1 */
-static bool auto_explain_log_analyze = false;
-static bool auto_explain_log_verbose = false;
-static bool auto_explain_log_buffers = false;
-static int    auto_explain_log_format = EXPLAIN_FORMAT_TEXT;
-static bool auto_explain_log_nested_statements = false;
-
-static const struct config_enum_entry format_options[] = {
-    {"text", EXPLAIN_FORMAT_TEXT, false},
-    {"xml", EXPLAIN_FORMAT_XML, false},
-    {"json", EXPLAIN_FORMAT_JSON, false},
-    {"yaml", EXPLAIN_FORMAT_YAML, false},
-    {NULL, 0, false}
-};
-
-/* Current nesting depth of ExecutorRun calls */
-static int    nesting_level = 0;
-
-/* Saved hook values in case of unload */
-static ExecutorStart_hook_type prev_ExecutorStart = NULL;
-static ExecutorRun_hook_type prev_ExecutorRun = NULL;
-static ExecutorFinish_hook_type prev_ExecutorFinish = NULL;
-static ExecutorEnd_hook_type prev_ExecutorEnd = NULL;
-
-#define auto_explain_enabled() \
-    (auto_explain_log_min_duration >= 0 && \
-     (nesting_level == 0 || auto_explain_log_nested_statements))
-
-void        _PG_init(void);
-void        _PG_fini(void);
-
-static void explain_ExecutorStart(QueryDesc *queryDesc, int eflags);
-static void explain_ExecutorRun(QueryDesc *queryDesc,
-                    ScanDirection direction,
-                    long count);
-static void explain_ExecutorFinish(QueryDesc *queryDesc);
-static void explain_ExecutorEnd(QueryDesc *queryDesc);
-
-
-/*
- * Module load callback
- */
-void
-_PG_init(void)
-{
-    /* Define custom GUC variables. */
-    DefineCustomIntVariable("auto_explain.log_min_duration",
-         "Sets the minimum execution time above which plans will be logged.",
-                         "Zero prints all plans. -1 turns this feature off.",
-                            &auto_explain_log_min_duration,
-                            -1,
-                            -1, INT_MAX / 1000,
-                            PGC_SUSET,
-                            GUC_UNIT_MS,
-                            NULL,
-                            NULL,
-                            NULL);
-
-    DefineCustomBoolVariable("auto_explain.log_analyze",
-                             "Use EXPLAIN ANALYZE for plan logging.",
-                             NULL,
-                             &auto_explain_log_analyze,
-                             false,
-                             PGC_SUSET,
-                             0,
-                             NULL,
-                             NULL,
-                             NULL);
-
-    DefineCustomBoolVariable("auto_explain.log_verbose",
-                             "Use EXPLAIN VERBOSE for plan logging.",
-                             NULL,
-                             &auto_explain_log_verbose,
-                             false,
-                             PGC_SUSET,
-                             0,
-                             NULL,
-                             NULL,
-                             NULL);
-
-    DefineCustomBoolVariable("auto_explain.log_buffers",
-                             "Log buffers usage.",
-                             NULL,
-                             &auto_explain_log_buffers,
-                             false,
-                             PGC_SUSET,
-                             0,
-                             NULL,
-                             NULL,
-                             NULL);
-
-    DefineCustomEnumVariable("auto_explain.log_format",
-                             "EXPLAIN format to be used for plan logging.",
-                             NULL,
-                             &auto_explain_log_format,
-                             EXPLAIN_FORMAT_TEXT,
-                             format_options,
-                             PGC_SUSET,
-                             0,
-                             NULL,
-                             NULL,
-                             NULL);
-
-    DefineCustomBoolVariable("auto_explain.log_nested_statements",
-                             "Log nested statements.",
-                             NULL,
-                             &auto_explain_log_nested_statements,
-                             false,
-                             PGC_SUSET,
-                             0,
-                             NULL,
-                             NULL,
-                             NULL);
-
-    EmitWarningsOnPlaceholders("auto_explain");
-
-    /* Install hooks. */
-    prev_ExecutorStart = ExecutorStart_hook;
-    ExecutorStart_hook = explain_ExecutorStart;
-    prev_ExecutorRun = ExecutorRun_hook;
-    ExecutorRun_hook = explain_ExecutorRun;
-    prev_ExecutorFinish = ExecutorFinish_hook;
-    ExecutorFinish_hook = explain_ExecutorFinish;
-    prev_ExecutorEnd = ExecutorEnd_hook;
-    ExecutorEnd_hook = explain_ExecutorEnd;
-}
-
-/*
- * Module unload callback
- */
-void
-_PG_fini(void)
-{
-    /* Uninstall hooks. */
-    ExecutorStart_hook = prev_ExecutorStart;
-    ExecutorRun_hook = prev_ExecutorRun;
-    ExecutorFinish_hook = prev_ExecutorFinish;
-    ExecutorEnd_hook = prev_ExecutorEnd;
-}
-
-/*
- * ExecutorStart hook: start up logging if needed
- */
-static void
-explain_ExecutorStart(QueryDesc *queryDesc, int eflags)
-{
-    if (auto_explain_enabled())
-    {
-        /* Enable per-node instrumentation iff log_analyze is required. */
-        if (auto_explain_log_analyze && (eflags & EXEC_FLAG_EXPLAIN_ONLY) == 0)
-        {
-            queryDesc->instrument_options |= INSTRUMENT_TIMER;
-            if (auto_explain_log_buffers)
-                queryDesc->instrument_options |= INSTRUMENT_BUFFERS;
-        }
-    }
-
-    if (prev_ExecutorStart)
-        prev_ExecutorStart(queryDesc, eflags);
-    else
-        standard_ExecutorStart(queryDesc, eflags);
-
-    if (auto_explain_enabled())
-    {
-        /*
-         * Set up to track total elapsed time in ExecutorRun.  Make sure the
-         * space is allocated in the per-query context so it will go away at
-         * ExecutorEnd.
-         */
-        if (queryDesc->totaltime == NULL)
-        {
-            MemoryContext oldcxt;
-
-            oldcxt = MemoryContextSwitchTo(queryDesc->estate->es_query_cxt);
-            queryDesc->totaltime = InstrAlloc(1, INSTRUMENT_ALL);
-            MemoryContextSwitchTo(oldcxt);
-        }
-    }
-}
-
-/*
- * ExecutorRun hook: all we need do is track nesting depth
- */
-static void
-explain_ExecutorRun(QueryDesc *queryDesc, ScanDirection direction, long count)
-{
-    nesting_level++;
-    PG_TRY();
-    {
-        if (prev_ExecutorRun)
-            prev_ExecutorRun(queryDesc, direction, count);
-        else
-            standard_ExecutorRun(queryDesc, direction, count);
-        nesting_level--;
-    }
-    PG_CATCH();
-    {
-        nesting_level--;
-        PG_RE_THROW();
-    }
-    PG_END_TRY();
-}
-
-/*
- * ExecutorFinish hook: all we need do is track nesting depth
- */
-static void
-explain_ExecutorFinish(QueryDesc *queryDesc)
-{
-    nesting_level++;
-    PG_TRY();
-    {
-        if (prev_ExecutorFinish)
-            prev_ExecutorFinish(queryDesc);
-        else
-            standard_ExecutorFinish(queryDesc);
-        nesting_level--;
-    }
-    PG_CATCH();
-    {
-        nesting_level--;
-        PG_RE_THROW();
-    }
-    PG_END_TRY();
-}
-
-/*
- * ExecutorEnd hook: log results if needed
- */
-static void
-explain_ExecutorEnd(QueryDesc *queryDesc)
-{
-    if (queryDesc->totaltime && auto_explain_enabled())
-    {
-        double        msec;
-
-        /*
-         * Make sure stats accumulation is done.  (Note: it's okay if several
-         * levels of hook all do this.)
-         */
-        InstrEndLoop(queryDesc->totaltime);
-
-        /* Log plan if duration is exceeded. */
-        msec = queryDesc->totaltime->total * 1000.0;
-        if (msec >= auto_explain_log_min_duration)
-        {
-            ExplainState es;
-
-            ExplainInitState(&es);
-            es.analyze = (queryDesc->instrument_options && auto_explain_log_analyze);
-            es.verbose = auto_explain_log_verbose;
-            es.buffers = (es.analyze && auto_explain_log_buffers);
-            es.format = auto_explain_log_format;
-
-            ExplainBeginOutput(&es);
-            ExplainQueryText(&es, queryDesc);
-            ExplainPrintPlan(&es, queryDesc);
-            ExplainEndOutput(&es);
-
-            /* Remove last line break */
-            if (es.str->len > 0 && es.str->data[es.str->len - 1] == '\n')
-                es.str->data[--es.str->len] = '\0';
-
-            /*
-             * Note: we rely on the existing logging of context or
-             * debug_query_string to identify just which statement is being
-             * reported.  This isn't ideal but trying to do it here would
-             * often result in duplication.
-             */
-            ereport(LOG,
-                    (errmsg("duration: %.3f ms  plan:\n%s",
-                            msec, es.str->data),
-                     errhidestmt(true)));
-
-            pfree(es.str->data);
-        }
-    }
-
-    if (prev_ExecutorEnd)
-        prev_ExecutorEnd(queryDesc);
-    else
-        standard_ExecutorEnd(queryDesc);
-}
diff --git a/contrib/pageinspect/Makefile b/contrib/pageinspect/Makefile
deleted file mode 100644
index 13ba6d3..0000000
--- a/contrib/pageinspect/Makefile
+++ /dev/null
@@ -1,18 +0,0 @@
-# contrib/pageinspect/Makefile
-
-MODULE_big    = pageinspect
-OBJS        = rawpage.o heapfuncs.o btreefuncs.o fsmfuncs.o
-
-EXTENSION = pageinspect
-DATA = pageinspect--1.0.sql pageinspect--unpackaged--1.0.sql
-
-ifdef USE_PGXS
-PG_CONFIG = pg_config
-PGXS := $(shell $(PG_CONFIG) --pgxs)
-include $(PGXS)
-else
-subdir = contrib/pageinspect
-top_builddir = ../..
-include $(top_builddir)/src/Makefile.global
-include $(top_srcdir)/contrib/contrib-global.mk
-endif
diff --git a/contrib/pageinspect/btreefuncs.c b/contrib/pageinspect/btreefuncs.c
deleted file mode 100644
index ef27cd4..0000000
--- a/contrib/pageinspect/btreefuncs.c
+++ /dev/null
@@ -1,502 +0,0 @@
-/*
- * contrib/pageinspect/btreefuncs.c
- *
- *
- * btreefuncs.c
- *
- * Copyright (c) 2006 Satoshi Nagayasu <nagayasus@nttdata.co.jp>
- *
- * Permission to use, copy, modify, and distribute this software and
- * its documentation for any purpose, without fee, and without a
- * written agreement is hereby granted, provided that the above
- * copyright notice and this paragraph and the following two
- * paragraphs appear in all copies.
- *
- * IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT,
- * INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
- * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
- * DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED
- * OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- * THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE.  THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS
- * IS" BASIS, AND THE AUTHOR HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,
- * SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
- */
-
-#include "postgres.h"
-
-#include "access/heapam.h"
-#include "access/nbtree.h"
-#include "catalog/namespace.h"
-#include "catalog/pg_type.h"
-#include "funcapi.h"
-#include "miscadmin.h"
-#include "storage/bufmgr.h"
-#include "utils/builtins.h"
-
-
-extern Datum bt_metap(PG_FUNCTION_ARGS);
-extern Datum bt_page_items(PG_FUNCTION_ARGS);
-extern Datum bt_page_stats(PG_FUNCTION_ARGS);
-
-PG_FUNCTION_INFO_V1(bt_metap);
-PG_FUNCTION_INFO_V1(bt_page_items);
-PG_FUNCTION_INFO_V1(bt_page_stats);
-
-#define IS_INDEX(r) ((r)->rd_rel->relkind == RELKIND_INDEX)
-#define IS_BTREE(r) ((r)->rd_rel->relam == BTREE_AM_OID)
-
-#define CHECK_PAGE_OFFSET_RANGE(pg, offnum) { \
-        if ( !(FirstOffsetNumber <= (offnum) && \
-                        (offnum) <= PageGetMaxOffsetNumber(pg)) ) \
-             elog(ERROR, "page offset number out of range"); }
-
-/* note: BlockNumber is unsigned, hence can't be negative */
-#define CHECK_RELATION_BLOCK_RANGE(rel, blkno) { \
-        if ( RelationGetNumberOfBlocks(rel) <= (BlockNumber) (blkno) ) \
-             elog(ERROR, "block number out of range"); }
-
-/* ------------------------------------------------
- * structure for single btree page statistics
- * ------------------------------------------------
- */
-typedef struct BTPageStat
-{
-    uint32        blkno;
-    uint32        live_items;
-    uint32        dead_items;
-    uint32        page_size;
-    uint32        max_avail;
-    uint32        free_size;
-    uint32        avg_item_size;
-    char        type;
-
-    /* opaque data */
-    BlockNumber btpo_prev;
-    BlockNumber btpo_next;
-    union
-    {
-        uint32        level;
-        TransactionId xact;
-    }            btpo;
-    uint16        btpo_flags;
-    BTCycleId    btpo_cycleid;
-} BTPageStat;
-
-
-/* -------------------------------------------------
- * GetBTPageStatistics()
- *
- * Collect statistics of single b-tree page
- * -------------------------------------------------
- */
-static void
-GetBTPageStatistics(BlockNumber blkno, Buffer buffer, BTPageStat *stat)
-{
-    Page        page = BufferGetPage(buffer);
-    PageHeader    phdr = (PageHeader) page;
-    OffsetNumber maxoff = PageGetMaxOffsetNumber(page);
-    BTPageOpaque opaque = (BTPageOpaque) PageGetSpecialPointer(page);
-    int            item_size = 0;
-    int            off;
-
-    stat->blkno = blkno;
-
-    stat->max_avail = BLCKSZ - (BLCKSZ - phdr->pd_special + SizeOfPageHeaderData);
-
-    stat->dead_items = stat->live_items = 0;
-
-    stat->page_size = PageGetPageSize(page);
-
-    /* page type (flags) */
-    if (P_ISDELETED(opaque))
-    {
-        stat->type = 'd';
-        stat->btpo.xact = opaque->btpo.xact;
-        return;
-    }
-    else if (P_IGNORE(opaque))
-        stat->type = 'e';
-    else if (P_ISLEAF(opaque))
-        stat->type = 'l';
-    else if (P_ISROOT(opaque))
-        stat->type = 'r';
-    else
-        stat->type = 'i';
-
-    /* btpage opaque data */
-    stat->btpo_prev = opaque->btpo_prev;
-    stat->btpo_next = opaque->btpo_next;
-    stat->btpo.level = opaque->btpo.level;
-    stat->btpo_flags = opaque->btpo_flags;
-    stat->btpo_cycleid = opaque->btpo_cycleid;
-
-    /* count live and dead tuples, and free space */
-    for (off = FirstOffsetNumber; off <= maxoff; off++)
-    {
-        IndexTuple    itup;
-
-        ItemId        id = PageGetItemId(page, off);
-
-        itup = (IndexTuple) PageGetItem(page, id);
-
-        item_size += IndexTupleSize(itup);
-
-        if (!ItemIdIsDead(id))
-            stat->live_items++;
-        else
-            stat->dead_items++;
-    }
-    stat->free_size = PageGetFreeSpace(page);
-
-    if ((stat->live_items + stat->dead_items) > 0)
-        stat->avg_item_size = item_size / (stat->live_items + stat->dead_items);
-    else
-        stat->avg_item_size = 0;
-}
-
-/* -----------------------------------------------
- * bt_page()
- *
- * Usage: SELECT * FROM bt_page('t1_pkey', 1);
- * -----------------------------------------------
- */
-Datum
-bt_page_stats(PG_FUNCTION_ARGS)
-{
-    text       *relname = PG_GETARG_TEXT_P(0);
-    uint32        blkno = PG_GETARG_UINT32(1);
-    Buffer        buffer;
-    Relation    rel;
-    RangeVar   *relrv;
-    Datum        result;
-    HeapTuple    tuple;
-    TupleDesc    tupleDesc;
-    int            j;
-    char       *values[11];
-    BTPageStat    stat;
-
-    if (!superuser())
-        ereport(ERROR,
-                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-                 (errmsg("must be superuser to use pageinspect functions"))));
-
-    relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
-    rel = relation_openrv(relrv, AccessShareLock);
-
-    if (!IS_INDEX(rel) || !IS_BTREE(rel))
-        elog(ERROR, "relation \"%s\" is not a btree index",
-             RelationGetRelationName(rel));
-
-    /*
-     * Reject attempts to read non-local temporary relations; we would be
-     * likely to get wrong data since we have no visibility into the owning
-     * session's local buffers.
-     */
-    if (RELATION_IS_OTHER_TEMP(rel))
-        ereport(ERROR,
-                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
-                 errmsg("cannot access temporary tables of other sessions")));
-
-    if (blkno == 0)
-        elog(ERROR, "block 0 is a meta page");
-
-    CHECK_RELATION_BLOCK_RANGE(rel, blkno);
-
-    buffer = ReadBuffer(rel, blkno);
-
-    /* keep compiler quiet */
-    stat.btpo_prev = stat.btpo_next = InvalidBlockNumber;
-    stat.btpo_flags = stat.free_size = stat.avg_item_size = 0;
-
-    GetBTPageStatistics(blkno, buffer, &stat);
-
-    /* Build a tuple descriptor for our result type */
-    if (get_call_result_type(fcinfo, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE)
-        elog(ERROR, "return type must be a row type");
-
-    j = 0;
-    values[j] = palloc(32);
-    snprintf(values[j++], 32, "%d", stat.blkno);
-    values[j] = palloc(32);
-    snprintf(values[j++], 32, "%c", stat.type);
-    values[j] = palloc(32);
-    snprintf(values[j++], 32, "%d", stat.live_items);
-    values[j] = palloc(32);
-    snprintf(values[j++], 32, "%d", stat.dead_items);
-    values[j] = palloc(32);
-    snprintf(values[j++], 32, "%d", stat.avg_item_size);
-    values[j] = palloc(32);
-    snprintf(values[j++], 32, "%d", stat.page_size);
-    values[j] = palloc(32);
-    snprintf(values[j++], 32, "%d", stat.free_size);
-    values[j] = palloc(32);
-    snprintf(values[j++], 32, "%d", stat.btpo_prev);
-    values[j] = palloc(32);
-    snprintf(values[j++], 32, "%d", stat.btpo_next);
-    values[j] = palloc(32);
-    if (stat.type == 'd')
-        snprintf(values[j++], 32, "%d", stat.btpo.xact);
-    else
-        snprintf(values[j++], 32, "%d", stat.btpo.level);
-    values[j] = palloc(32);
-    snprintf(values[j++], 32, "%d", stat.btpo_flags);
-
-    tuple = BuildTupleFromCStrings(TupleDescGetAttInMetadata(tupleDesc),
-                                   values);
-
-    result = HeapTupleGetDatum(tuple);
-
-    ReleaseBuffer(buffer);
-
-    relation_close(rel, AccessShareLock);
-
-    PG_RETURN_DATUM(result);
-}
-
-/*-------------------------------------------------------
- * bt_page_items()
- *
- * Get IndexTupleData set in a btree page
- *
- * Usage: SELECT * FROM bt_page_items('t1_pkey', 1);
- *-------------------------------------------------------
- */
-
-/*
- * cross-call data structure for SRF
- */
-struct user_args
-{
-    Page        page;
-    OffsetNumber offset;
-};
-
-Datum
-bt_page_items(PG_FUNCTION_ARGS)
-{
-    text       *relname = PG_GETARG_TEXT_P(0);
-    uint32        blkno = PG_GETARG_UINT32(1);
-    Datum        result;
-    char       *values[6];
-    HeapTuple    tuple;
-    FuncCallContext *fctx;
-    MemoryContext mctx;
-    struct user_args *uargs;
-
-    if (!superuser())
-        ereport(ERROR,
-                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-                 (errmsg("must be superuser to use pageinspect functions"))));
-
-    if (SRF_IS_FIRSTCALL())
-    {
-        RangeVar   *relrv;
-        Relation    rel;
-        Buffer        buffer;
-        BTPageOpaque opaque;
-        TupleDesc    tupleDesc;
-
-        fctx = SRF_FIRSTCALL_INIT();
-
-        relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
-        rel = relation_openrv(relrv, AccessShareLock);
-
-        if (!IS_INDEX(rel) || !IS_BTREE(rel))
-            elog(ERROR, "relation \"%s\" is not a btree index",
-                 RelationGetRelationName(rel));
-
-        /*
-         * Reject attempts to read non-local temporary relations; we would be
-         * likely to get wrong data since we have no visibility into the
-         * owning session's local buffers.
-         */
-        if (RELATION_IS_OTHER_TEMP(rel))
-            ereport(ERROR,
-                    (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
-                errmsg("cannot access temporary tables of other sessions")));
-
-        if (blkno == 0)
-            elog(ERROR, "block 0 is a meta page");
-
-        CHECK_RELATION_BLOCK_RANGE(rel, blkno);
-
-        buffer = ReadBuffer(rel, blkno);
-
-        /*
-         * We copy the page into local storage to avoid holding pin on the
-         * buffer longer than we must, and possibly failing to release it at
-         * all if the calling query doesn't fetch all rows.
-         */
-        mctx = MemoryContextSwitchTo(fctx->multi_call_memory_ctx);
-
-        uargs = palloc(sizeof(struct user_args));
-
-        uargs->page = palloc(BLCKSZ);
-        memcpy(uargs->page, BufferGetPage(buffer), BLCKSZ);
-
-        ReleaseBuffer(buffer);
-        relation_close(rel, AccessShareLock);
-
-        uargs->offset = FirstOffsetNumber;
-
-        opaque = (BTPageOpaque) PageGetSpecialPointer(uargs->page);
-
-        if (P_ISDELETED(opaque))
-            elog(NOTICE, "page is deleted");
-
-        fctx->max_calls = PageGetMaxOffsetNumber(uargs->page);
-
-        /* Build a tuple descriptor for our result type */
-        if (get_call_result_type(fcinfo, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE)
-            elog(ERROR, "return type must be a row type");
-
-        fctx->attinmeta = TupleDescGetAttInMetadata(tupleDesc);
-
-        fctx->user_fctx = uargs;
-
-        MemoryContextSwitchTo(mctx);
-    }
-
-    fctx = SRF_PERCALL_SETUP();
-    uargs = fctx->user_fctx;
-
-    if (fctx->call_cntr < fctx->max_calls)
-    {
-        ItemId        id;
-        IndexTuple    itup;
-        int            j;
-        int            off;
-        int            dlen;
-        char       *dump;
-        char       *ptr;
-
-        id = PageGetItemId(uargs->page, uargs->offset);
-
-        if (!ItemIdIsValid(id))
-            elog(ERROR, "invalid ItemId");
-
-        itup = (IndexTuple) PageGetItem(uargs->page, id);
-
-        j = 0;
-        values[j] = palloc(32);
-        snprintf(values[j++], 32, "%d", uargs->offset);
-        values[j] = palloc(32);
-        snprintf(values[j++], 32, "(%u,%u)",
-                 BlockIdGetBlockNumber(&(itup->t_tid.ip_blkid)),
-                 itup->t_tid.ip_posid);
-        values[j] = palloc(32);
-        snprintf(values[j++], 32, "%d", (int) IndexTupleSize(itup));
-        values[j] = palloc(32);
-        snprintf(values[j++], 32, "%c", IndexTupleHasNulls(itup) ? 't' : 'f');
-        values[j] = palloc(32);
-        snprintf(values[j++], 32, "%c", IndexTupleHasVarwidths(itup) ? 't' : 'f');
-
-        ptr = (char *) itup + IndexInfoFindDataOffset(itup->t_info);
-        dlen = IndexTupleSize(itup) - IndexInfoFindDataOffset(itup->t_info);
-        dump = palloc0(dlen * 3 + 1);
-        values[j] = dump;
-        for (off = 0; off < dlen; off++)
-        {
-            if (off > 0)
-                *dump++ = ' ';
-            sprintf(dump, "%02x", *(ptr + off) & 0xff);
-            dump += 2;
-        }
-
-        tuple = BuildTupleFromCStrings(fctx->attinmeta, values);
-        result = HeapTupleGetDatum(tuple);
-
-        uargs->offset = uargs->offset + 1;
-
-        SRF_RETURN_NEXT(fctx, result);
-    }
-    else
-    {
-        pfree(uargs->page);
-        pfree(uargs);
-        SRF_RETURN_DONE(fctx);
-    }
-}
-
-
-/* ------------------------------------------------
- * bt_metap()
- *
- * Get a btree's meta-page information
- *
- * Usage: SELECT * FROM bt_metap('t1_pkey')
- * ------------------------------------------------
- */
-Datum
-bt_metap(PG_FUNCTION_ARGS)
-{
-    text       *relname = PG_GETARG_TEXT_P(0);
-    Datum        result;
-    Relation    rel;
-    RangeVar   *relrv;
-    BTMetaPageData *metad;
-    TupleDesc    tupleDesc;
-    int            j;
-    char       *values[6];
-    Buffer        buffer;
-    Page        page;
-    HeapTuple    tuple;
-
-    if (!superuser())
-        ereport(ERROR,
-                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-                 (errmsg("must be superuser to use pageinspect functions"))));
-
-    relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
-    rel = relation_openrv(relrv, AccessShareLock);
-
-    if (!IS_INDEX(rel) || !IS_BTREE(rel))
-        elog(ERROR, "relation \"%s\" is not a btree index",
-             RelationGetRelationName(rel));
-
-    /*
-     * Reject attempts to read non-local temporary relations; we would be
-     * likely to get wrong data since we have no visibility into the owning
-     * session's local buffers.
-     */
-    if (RELATION_IS_OTHER_TEMP(rel))
-        ereport(ERROR,
-                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
-                 errmsg("cannot access temporary tables of other sessions")));
-
-    buffer = ReadBuffer(rel, 0);
-    page = BufferGetPage(buffer);
-    metad = BTPageGetMeta(page);
-
-    /* Build a tuple descriptor for our result type */
-    if (get_call_result_type(fcinfo, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE)
-        elog(ERROR, "return type must be a row type");
-
-    j = 0;
-    values[j] = palloc(32);
-    snprintf(values[j++], 32, "%d", metad->btm_magic);
-    values[j] = palloc(32);
-    snprintf(values[j++], 32, "%d", metad->btm_version);
-    values[j] = palloc(32);
-    snprintf(values[j++], 32, "%d", metad->btm_root);
-    values[j] = palloc(32);
-    snprintf(values[j++], 32, "%d", metad->btm_level);
-    values[j] = palloc(32);
-    snprintf(values[j++], 32, "%d", metad->btm_fastroot);
-    values[j] = palloc(32);
-    snprintf(values[j++], 32, "%d", metad->btm_fastlevel);
-
-    tuple = BuildTupleFromCStrings(TupleDescGetAttInMetadata(tupleDesc),
-                                   values);
-
-    result = HeapTupleGetDatum(tuple);
-
-    ReleaseBuffer(buffer);
-
-    relation_close(rel, AccessShareLock);
-
-    PG_RETURN_DATUM(result);
-}
diff --git a/contrib/pageinspect/fsmfuncs.c b/contrib/pageinspect/fsmfuncs.c
deleted file mode 100644
index 38c4e23..0000000
--- a/contrib/pageinspect/fsmfuncs.c
+++ /dev/null
@@ -1,59 +0,0 @@
-/*-------------------------------------------------------------------------
- *
- * fsmfuncs.c
- *      Functions to investigate FSM pages
- *
- * These functions are restricted to superusers for the fear of introducing
- * security holes if the input checking isn't as water-tight as it should.
- * You'd need to be superuser to obtain a raw page image anyway, so
- * there's hardly any use case for using these without superuser-rights
- * anyway.
- *
- * Copyright (c) 2007-2011, PostgreSQL Global Development Group
- *
- * IDENTIFICATION
- *      contrib/pageinspect/fsmfuncs.c
- *
- *-------------------------------------------------------------------------
- */
-
-#include "postgres.h"
-#include "lib/stringinfo.h"
-#include "storage/fsm_internals.h"
-#include "utils/builtins.h"
-#include "miscadmin.h"
-#include "funcapi.h"
-
-Datum        fsm_page_contents(PG_FUNCTION_ARGS);
-
-/*
- * Dumps the contents of a FSM page.
- */
-PG_FUNCTION_INFO_V1(fsm_page_contents);
-
-Datum
-fsm_page_contents(PG_FUNCTION_ARGS)
-{
-    bytea       *raw_page = PG_GETARG_BYTEA_P(0);
-    StringInfoData sinfo;
-    FSMPage        fsmpage;
-    int            i;
-
-    if (!superuser())
-        ereport(ERROR,
-                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-                 (errmsg("must be superuser to use raw page functions"))));
-
-    fsmpage = (FSMPage) PageGetContents(VARDATA(raw_page));
-
-    initStringInfo(&sinfo);
-
-    for (i = 0; i < NodesPerPage; i++)
-    {
-        if (fsmpage->fp_nodes[i] != 0)
-            appendStringInfo(&sinfo, "%d: %d\n", i, fsmpage->fp_nodes[i]);
-    }
-    appendStringInfo(&sinfo, "fp_next_slot: %d\n", fsmpage->fp_next_slot);
-
-    PG_RETURN_TEXT_P(cstring_to_text(sinfo.data));
-}
diff --git a/contrib/pageinspect/heapfuncs.c b/contrib/pageinspect/heapfuncs.c
deleted file mode 100644
index 20bca0d..0000000
--- a/contrib/pageinspect/heapfuncs.c
+++ /dev/null
@@ -1,230 +0,0 @@
-/*-------------------------------------------------------------------------
- *
- * heapfuncs.c
- *      Functions to investigate heap pages
- *
- * We check the input to these functions for corrupt pointers etc. that
- * might cause crashes, but at the same time we try to print out as much
- * information as possible, even if it's nonsense. That's because if a
- * page is corrupt, we don't know why and how exactly it is corrupt, so we
- * let the user judge it.
- *
- * These functions are restricted to superusers for the fear of introducing
- * security holes if the input checking isn't as water-tight as it should be.
- * You'd need to be superuser to obtain a raw page image anyway, so
- * there's hardly any use case for using these without superuser-rights
- * anyway.
- *
- * Copyright (c) 2007-2011, PostgreSQL Global Development Group
- *
- * IDENTIFICATION
- *      contrib/pageinspect/heapfuncs.c
- *
- *-------------------------------------------------------------------------
- */
-
-#include "postgres.h"
-
-#include "fmgr.h"
-#include "funcapi.h"
-#include "access/heapam.h"
-#include "access/transam.h"
-#include "catalog/namespace.h"
-#include "catalog/pg_type.h"
-#include "utils/builtins.h"
-#include "miscadmin.h"
-
-Datum        heap_page_items(PG_FUNCTION_ARGS);
-
-
-/*
- * bits_to_text
- *
- * Converts a bits8-array of 'len' bits to a human-readable
- * c-string representation.
- */
-static char *
-bits_to_text(bits8 *bits, int len)
-{
-    int            i;
-    char       *str;
-
-    str = palloc(len + 1);
-
-    for (i = 0; i < len; i++)
-        str[i] = (bits[(i / 8)] & (1 << (i % 8))) ? '1' : '0';
-
-    str[i] = '\0';
-
-    return str;
-}
-
-
-/*
- * heap_page_items
- *
- * Allows inspection of line pointers and tuple headers of a heap page.
- */
-PG_FUNCTION_INFO_V1(heap_page_items);
-
-typedef struct heap_page_items_state
-{
-    TupleDesc    tupd;
-    Page        page;
-    uint16        offset;
-} heap_page_items_state;
-
-Datum
-heap_page_items(PG_FUNCTION_ARGS)
-{
-    bytea       *raw_page = PG_GETARG_BYTEA_P(0);
-    heap_page_items_state *inter_call_data = NULL;
-    FuncCallContext *fctx;
-    int            raw_page_size;
-
-    if (!superuser())
-        ereport(ERROR,
-                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-                 (errmsg("must be superuser to use raw page functions"))));
-
-    raw_page_size = VARSIZE(raw_page) - VARHDRSZ;
-
-    if (SRF_IS_FIRSTCALL())
-    {
-        TupleDesc    tupdesc;
-        MemoryContext mctx;
-
-        if (raw_page_size < SizeOfPageHeaderData)
-            ereport(ERROR,
-                    (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
-                  errmsg("input page too small (%d bytes)", raw_page_size)));
-
-        fctx = SRF_FIRSTCALL_INIT();
-        mctx = MemoryContextSwitchTo(fctx->multi_call_memory_ctx);
-
-        inter_call_data = palloc(sizeof(heap_page_items_state));
-
-        /* Build a tuple descriptor for our result type */
-        if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
-            elog(ERROR, "return type must be a row type");
-
-        inter_call_data->tupd = tupdesc;
-
-        inter_call_data->offset = FirstOffsetNumber;
-        inter_call_data->page = VARDATA(raw_page);
-
-        fctx->max_calls = PageGetMaxOffsetNumber(inter_call_data->page);
-        fctx->user_fctx = inter_call_data;
-
-        MemoryContextSwitchTo(mctx);
-    }
-
-    fctx = SRF_PERCALL_SETUP();
-    inter_call_data = fctx->user_fctx;
-
-    if (fctx->call_cntr < fctx->max_calls)
-    {
-        Page        page = inter_call_data->page;
-        HeapTuple    resultTuple;
-        Datum        result;
-        ItemId        id;
-        Datum        values[13];
-        bool        nulls[13];
-        uint16        lp_offset;
-        uint16        lp_flags;
-        uint16        lp_len;
-
-        memset(nulls, 0, sizeof(nulls));
-
-        /* Extract information from the line pointer */
-
-        id = PageGetItemId(page, inter_call_data->offset);
-
-        lp_offset = ItemIdGetOffset(id);
-        lp_flags = ItemIdGetFlags(id);
-        lp_len = ItemIdGetLength(id);
-
-        values[0] = UInt16GetDatum(inter_call_data->offset);
-        values[1] = UInt16GetDatum(lp_offset);
-        values[2] = UInt16GetDatum(lp_flags);
-        values[3] = UInt16GetDatum(lp_len);
-
-        /*
-         * We do just enough validity checking to make sure we don't reference
-         * data outside the page passed to us. The page could be corrupt in
-         * many other ways, but at least we won't crash.
-         */
-        if (ItemIdHasStorage(id) &&
-            lp_len >= sizeof(HeapTupleHeader) &&
-            lp_offset == MAXALIGN(lp_offset) &&
-            lp_offset + lp_len <= raw_page_size)
-        {
-            HeapTupleHeader tuphdr;
-            int            bits_len;
-
-            /* Extract information from the tuple header */
-
-            tuphdr = (HeapTupleHeader) PageGetItem(page, id);
-
-            values[4] = UInt32GetDatum(HeapTupleHeaderGetXmin(tuphdr));
-            values[5] = UInt32GetDatum(HeapTupleHeaderGetXmax(tuphdr));
-            values[6] = UInt32GetDatum(HeapTupleHeaderGetRawCommandId(tuphdr)); /* shared with xvac */
-            values[7] = PointerGetDatum(&tuphdr->t_ctid);
-            values[8] = UInt32GetDatum(tuphdr->t_infomask2);
-            values[9] = UInt32GetDatum(tuphdr->t_infomask);
-            values[10] = UInt8GetDatum(tuphdr->t_hoff);
-
-            /*
-             * We already checked that the item is completely within the
-             * raw page passed to us, with the length given in the line
-             * pointer. Let's check that t_hoff doesn't point over lp_len,
-             * before using it to access t_bits and oid.
-             */
-            if (tuphdr->t_hoff >= sizeof(HeapTupleHeader) &&
-                tuphdr->t_hoff <= lp_len)
-            {
-                if (tuphdr->t_infomask & HEAP_HASNULL)
-                {
-                    bits_len = tuphdr->t_hoff -
-                        (((char *) tuphdr->t_bits) -((char *) tuphdr));
-
-                    values[11] = CStringGetTextDatum(
-                                 bits_to_text(tuphdr->t_bits, bits_len * 8));
-                }
-                else
-                    nulls[11] = true;
-
-                if (tuphdr->t_infomask & HEAP_HASOID)
-                    values[12] = HeapTupleHeaderGetOid(tuphdr);
-                else
-                    nulls[12] = true;
-            }
-            else
-            {
-                nulls[11] = true;
-                nulls[12] = true;
-            }
-        }
-        else
-        {
-            /*
-             * The line pointer is not used, or it's invalid. Set the rest of
-             * the fields to NULL
-             */
-            int            i;
-
-            for (i = 4; i <= 12; i++)
-                nulls[i] = true;
-        }
-
-        /* Build and return the result tuple. */
-        resultTuple = heap_form_tuple(inter_call_data->tupd, values, nulls);
-        result = HeapTupleGetDatum(resultTuple);
-
-        inter_call_data->offset++;
-
-        SRF_RETURN_NEXT(fctx, result);
-    }
-    else
-        SRF_RETURN_DONE(fctx);
-}
diff --git a/contrib/pageinspect/pageinspect--1.0.sql b/contrib/pageinspect/pageinspect--1.0.sql
deleted file mode 100644
index a711f58..0000000
--- a/contrib/pageinspect/pageinspect--1.0.sql
+++ /dev/null
@@ -1,104 +0,0 @@
-/* contrib/pageinspect/pageinspect--1.0.sql */
-
---
--- get_raw_page()
---
-CREATE FUNCTION get_raw_page(text, int4)
-RETURNS bytea
-AS 'MODULE_PATHNAME', 'get_raw_page'
-LANGUAGE C STRICT;
-
-CREATE FUNCTION get_raw_page(text, text, int4)
-RETURNS bytea
-AS 'MODULE_PATHNAME', 'get_raw_page_fork'
-LANGUAGE C STRICT;
-
---
--- page_header()
---
-CREATE FUNCTION page_header(IN page bytea,
-    OUT lsn text,
-    OUT tli smallint,
-    OUT flags smallint,
-    OUT lower smallint,
-    OUT upper smallint,
-    OUT special smallint,
-    OUT pagesize smallint,
-    OUT version smallint,
-    OUT prune_xid xid)
-AS 'MODULE_PATHNAME', 'page_header'
-LANGUAGE C STRICT;
-
---
--- heap_page_items()
---
-CREATE FUNCTION heap_page_items(IN page bytea,
-    OUT lp smallint,
-    OUT lp_off smallint,
-    OUT lp_flags smallint,
-    OUT lp_len smallint,
-    OUT t_xmin xid,
-    OUT t_xmax xid,
-    OUT t_field3 int4,
-    OUT t_ctid tid,
-    OUT t_infomask2 integer,
-    OUT t_infomask integer,
-    OUT t_hoff smallint,
-    OUT t_bits text,
-    OUT t_oid oid)
-RETURNS SETOF record
-AS 'MODULE_PATHNAME', 'heap_page_items'
-LANGUAGE C STRICT;
-
---
--- bt_metap()
---
-CREATE FUNCTION bt_metap(IN relname text,
-    OUT magic int4,
-    OUT version int4,
-    OUT root int4,
-    OUT level int4,
-    OUT fastroot int4,
-    OUT fastlevel int4)
-AS 'MODULE_PATHNAME', 'bt_metap'
-LANGUAGE C STRICT;
-
---
--- bt_page_stats()
---
-CREATE FUNCTION bt_page_stats(IN relname text, IN blkno int4,
-    OUT blkno int4,
-    OUT type "char",
-    OUT live_items int4,
-    OUT dead_items int4,
-    OUT avg_item_size int4,
-    OUT page_size int4,
-    OUT free_size int4,
-    OUT btpo_prev int4,
-    OUT btpo_next int4,
-    OUT btpo int4,
-    OUT btpo_flags int4)
-AS 'MODULE_PATHNAME', 'bt_page_stats'
-LANGUAGE C STRICT;
-
---
--- bt_page_items()
---
-CREATE FUNCTION bt_page_items(IN relname text, IN blkno int4,
-    OUT itemoffset smallint,
-    OUT ctid tid,
-    OUT itemlen smallint,
-    OUT nulls bool,
-    OUT vars bool,
-    OUT data text)
-RETURNS SETOF record
-AS 'MODULE_PATHNAME', 'bt_page_items'
-LANGUAGE C STRICT;
-
---
--- fsm_page_contents()
---
-CREATE FUNCTION fsm_page_contents(IN page bytea)
-RETURNS text
-AS 'MODULE_PATHNAME', 'fsm_page_contents'
-LANGUAGE C STRICT;
diff --git a/contrib/pageinspect/pageinspect--unpackaged--1.0.sql b/contrib/pageinspect/pageinspect--unpackaged--1.0.sql
deleted file mode 100644
index 7d4feaf..0000000
--- a/contrib/pageinspect/pageinspect--unpackaged--1.0.sql
+++ /dev/null
@@ -1,28 +0,0 @@
-/* contrib/pageinspect/pageinspect--unpackaged--1.0.sql */
-
-DROP FUNCTION heap_page_items(bytea);
-CREATE FUNCTION heap_page_items(IN page bytea,
-    OUT lp smallint,
-    OUT lp_off smallint,
-    OUT lp_flags smallint,
-    OUT lp_len smallint,
-    OUT t_xmin xid,
-    OUT t_xmax xid,
-    OUT t_field3 int4,
-    OUT t_ctid tid,
-    OUT t_infomask2 integer,
-    OUT t_infomask integer,
-    OUT t_hoff smallint,
-    OUT t_bits text,
-    OUT t_oid oid)
-RETURNS SETOF record
-AS 'MODULE_PATHNAME', 'heap_page_items'
-LANGUAGE C STRICT;
-
-ALTER EXTENSION pageinspect ADD function get_raw_page(text,integer);
-ALTER EXTENSION pageinspect ADD function get_raw_page(text,text,integer);
-ALTER EXTENSION pageinspect ADD function page_header(bytea);
-ALTER EXTENSION pageinspect ADD function bt_metap(text);
-ALTER EXTENSION pageinspect ADD function bt_page_stats(text,integer);
-ALTER EXTENSION pageinspect ADD function bt_page_items(text,integer);
-ALTER EXTENSION pageinspect ADD function fsm_page_contents(bytea);
diff --git a/contrib/pageinspect/pageinspect.control b/contrib/pageinspect/pageinspect.control
deleted file mode 100644
index f9da0e8..0000000
--- a/contrib/pageinspect/pageinspect.control
+++ /dev/null
@@ -1,5 +0,0 @@
-# pageinspect extension
-comment = 'inspect the contents of database pages at a low level'
-default_version = '1.0'
-module_pathname = '$libdir/pageinspect'
-relocatable = true
diff --git a/contrib/pageinspect/rawpage.c b/contrib/pageinspect/rawpage.c
deleted file mode 100644
index 2607576..0000000
--- a/contrib/pageinspect/rawpage.c
+++ /dev/null
@@ -1,232 +0,0 @@
-/*-------------------------------------------------------------------------
- *
- * rawpage.c
- *      Functions to extract a raw page as bytea and inspect it
- *
- * Access-method specific inspection functions are in separate files.
- *
- * Copyright (c) 2007-2011, PostgreSQL Global Development Group
- *
- * IDENTIFICATION
- *      contrib/pageinspect/rawpage.c
- *
- *-------------------------------------------------------------------------
- */
-
-#include "postgres.h"
-
-#include "access/heapam.h"
-#include "access/transam.h"
-#include "catalog/catalog.h"
-#include "catalog/namespace.h"
-#include "catalog/pg_type.h"
-#include "fmgr.h"
-#include "funcapi.h"
-#include "miscadmin.h"
-#include "storage/bufmgr.h"
-#include "utils/builtins.h"
-
-PG_MODULE_MAGIC;
-
-Datum        get_raw_page(PG_FUNCTION_ARGS);
-Datum        get_raw_page_fork(PG_FUNCTION_ARGS);
-Datum        page_header(PG_FUNCTION_ARGS);
-
-static bytea *get_raw_page_internal(text *relname, ForkNumber forknum,
-                      BlockNumber blkno);
-
-
-/*
- * get_raw_page
- *
- * Returns a copy of a page from shared buffers as a bytea
- */
-PG_FUNCTION_INFO_V1(get_raw_page);
-
-Datum
-get_raw_page(PG_FUNCTION_ARGS)
-{
-    text       *relname = PG_GETARG_TEXT_P(0);
-    uint32        blkno = PG_GETARG_UINT32(1);
-    bytea       *raw_page;
-
-    /*
-     * We don't normally bother to check the number of arguments to a C
-     * function, but here it's needed for safety because early 8.4 beta
-     * releases mistakenly redefined get_raw_page() as taking three arguments.
-     */
-    if (PG_NARGS() != 2)
-        ereport(ERROR,
-                (errmsg("wrong number of arguments to get_raw_page()"),
-                 errhint("Run the updated pageinspect.sql script.")));
-
-    raw_page = get_raw_page_internal(relname, MAIN_FORKNUM, blkno);
-
-    PG_RETURN_BYTEA_P(raw_page);
-}
-
-/*
- * get_raw_page_fork
- *
- * Same, for any fork
- */
-PG_FUNCTION_INFO_V1(get_raw_page_fork);
-
-Datum
-get_raw_page_fork(PG_FUNCTION_ARGS)
-{
-    text       *relname = PG_GETARG_TEXT_P(0);
-    text       *forkname = PG_GETARG_TEXT_P(1);
-    uint32        blkno = PG_GETARG_UINT32(2);
-    bytea       *raw_page;
-    ForkNumber    forknum;
-
-    forknum = forkname_to_number(text_to_cstring(forkname));
-
-    raw_page = get_raw_page_internal(relname, forknum, blkno);
-
-    PG_RETURN_BYTEA_P(raw_page);
-}
-
-/*
- * workhorse
- */
-static bytea *
-get_raw_page_internal(text *relname, ForkNumber forknum, BlockNumber blkno)
-{
-    bytea       *raw_page;
-    RangeVar   *relrv;
-    Relation    rel;
-    char       *raw_page_data;
-    Buffer        buf;
-
-    if (!superuser())
-        ereport(ERROR,
-                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-                 (errmsg("must be superuser to use raw functions"))));
-
-    relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
-    rel = relation_openrv(relrv, AccessShareLock);
-
-    /* Check that this relation has storage */
-    if (rel->rd_rel->relkind == RELKIND_VIEW)
-        ereport(ERROR,
-                (errcode(ERRCODE_WRONG_OBJECT_TYPE),
-                 errmsg("cannot get raw page from view \"%s\"",
-                        RelationGetRelationName(rel))));
-    if (rel->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)
-        ereport(ERROR,
-                (errcode(ERRCODE_WRONG_OBJECT_TYPE),
-                 errmsg("cannot get raw page from composite type \"%s\"",
-                        RelationGetRelationName(rel))));
-    if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE)
-        ereport(ERROR,
-                (errcode(ERRCODE_WRONG_OBJECT_TYPE),
-                 errmsg("cannot get raw page from foreign table \"%s\"",
-                        RelationGetRelationName(rel))));
-
-    /*
-     * Reject attempts to read non-local temporary relations; we would be
-     * likely to get wrong data since we have no visibility into the owning
-     * session's local buffers.
-     */
-    if (RELATION_IS_OTHER_TEMP(rel))
-        ereport(ERROR,
-                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
-                 errmsg("cannot access temporary tables of other sessions")));
-
-    if (blkno >= RelationGetNumberOfBlocks(rel))
-        elog(ERROR, "block number %u is out of range for relation \"%s\"",
-             blkno, RelationGetRelationName(rel));
-
-    /* Initialize buffer to copy to */
-    raw_page = (bytea *) palloc(BLCKSZ + VARHDRSZ);
-    SET_VARSIZE(raw_page, BLCKSZ + VARHDRSZ);
-    raw_page_data = VARDATA(raw_page);
-
-    /* Take a verbatim copy of the page */
-
-    buf = ReadBufferExtended(rel, forknum, blkno, RBM_NORMAL, NULL);
-    LockBuffer(buf, BUFFER_LOCK_SHARE);
-
-    memcpy(raw_page_data, BufferGetPage(buf), BLCKSZ);
-
-    LockBuffer(buf, BUFFER_LOCK_UNLOCK);
-    ReleaseBuffer(buf);
-
-    relation_close(rel, AccessShareLock);
-
-    return raw_page;
-}
-
-/*
- * page_header
- *
- * Allows inspection of page header fields of a raw page
- */
-
-PG_FUNCTION_INFO_V1(page_header);
-
-Datum
-page_header(PG_FUNCTION_ARGS)
-{
-    bytea       *raw_page = PG_GETARG_BYTEA_P(0);
-    int            raw_page_size;
-
-    TupleDesc    tupdesc;
-
-    Datum        result;
-    HeapTuple    tuple;
-    Datum        values[9];
-    bool        nulls[9];
-
-    PageHeader    page;
-    XLogRecPtr    lsn;
-    char        lsnchar[64];
-
-    if (!superuser())
-        ereport(ERROR,
-                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-                 (errmsg("must be superuser to use raw page functions"))));
-
-    raw_page_size = VARSIZE(raw_page) - VARHDRSZ;
-
-    /*
-     * Check that enough data was supplied, so that we don't try to access
-     * fields outside the supplied buffer.
-     */
-    if (raw_page_size < sizeof(PageHeaderData))
-        ereport(ERROR,
-                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
-                 errmsg("input page too small (%d bytes)", raw_page_size)));
-
-    page = (PageHeader) VARDATA(raw_page);
-
-    /* Build a tuple descriptor for our result type */
-    if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
-        elog(ERROR, "return type must be a row type");
-
-    /* Extract information from the page header */
-
-    lsn = PageGetLSN(page);
-    snprintf(lsnchar, sizeof(lsnchar), "%X/%X", lsn.xlogid, lsn.xrecoff);
-
-    values[0] = CStringGetTextDatum(lsnchar);
-    values[1] = UInt16GetDatum(PageGetTLI(page));
-    values[2] = UInt16GetDatum(page->pd_flags);
-    values[3] = UInt16GetDatum(page->pd_lower);
-    values[4] = UInt16GetDatum(page->pd_upper);
-    values[5] = UInt16GetDatum(page->pd_special);
-    values[6] = UInt16GetDatum(PageGetPageSize(page));
-    values[7] = UInt16GetDatum(PageGetPageLayoutVersion(page));
-    values[8] = TransactionIdGetDatum(page->pd_prune_xid);
-
-    /* Build and return the tuple. */
-
-    memset(nulls, 0, sizeof(nulls));
-
-    tuple = heap_form_tuple(tupdesc, values, nulls);
-    result = HeapTupleGetDatum(tuple);
-
-    PG_RETURN_DATUM(result);
-}
diff --git a/contrib/pg_buffercache/Makefile b/contrib/pg_buffercache/Makefile
deleted file mode 100644
index 323c0ac..0000000
--- a/contrib/pg_buffercache/Makefile
+++ /dev/null
@@ -1,18 +0,0 @@
-# contrib/pg_buffercache/Makefile
-
-MODULE_big = pg_buffercache
-OBJS = pg_buffercache_pages.o
-
-EXTENSION = pg_buffercache
-DATA = pg_buffercache--1.0.sql pg_buffercache--unpackaged--1.0.sql
-
-ifdef USE_PGXS
-PG_CONFIG = pg_config
-PGXS := $(shell $(PG_CONFIG) --pgxs)
-include $(PGXS)
-else
-subdir = contrib/pg_buffercache
-top_builddir = ../..
-include $(top_builddir)/src/Makefile.global
-include $(top_srcdir)/contrib/contrib-global.mk
-endif
diff --git a/contrib/pg_buffercache/pg_buffercache--1.0.sql b/contrib/pg_buffercache/pg_buffercache--1.0.sql
deleted file mode 100644
index 9407d21..0000000
--- a/contrib/pg_buffercache/pg_buffercache--1.0.sql
+++ /dev/null
@@ -1,17 +0,0 @@
-/* contrib/pg_buffercache/pg_buffercache--1.0.sql */
-
--- Register the function.
-CREATE FUNCTION pg_buffercache_pages()
-RETURNS SETOF RECORD
-AS 'MODULE_PATHNAME', 'pg_buffercache_pages'
-LANGUAGE C;
-
--- Create a view for convenient access.
-CREATE VIEW pg_buffercache AS
-    SELECT P.* FROM pg_buffercache_pages() AS P
-    (bufferid integer, relfilenode oid, reltablespace oid, reldatabase oid,
-     relforknumber int2, relblocknumber int8, isdirty bool, usagecount int2);
-
--- Don't want these to be available to public.
-REVOKE ALL ON FUNCTION pg_buffercache_pages() FROM PUBLIC;
-REVOKE ALL ON pg_buffercache FROM PUBLIC;
diff --git a/contrib/pg_buffercache/pg_buffercache--unpackaged--1.0.sql b/contrib/pg_buffercache/pg_buffercache--unpackaged--1.0.sql
deleted file mode 100644
index f00a954..0000000
--- a/contrib/pg_buffercache/pg_buffercache--unpackaged--1.0.sql
+++ /dev/null
@@ -1,4 +0,0 @@
-/* contrib/pg_buffercache/pg_buffercache--unpackaged--1.0.sql */
-
-ALTER EXTENSION pg_buffercache ADD function pg_buffercache_pages();
-ALTER EXTENSION pg_buffercache ADD view pg_buffercache;
diff --git a/contrib/pg_buffercache/pg_buffercache.control b/contrib/pg_buffercache/pg_buffercache.control
deleted file mode 100644
index 709513c..0000000
--- a/contrib/pg_buffercache/pg_buffercache.control
+++ /dev/null
@@ -1,5 +0,0 @@
-# pg_buffercache extension
-comment = 'examine the shared buffer cache'
-default_version = '1.0'
-module_pathname = '$libdir/pg_buffercache'
-relocatable = true
diff --git a/contrib/pg_buffercache/pg_buffercache_pages.c b/contrib/pg_buffercache/pg_buffercache_pages.c
deleted file mode 100644
index ed88288..0000000
--- a/contrib/pg_buffercache/pg_buffercache_pages.c
+++ /dev/null
@@ -1,219 +0,0 @@
-/*-------------------------------------------------------------------------
- *
- * pg_buffercache_pages.c
- *      display some contents of the buffer cache
- *
- *      contrib/pg_buffercache/pg_buffercache_pages.c
- *-------------------------------------------------------------------------
- */
-#include "postgres.h"
-
-#include "access/heapam.h"
-#include "catalog/pg_type.h"
-#include "funcapi.h"
-#include "storage/buf_internals.h"
-#include "storage/bufmgr.h"
-#include "utils/relcache.h"
-
-
-#define NUM_BUFFERCACHE_PAGES_ELEM    8
-
-PG_MODULE_MAGIC;
-
-Datum        pg_buffercache_pages(PG_FUNCTION_ARGS);
-
-
-/*
- * Record structure holding the to be exposed cache data.
- */
-typedef struct
-{
-    uint32        bufferid;
-    Oid            relfilenode;
-    Oid            reltablespace;
-    Oid            reldatabase;
-    ForkNumber    forknum;
-    BlockNumber blocknum;
-    bool        isvalid;
-    bool        isdirty;
-    uint16        usagecount;
-} BufferCachePagesRec;
-
-
-/*
- * Function context for data persisting over repeated calls.
- */
-typedef struct
-{
-    TupleDesc    tupdesc;
-    BufferCachePagesRec *record;
-} BufferCachePagesContext;
-
-
-/*
- * Function returning data from the shared buffer cache - buffer number,
- * relation node/tablespace/database/blocknum and dirty indicator.
- */
-PG_FUNCTION_INFO_V1(pg_buffercache_pages);
-
-Datum
-pg_buffercache_pages(PG_FUNCTION_ARGS)
-{
-    FuncCallContext *funcctx;
-    Datum        result;
-    MemoryContext oldcontext;
-    BufferCachePagesContext *fctx;        /* User function context. */
-    TupleDesc    tupledesc;
-    HeapTuple    tuple;
-
-    if (SRF_IS_FIRSTCALL())
-    {
-        int            i;
-        volatile BufferDesc *bufHdr;
-
-        funcctx = SRF_FIRSTCALL_INIT();
-
-        /* Switch context when allocating stuff to be used in later calls */
-        oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
-
-        /* Create a user function context for cross-call persistence */
-        fctx = (BufferCachePagesContext *) palloc(sizeof(BufferCachePagesContext));
-
-        /* Construct a tuple descriptor for the result rows. */
-        tupledesc = CreateTemplateTupleDesc(NUM_BUFFERCACHE_PAGES_ELEM, false);
-        TupleDescInitEntry(tupledesc, (AttrNumber) 1, "bufferid",
-                           INT4OID, -1, 0);
-        TupleDescInitEntry(tupledesc, (AttrNumber) 2, "relfilenode",
-                           OIDOID, -1, 0);
-        TupleDescInitEntry(tupledesc, (AttrNumber) 3, "reltablespace",
-                           OIDOID, -1, 0);
-        TupleDescInitEntry(tupledesc, (AttrNumber) 4, "reldatabase",
-                           OIDOID, -1, 0);
-        TupleDescInitEntry(tupledesc, (AttrNumber) 5, "relforknumber",
-                           INT2OID, -1, 0);
-        TupleDescInitEntry(tupledesc, (AttrNumber) 6, "relblocknumber",
-                           INT8OID, -1, 0);
-        TupleDescInitEntry(tupledesc, (AttrNumber) 7, "isdirty",
-                           BOOLOID, -1, 0);
-        TupleDescInitEntry(tupledesc, (AttrNumber) 8, "usage_count",
-                           INT2OID, -1, 0);
-
-        fctx->tupdesc = BlessTupleDesc(tupledesc);
-
-        /* Allocate NBuffers worth of BufferCachePagesRec records. */
-        fctx->record = (BufferCachePagesRec *) palloc(sizeof(BufferCachePagesRec) * NBuffers);
-
-        /* Set max calls and remember the user function context. */
-        funcctx->max_calls = NBuffers;
-        funcctx->user_fctx = fctx;
-
-        /* Return to original context when allocating transient memory */
-        MemoryContextSwitchTo(oldcontext);
-
-        /*
-         * To get a consistent picture of the buffer state, we must lock all
-         * partitions of the buffer map.  Needless to say, this is horrible
-         * for concurrency.  Must grab locks in increasing order to avoid
-         * possible deadlocks.
-         */
-        for (i = 0; i < NUM_BUFFER_PARTITIONS; i++)
-            LWLockAcquire(FirstBufMappingLock + i, LW_SHARED);
-
-        /*
-         * Scan through all the buffers, saving the relevant fields in the
-         * fctx->record structure.
-         */
-        for (i = 0, bufHdr = BufferDescriptors; i < NBuffers; i++, bufHdr++)
-        {
-            /* Lock each buffer header before inspecting. */
-            LockBufHdr(bufHdr);
-
-            fctx->record[i].bufferid = BufferDescriptorGetBuffer(bufHdr);
-            fctx->record[i].relfilenode = bufHdr->tag.rnode.relNode;
-            fctx->record[i].reltablespace = bufHdr->tag.rnode.spcNode;
-            fctx->record[i].reldatabase = bufHdr->tag.rnode.dbNode;
-            fctx->record[i].forknum = bufHdr->tag.forkNum;
-            fctx->record[i].blocknum = bufHdr->tag.blockNum;
-            fctx->record[i].usagecount = bufHdr->usage_count;
-
-            if (bufHdr->flags & BM_DIRTY)
-                fctx->record[i].isdirty = true;
-            else
-                fctx->record[i].isdirty = false;
-
-            /* Note if the buffer is valid, and has storage created */
-            if ((bufHdr->flags & BM_VALID) && (bufHdr->flags & BM_TAG_VALID))
-                fctx->record[i].isvalid = true;
-            else
-                fctx->record[i].isvalid = false;
-
-            UnlockBufHdr(bufHdr);
-        }
-
-        /*
-         * And release locks.  We do this in reverse order for two reasons:
-         * (1) Anyone else who needs more than one of the locks will be trying
-         * to lock them in increasing order; we don't want to release the
-         * other process until it can get all the locks it needs. (2) This
-         * avoids O(N^2) behavior inside LWLockRelease.
-         */
-        for (i = NUM_BUFFER_PARTITIONS; --i >= 0;)
-            LWLockRelease(FirstBufMappingLock + i);
-    }
-
-    funcctx = SRF_PERCALL_SETUP();
-
-    /* Get the saved state */
-    fctx = funcctx->user_fctx;
-
-    if (funcctx->call_cntr < funcctx->max_calls)
-    {
-        uint32        i = funcctx->call_cntr;
-        Datum        values[NUM_BUFFERCACHE_PAGES_ELEM];
-        bool        nulls[NUM_BUFFERCACHE_PAGES_ELEM];
-
-        values[0] = Int32GetDatum(fctx->record[i].bufferid);
-        nulls[0] = false;
-
-        /*
-         * Set all fields except the bufferid to null if the buffer is unused
-         * or not valid.
-         */
-        if (fctx->record[i].blocknum == InvalidBlockNumber ||
-            fctx->record[i].isvalid == false)
-        {
-            nulls[1] = true;
-            nulls[2] = true;
-            nulls[3] = true;
-            nulls[4] = true;
-            nulls[5] = true;
-            nulls[6] = true;
-            nulls[7] = true;
-        }
-        else
-        {
-            values[1] = ObjectIdGetDatum(fctx->record[i].relfilenode);
-            nulls[1] = false;
-            values[2] = ObjectIdGetDatum(fctx->record[i].reltablespace);
-            nulls[2] = false;
-            values[3] = ObjectIdGetDatum(fctx->record[i].reldatabase);
-            nulls[3] = false;
-            values[4] = ObjectIdGetDatum(fctx->record[i].forknum);
-            nulls[4] = false;
-            values[5] = Int64GetDatum((int64) fctx->record[i].blocknum);
-            nulls[5] = false;
-            values[6] = BoolGetDatum(fctx->record[i].isdirty);
-            nulls[6] = false;
-            values[7] = Int16GetDatum(fctx->record[i].usagecount);
-            nulls[7] = false;
-        }
-
-        /* Build and return the tuple. */
-        tuple = heap_form_tuple(fctx->tupdesc, values, nulls);
-        result = HeapTupleGetDatum(tuple);
-
-        SRF_RETURN_NEXT(funcctx, result);
-    }
-    else
-        SRF_RETURN_DONE(funcctx);
-}
diff --git a/contrib/pg_freespacemap/Makefile b/contrib/pg_freespacemap/Makefile
deleted file mode 100644
index b2e3ba3..0000000
--- a/contrib/pg_freespacemap/Makefile
+++ /dev/null
@@ -1,18 +0,0 @@
-# contrib/pg_freespacemap/Makefile
-
-MODULE_big = pg_freespacemap
-OBJS = pg_freespacemap.o
-
-EXTENSION = pg_freespacemap
-DATA = pg_freespacemap--1.0.sql pg_freespacemap--unpackaged--1.0.sql
-
-ifdef USE_PGXS
-PG_CONFIG = pg_config
-PGXS := $(shell $(PG_CONFIG) --pgxs)
-include $(PGXS)
-else
-subdir = contrib/pg_freespacemap
-top_builddir = ../..
-include $(top_builddir)/src/Makefile.global
-include $(top_srcdir)/contrib/contrib-global.mk
-endif
diff --git a/contrib/pg_freespacemap/pg_freespacemap--1.0.sql b/contrib/pg_freespacemap/pg_freespacemap--1.0.sql
deleted file mode 100644
index d63420e..0000000
--- a/contrib/pg_freespacemap/pg_freespacemap--1.0.sql
+++ /dev/null
@@ -1,22 +0,0 @@
-/* contrib/pg_freespacemap/pg_freespacemap--1.0.sql */
-
--- Register the C function.
-CREATE FUNCTION pg_freespace(regclass, bigint)
-RETURNS int2
-AS 'MODULE_PATHNAME', 'pg_freespace'
-LANGUAGE C STRICT;
-
--- pg_freespace shows the recorded space avail at each block in a relation
-CREATE FUNCTION
-  pg_freespace(rel regclass, blkno OUT bigint, avail OUT int2)
-RETURNS SETOF RECORD
-AS $$
-  SELECT blkno, pg_freespace($1, blkno) AS avail
-  FROM generate_series(0, pg_relation_size($1) / current_setting('block_size')::bigint - 1) AS blkno;
-$$
-LANGUAGE SQL;
-
-
--- Don't want these to be available to public.
-REVOKE ALL ON FUNCTION pg_freespace(regclass, bigint) FROM PUBLIC;
-REVOKE ALL ON FUNCTION pg_freespace(regclass) FROM PUBLIC;
diff --git a/contrib/pg_freespacemap/pg_freespacemap--unpackaged--1.0.sql b/contrib/pg_freespacemap/pg_freespacemap--unpackaged--1.0.sql
deleted file mode 100644
index 4c7487f..0000000
--- a/contrib/pg_freespacemap/pg_freespacemap--unpackaged--1.0.sql
+++ /dev/null
@@ -1,4 +0,0 @@
-/* contrib/pg_freespacemap/pg_freespacemap--unpackaged--1.0.sql */
-
-ALTER EXTENSION pg_freespacemap ADD function pg_freespace(regclass,bigint);
-ALTER EXTENSION pg_freespacemap ADD function pg_freespace(regclass);
diff --git a/contrib/pg_freespacemap/pg_freespacemap.c b/contrib/pg_freespacemap/pg_freespacemap.c
deleted file mode 100644
index bf6b0df..0000000
--- a/contrib/pg_freespacemap/pg_freespacemap.c
+++ /dev/null
@@ -1,46 +0,0 @@
-/*-------------------------------------------------------------------------
- *
- * pg_freespacemap.c
- *      display contents of a free space map
- *
- *      contrib/pg_freespacemap/pg_freespacemap.c
- *-------------------------------------------------------------------------
- */
-#include "postgres.h"
-
-#include "access/heapam.h"
-#include "funcapi.h"
-#include "storage/block.h"
-#include "storage/freespace.h"
-
-
-PG_MODULE_MAGIC;
-
-Datum        pg_freespace(PG_FUNCTION_ARGS);
-
-/*
- * Returns the amount of free space on a given page, according to the
- * free space map.
- */
-PG_FUNCTION_INFO_V1(pg_freespace);
-
-Datum
-pg_freespace(PG_FUNCTION_ARGS)
-{
-    Oid            relid = PG_GETARG_OID(0);
-    int64        blkno = PG_GETARG_INT64(1);
-    int16        freespace;
-    Relation    rel;
-
-    rel = relation_open(relid, AccessShareLock);
-
-    if (blkno < 0 || blkno > MaxBlockNumber)
-        ereport(ERROR,
-                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
-                 errmsg("invalid block number")));
-
-    freespace = GetRecordedFreeSpace(rel, blkno);
-
-    relation_close(rel, AccessShareLock);
-    PG_RETURN_INT16(freespace);
-}
diff --git a/contrib/pg_freespacemap/pg_freespacemap.control b/contrib/pg_freespacemap/pg_freespacemap.control
deleted file mode 100644
index 34b695f..0000000
--- a/contrib/pg_freespacemap/pg_freespacemap.control
+++ /dev/null
@@ -1,5 +0,0 @@
-# pg_freespacemap extension
-comment = 'examine the free space map (FSM)'
-default_version = '1.0'
-module_pathname = '$libdir/pg_freespacemap'
-relocatable = true
diff --git a/contrib/pg_stat_statements/Makefile b/contrib/pg_stat_statements/Makefile
deleted file mode 100644
index e086fd8..0000000
--- a/contrib/pg_stat_statements/Makefile
+++ /dev/null
@@ -1,18 +0,0 @@
-# contrib/pg_stat_statements/Makefile
-
-MODULE_big = pg_stat_statements
-OBJS = pg_stat_statements.o
-
-EXTENSION = pg_stat_statements
-DATA = pg_stat_statements--1.0.sql pg_stat_statements--unpackaged--1.0.sql
-
-ifdef USE_PGXS
-PG_CONFIG = pg_config
-PGXS := $(shell $(PG_CONFIG) --pgxs)
-include $(PGXS)
-else
-subdir = contrib/pg_stat_statements
-top_builddir = ../..
-include $(top_builddir)/src/Makefile.global
-include $(top_srcdir)/contrib/contrib-global.mk
-endif
diff --git a/contrib/pg_stat_statements/pg_stat_statements--1.0.sql b/contrib/pg_stat_statements/pg_stat_statements--1.0.sql
deleted file mode 100644
index e17b82c..0000000
--- a/contrib/pg_stat_statements/pg_stat_statements--1.0.sql
+++ /dev/null
@@ -1,36 +0,0 @@
-/* contrib/pg_stat_statements/pg_stat_statements--1.0.sql */
-
--- Register functions.
-CREATE FUNCTION pg_stat_statements_reset()
-RETURNS void
-AS 'MODULE_PATHNAME'
-LANGUAGE C;
-
-CREATE FUNCTION pg_stat_statements(
-    OUT userid oid,
-    OUT dbid oid,
-    OUT query text,
-    OUT calls int8,
-    OUT total_time float8,
-    OUT rows int8,
-    OUT shared_blks_hit int8,
-    OUT shared_blks_read int8,
-    OUT shared_blks_written int8,
-    OUT local_blks_hit int8,
-    OUT local_blks_read int8,
-    OUT local_blks_written int8,
-    OUT temp_blks_read int8,
-    OUT temp_blks_written int8
-)
-RETURNS SETOF record
-AS 'MODULE_PATHNAME'
-LANGUAGE C;
-
--- Register a view on the function for ease of use.
-CREATE VIEW pg_stat_statements AS
-  SELECT * FROM pg_stat_statements();
-
-GRANT SELECT ON pg_stat_statements TO PUBLIC;
-
--- Don't want this to be available to non-superusers.
-REVOKE ALL ON FUNCTION pg_stat_statements_reset() FROM PUBLIC;
diff --git a/contrib/pg_stat_statements/pg_stat_statements--unpackaged--1.0.sql b/contrib/pg_stat_statements/pg_stat_statements--unpackaged--1.0.sql
deleted file mode 100644
index 9dda85c..0000000
--- a/contrib/pg_stat_statements/pg_stat_statements--unpackaged--1.0.sql
+++ /dev/null
@@ -1,5 +0,0 @@
-/* contrib/pg_stat_statements/pg_stat_statements--unpackaged--1.0.sql */
-
-ALTER EXTENSION pg_stat_statements ADD function pg_stat_statements_reset();
-ALTER EXTENSION pg_stat_statements ADD function pg_stat_statements();
-ALTER EXTENSION pg_stat_statements ADD view pg_stat_statements;
diff --git a/contrib/pg_stat_statements/pg_stat_statements.c b/contrib/pg_stat_statements/pg_stat_statements.c
deleted file mode 100644
index 0236b87..0000000
--- a/contrib/pg_stat_statements/pg_stat_statements.c
+++ /dev/null
@@ -1,1046 +0,0 @@
-/*-------------------------------------------------------------------------
- *
- * pg_stat_statements.c
- *        Track statement execution times across a whole database cluster.
- *
- * Note about locking issues: to create or delete an entry in the shared
- * hashtable, one must hold pgss->lock exclusively.  Modifying any field
- * in an entry except the counters requires the same.  To look up an entry,
- * one must hold the lock shared.  To read or update the counters within
- * an entry, one must hold the lock shared or exclusive (so the entry doesn't
- * disappear!) and also take the entry's mutex spinlock.
- *
- *
- * Copyright (c) 2008-2011, PostgreSQL Global Development Group
- *
- * IDENTIFICATION
- *      contrib/pg_stat_statements/pg_stat_statements.c
- *
- *-------------------------------------------------------------------------
- */
-#include "postgres.h"
-
-#include <unistd.h>
-
-#include "access/hash.h"
-#include "catalog/pg_type.h"
-#include "executor/executor.h"
-#include "executor/instrument.h"
-#include "funcapi.h"
-#include "mb/pg_wchar.h"
-#include "miscadmin.h"
-#include "pgstat.h"
-#include "storage/fd.h"
-#include "storage/ipc.h"
-#include "storage/spin.h"
-#include "tcop/utility.h"
-#include "utils/builtins.h"
-#include "utils/hsearch.h"
-#include "utils/guc.h"
-
-
-PG_MODULE_MAGIC;
-
-/* Location of stats file */
-#define PGSS_DUMP_FILE    "global/pg_stat_statements.stat"
-
-/* This constant defines the magic number in the stats file header */
-static const uint32 PGSS_FILE_HEADER = 0x20100108;
-
-/* XXX: Should USAGE_EXEC reflect execution time and/or buffer usage? */
-#define USAGE_EXEC(duration)    (1.0)
-#define USAGE_INIT                (1.0)    /* including initial planning */
-#define USAGE_DECREASE_FACTOR    (0.99)    /* decreased every entry_dealloc */
-#define USAGE_DEALLOC_PERCENT    5        /* free this % of entries at once */
-
-/*
- * Hashtable key that defines the identity of a hashtable entry.  The
- * hash comparators do not assume that the query string is null-terminated;
- * this lets us search for an mbcliplen'd string without copying it first.
- *
- * Presently, the query encoding is fully determined by the source database
- * and so we don't really need it to be in the key.  But that might not always
- * be true. Anyway it's notationally convenient to pass it as part of the key.
- */
-typedef struct pgssHashKey
-{
-    Oid            userid;            /* user OID */
-    Oid            dbid;            /* database OID */
-    int            encoding;        /* query encoding */
-    int            query_len;        /* # of valid bytes in query string */
-    const char *query_ptr;        /* query string proper */
-} pgssHashKey;
-
-/*
- * The actual stats counters kept within pgssEntry.
- */
-typedef struct Counters
-{
-    int64        calls;            /* # of times executed */
-    double        total_time;        /* total execution time in seconds */
-    int64        rows;            /* total # of retrieved or affected rows */
-    int64        shared_blks_hit;    /* # of shared buffer hits */
-    int64        shared_blks_read;        /* # of shared disk blocks read */
-    int64        shared_blks_written;    /* # of shared disk blocks written */
-    int64        local_blks_hit; /* # of local buffer hits */
-    int64        local_blks_read;    /* # of local disk blocks read */
-    int64        local_blks_written;        /* # of local disk blocks written */
-    int64        temp_blks_read; /* # of temp blocks read */
-    int64        temp_blks_written;        /* # of temp blocks written */
-    double        usage;            /* usage factor */
-} Counters;
-
-/*
- * Statistics per statement
- *
- * NB: see the file read/write code before changing field order here.
- */
-typedef struct pgssEntry
-{
-    pgssHashKey key;            /* hash key of entry - MUST BE FIRST */
-    Counters    counters;        /* the statistics for this query */
-    slock_t        mutex;            /* protects the counters only */
-    char        query[1];        /* VARIABLE LENGTH ARRAY - MUST BE LAST */
-    /* Note: the allocated length of query[] is actually pgss->query_size */
-} pgssEntry;
-
-/*
- * Global shared state
- */
-typedef struct pgssSharedState
-{
-    LWLockId    lock;            /* protects hashtable search/modification */
-    int            query_size;        /* max query length in bytes */
-} pgssSharedState;
-
-/*---- Local variables ----*/
-
-/* Current nesting depth of ExecutorRun calls */
-static int    nested_level = 0;
-
-/* Saved hook values in case of unload */
-static shmem_startup_hook_type prev_shmem_startup_hook = NULL;
-static ExecutorStart_hook_type prev_ExecutorStart = NULL;
-static ExecutorRun_hook_type prev_ExecutorRun = NULL;
-static ExecutorFinish_hook_type prev_ExecutorFinish = NULL;
-static ExecutorEnd_hook_type prev_ExecutorEnd = NULL;
-static ProcessUtility_hook_type prev_ProcessUtility = NULL;
-
-/* Links to shared memory state */
-static pgssSharedState *pgss = NULL;
-static HTAB *pgss_hash = NULL;
-
-/*---- GUC variables ----*/
-
-typedef enum
-{
-    PGSS_TRACK_NONE,            /* track no statements */
-    PGSS_TRACK_TOP,                /* only top level statements */
-    PGSS_TRACK_ALL                /* all statements, including nested ones */
-}    PGSSTrackLevel;
-
-static const struct config_enum_entry track_options[] =
-{
-    {"none", PGSS_TRACK_NONE, false},
-    {"top", PGSS_TRACK_TOP, false},
-    {"all", PGSS_TRACK_ALL, false},
-    {NULL, 0, false}
-};
-
-static int    pgss_max;            /* max # statements to track */
-static int    pgss_track;            /* tracking level */
-static bool pgss_track_utility; /* whether to track utility commands */
-static bool pgss_save;            /* whether to save stats across shutdown */
-
-
-#define pgss_enabled() \
-    (pgss_track == PGSS_TRACK_ALL || \
-    (pgss_track == PGSS_TRACK_TOP && nested_level == 0))
-
-/*---- Function declarations ----*/
-
-void        _PG_init(void);
-void        _PG_fini(void);
-
-Datum        pg_stat_statements_reset(PG_FUNCTION_ARGS);
-Datum        pg_stat_statements(PG_FUNCTION_ARGS);
-
-PG_FUNCTION_INFO_V1(pg_stat_statements_reset);
-PG_FUNCTION_INFO_V1(pg_stat_statements);
-
-static void pgss_shmem_startup(void);
-static void pgss_shmem_shutdown(int code, Datum arg);
-static void pgss_ExecutorStart(QueryDesc *queryDesc, int eflags);
-static void pgss_ExecutorRun(QueryDesc *queryDesc,
-                 ScanDirection direction,
-                 long count);
-static void pgss_ExecutorFinish(QueryDesc *queryDesc);
-static void pgss_ExecutorEnd(QueryDesc *queryDesc);
-static void pgss_ProcessUtility(Node *parsetree,
-              const char *queryString, ParamListInfo params, bool isTopLevel,
-                    DestReceiver *dest, char *completionTag);
-static uint32 pgss_hash_fn(const void *key, Size keysize);
-static int    pgss_match_fn(const void *key1, const void *key2, Size keysize);
-static void pgss_store(const char *query, double total_time, uint64 rows,
-           const BufferUsage *bufusage);
-static Size pgss_memsize(void);
-static pgssEntry *entry_alloc(pgssHashKey *key);
-static void entry_dealloc(void);
-static void entry_reset(void);
-
-
-/*
- * Module load callback
- */
-void
-_PG_init(void)
-{
-    /*
-     * In order to create our shared memory area, we have to be loaded via
-     * shared_preload_libraries.  If not, fall out without hooking into any of
-     * the main system.  (We don't throw error here because it seems useful to
-     * allow the pg_stat_statements functions to be created even when the
-     * module isn't active.  The functions must protect themselves against
-     * being called then, however.)
-     */
-    if (!process_shared_preload_libraries_in_progress)
-        return;
-
-    /*
-     * Define (or redefine) custom GUC variables.
-     */
-    DefineCustomIntVariable("pg_stat_statements.max",
-      "Sets the maximum number of statements tracked by pg_stat_statements.",
-                            NULL,
-                            &pgss_max,
-                            1000,
-                            100,
-                            INT_MAX,
-                            PGC_POSTMASTER,
-                            0,
-                            NULL,
-                            NULL,
-                            NULL);
-
-    DefineCustomEnumVariable("pg_stat_statements.track",
-               "Selects which statements are tracked by pg_stat_statements.",
-                             NULL,
-                             &pgss_track,
-                             PGSS_TRACK_TOP,
-                             track_options,
-                             PGC_SUSET,
-                             0,
-                             NULL,
-                             NULL,
-                             NULL);
-
-    DefineCustomBoolVariable("pg_stat_statements.track_utility",
-       "Selects whether utility commands are tracked by pg_stat_statements.",
-                             NULL,
-                             &pgss_track_utility,
-                             true,
-                             PGC_SUSET,
-                             0,
-                             NULL,
-                             NULL,
-                             NULL);
-
-    DefineCustomBoolVariable("pg_stat_statements.save",
-               "Save pg_stat_statements statistics across server shutdowns.",
-                             NULL,
-                             &pgss_save,
-                             true,
-                             PGC_SIGHUP,
-                             0,
-                             NULL,
-                             NULL,
-                             NULL);
-
-    EmitWarningsOnPlaceholders("pg_stat_statements");
-
-    /*
-     * Request additional shared resources.  (These are no-ops if we're not in
-     * the postmaster process.)  We'll allocate or attach to the shared
-     * resources in pgss_shmem_startup().
-     */
-    RequestAddinShmemSpace(pgss_memsize());
-    RequestAddinLWLocks(1);
-
-    /*
-     * Install hooks.
-     */
-    prev_shmem_startup_hook = shmem_startup_hook;
-    shmem_startup_hook = pgss_shmem_startup;
-    prev_ExecutorStart = ExecutorStart_hook;
-    ExecutorStart_hook = pgss_ExecutorStart;
-    prev_ExecutorRun = ExecutorRun_hook;
-    ExecutorRun_hook = pgss_ExecutorRun;
-    prev_ExecutorFinish = ExecutorFinish_hook;
-    ExecutorFinish_hook = pgss_ExecutorFinish;
-    prev_ExecutorEnd = ExecutorEnd_hook;
-    ExecutorEnd_hook = pgss_ExecutorEnd;
-    prev_ProcessUtility = ProcessUtility_hook;
-    ProcessUtility_hook = pgss_ProcessUtility;
-}
-
-/*
- * Module unload callback
- */
-void
-_PG_fini(void)
-{
-    /* Uninstall hooks. */
-    shmem_startup_hook = prev_shmem_startup_hook;
-    ExecutorStart_hook = prev_ExecutorStart;
-    ExecutorRun_hook = prev_ExecutorRun;
-    ExecutorFinish_hook = prev_ExecutorFinish;
-    ExecutorEnd_hook = prev_ExecutorEnd;
-    ProcessUtility_hook = prev_ProcessUtility;
-}
-
-/*
- * shmem_startup hook: allocate or attach to shared memory,
- * then load any pre-existing statistics from file.
- */
-static void
-pgss_shmem_startup(void)
-{
-    bool        found;
-    HASHCTL        info;
-    FILE       *file;
-    uint32        header;
-    int32        num;
-    int32        i;
-    int            query_size;
-    int            buffer_size;
-    char       *buffer = NULL;
-
-    if (prev_shmem_startup_hook)
-        prev_shmem_startup_hook();
-
-    /* reset in case this is a restart within the postmaster */
-    pgss = NULL;
-    pgss_hash = NULL;
-
-    /*
-     * Create or attach to the shared memory state, including hash table
-     */
-    LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE);
-
-    pgss = ShmemInitStruct("pg_stat_statements",
-                           sizeof(pgssSharedState),
-                           &found);
-
-    if (!found)
-    {
-        /* First time through ... */
-        pgss->lock = LWLockAssign();
-        pgss->query_size = pgstat_track_activity_query_size;
-    }
-
-    /* Be sure everyone agrees on the hash table entry size */
-    query_size = pgss->query_size;
-
-    memset(&info, 0, sizeof(info));
-    info.keysize = sizeof(pgssHashKey);
-    info.entrysize = offsetof(pgssEntry, query) +query_size;
-    info.hash = pgss_hash_fn;
-    info.match = pgss_match_fn;
-    pgss_hash = ShmemInitHash("pg_stat_statements hash",
-                              pgss_max, pgss_max,
-                              &info,
-                              HASH_ELEM | HASH_FUNCTION | HASH_COMPARE);
-
-    LWLockRelease(AddinShmemInitLock);
-
-    /*
-     * If we're in the postmaster (or a standalone backend...), set up a shmem
-     * exit hook to dump the statistics to disk.
-     */
-    if (!IsUnderPostmaster)
-        on_shmem_exit(pgss_shmem_shutdown, (Datum) 0);
-
-    /*
-     * Attempt to load old statistics from the dump file, if this is the first
-     * time through and we weren't told not to.
-     */
-    if (found || !pgss_save)
-        return;
-
-    /*
-     * Note: we don't bother with locks here, because there should be no other
-     * processes running when this code is reached.
-     */
-    file = AllocateFile(PGSS_DUMP_FILE, PG_BINARY_R);
-    if (file == NULL)
-    {
-        if (errno == ENOENT)
-            return;                /* ignore not-found error */
-        goto error;
-    }
-
-    buffer_size = query_size;
-    buffer = (char *) palloc(buffer_size);
-
-    if (fread(&header, sizeof(uint32), 1, file) != 1 ||
-        header != PGSS_FILE_HEADER ||
-        fread(&num, sizeof(int32), 1, file) != 1)
-        goto error;
-
-    for (i = 0; i < num; i++)
-    {
-        pgssEntry    temp;
-        pgssEntry  *entry;
-
-        if (fread(&temp, offsetof(pgssEntry, mutex), 1, file) != 1)
-            goto error;
-
-        /* Encoding is the only field we can easily sanity-check */
-        if (!PG_VALID_BE_ENCODING(temp.key.encoding))
-            goto error;
-
-        /* Previous incarnation might have had a larger query_size */
-        if (temp.key.query_len >= buffer_size)
-        {
-            buffer = (char *) repalloc(buffer, temp.key.query_len + 1);
-            buffer_size = temp.key.query_len + 1;
-        }
-
-        if (fread(buffer, 1, temp.key.query_len, file) != temp.key.query_len)
-            goto error;
-        buffer[temp.key.query_len] = '\0';
-
-        /* Clip to available length if needed */
-        if (temp.key.query_len >= query_size)
-            temp.key.query_len = pg_encoding_mbcliplen(temp.key.encoding,
-                                                       buffer,
-                                                       temp.key.query_len,
-                                                       query_size - 1);
-        temp.key.query_ptr = buffer;
-
-        /* make the hashtable entry (discards old entries if too many) */
-        entry = entry_alloc(&temp.key);
-
-        /* copy in the actual stats */
-        entry->counters = temp.counters;
-    }
-
-    pfree(buffer);
-    FreeFile(file);
-    return;
-
-error:
-    ereport(LOG,
-            (errcode_for_file_access(),
-             errmsg("could not read pg_stat_statement file \"%s\": %m",
-                    PGSS_DUMP_FILE)));
-    if (buffer)
-        pfree(buffer);
-    if (file)
-        FreeFile(file);
-    /* If possible, throw away the bogus file; ignore any error */
-    unlink(PGSS_DUMP_FILE);
-}
-
-/*
- * shmem_shutdown hook: Dump statistics into file.
- *
- * Note: we don't bother with acquiring lock, because there should be no
- * other processes running when this is called.
- */
-static void
-pgss_shmem_shutdown(int code, Datum arg)
-{
-    FILE       *file;
-    HASH_SEQ_STATUS hash_seq;
-    int32        num_entries;
-    pgssEntry  *entry;
-
-    /* Don't try to dump during a crash. */
-    if (code)
-        return;
-
-    /* Safety check ... shouldn't get here unless shmem is set up. */
-    if (!pgss || !pgss_hash)
-        return;
-
-    /* Don't dump if told not to. */
-    if (!pgss_save)
-        return;
-
-    file = AllocateFile(PGSS_DUMP_FILE, PG_BINARY_W);
-    if (file == NULL)
-        goto error;
-
-    if (fwrite(&PGSS_FILE_HEADER, sizeof(uint32), 1, file) != 1)
-        goto error;
-    num_entries = hash_get_num_entries(pgss_hash);
-    if (fwrite(&num_entries, sizeof(int32), 1, file) != 1)
-        goto error;
-
-    hash_seq_init(&hash_seq, pgss_hash);
-    while ((entry = hash_seq_search(&hash_seq)) != NULL)
-    {
-        int            len = entry->key.query_len;
-
-        if (fwrite(entry, offsetof(pgssEntry, mutex), 1, file) != 1 ||
-            fwrite(entry->query, 1, len, file) != len)
-            goto error;
-    }
-
-    if (FreeFile(file))
-    {
-        file = NULL;
-        goto error;
-    }
-
-    return;
-
-error:
-    ereport(LOG,
-            (errcode_for_file_access(),
-             errmsg("could not write pg_stat_statement file \"%s\": %m",
-                    PGSS_DUMP_FILE)));
-    if (file)
-        FreeFile(file);
-    unlink(PGSS_DUMP_FILE);
-}
-
-/*
- * ExecutorStart hook: start up tracking if needed
- */
-static void
-pgss_ExecutorStart(QueryDesc *queryDesc, int eflags)
-{
-    if (prev_ExecutorStart)
-        prev_ExecutorStart(queryDesc, eflags);
-    else
-        standard_ExecutorStart(queryDesc, eflags);
-
-    if (pgss_enabled())
-    {
-        /*
-         * Set up to track total elapsed time in ExecutorRun.  Make sure the
-         * space is allocated in the per-query context so it will go away at
-         * ExecutorEnd.
-         */
-        if (queryDesc->totaltime == NULL)
-        {
-            MemoryContext oldcxt;
-
-            oldcxt = MemoryContextSwitchTo(queryDesc->estate->es_query_cxt);
-            queryDesc->totaltime = InstrAlloc(1, INSTRUMENT_ALL);
-            MemoryContextSwitchTo(oldcxt);
-        }
-    }
-}
-
-/*
- * ExecutorRun hook: all we need do is track nesting depth
- */
-static void
-pgss_ExecutorRun(QueryDesc *queryDesc, ScanDirection direction, long count)
-{
-    nested_level++;
-    PG_TRY();
-    {
-        if (prev_ExecutorRun)
-            prev_ExecutorRun(queryDesc, direction, count);
-        else
-            standard_ExecutorRun(queryDesc, direction, count);
-        nested_level--;
-    }
-    PG_CATCH();
-    {
-        nested_level--;
-        PG_RE_THROW();
-    }
-    PG_END_TRY();
-}
-
-/*
- * ExecutorFinish hook: all we need do is track nesting depth
- */
-static void
-pgss_ExecutorFinish(QueryDesc *queryDesc)
-{
-    nested_level++;
-    PG_TRY();
-    {
-        if (prev_ExecutorFinish)
-            prev_ExecutorFinish(queryDesc);
-        else
-            standard_ExecutorFinish(queryDesc);
-        nested_level--;
-    }
-    PG_CATCH();
-    {
-        nested_level--;
-        PG_RE_THROW();
-    }
-    PG_END_TRY();
-}
-
-/*
- * ExecutorEnd hook: store results if needed
- */
-static void
-pgss_ExecutorEnd(QueryDesc *queryDesc)
-{
-    if (queryDesc->totaltime && pgss_enabled())
-    {
-        /*
-         * Make sure stats accumulation is done.  (Note: it's okay if several
-         * levels of hook all do this.)
-         */
-        InstrEndLoop(queryDesc->totaltime);
-
-        pgss_store(queryDesc->sourceText,
-                   queryDesc->totaltime->total,
-                   queryDesc->estate->es_processed,
-                   &queryDesc->totaltime->bufusage);
-    }
-
-    if (prev_ExecutorEnd)
-        prev_ExecutorEnd(queryDesc);
-    else
-        standard_ExecutorEnd(queryDesc);
-}
-
-/*
- * ProcessUtility hook
- */
-static void
-pgss_ProcessUtility(Node *parsetree, const char *queryString,
-                    ParamListInfo params, bool isTopLevel,
-                    DestReceiver *dest, char *completionTag)
-{
-    if (pgss_track_utility && pgss_enabled())
-    {
-        instr_time    start;
-        instr_time    duration;
-        uint64        rows = 0;
-        BufferUsage bufusage;
-
-        bufusage = pgBufferUsage;
-        INSTR_TIME_SET_CURRENT(start);
-
-        nested_level++;
-        PG_TRY();
-        {
-            if (prev_ProcessUtility)
-                prev_ProcessUtility(parsetree, queryString, params,
-                                    isTopLevel, dest, completionTag);
-            else
-                standard_ProcessUtility(parsetree, queryString, params,
-                                        isTopLevel, dest, completionTag);
-            nested_level--;
-        }
-        PG_CATCH();
-        {
-            nested_level--;
-            PG_RE_THROW();
-        }
-        PG_END_TRY();
-
-        INSTR_TIME_SET_CURRENT(duration);
-        INSTR_TIME_SUBTRACT(duration, start);
-
-        /* parse command tag to retrieve the number of affected rows. */
-        if (completionTag &&
-            sscanf(completionTag, "COPY " UINT64_FORMAT, &rows) != 1)
-            rows = 0;
-
-        /* calc differences of buffer counters. */
-        bufusage.shared_blks_hit =
-            pgBufferUsage.shared_blks_hit - bufusage.shared_blks_hit;
-        bufusage.shared_blks_read =
-            pgBufferUsage.shared_blks_read - bufusage.shared_blks_read;
-        bufusage.shared_blks_written =
-            pgBufferUsage.shared_blks_written - bufusage.shared_blks_written;
-        bufusage.local_blks_hit =
-            pgBufferUsage.local_blks_hit - bufusage.local_blks_hit;
-        bufusage.local_blks_read =
-            pgBufferUsage.local_blks_read - bufusage.local_blks_read;
-        bufusage.local_blks_written =
-            pgBufferUsage.local_blks_written - bufusage.local_blks_written;
-        bufusage.temp_blks_read =
-            pgBufferUsage.temp_blks_read - bufusage.temp_blks_read;
-        bufusage.temp_blks_written =
-            pgBufferUsage.temp_blks_written - bufusage.temp_blks_written;
-
-        pgss_store(queryString, INSTR_TIME_GET_DOUBLE(duration), rows,
-                   &bufusage);
-    }
-    else
-    {
-        if (prev_ProcessUtility)
-            prev_ProcessUtility(parsetree, queryString, params,
-                                isTopLevel, dest, completionTag);
-        else
-            standard_ProcessUtility(parsetree, queryString, params,
-                                    isTopLevel, dest, completionTag);
-    }
-}
-
-/*
- * Calculate hash value for a key
- */
-static uint32
-pgss_hash_fn(const void *key, Size keysize)
-{
-    const pgssHashKey *k = (const pgssHashKey *) key;
-
-    /* we don't bother to include encoding in the hash */
-    return hash_uint32((uint32) k->userid) ^
-        hash_uint32((uint32) k->dbid) ^
-        DatumGetUInt32(hash_any((const unsigned char *) k->query_ptr,
-                                k->query_len));
-}
-
-/*
- * Compare two keys - zero means match
- */
-static int
-pgss_match_fn(const void *key1, const void *key2, Size keysize)
-{
-    const pgssHashKey *k1 = (const pgssHashKey *) key1;
-    const pgssHashKey *k2 = (const pgssHashKey *) key2;
-
-    if (k1->userid == k2->userid &&
-        k1->dbid == k2->dbid &&
-        k1->encoding == k2->encoding &&
-        k1->query_len == k2->query_len &&
-        memcmp(k1->query_ptr, k2->query_ptr, k1->query_len) == 0)
-        return 0;
-    else
-        return 1;
-}
-
-/*
- * Store some statistics for a statement.
- */
-static void
-pgss_store(const char *query, double total_time, uint64 rows,
-           const BufferUsage *bufusage)
-{
-    pgssHashKey key;
-    double        usage;
-    pgssEntry  *entry;
-
-    Assert(query != NULL);
-
-    /* Safety check... */
-    if (!pgss || !pgss_hash)
-        return;
-
-    /* Set up key for hashtable search */
-    key.userid = GetUserId();
-    key.dbid = MyDatabaseId;
-    key.encoding = GetDatabaseEncoding();
-    key.query_len = strlen(query);
-    if (key.query_len >= pgss->query_size)
-        key.query_len = pg_encoding_mbcliplen(key.encoding,
-                                              query,
-                                              key.query_len,
-                                              pgss->query_size - 1);
-    key.query_ptr = query;
-
-    usage = USAGE_EXEC(duration);
-
-    /* Lookup the hash table entry with shared lock. */
-    LWLockAcquire(pgss->lock, LW_SHARED);
-
-    entry = (pgssEntry *) hash_search(pgss_hash, &key, HASH_FIND, NULL);
-    if (!entry)
-    {
-        /* Must acquire exclusive lock to add a new entry. */
-        LWLockRelease(pgss->lock);
-        LWLockAcquire(pgss->lock, LW_EXCLUSIVE);
-        entry = entry_alloc(&key);
-    }
-
-    /* Grab the spinlock while updating the counters. */
-    {
-        volatile pgssEntry *e = (volatile pgssEntry *) entry;
-
-        SpinLockAcquire(&e->mutex);
-        e->counters.calls += 1;
-        e->counters.total_time += total_time;
-        e->counters.rows += rows;
-        e->counters.shared_blks_hit += bufusage->shared_blks_hit;
-        e->counters.shared_blks_read += bufusage->shared_blks_read;
-        e->counters.shared_blks_written += bufusage->shared_blks_written;
-        e->counters.local_blks_hit += bufusage->local_blks_hit;
-        e->counters.local_blks_read += bufusage->local_blks_read;
-        e->counters.local_blks_written += bufusage->local_blks_written;
-        e->counters.temp_blks_read += bufusage->temp_blks_read;
-        e->counters.temp_blks_written += bufusage->temp_blks_written;
-        e->counters.usage += usage;
-        SpinLockRelease(&e->mutex);
-    }
-
-    LWLockRelease(pgss->lock);
-}
-
-/*
- * Reset all statement statistics.
- */
-Datum
-pg_stat_statements_reset(PG_FUNCTION_ARGS)
-{
-    if (!pgss || !pgss_hash)
-        ereport(ERROR,
-                (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-                 errmsg("pg_stat_statements must be loaded via shared_preload_libraries")));
-    entry_reset();
-    PG_RETURN_VOID();
-}
-
-#define PG_STAT_STATEMENTS_COLS        14
-
-/*
- * Retrieve statement statistics.
- */
-Datum
-pg_stat_statements(PG_FUNCTION_ARGS)
-{
-    ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
-    TupleDesc    tupdesc;
-    Tuplestorestate *tupstore;
-    MemoryContext per_query_ctx;
-    MemoryContext oldcontext;
-    Oid            userid = GetUserId();
-    bool        is_superuser = superuser();
-    HASH_SEQ_STATUS hash_seq;
-    pgssEntry  *entry;
-
-    if (!pgss || !pgss_hash)
-        ereport(ERROR,
-                (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-                 errmsg("pg_stat_statements must be loaded via shared_preload_libraries")));
-
-    /* check to see if caller supports us returning a tuplestore */
-    if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
-        ereport(ERROR,
-                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
-                 errmsg("set-valued function called in context that cannot accept a set")));
-    if (!(rsinfo->allowedModes & SFRM_Materialize))
-        ereport(ERROR,
-                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
-                 errmsg("materialize mode required, but it is not " \
-                        "allowed in this context")));
-
-    /* Build a tuple descriptor for our result type */
-    if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
-        elog(ERROR, "return type must be a row type");
-
-    per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
-    oldcontext = MemoryContextSwitchTo(per_query_ctx);
-
-    tupstore = tuplestore_begin_heap(true, false, work_mem);
-    rsinfo->returnMode = SFRM_Materialize;
-    rsinfo->setResult = tupstore;
-    rsinfo->setDesc = tupdesc;
-
-    MemoryContextSwitchTo(oldcontext);
-
-    LWLockAcquire(pgss->lock, LW_SHARED);
-
-    hash_seq_init(&hash_seq, pgss_hash);
-    while ((entry = hash_seq_search(&hash_seq)) != NULL)
-    {
-        Datum        values[PG_STAT_STATEMENTS_COLS];
-        bool        nulls[PG_STAT_STATEMENTS_COLS];
-        int            i = 0;
-        Counters    tmp;
-
-        memset(values, 0, sizeof(values));
-        memset(nulls, 0, sizeof(nulls));
-
-        values[i++] = ObjectIdGetDatum(entry->key.userid);
-        values[i++] = ObjectIdGetDatum(entry->key.dbid);
-
-        if (is_superuser || entry->key.userid == userid)
-        {
-            char       *qstr;
-
-            qstr = (char *)
-                pg_do_encoding_conversion((unsigned char *) entry->query,
-                                          entry->key.query_len,
-                                          entry->key.encoding,
-                                          GetDatabaseEncoding());
-            values[i++] = CStringGetTextDatum(qstr);
-            if (qstr != entry->query)
-                pfree(qstr);
-        }
-        else
-            values[i++] = CStringGetTextDatum("<insufficient privilege>");
-
-        /* copy counters to a local variable to keep locking time short */
-        {
-            volatile pgssEntry *e = (volatile pgssEntry *) entry;
-
-            SpinLockAcquire(&e->mutex);
-            tmp = e->counters;
-            SpinLockRelease(&e->mutex);
-        }
-
-        values[i++] = Int64GetDatumFast(tmp.calls);
-        values[i++] = Float8GetDatumFast(tmp.total_time);
-        values[i++] = Int64GetDatumFast(tmp.rows);
-        values[i++] = Int64GetDatumFast(tmp.shared_blks_hit);
-        values[i++] = Int64GetDatumFast(tmp.shared_blks_read);
-        values[i++] = Int64GetDatumFast(tmp.shared_blks_written);
-        values[i++] = Int64GetDatumFast(tmp.local_blks_hit);
-        values[i++] = Int64GetDatumFast(tmp.local_blks_read);
-        values[i++] = Int64GetDatumFast(tmp.local_blks_written);
-        values[i++] = Int64GetDatumFast(tmp.temp_blks_read);
-        values[i++] = Int64GetDatumFast(tmp.temp_blks_written);
-
-        Assert(i == PG_STAT_STATEMENTS_COLS);
-
-        tuplestore_putvalues(tupstore, tupdesc, values, nulls);
-    }
-
-    LWLockRelease(pgss->lock);
-
-    /* clean up and return the tuplestore */
-    tuplestore_donestoring(tupstore);
-
-    return (Datum) 0;
-}
-
-/*
- * Estimate shared memory space needed.
- */
-static Size
-pgss_memsize(void)
-{
-    Size        size;
-    Size        entrysize;
-
-    size = MAXALIGN(sizeof(pgssSharedState));
-    entrysize = offsetof(pgssEntry, query) +pgstat_track_activity_query_size;
-    size = add_size(size, hash_estimate_size(pgss_max, entrysize));
-
-    return size;
-}
-
-/*
- * Allocate a new hashtable entry.
- * caller must hold an exclusive lock on pgss->lock
- *
- * Note: despite needing exclusive lock, it's not an error for the target
- * entry to already exist.    This is because pgss_store releases and
- * reacquires lock after failing to find a match; so someone else could
- * have made the entry while we waited to get exclusive lock.
- */
-static pgssEntry *
-entry_alloc(pgssHashKey *key)
-{
-    pgssEntry  *entry;
-    bool        found;
-
-    /* Caller must have clipped query properly */
-    Assert(key->query_len < pgss->query_size);
-
-    /* Make space if needed */
-    while (hash_get_num_entries(pgss_hash) >= pgss_max)
-        entry_dealloc();
-
-    /* Find or create an entry with desired hash code */
-    entry = (pgssEntry *) hash_search(pgss_hash, key, HASH_ENTER, &found);
-
-    if (!found)
-    {
-        /* New entry, initialize it */
-
-        /* dynahash tried to copy the key for us, but must fix query_ptr */
-        entry->key.query_ptr = entry->query;
-        /* reset the statistics */
-        memset(&entry->counters, 0, sizeof(Counters));
-        entry->counters.usage = USAGE_INIT;
-        /* re-initialize the mutex each time ... we assume no one using it */
-        SpinLockInit(&entry->mutex);
-        /* ... and don't forget the query text */
-        memcpy(entry->query, key->query_ptr, key->query_len);
-        entry->query[key->query_len] = '\0';
-    }
-
-    return entry;
-}
-
-/*
- * qsort comparator for sorting into increasing usage order
- */
-static int
-entry_cmp(const void *lhs, const void *rhs)
-{
-    double        l_usage = (*(const pgssEntry **) lhs)->counters.usage;
-    double        r_usage = (*(const pgssEntry **) rhs)->counters.usage;
-
-    if (l_usage < r_usage)
-        return -1;
-    else if (l_usage > r_usage)
-        return +1;
-    else
-        return 0;
-}
-
-/*
- * Deallocate least used entries.
- * Caller must hold an exclusive lock on pgss->lock.
- */
-static void
-entry_dealloc(void)
-{
-    HASH_SEQ_STATUS hash_seq;
-    pgssEntry **entries;
-    pgssEntry  *entry;
-    int            nvictims;
-    int            i;
-
-    /* Sort entries by usage and deallocate USAGE_DEALLOC_PERCENT of them. */
-
-    entries = palloc(hash_get_num_entries(pgss_hash) * sizeof(pgssEntry *));
-
-    i = 0;
-    hash_seq_init(&hash_seq, pgss_hash);
-    while ((entry = hash_seq_search(&hash_seq)) != NULL)
-    {
-        entries[i++] = entry;
-        entry->counters.usage *= USAGE_DECREASE_FACTOR;
-    }
-
-    qsort(entries, i, sizeof(pgssEntry *), entry_cmp);
-    nvictims = Max(10, i * USAGE_DEALLOC_PERCENT / 100);
-    nvictims = Min(nvictims, i);
-
-    for (i = 0; i < nvictims; i++)
-    {
-        hash_search(pgss_hash, &entries[i]->key, HASH_REMOVE, NULL);
-    }
-
-    pfree(entries);
-}
-
-/*
- * Release all entries.
- */
-static void
-entry_reset(void)
-{
-    HASH_SEQ_STATUS hash_seq;
-    pgssEntry  *entry;
-
-    LWLockAcquire(pgss->lock, LW_EXCLUSIVE);
-
-    hash_seq_init(&hash_seq, pgss_hash);
-    while ((entry = hash_seq_search(&hash_seq)) != NULL)
-    {
-        hash_search(pgss_hash, &entry->key, HASH_REMOVE, NULL);
-    }
-
-    LWLockRelease(pgss->lock);
-}
diff --git a/contrib/pg_stat_statements/pg_stat_statements.control b/contrib/pg_stat_statements/pg_stat_statements.control
deleted file mode 100644
index 6f9a947..0000000
--- a/contrib/pg_stat_statements/pg_stat_statements.control
+++ /dev/null
@@ -1,5 +0,0 @@
-# pg_stat_statements extension
-comment = 'track execution statistics of all SQL statements executed'
-default_version = '1.0'
-module_pathname = '$libdir/pg_stat_statements'
-relocatable = true
diff --git a/contrib/pgrowlocks/Makefile b/contrib/pgrowlocks/Makefile
deleted file mode 100644
index f56389b..0000000
--- a/contrib/pgrowlocks/Makefile
+++ /dev/null
@@ -1,18 +0,0 @@
-# contrib/pgrowlocks/Makefile
-
-MODULE_big    = pgrowlocks
-OBJS        = pgrowlocks.o
-
-EXTENSION = pgrowlocks
-DATA = pgrowlocks--1.0.sql pgrowlocks--unpackaged--1.0.sql
-
-ifdef USE_PGXS
-PG_CONFIG = pg_config
-PGXS := $(shell $(PG_CONFIG) --pgxs)
-include $(PGXS)
-else
-subdir = contrib/pgrowlocks
-top_builddir = ../..
-include $(top_builddir)/src/Makefile.global
-include $(top_srcdir)/contrib/contrib-global.mk
-endif
diff --git a/contrib/pgrowlocks/pgrowlocks--1.0.sql b/contrib/pgrowlocks/pgrowlocks--1.0.sql
deleted file mode 100644
index 0b60fdc..0000000
--- a/contrib/pgrowlocks/pgrowlocks--1.0.sql
+++ /dev/null
@@ -1,12 +0,0 @@
-/* contrib/pgrowlocks/pgrowlocks--1.0.sql */
-
-CREATE FUNCTION pgrowlocks(IN relname text,
-    OUT locked_row TID,        -- row TID
-    OUT lock_type TEXT,        -- lock type
-    OUT locker XID,        -- locking XID
-    OUT multi bool,        -- multi XID?
-    OUT xids xid[],        -- multi XIDs
-    OUT pids INTEGER[])        -- locker's process id
-RETURNS SETOF record
-AS 'MODULE_PATHNAME', 'pgrowlocks'
-LANGUAGE C STRICT;
diff --git a/contrib/pgrowlocks/pgrowlocks--unpackaged--1.0.sql b/contrib/pgrowlocks/pgrowlocks--unpackaged--1.0.sql
deleted file mode 100644
index 2d9d1ee..0000000
--- a/contrib/pgrowlocks/pgrowlocks--unpackaged--1.0.sql
+++ /dev/null
@@ -1,3 +0,0 @@
-/* contrib/pgrowlocks/pgrowlocks--unpackaged--1.0.sql */
-
-ALTER EXTENSION pgrowlocks ADD function pgrowlocks(text);
diff --git a/contrib/pgrowlocks/pgrowlocks.c b/contrib/pgrowlocks/pgrowlocks.c
deleted file mode 100644
index 302bb5c..0000000
--- a/contrib/pgrowlocks/pgrowlocks.c
+++ /dev/null
@@ -1,220 +0,0 @@
-/*
- * contrib/pgrowlocks/pgrowlocks.c
- *
- * Copyright (c) 2005-2006    Tatsuo Ishii
- *
- * Permission to use, copy, modify, and distribute this software and
- * its documentation for any purpose, without fee, and without a
- * written agreement is hereby granted, provided that the above
- * copyright notice and this paragraph and the following two
- * paragraphs appear in all copies.
- *
- * IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT,
- * INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
- * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
- * DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED
- * OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- * THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE.  THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS
- * IS" BASIS, AND THE AUTHOR HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,
- * SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
- */
-
-#include "postgres.h"
-
-#include "access/heapam.h"
-#include "access/multixact.h"
-#include "access/relscan.h"
-#include "access/xact.h"
-#include "catalog/namespace.h"
-#include "funcapi.h"
-#include "miscadmin.h"
-#include "storage/bufmgr.h"
-#include "storage/procarray.h"
-#include "utils/acl.h"
-#include "utils/builtins.h"
-#include "utils/tqual.h"
-
-
-PG_MODULE_MAGIC;
-
-PG_FUNCTION_INFO_V1(pgrowlocks);
-
-extern Datum pgrowlocks(PG_FUNCTION_ARGS);
-
-/* ----------
- * pgrowlocks:
- * returns tids of rows being locked
- * ----------
- */
-
-#define NCHARS 32
-
-typedef struct
-{
-    Relation    rel;
-    HeapScanDesc scan;
-    int            ncolumns;
-} MyData;
-
-Datum
-pgrowlocks(PG_FUNCTION_ARGS)
-{
-    FuncCallContext *funcctx;
-    HeapScanDesc scan;
-    HeapTuple    tuple;
-    TupleDesc    tupdesc;
-    AttInMetadata *attinmeta;
-    Datum        result;
-    MyData       *mydata;
-    Relation    rel;
-
-    if (SRF_IS_FIRSTCALL())
-    {
-        text       *relname;
-        RangeVar   *relrv;
-        MemoryContext oldcontext;
-        AclResult    aclresult;
-
-        funcctx = SRF_FIRSTCALL_INIT();
-        oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
-
-        /* Build a tuple descriptor for our result type */
-        if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
-            elog(ERROR, "return type must be a row type");
-
-        attinmeta = TupleDescGetAttInMetadata(tupdesc);
-        funcctx->attinmeta = attinmeta;
-
-        relname = PG_GETARG_TEXT_P(0);
-        relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
-        rel = heap_openrv(relrv, AccessShareLock);
-
-        /* check permissions: must have SELECT on table */
-        aclresult = pg_class_aclcheck(RelationGetRelid(rel), GetUserId(),
-                                      ACL_SELECT);
-        if (aclresult != ACLCHECK_OK)
-            aclcheck_error(aclresult, ACL_KIND_CLASS,
-                           RelationGetRelationName(rel));
-
-        scan = heap_beginscan(rel, SnapshotNow, 0, NULL);
-        mydata = palloc(sizeof(*mydata));
-        mydata->rel = rel;
-        mydata->scan = scan;
-        mydata->ncolumns = tupdesc->natts;
-        funcctx->user_fctx = mydata;
-
-        MemoryContextSwitchTo(oldcontext);
-    }
-
-    funcctx = SRF_PERCALL_SETUP();
-    attinmeta = funcctx->attinmeta;
-    mydata = (MyData *) funcctx->user_fctx;
-    scan = mydata->scan;
-
-    /* scan the relation */
-    while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
-    {
-        /* must hold a buffer lock to call HeapTupleSatisfiesUpdate */
-        LockBuffer(scan->rs_cbuf, BUFFER_LOCK_SHARE);
-
-        if (HeapTupleSatisfiesUpdate(tuple->t_data,
-                                     GetCurrentCommandId(false),
-                                     scan->rs_cbuf) == HeapTupleBeingUpdated)
-        {
-
-            char      **values;
-            int            i;
-
-            values = (char **) palloc(mydata->ncolumns * sizeof(char *));
-
-            i = 0;
-            values[i++] = (char *) DirectFunctionCall1(tidout, PointerGetDatum(&tuple->t_self));
-
-            if (tuple->t_data->t_infomask & HEAP_XMAX_SHARED_LOCK)
-                values[i++] = pstrdup("Shared");
-            else
-                values[i++] = pstrdup("Exclusive");
-            values[i] = palloc(NCHARS * sizeof(char));
-            snprintf(values[i++], NCHARS, "%d", HeapTupleHeaderGetXmax(tuple->t_data));
-            if (tuple->t_data->t_infomask & HEAP_XMAX_IS_MULTI)
-            {
-                TransactionId *xids;
-                int            nxids;
-                int            j;
-                int            isValidXid = 0;        /* any valid xid ever exists? */
-
-                values[i++] = pstrdup("true");
-                nxids = GetMultiXactIdMembers(HeapTupleHeaderGetXmax(tuple->t_data), &xids);
-                if (nxids == -1)
-                {
-                    elog(ERROR, "GetMultiXactIdMembers returns error");
-                }
-
-                values[i] = palloc(NCHARS * nxids);
-                values[i + 1] = palloc(NCHARS * nxids);
-                strcpy(values[i], "{");
-                strcpy(values[i + 1], "{");
-
-                for (j = 0; j < nxids; j++)
-                {
-                    char        buf[NCHARS];
-
-                    if (TransactionIdIsInProgress(xids[j]))
-                    {
-                        if (isValidXid)
-                        {
-                            strcat(values[i], ",");
-                            strcat(values[i + 1], ",");
-                        }
-                        snprintf(buf, NCHARS, "%d", xids[j]);
-                        strcat(values[i], buf);
-                        snprintf(buf, NCHARS, "%d", BackendXidGetPid(xids[j]));
-                        strcat(values[i + 1], buf);
-
-                        isValidXid = 1;
-                    }
-                }
-
-                strcat(values[i], "}");
-                strcat(values[i + 1], "}");
-                i++;
-            }
-            else
-            {
-                values[i++] = pstrdup("false");
-                values[i] = palloc(NCHARS * sizeof(char));
-                snprintf(values[i++], NCHARS, "{%d}", HeapTupleHeaderGetXmax(tuple->t_data));
-
-                values[i] = palloc(NCHARS * sizeof(char));
-                snprintf(values[i++], NCHARS, "{%d}", BackendXidGetPid(HeapTupleHeaderGetXmax(tuple->t_data)));
-            }
-
-            LockBuffer(scan->rs_cbuf, BUFFER_LOCK_UNLOCK);
-
-            /* build a tuple */
-            tuple = BuildTupleFromCStrings(attinmeta, values);
-
-            /* make the tuple into a datum */
-            result = HeapTupleGetDatum(tuple);
-
-            /* Clean up */
-            for (i = 0; i < mydata->ncolumns; i++)
-                pfree(values[i]);
-            pfree(values);
-
-            SRF_RETURN_NEXT(funcctx, result);
-        }
-        else
-        {
-            LockBuffer(scan->rs_cbuf, BUFFER_LOCK_UNLOCK);
-        }
-    }
-
-    heap_endscan(scan);
-    heap_close(mydata->rel, AccessShareLock);
-
-    SRF_RETURN_DONE(funcctx);
-}
diff --git a/contrib/pgrowlocks/pgrowlocks.control b/contrib/pgrowlocks/pgrowlocks.control
deleted file mode 100644
index a6ba164..0000000
--- a/contrib/pgrowlocks/pgrowlocks.control
+++ /dev/null
@@ -1,5 +0,0 @@
-# pgrowlocks extension
-comment = 'show row-level locking information'
-default_version = '1.0'
-module_pathname = '$libdir/pgrowlocks'
-relocatable = true
diff --git a/contrib/pgstattuple/Makefile b/contrib/pgstattuple/Makefile
deleted file mode 100644
index 13b8709..0000000
--- a/contrib/pgstattuple/Makefile
+++ /dev/null
@@ -1,18 +0,0 @@
-# contrib/pgstattuple/Makefile
-
-MODULE_big    = pgstattuple
-OBJS        = pgstattuple.o pgstatindex.o
-
-EXTENSION = pgstattuple
-DATA = pgstattuple--1.0.sql pgstattuple--unpackaged--1.0.sql
-
-ifdef USE_PGXS
-PG_CONFIG = pg_config
-PGXS := $(shell $(PG_CONFIG) --pgxs)
-include $(PGXS)
-else
-subdir = contrib/pgstattuple
-top_builddir = ../..
-include $(top_builddir)/src/Makefile.global
-include $(top_srcdir)/contrib/contrib-global.mk
-endif
diff --git a/contrib/pgstattuple/pgstatindex.c b/contrib/pgstattuple/pgstatindex.c
deleted file mode 100644
index fd2cc92..0000000
--- a/contrib/pgstattuple/pgstatindex.c
+++ /dev/null
@@ -1,282 +0,0 @@
-/*
- * contrib/pgstattuple/pgstatindex.c
- *
- *
- * pgstatindex
- *
- * Copyright (c) 2006 Satoshi Nagayasu <nagayasus@nttdata.co.jp>
- *
- * Permission to use, copy, modify, and distribute this software and
- * its documentation for any purpose, without fee, and without a
- * written agreement is hereby granted, provided that the above
- * copyright notice and this paragraph and the following two
- * paragraphs appear in all copies.
- *
- * IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT,
- * INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
- * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
- * DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED
- * OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- * THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE.  THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS
- * IS" BASIS, AND THE AUTHOR HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,
- * SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
- */
-
-#include "postgres.h"
-
-#include "access/heapam.h"
-#include "access/nbtree.h"
-#include "catalog/namespace.h"
-#include "funcapi.h"
-#include "miscadmin.h"
-#include "storage/bufmgr.h"
-#include "utils/builtins.h"
-
-
-extern Datum pgstatindex(PG_FUNCTION_ARGS);
-extern Datum pg_relpages(PG_FUNCTION_ARGS);
-
-PG_FUNCTION_INFO_V1(pgstatindex);
-PG_FUNCTION_INFO_V1(pg_relpages);
-
-#define IS_INDEX(r) ((r)->rd_rel->relkind == RELKIND_INDEX)
-#define IS_BTREE(r) ((r)->rd_rel->relam == BTREE_AM_OID)
-
-#define CHECK_PAGE_OFFSET_RANGE(pg, offnum) { \
-        if ( !(FirstOffsetNumber <= (offnum) && \
-                        (offnum) <= PageGetMaxOffsetNumber(pg)) ) \
-             elog(ERROR, "page offset number out of range"); }
-
-/* note: BlockNumber is unsigned, hence can't be negative */
-#define CHECK_RELATION_BLOCK_RANGE(rel, blkno) { \
-        if ( RelationGetNumberOfBlocks(rel) <= (BlockNumber) (blkno) ) \
-             elog(ERROR, "block number out of range"); }
-
-/* ------------------------------------------------
- * A structure for a whole btree index statistics
- * used by pgstatindex().
- * ------------------------------------------------
- */
-typedef struct BTIndexStat
-{
-    uint32        version;
-    uint32        level;
-    BlockNumber root_blkno;
-
-    uint64        root_pages;
-    uint64        internal_pages;
-    uint64        leaf_pages;
-    uint64        empty_pages;
-    uint64        deleted_pages;
-
-    uint64        max_avail;
-    uint64        free_space;
-
-    uint64        fragments;
-} BTIndexStat;
-
-/* ------------------------------------------------------
- * pgstatindex()
- *
- * Usage: SELECT * FROM pgstatindex('t1_pkey');
- * ------------------------------------------------------
- */
-Datum
-pgstatindex(PG_FUNCTION_ARGS)
-{
-    text       *relname = PG_GETARG_TEXT_P(0);
-    Relation    rel;
-    RangeVar   *relrv;
-    Datum        result;
-    BlockNumber nblocks;
-    BlockNumber blkno;
-    BTIndexStat indexStat;
-
-    if (!superuser())
-        ereport(ERROR,
-                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-                 (errmsg("must be superuser to use pgstattuple functions"))));
-
-    relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
-    rel = relation_openrv(relrv, AccessShareLock);
-
-    if (!IS_INDEX(rel) || !IS_BTREE(rel))
-        elog(ERROR, "relation \"%s\" is not a btree index",
-             RelationGetRelationName(rel));
-
-    /*
-     * Reject attempts to read non-local temporary relations; we would be
-     * likely to get wrong data since we have no visibility into the owning
-     * session's local buffers.
-     */
-    if (RELATION_IS_OTHER_TEMP(rel))
-        ereport(ERROR,
-                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
-                 errmsg("cannot access temporary tables of other sessions")));
-
-    /*
-     * Read metapage
-     */
-    {
-        Buffer        buffer = ReadBuffer(rel, 0);
-        Page        page = BufferGetPage(buffer);
-        BTMetaPageData *metad = BTPageGetMeta(page);
-
-        indexStat.version = metad->btm_version;
-        indexStat.level = metad->btm_level;
-        indexStat.root_blkno = metad->btm_root;
-
-        ReleaseBuffer(buffer);
-    }
-
-    /* -- init counters -- */
-    indexStat.root_pages = 0;
-    indexStat.internal_pages = 0;
-    indexStat.leaf_pages = 0;
-    indexStat.empty_pages = 0;
-    indexStat.deleted_pages = 0;
-
-    indexStat.max_avail = 0;
-    indexStat.free_space = 0;
-
-    indexStat.fragments = 0;
-
-    /*
-     * Scan all blocks except the metapage
-     */
-    nblocks = RelationGetNumberOfBlocks(rel);
-
-    for (blkno = 1; blkno < nblocks; blkno++)
-    {
-        Buffer        buffer;
-        Page        page;
-        BTPageOpaque opaque;
-
-        /* Read and lock buffer */
-        buffer = ReadBuffer(rel, blkno);
-        LockBuffer(buffer, BUFFER_LOCK_SHARE);
-
-        page = BufferGetPage(buffer);
-        opaque = (BTPageOpaque) PageGetSpecialPointer(page);
-
-        /* Determine page type, and update totals */
-
-        if (P_ISLEAF(opaque))
-        {
-            int            max_avail;
-
-            max_avail = BLCKSZ - (BLCKSZ - ((PageHeader) page)->pd_special + SizeOfPageHeaderData);
-            indexStat.max_avail += max_avail;
-            indexStat.free_space += PageGetFreeSpace(page);
-
-            indexStat.leaf_pages++;
-
-            /*
-             * If the next leaf is on an earlier block, it means a
-             * fragmentation.
-             */
-            if (opaque->btpo_next != P_NONE && opaque->btpo_next < blkno)
-                indexStat.fragments++;
-        }
-        else if (P_ISDELETED(opaque))
-            indexStat.deleted_pages++;
-        else if (P_IGNORE(opaque))
-            indexStat.empty_pages++;
-        else if (P_ISROOT(opaque))
-            indexStat.root_pages++;
-        else
-            indexStat.internal_pages++;
-
-        /* Unlock and release buffer */
-        LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-        ReleaseBuffer(buffer);
-    }
-
-    relation_close(rel, AccessShareLock);
-
-    /*----------------------------
-     * Build a result tuple
-     *----------------------------
-     */
-    {
-        TupleDesc    tupleDesc;
-        int            j;
-        char       *values[10];
-        HeapTuple    tuple;
-
-        /* Build a tuple descriptor for our result type */
-        if (get_call_result_type(fcinfo, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE)
-            elog(ERROR, "return type must be a row type");
-
-        j = 0;
-        values[j] = palloc(32);
-        snprintf(values[j++], 32, "%d", indexStat.version);
-        values[j] = palloc(32);
-        snprintf(values[j++], 32, "%d", indexStat.level);
-        values[j] = palloc(32);
-        snprintf(values[j++], 32, INT64_FORMAT,
-                 (indexStat.root_pages +
-                  indexStat.leaf_pages +
-                  indexStat.internal_pages +
-                  indexStat.deleted_pages +
-                  indexStat.empty_pages) * BLCKSZ);
-        values[j] = palloc(32);
-        snprintf(values[j++], 32, "%u", indexStat.root_blkno);
-        values[j] = palloc(32);
-        snprintf(values[j++], 32, INT64_FORMAT, indexStat.internal_pages);
-        values[j] = palloc(32);
-        snprintf(values[j++], 32, INT64_FORMAT, indexStat.leaf_pages);
-        values[j] = palloc(32);
-        snprintf(values[j++], 32, INT64_FORMAT, indexStat.empty_pages);
-        values[j] = palloc(32);
-        snprintf(values[j++], 32, INT64_FORMAT, indexStat.deleted_pages);
-        values[j] = palloc(32);
-        snprintf(values[j++], 32, "%.2f", 100.0 - (double) indexStat.free_space / (double) indexStat.max_avail *
100.0);
-        values[j] = palloc(32);
-        snprintf(values[j++], 32, "%.2f", (double) indexStat.fragments / (double) indexStat.leaf_pages * 100.0);
-
-        tuple = BuildTupleFromCStrings(TupleDescGetAttInMetadata(tupleDesc),
-                                       values);
-
-        result = HeapTupleGetDatum(tuple);
-    }
-
-    PG_RETURN_DATUM(result);
-}
-
-/* --------------------------------------------------------
- * pg_relpages()
- *
- * Get the number of pages of the table/index.
- *
- * Usage: SELECT pg_relpages('t1');
- *          SELECT pg_relpages('t1_pkey');
- * --------------------------------------------------------
- */
-Datum
-pg_relpages(PG_FUNCTION_ARGS)
-{
-    text       *relname = PG_GETARG_TEXT_P(0);
-    int64        relpages;
-    Relation    rel;
-    RangeVar   *relrv;
-
-    if (!superuser())
-        ereport(ERROR,
-                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-                 (errmsg("must be superuser to use pgstattuple functions"))));
-
-    relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
-    rel = relation_openrv(relrv, AccessShareLock);
-
-    /* note: this will work OK on non-local temp tables */
-
-    relpages = RelationGetNumberOfBlocks(rel);
-
-    relation_close(rel, AccessShareLock);
-
-    PG_RETURN_INT64(relpages);
-}
diff --git a/contrib/pgstattuple/pgstattuple--1.0.sql b/contrib/pgstattuple/pgstattuple--1.0.sql
deleted file mode 100644
index 83445ec..0000000
--- a/contrib/pgstattuple/pgstattuple--1.0.sql
+++ /dev/null
@@ -1,46 +0,0 @@
-/* contrib/pgstattuple/pgstattuple--1.0.sql */
-
-CREATE FUNCTION pgstattuple(IN relname text,
-    OUT table_len BIGINT,        -- physical table length in bytes
-    OUT tuple_count BIGINT,        -- number of live tuples
-    OUT tuple_len BIGINT,        -- total tuples length in bytes
-    OUT tuple_percent FLOAT8,        -- live tuples in %
-    OUT dead_tuple_count BIGINT,    -- number of dead tuples
-    OUT dead_tuple_len BIGINT,        -- total dead tuples length in bytes
-    OUT dead_tuple_percent FLOAT8,    -- dead tuples in %
-    OUT free_space BIGINT,        -- free space in bytes
-    OUT free_percent FLOAT8)        -- free space in %
-AS 'MODULE_PATHNAME', 'pgstattuple'
-LANGUAGE C STRICT;
-
-CREATE FUNCTION pgstattuple(IN reloid oid,
-    OUT table_len BIGINT,        -- physical table length in bytes
-    OUT tuple_count BIGINT,        -- number of live tuples
-    OUT tuple_len BIGINT,        -- total tuples length in bytes
-    OUT tuple_percent FLOAT8,        -- live tuples in %
-    OUT dead_tuple_count BIGINT,    -- number of dead tuples
-    OUT dead_tuple_len BIGINT,        -- total dead tuples length in bytes
-    OUT dead_tuple_percent FLOAT8,    -- dead tuples in %
-    OUT free_space BIGINT,        -- free space in bytes
-    OUT free_percent FLOAT8)        -- free space in %
-AS 'MODULE_PATHNAME', 'pgstattuplebyid'
-LANGUAGE C STRICT;
-
-CREATE FUNCTION pgstatindex(IN relname text,
-    OUT version INT,
-    OUT tree_level INT,
-    OUT index_size BIGINT,
-    OUT root_block_no BIGINT,
-    OUT internal_pages BIGINT,
-    OUT leaf_pages BIGINT,
-    OUT empty_pages BIGINT,
-    OUT deleted_pages BIGINT,
-    OUT avg_leaf_density FLOAT8,
-    OUT leaf_fragmentation FLOAT8)
-AS 'MODULE_PATHNAME', 'pgstatindex'
-LANGUAGE C STRICT;
-
-CREATE FUNCTION pg_relpages(IN relname text)
-RETURNS BIGINT
-AS 'MODULE_PATHNAME', 'pg_relpages'
-LANGUAGE C STRICT;
diff --git a/contrib/pgstattuple/pgstattuple--unpackaged--1.0.sql b/contrib/pgstattuple/pgstattuple--unpackaged--1.0.sql
deleted file mode 100644
index 3cfb8db..0000000
--- a/contrib/pgstattuple/pgstattuple--unpackaged--1.0.sql
+++ /dev/null
@@ -1,6 +0,0 @@
-/* contrib/pgstattuple/pgstattuple--unpackaged--1.0.sql */
-
-ALTER EXTENSION pgstattuple ADD function pgstattuple(text);
-ALTER EXTENSION pgstattuple ADD function pgstattuple(oid);
-ALTER EXTENSION pgstattuple ADD function pgstatindex(text);
-ALTER EXTENSION pgstattuple ADD function pg_relpages(text);
diff --git a/contrib/pgstattuple/pgstattuple.c b/contrib/pgstattuple/pgstattuple.c
deleted file mode 100644
index e5ddd87..0000000
--- a/contrib/pgstattuple/pgstattuple.c
+++ /dev/null
@@ -1,518 +0,0 @@
-/*
- * contrib/pgstattuple/pgstattuple.c
- *
- * Copyright (c) 2001,2002    Tatsuo Ishii
- *
- * Permission to use, copy, modify, and distribute this software and
- * its documentation for any purpose, without fee, and without a
- * written agreement is hereby granted, provided that the above
- * copyright notice and this paragraph and the following two
- * paragraphs appear in all copies.
- *
- * IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT,
- * INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
- * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
- * DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED
- * OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- * THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE.  THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS
- * IS" BASIS, AND THE AUTHOR HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,
- * SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
- */
-
-#include "postgres.h"
-
-#include "access/gist_private.h"
-#include "access/hash.h"
-#include "access/nbtree.h"
-#include "access/relscan.h"
-#include "catalog/namespace.h"
-#include "funcapi.h"
-#include "miscadmin.h"
-#include "storage/bufmgr.h"
-#include "storage/lmgr.h"
-#include "utils/builtins.h"
-#include "utils/tqual.h"
-
-
-PG_MODULE_MAGIC;
-
-PG_FUNCTION_INFO_V1(pgstattuple);
-PG_FUNCTION_INFO_V1(pgstattuplebyid);
-
-extern Datum pgstattuple(PG_FUNCTION_ARGS);
-extern Datum pgstattuplebyid(PG_FUNCTION_ARGS);
-
-/*
- * struct pgstattuple_type
- *
- * tuple_percent, dead_tuple_percent and free_percent are computable,
- * so not defined here.
- */
-typedef struct pgstattuple_type
-{
-    uint64        table_len;
-    uint64        tuple_count;
-    uint64        tuple_len;
-    uint64        dead_tuple_count;
-    uint64        dead_tuple_len;
-    uint64        free_space;        /* free/reusable space in bytes */
-} pgstattuple_type;
-
-typedef void (*pgstat_page) (pgstattuple_type *, Relation, BlockNumber);
-
-static Datum build_pgstattuple_type(pgstattuple_type *stat,
-                       FunctionCallInfo fcinfo);
-static Datum pgstat_relation(Relation rel, FunctionCallInfo fcinfo);
-static Datum pgstat_heap(Relation rel, FunctionCallInfo fcinfo);
-static void pgstat_btree_page(pgstattuple_type *stat,
-                  Relation rel, BlockNumber blkno);
-static void pgstat_hash_page(pgstattuple_type *stat,
-                 Relation rel, BlockNumber blkno);
-static void pgstat_gist_page(pgstattuple_type *stat,
-                 Relation rel, BlockNumber blkno);
-static Datum pgstat_index(Relation rel, BlockNumber start,
-             pgstat_page pagefn, FunctionCallInfo fcinfo);
-static void pgstat_index_page(pgstattuple_type *stat, Page page,
-                  OffsetNumber minoff, OffsetNumber maxoff);
-
-/*
- * build_pgstattuple_type -- build a pgstattuple_type tuple
- */
-static Datum
-build_pgstattuple_type(pgstattuple_type *stat, FunctionCallInfo fcinfo)
-{
-#define NCOLUMNS    9
-#define NCHARS        32
-
-    HeapTuple    tuple;
-    char       *values[NCOLUMNS];
-    char        values_buf[NCOLUMNS][NCHARS];
-    int            i;
-    double        tuple_percent;
-    double        dead_tuple_percent;
-    double        free_percent;    /* free/reusable space in % */
-    TupleDesc    tupdesc;
-    AttInMetadata *attinmeta;
-
-    /* Build a tuple descriptor for our result type */
-    if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
-        elog(ERROR, "return type must be a row type");
-
-    /*
-     * Generate attribute metadata needed later to produce tuples from raw C
-     * strings
-     */
-    attinmeta = TupleDescGetAttInMetadata(tupdesc);
-
-    if (stat->table_len == 0)
-    {
-        tuple_percent = 0.0;
-        dead_tuple_percent = 0.0;
-        free_percent = 0.0;
-    }
-    else
-    {
-        tuple_percent = 100.0 * stat->tuple_len / stat->table_len;
-        dead_tuple_percent = 100.0 * stat->dead_tuple_len / stat->table_len;
-        free_percent = 100.0 * stat->free_space / stat->table_len;
-    }
-
-    /*
-     * Prepare a values array for constructing the tuple. This should be an
-     * array of C strings which will be processed later by the appropriate
-     * "in" functions.
-     */
-    for (i = 0; i < NCOLUMNS; i++)
-        values[i] = values_buf[i];
-    i = 0;
-    snprintf(values[i++], NCHARS, INT64_FORMAT, stat->table_len);
-    snprintf(values[i++], NCHARS, INT64_FORMAT, stat->tuple_count);
-    snprintf(values[i++], NCHARS, INT64_FORMAT, stat->tuple_len);
-    snprintf(values[i++], NCHARS, "%.2f", tuple_percent);
-    snprintf(values[i++], NCHARS, INT64_FORMAT, stat->dead_tuple_count);
-    snprintf(values[i++], NCHARS, INT64_FORMAT, stat->dead_tuple_len);
-    snprintf(values[i++], NCHARS, "%.2f", dead_tuple_percent);
-    snprintf(values[i++], NCHARS, INT64_FORMAT, stat->free_space);
-    snprintf(values[i++], NCHARS, "%.2f", free_percent);
-
-    /* build a tuple */
-    tuple = BuildTupleFromCStrings(attinmeta, values);
-
-    /* make the tuple into a datum */
-    return HeapTupleGetDatum(tuple);
-}
-
-/* ----------
- * pgstattuple:
- * returns live/dead tuples info
- *
- * C FUNCTION definition
- * pgstattuple(text) returns pgstattuple_type
- * see pgstattuple.sql for pgstattuple_type
- * ----------
- */
-
-Datum
-pgstattuple(PG_FUNCTION_ARGS)
-{
-    text       *relname = PG_GETARG_TEXT_P(0);
-    RangeVar   *relrv;
-    Relation    rel;
-
-    if (!superuser())
-        ereport(ERROR,
-                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-                 (errmsg("must be superuser to use pgstattuple functions"))));
-
-    /* open relation */
-    relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
-    rel = relation_openrv(relrv, AccessShareLock);
-
-    PG_RETURN_DATUM(pgstat_relation(rel, fcinfo));
-}
-
-Datum
-pgstattuplebyid(PG_FUNCTION_ARGS)
-{
-    Oid            relid = PG_GETARG_OID(0);
-    Relation    rel;
-
-    if (!superuser())
-        ereport(ERROR,
-                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-                 (errmsg("must be superuser to use pgstattuple functions"))));
-
-    /* open relation */
-    rel = relation_open(relid, AccessShareLock);
-
-    PG_RETURN_DATUM(pgstat_relation(rel, fcinfo));
-}
-
-/*
- * pgstat_relation
- */
-static Datum
-pgstat_relation(Relation rel, FunctionCallInfo fcinfo)
-{
-    const char *err;
-
-    /*
-     * Reject attempts to read non-local temporary relations; we would be
-     * likely to get wrong data since we have no visibility into the owning
-     * session's local buffers.
-     */
-    if (RELATION_IS_OTHER_TEMP(rel))
-        ereport(ERROR,
-                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
-                 errmsg("cannot access temporary tables of other sessions")));
-
-    switch (rel->rd_rel->relkind)
-    {
-        case RELKIND_RELATION:
-        case RELKIND_TOASTVALUE:
-        case RELKIND_UNCATALOGED:
-        case RELKIND_SEQUENCE:
-            return pgstat_heap(rel, fcinfo);
-        case RELKIND_INDEX:
-            switch (rel->rd_rel->relam)
-            {
-                case BTREE_AM_OID:
-                    return pgstat_index(rel, BTREE_METAPAGE + 1,
-                                        pgstat_btree_page, fcinfo);
-                case HASH_AM_OID:
-                    return pgstat_index(rel, HASH_METAPAGE + 1,
-                                        pgstat_hash_page, fcinfo);
-                case GIST_AM_OID:
-                    return pgstat_index(rel, GIST_ROOT_BLKNO + 1,
-                                        pgstat_gist_page, fcinfo);
-                case GIN_AM_OID:
-                    err = "gin index";
-                    break;
-                default:
-                    err = "unknown index";
-                    break;
-            }
-            break;
-        case RELKIND_VIEW:
-            err = "view";
-            break;
-        case RELKIND_COMPOSITE_TYPE:
-            err = "composite type";
-            break;
-        case RELKIND_FOREIGN_TABLE:
-            err = "foreign table";
-            break;
-        default:
-            err = "unknown";
-            break;
-    }
-
-    ereport(ERROR,
-            (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
-             errmsg("\"%s\" (%s) is not supported",
-                    RelationGetRelationName(rel), err)));
-    return 0;                    /* should not happen */
-}
-
-/*
- * pgstat_heap -- returns live/dead tuples info in a heap
- */
-static Datum
-pgstat_heap(Relation rel, FunctionCallInfo fcinfo)
-{
-    HeapScanDesc scan;
-    HeapTuple    tuple;
-    BlockNumber nblocks;
-    BlockNumber block = 0;        /* next block to count free space in */
-    BlockNumber tupblock;
-    Buffer        buffer;
-    pgstattuple_type stat = {0};
-
-    /* Disable syncscan because we assume we scan from block zero upwards */
-    scan = heap_beginscan_strat(rel, SnapshotAny, 0, NULL, true, false);
-
-    nblocks = scan->rs_nblocks; /* # blocks to be scanned */
-
-    /* scan the relation */
-    while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
-    {
-        CHECK_FOR_INTERRUPTS();
-
-        /* must hold a buffer lock to call HeapTupleSatisfiesVisibility */
-        LockBuffer(scan->rs_cbuf, BUFFER_LOCK_SHARE);
-
-        if (HeapTupleSatisfiesVisibility(tuple, SnapshotNow, scan->rs_cbuf))
-        {
-            stat.tuple_len += tuple->t_len;
-            stat.tuple_count++;
-        }
-        else
-        {
-            stat.dead_tuple_len += tuple->t_len;
-            stat.dead_tuple_count++;
-        }
-
-        LockBuffer(scan->rs_cbuf, BUFFER_LOCK_UNLOCK);
-
-        /*
-         * To avoid physically reading the table twice, try to do the
-         * free-space scan in parallel with the heap scan.    However,
-         * heap_getnext may find no tuples on a given page, so we cannot
-         * simply examine the pages returned by the heap scan.
-         */
-        tupblock = BlockIdGetBlockNumber(&tuple->t_self.ip_blkid);
-
-        while (block <= tupblock)
-        {
-            CHECK_FOR_INTERRUPTS();
-
-            buffer = ReadBuffer(rel, block);
-            LockBuffer(buffer, BUFFER_LOCK_SHARE);
-            stat.free_space += PageGetHeapFreeSpace((Page) BufferGetPage(buffer));
-            UnlockReleaseBuffer(buffer);
-            block++;
-        }
-    }
-    heap_endscan(scan);
-
-    while (block < nblocks)
-    {
-        CHECK_FOR_INTERRUPTS();
-
-        buffer = ReadBuffer(rel, block);
-        LockBuffer(buffer, BUFFER_LOCK_SHARE);
-        stat.free_space += PageGetHeapFreeSpace((Page) BufferGetPage(buffer));
-        UnlockReleaseBuffer(buffer);
-        block++;
-    }
-
-    relation_close(rel, AccessShareLock);
-
-    stat.table_len = (uint64) nblocks *BLCKSZ;
-
-    return build_pgstattuple_type(&stat, fcinfo);
-}
-
-/*
- * pgstat_btree_page -- check tuples in a btree page
- */
-static void
-pgstat_btree_page(pgstattuple_type *stat, Relation rel, BlockNumber blkno)
-{
-    Buffer        buf;
-    Page        page;
-
-    buf = ReadBuffer(rel, blkno);
-    LockBuffer(buf, BT_READ);
-    page = BufferGetPage(buf);
-
-    /* Page is valid, see what to do with it */
-    if (PageIsNew(page))
-    {
-        /* fully empty page */
-        stat->free_space += BLCKSZ;
-    }
-    else
-    {
-        BTPageOpaque opaque;
-
-        opaque = (BTPageOpaque) PageGetSpecialPointer(page);
-        if (opaque->btpo_flags & (BTP_DELETED | BTP_HALF_DEAD))
-        {
-            /* recyclable page */
-            stat->free_space += BLCKSZ;
-        }
-        else if (P_ISLEAF(opaque))
-        {
-            pgstat_index_page(stat, page, P_FIRSTDATAKEY(opaque),
-                              PageGetMaxOffsetNumber(page));
-        }
-        else
-        {
-            /* root or node */
-        }
-    }
-
-    _bt_relbuf(rel, buf);
-}
-
-/*
- * pgstat_hash_page -- check tuples in a hash page
- */
-static void
-pgstat_hash_page(pgstattuple_type *stat, Relation rel, BlockNumber blkno)
-{
-    Buffer        buf;
-    Page        page;
-
-    _hash_getlock(rel, blkno, HASH_SHARE);
-    buf = _hash_getbuf(rel, blkno, HASH_READ, 0);
-    page = BufferGetPage(buf);
-
-    if (PageGetSpecialSize(page) == MAXALIGN(sizeof(HashPageOpaqueData)))
-    {
-        HashPageOpaque opaque;
-
-        opaque = (HashPageOpaque) PageGetSpecialPointer(page);
-        switch (opaque->hasho_flag)
-        {
-            case LH_UNUSED_PAGE:
-                stat->free_space += BLCKSZ;
-                break;
-            case LH_BUCKET_PAGE:
-            case LH_OVERFLOW_PAGE:
-                pgstat_index_page(stat, page, FirstOffsetNumber,
-                                  PageGetMaxOffsetNumber(page));
-                break;
-            case LH_BITMAP_PAGE:
-            case LH_META_PAGE:
-            default:
-                break;
-        }
-    }
-    else
-    {
-        /* maybe corrupted */
-    }
-
-    _hash_relbuf(rel, buf);
-    _hash_droplock(rel, blkno, HASH_SHARE);
-}
-
-/*
- * pgstat_gist_page -- check tuples in a gist page
- */
-static void
-pgstat_gist_page(pgstattuple_type *stat, Relation rel, BlockNumber blkno)
-{
-    Buffer        buf;
-    Page        page;
-
-    buf = ReadBuffer(rel, blkno);
-    LockBuffer(buf, GIST_SHARE);
-    gistcheckpage(rel, buf);
-    page = BufferGetPage(buf);
-
-    if (GistPageIsLeaf(page))
-    {
-        pgstat_index_page(stat, page, FirstOffsetNumber,
-                          PageGetMaxOffsetNumber(page));
-    }
-    else
-    {
-        /* root or node */
-    }
-
-    UnlockReleaseBuffer(buf);
-}
-
-/*
- * pgstat_index -- returns live/dead tuples info in a generic index
- */
-static Datum
-pgstat_index(Relation rel, BlockNumber start, pgstat_page pagefn,
-             FunctionCallInfo fcinfo)
-{
-    BlockNumber nblocks;
-    BlockNumber blkno;
-    pgstattuple_type stat = {0};
-
-    blkno = start;
-    for (;;)
-    {
-        /* Get the current relation length */
-        LockRelationForExtension(rel, ExclusiveLock);
-        nblocks = RelationGetNumberOfBlocks(rel);
-        UnlockRelationForExtension(rel, ExclusiveLock);
-
-        /* Quit if we've scanned the whole relation */
-        if (blkno >= nblocks)
-        {
-            stat.table_len = (uint64) nblocks *BLCKSZ;
-
-            break;
-        }
-
-        for (; blkno < nblocks; blkno++)
-        {
-            CHECK_FOR_INTERRUPTS();
-
-            pagefn(&stat, rel, blkno);
-        }
-    }
-
-    relation_close(rel, AccessShareLock);
-
-    return build_pgstattuple_type(&stat, fcinfo);
-}
-
-/*
- * pgstat_index_page -- for generic index page
- */
-static void
-pgstat_index_page(pgstattuple_type *stat, Page page,
-                  OffsetNumber minoff, OffsetNumber maxoff)
-{
-    OffsetNumber i;
-
-    stat->free_space += PageGetFreeSpace(page);
-
-    for (i = minoff; i <= maxoff; i = OffsetNumberNext(i))
-    {
-        ItemId        itemid = PageGetItemId(page, i);
-
-        if (ItemIdIsDead(itemid))
-        {
-            stat->dead_tuple_count++;
-            stat->dead_tuple_len += ItemIdGetLength(itemid);
-        }
-        else
-        {
-            stat->tuple_count++;
-            stat->tuple_len += ItemIdGetLength(itemid);
-        }
-    }
-}
diff --git a/contrib/pgstattuple/pgstattuple.control b/contrib/pgstattuple/pgstattuple.control
deleted file mode 100644
index 7b5129b..0000000
--- a/contrib/pgstattuple/pgstattuple.control
+++ /dev/null
@@ -1,5 +0,0 @@
-# pgstattuple extension
-comment = 'show tuple-level statistics'
-default_version = '1.0'
-module_pathname = '$libdir/pgstattuple'
-relocatable = true
diff --git a/src/Makefile b/src/Makefile
index a046034..87d6e2c 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -24,6 +24,7 @@ SUBDIRS = \
     bin \
     pl \
     makefiles \
+    extension \
     test/regress

 # There are too many interdependencies between the subdirectories, so
diff --git a/src/extension/Makefile b/src/extension/Makefile
new file mode 100644
index 0000000..282f076
--- /dev/null
+++ b/src/extension/Makefile
@@ -0,0 +1,41 @@
+# $PostgreSQL: pgsql/src/extension/Makefile $
+
+subdir = src/extension
+top_builddir = ../..
+include $(top_builddir)/src/Makefile.global
+
+SUBDIRS = \
+        auto_explain    \
+        pageinspect \
+        pg_buffercache \
+        pgrowlocks  \
+        pg_stat_statements \
+        pgstattuple
+
+ifeq ($(with_openssl),yes)
+SUBDIRS += sslinfo
+endif
+
+ifeq ($(with_ossp_uuid),yes)
+SUBDIRS += uuid-ossp
+endif
+
+ifeq ($(with_libxml),yes)
+SUBDIRS += xml2
+endif
+
+# Missing:
+#        start-scripts    \ (does not have a makefile)
+
+
+all install installdirs uninstall distprep clean distclean maintainer-clean:
+    @for dir in $(SUBDIRS); do \
+        $(MAKE) -C $$dir $@ || exit; \
+    done
+
+# We'd like check operations to run all the subtests before failing.
+check installcheck:
+    @CHECKERR=0; for dir in $(SUBDIRS); do \
+        $(MAKE) -C $$dir $@ || CHECKERR=$$?; \
+    done; \
+    exit $$CHECKERR
diff --git a/src/extension/auto_explain/Makefile b/src/extension/auto_explain/Makefile
new file mode 100644
index 0000000..023e59a
--- /dev/null
+++ b/src/extension/auto_explain/Makefile
@@ -0,0 +1,16 @@
+# src/extension/auto_explain/Makefile
+
+MODULE_big = auto_explain
+OBJS = auto_explain.o
+MODULEDIR=extension
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/extension/auto_explain
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/src/extension/extension-global.mk
+endif
diff --git a/src/extension/auto_explain/auto_explain.c b/src/extension/auto_explain/auto_explain.c
new file mode 100644
index 0000000..647f6d0
--- /dev/null
+++ b/src/extension/auto_explain/auto_explain.c
@@ -0,0 +1,304 @@
+/*-------------------------------------------------------------------------
+ *
+ * auto_explain.c
+ *
+ *
+ * Copyright (c) 2008-2011, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *      src/extension/auto_explain/auto_explain.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include "commands/explain.h"
+#include "executor/instrument.h"
+#include "utils/guc.h"
+
+PG_MODULE_MAGIC;
+
+/* GUC variables */
+static int    auto_explain_log_min_duration = -1; /* msec or -1 */
+static bool auto_explain_log_analyze = false;
+static bool auto_explain_log_verbose = false;
+static bool auto_explain_log_buffers = false;
+static int    auto_explain_log_format = EXPLAIN_FORMAT_TEXT;
+static bool auto_explain_log_nested_statements = false;
+
+static const struct config_enum_entry format_options[] = {
+    {"text", EXPLAIN_FORMAT_TEXT, false},
+    {"xml", EXPLAIN_FORMAT_XML, false},
+    {"json", EXPLAIN_FORMAT_JSON, false},
+    {"yaml", EXPLAIN_FORMAT_YAML, false},
+    {NULL, 0, false}
+};
+
+/* Current nesting depth of ExecutorRun calls */
+static int    nesting_level = 0;
+
+/* Saved hook values in case of unload */
+static ExecutorStart_hook_type prev_ExecutorStart = NULL;
+static ExecutorRun_hook_type prev_ExecutorRun = NULL;
+static ExecutorFinish_hook_type prev_ExecutorFinish = NULL;
+static ExecutorEnd_hook_type prev_ExecutorEnd = NULL;
+
+#define auto_explain_enabled() \
+    (auto_explain_log_min_duration >= 0 && \
+     (nesting_level == 0 || auto_explain_log_nested_statements))
+
+void        _PG_init(void);
+void        _PG_fini(void);
+
+static void explain_ExecutorStart(QueryDesc *queryDesc, int eflags);
+static void explain_ExecutorRun(QueryDesc *queryDesc,
+                    ScanDirection direction,
+                    long count);
+static void explain_ExecutorFinish(QueryDesc *queryDesc);
+static void explain_ExecutorEnd(QueryDesc *queryDesc);
+
+
+/*
+ * Module load callback
+ */
+void
+_PG_init(void)
+{
+    /* Define custom GUC variables. */
+    DefineCustomIntVariable("auto_explain.log_min_duration",
+         "Sets the minimum execution time above which plans will be logged.",
+                         "Zero prints all plans. -1 turns this feature off.",
+                            &auto_explain_log_min_duration,
+                            -1,
+                            -1, INT_MAX / 1000,
+                            PGC_SUSET,
+                            GUC_UNIT_MS,
+                            NULL,
+                            NULL,
+                            NULL);
+
+    DefineCustomBoolVariable("auto_explain.log_analyze",
+                             "Use EXPLAIN ANALYZE for plan logging.",
+                             NULL,
+                             &auto_explain_log_analyze,
+                             false,
+                             PGC_SUSET,
+                             0,
+                             NULL,
+                             NULL,
+                             NULL);
+
+    DefineCustomBoolVariable("auto_explain.log_verbose",
+                             "Use EXPLAIN VERBOSE for plan logging.",
+                             NULL,
+                             &auto_explain_log_verbose,
+                             false,
+                             PGC_SUSET,
+                             0,
+                             NULL,
+                             NULL,
+                             NULL);
+
+    DefineCustomBoolVariable("auto_explain.log_buffers",
+                             "Log buffers usage.",
+                             NULL,
+                             &auto_explain_log_buffers,
+                             false,
+                             PGC_SUSET,
+                             0,
+                             NULL,
+                             NULL,
+                             NULL);
+
+    DefineCustomEnumVariable("auto_explain.log_format",
+                             "EXPLAIN format to be used for plan logging.",
+                             NULL,
+                             &auto_explain_log_format,
+                             EXPLAIN_FORMAT_TEXT,
+                             format_options,
+                             PGC_SUSET,
+                             0,
+                             NULL,
+                             NULL,
+                             NULL);
+
+    DefineCustomBoolVariable("auto_explain.log_nested_statements",
+                             "Log nested statements.",
+                             NULL,
+                             &auto_explain_log_nested_statements,
+                             false,
+                             PGC_SUSET,
+                             0,
+                             NULL,
+                             NULL,
+                             NULL);
+
+    EmitWarningsOnPlaceholders("auto_explain");
+
+    /* Install hooks. */
+    prev_ExecutorStart = ExecutorStart_hook;
+    ExecutorStart_hook = explain_ExecutorStart;
+    prev_ExecutorRun = ExecutorRun_hook;
+    ExecutorRun_hook = explain_ExecutorRun;
+    prev_ExecutorFinish = ExecutorFinish_hook;
+    ExecutorFinish_hook = explain_ExecutorFinish;
+    prev_ExecutorEnd = ExecutorEnd_hook;
+    ExecutorEnd_hook = explain_ExecutorEnd;
+}
+
+/*
+ * Module unload callback
+ */
+void
+_PG_fini(void)
+{
+    /* Uninstall hooks. */
+    ExecutorStart_hook = prev_ExecutorStart;
+    ExecutorRun_hook = prev_ExecutorRun;
+    ExecutorFinish_hook = prev_ExecutorFinish;
+    ExecutorEnd_hook = prev_ExecutorEnd;
+}
+
+/*
+ * ExecutorStart hook: start up logging if needed
+ */
+static void
+explain_ExecutorStart(QueryDesc *queryDesc, int eflags)
+{
+    if (auto_explain_enabled())
+    {
+        /* Enable per-node instrumentation iff log_analyze is required. */
+        if (auto_explain_log_analyze && (eflags & EXEC_FLAG_EXPLAIN_ONLY) == 0)
+        {
+            queryDesc->instrument_options |= INSTRUMENT_TIMER;
+            if (auto_explain_log_buffers)
+                queryDesc->instrument_options |= INSTRUMENT_BUFFERS;
+        }
+    }
+
+    if (prev_ExecutorStart)
+        prev_ExecutorStart(queryDesc, eflags);
+    else
+        standard_ExecutorStart(queryDesc, eflags);
+
+    if (auto_explain_enabled())
+    {
+        /*
+         * Set up to track total elapsed time in ExecutorRun.  Make sure the
+         * space is allocated in the per-query context so it will go away at
+         * ExecutorEnd.
+         */
+        if (queryDesc->totaltime == NULL)
+        {
+            MemoryContext oldcxt;
+
+            oldcxt = MemoryContextSwitchTo(queryDesc->estate->es_query_cxt);
+            queryDesc->totaltime = InstrAlloc(1, INSTRUMENT_ALL);
+            MemoryContextSwitchTo(oldcxt);
+        }
+    }
+}
+
+/*
+ * ExecutorRun hook: all we need do is track nesting depth
+ */
+static void
+explain_ExecutorRun(QueryDesc *queryDesc, ScanDirection direction, long count)
+{
+    nesting_level++;
+    PG_TRY();
+    {
+        if (prev_ExecutorRun)
+            prev_ExecutorRun(queryDesc, direction, count);
+        else
+            standard_ExecutorRun(queryDesc, direction, count);
+        nesting_level--;
+    }
+    PG_CATCH();
+    {
+        nesting_level--;
+        PG_RE_THROW();
+    }
+    PG_END_TRY();
+}
+
+/*
+ * ExecutorFinish hook: all we need do is track nesting depth
+ */
+static void
+explain_ExecutorFinish(QueryDesc *queryDesc)
+{
+    nesting_level++;
+    PG_TRY();
+    {
+        if (prev_ExecutorFinish)
+            prev_ExecutorFinish(queryDesc);
+        else
+            standard_ExecutorFinish(queryDesc);
+        nesting_level--;
+    }
+    PG_CATCH();
+    {
+        nesting_level--;
+        PG_RE_THROW();
+    }
+    PG_END_TRY();
+}
+
+/*
+ * ExecutorEnd hook: log results if needed
+ */
+static void
+explain_ExecutorEnd(QueryDesc *queryDesc)
+{
+    if (queryDesc->totaltime && auto_explain_enabled())
+    {
+        double        msec;
+
+        /*
+         * Make sure stats accumulation is done.  (Note: it's okay if several
+         * levels of hook all do this.)
+         */
+        InstrEndLoop(queryDesc->totaltime);
+
+        /* Log plan if duration is exceeded. */
+        msec = queryDesc->totaltime->total * 1000.0;
+        if (msec >= auto_explain_log_min_duration)
+        {
+            ExplainState es;
+
+            ExplainInitState(&es);
+            es.analyze = (queryDesc->instrument_options && auto_explain_log_analyze);
+            es.verbose = auto_explain_log_verbose;
+            es.buffers = (es.analyze && auto_explain_log_buffers);
+            es.format = auto_explain_log_format;
+
+            ExplainBeginOutput(&es);
+            ExplainQueryText(&es, queryDesc);
+            ExplainPrintPlan(&es, queryDesc);
+            ExplainEndOutput(&es);
+
+            /* Remove last line break */
+            if (es.str->len > 0 && es.str->data[es.str->len - 1] == '\n')
+                es.str->data[--es.str->len] = '\0';
+
+            /*
+             * Note: we rely on the existing logging of context or
+             * debug_query_string to identify just which statement is being
+             * reported.  This isn't ideal but trying to do it here would
+             * often result in duplication.
+             */
+            ereport(LOG,
+                    (errmsg("duration: %.3f ms  plan:\n%s",
+                            msec, es.str->data),
+                     errhidestmt(true)));
+
+            pfree(es.str->data);
+        }
+    }
+
+    if (prev_ExecutorEnd)
+        prev_ExecutorEnd(queryDesc);
+    else
+        standard_ExecutorEnd(queryDesc);
+}
diff --git a/src/extension/extension-global.mk b/src/extension/extension-global.mk
new file mode 100644
index 0000000..cc7643b
--- /dev/null
+++ b/src/extension/extension-global.mk
@@ -0,0 +1,5 @@
+# $PostgreSQL: pgsql/extension/extension-global.mk,v 1.10 2005/09/27 17:43:31 tgl Exp $
+
+NO_PGXS = 1
+MODULEDIR=extension
+include $(top_srcdir)/src/makefiles/pgxs.mk
diff --git a/src/extension/pageinspect/Makefile b/src/extension/pageinspect/Makefile
new file mode 100644
index 0000000..c6940b8
--- /dev/null
+++ b/src/extension/pageinspect/Makefile
@@ -0,0 +1,19 @@
+# src/extension/pageinspect/Makefile
+
+MODULE_big    = pageinspect
+OBJS        = rawpage.o heapfuncs.o btreefuncs.o fsmfuncs.o
+MODULEDIR = extension
+
+EXTENSION = pageinspect
+DATA = pageinspect--1.0.sql pageinspect--unpackaged--1.0.sql
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/extension/pageinspect
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/src/extension/extension-global.mk
+endif
diff --git a/src/extension/pageinspect/btreefuncs.c b/src/extension/pageinspect/btreefuncs.c
new file mode 100644
index 0000000..e378560
--- /dev/null
+++ b/src/extension/pageinspect/btreefuncs.c
@@ -0,0 +1,502 @@
+/*
+ * src/extension/pageinspect/btreefuncs.c
+ *
+ *
+ * btreefuncs.c
+ *
+ * Copyright (c) 2006 Satoshi Nagayasu <nagayasus@nttdata.co.jp>
+ *
+ * Permission to use, copy, modify, and distribute this software and
+ * its documentation for any purpose, without fee, and without a
+ * written agreement is hereby granted, provided that the above
+ * copyright notice and this paragraph and the following two
+ * paragraphs appear in all copies.
+ *
+ * IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT,
+ * INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
+ * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
+ * DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE.  THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS
+ * IS" BASIS, AND THE AUTHOR HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,
+ * SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
+ */
+
+#include "postgres.h"
+
+#include "access/heapam.h"
+#include "access/nbtree.h"
+#include "catalog/namespace.h"
+#include "catalog/pg_type.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "storage/bufmgr.h"
+#include "utils/builtins.h"
+
+
+extern Datum bt_metap(PG_FUNCTION_ARGS);
+extern Datum bt_page_items(PG_FUNCTION_ARGS);
+extern Datum bt_page_stats(PG_FUNCTION_ARGS);
+
+PG_FUNCTION_INFO_V1(bt_metap);
+PG_FUNCTION_INFO_V1(bt_page_items);
+PG_FUNCTION_INFO_V1(bt_page_stats);
+
+#define IS_INDEX(r) ((r)->rd_rel->relkind == RELKIND_INDEX)
+#define IS_BTREE(r) ((r)->rd_rel->relam == BTREE_AM_OID)
+
+#define CHECK_PAGE_OFFSET_RANGE(pg, offnum) { \
+        if ( !(FirstOffsetNumber <= (offnum) && \
+                        (offnum) <= PageGetMaxOffsetNumber(pg)) ) \
+             elog(ERROR, "page offset number out of range"); }
+
+/* note: BlockNumber is unsigned, hence can't be negative */
+#define CHECK_RELATION_BLOCK_RANGE(rel, blkno) { \
+        if ( RelationGetNumberOfBlocks(rel) <= (BlockNumber) (blkno) ) \
+             elog(ERROR, "block number out of range"); }
+
+/* ------------------------------------------------
+ * structure for single btree page statistics
+ * ------------------------------------------------
+ */
+typedef struct BTPageStat
+{
+    uint32        blkno;
+    uint32        live_items;
+    uint32        dead_items;
+    uint32        page_size;
+    uint32        max_avail;
+    uint32        free_size;
+    uint32        avg_item_size;
+    char        type;
+
+    /* opaque data */
+    BlockNumber btpo_prev;
+    BlockNumber btpo_next;
+    union
+    {
+        uint32        level;
+        TransactionId xact;
+    }            btpo;
+    uint16        btpo_flags;
+    BTCycleId    btpo_cycleid;
+} BTPageStat;
+
+
+/* -------------------------------------------------
+ * GetBTPageStatistics()
+ *
+ * Collect statistics of single b-tree page
+ * -------------------------------------------------
+ */
+static void
+GetBTPageStatistics(BlockNumber blkno, Buffer buffer, BTPageStat *stat)
+{
+    Page        page = BufferGetPage(buffer);
+    PageHeader    phdr = (PageHeader) page;
+    OffsetNumber maxoff = PageGetMaxOffsetNumber(page);
+    BTPageOpaque opaque = (BTPageOpaque) PageGetSpecialPointer(page);
+    int            item_size = 0;
+    int            off;
+
+    stat->blkno = blkno;
+
+    stat->max_avail = BLCKSZ - (BLCKSZ - phdr->pd_special + SizeOfPageHeaderData);
+
+    stat->dead_items = stat->live_items = 0;
+
+    stat->page_size = PageGetPageSize(page);
+
+    /* page type (flags) */
+    if (P_ISDELETED(opaque))
+    {
+        stat->type = 'd';
+        stat->btpo.xact = opaque->btpo.xact;
+        return;
+    }
+    else if (P_IGNORE(opaque))
+        stat->type = 'e';
+    else if (P_ISLEAF(opaque))
+        stat->type = 'l';
+    else if (P_ISROOT(opaque))
+        stat->type = 'r';
+    else
+        stat->type = 'i';
+
+    /* btpage opaque data */
+    stat->btpo_prev = opaque->btpo_prev;
+    stat->btpo_next = opaque->btpo_next;
+    stat->btpo.level = opaque->btpo.level;
+    stat->btpo_flags = opaque->btpo_flags;
+    stat->btpo_cycleid = opaque->btpo_cycleid;
+
+    /* count live and dead tuples, and free space */
+    for (off = FirstOffsetNumber; off <= maxoff; off++)
+    {
+        IndexTuple    itup;
+
+        ItemId        id = PageGetItemId(page, off);
+
+        itup = (IndexTuple) PageGetItem(page, id);
+
+        item_size += IndexTupleSize(itup);
+
+        if (!ItemIdIsDead(id))
+            stat->live_items++;
+        else
+            stat->dead_items++;
+    }
+    stat->free_size = PageGetFreeSpace(page);
+
+    if ((stat->live_items + stat->dead_items) > 0)
+        stat->avg_item_size = item_size / (stat->live_items + stat->dead_items);
+    else
+        stat->avg_item_size = 0;
+}
+
+/* -----------------------------------------------
+ * bt_page()
+ *
+ * Usage: SELECT * FROM bt_page('t1_pkey', 1);
+ * -----------------------------------------------
+ */
+Datum
+bt_page_stats(PG_FUNCTION_ARGS)
+{
+    text       *relname = PG_GETARG_TEXT_P(0);
+    uint32        blkno = PG_GETARG_UINT32(1);
+    Buffer        buffer;
+    Relation    rel;
+    RangeVar   *relrv;
+    Datum        result;
+    HeapTuple    tuple;
+    TupleDesc    tupleDesc;
+    int            j;
+    char       *values[11];
+    BTPageStat    stat;
+
+    if (!superuser())
+        ereport(ERROR,
+                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+                 (errmsg("must be superuser to use pageinspect functions"))));
+
+    relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
+    rel = relation_openrv(relrv, AccessShareLock);
+
+    if (!IS_INDEX(rel) || !IS_BTREE(rel))
+        elog(ERROR, "relation \"%s\" is not a btree index",
+             RelationGetRelationName(rel));
+
+    /*
+     * Reject attempts to read non-local temporary relations; we would be
+     * likely to get wrong data since we have no visibility into the owning
+     * session's local buffers.
+     */
+    if (RELATION_IS_OTHER_TEMP(rel))
+        ereport(ERROR,
+                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+                 errmsg("cannot access temporary tables of other sessions")));
+
+    if (blkno == 0)
+        elog(ERROR, "block 0 is a meta page");
+
+    CHECK_RELATION_BLOCK_RANGE(rel, blkno);
+
+    buffer = ReadBuffer(rel, blkno);
+
+    /* keep compiler quiet */
+    stat.btpo_prev = stat.btpo_next = InvalidBlockNumber;
+    stat.btpo_flags = stat.free_size = stat.avg_item_size = 0;
+
+    GetBTPageStatistics(blkno, buffer, &stat);
+
+    /* Build a tuple descriptor for our result type */
+    if (get_call_result_type(fcinfo, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE)
+        elog(ERROR, "return type must be a row type");
+
+    j = 0;
+    values[j] = palloc(32);
+    snprintf(values[j++], 32, "%d", stat.blkno);
+    values[j] = palloc(32);
+    snprintf(values[j++], 32, "%c", stat.type);
+    values[j] = palloc(32);
+    snprintf(values[j++], 32, "%d", stat.live_items);
+    values[j] = palloc(32);
+    snprintf(values[j++], 32, "%d", stat.dead_items);
+    values[j] = palloc(32);
+    snprintf(values[j++], 32, "%d", stat.avg_item_size);
+    values[j] = palloc(32);
+    snprintf(values[j++], 32, "%d", stat.page_size);
+    values[j] = palloc(32);
+    snprintf(values[j++], 32, "%d", stat.free_size);
+    values[j] = palloc(32);
+    snprintf(values[j++], 32, "%d", stat.btpo_prev);
+    values[j] = palloc(32);
+    snprintf(values[j++], 32, "%d", stat.btpo_next);
+    values[j] = palloc(32);
+    if (stat.type == 'd')
+        snprintf(values[j++], 32, "%d", stat.btpo.xact);
+    else
+        snprintf(values[j++], 32, "%d", stat.btpo.level);
+    values[j] = palloc(32);
+    snprintf(values[j++], 32, "%d", stat.btpo_flags);
+
+    tuple = BuildTupleFromCStrings(TupleDescGetAttInMetadata(tupleDesc),
+                                   values);
+
+    result = HeapTupleGetDatum(tuple);
+
+    ReleaseBuffer(buffer);
+
+    relation_close(rel, AccessShareLock);
+
+    PG_RETURN_DATUM(result);
+}
+
+/*-------------------------------------------------------
+ * bt_page_items()
+ *
+ * Get IndexTupleData set in a btree page
+ *
+ * Usage: SELECT * FROM bt_page_items('t1_pkey', 1);
+ *-------------------------------------------------------
+ */
+
+/*
+ * cross-call data structure for SRF
+ */
+struct user_args
+{
+    Page        page;
+    OffsetNumber offset;
+};
+
+Datum
+bt_page_items(PG_FUNCTION_ARGS)
+{
+    text       *relname = PG_GETARG_TEXT_P(0);
+    uint32        blkno = PG_GETARG_UINT32(1);
+    Datum        result;
+    char       *values[6];
+    HeapTuple    tuple;
+    FuncCallContext *fctx;
+    MemoryContext mctx;
+    struct user_args *uargs;
+
+    if (!superuser())
+        ereport(ERROR,
+                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+                 (errmsg("must be superuser to use pageinspect functions"))));
+
+    if (SRF_IS_FIRSTCALL())
+    {
+        RangeVar   *relrv;
+        Relation    rel;
+        Buffer        buffer;
+        BTPageOpaque opaque;
+        TupleDesc    tupleDesc;
+
+        fctx = SRF_FIRSTCALL_INIT();
+
+        relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
+        rel = relation_openrv(relrv, AccessShareLock);
+
+        if (!IS_INDEX(rel) || !IS_BTREE(rel))
+            elog(ERROR, "relation \"%s\" is not a btree index",
+                 RelationGetRelationName(rel));
+
+        /*
+         * Reject attempts to read non-local temporary relations; we would be
+         * likely to get wrong data since we have no visibility into the
+         * owning session's local buffers.
+         */
+        if (RELATION_IS_OTHER_TEMP(rel))
+            ereport(ERROR,
+                    (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+                errmsg("cannot access temporary tables of other sessions")));
+
+        if (blkno == 0)
+            elog(ERROR, "block 0 is a meta page");
+
+        CHECK_RELATION_BLOCK_RANGE(rel, blkno);
+
+        buffer = ReadBuffer(rel, blkno);
+
+        /*
+         * We copy the page into local storage to avoid holding pin on the
+         * buffer longer than we must, and possibly failing to release it at
+         * all if the calling query doesn't fetch all rows.
+         */
+        mctx = MemoryContextSwitchTo(fctx->multi_call_memory_ctx);
+
+        uargs = palloc(sizeof(struct user_args));
+
+        uargs->page = palloc(BLCKSZ);
+        memcpy(uargs->page, BufferGetPage(buffer), BLCKSZ);
+
+        ReleaseBuffer(buffer);
+        relation_close(rel, AccessShareLock);
+
+        uargs->offset = FirstOffsetNumber;
+
+        opaque = (BTPageOpaque) PageGetSpecialPointer(uargs->page);
+
+        if (P_ISDELETED(opaque))
+            elog(NOTICE, "page is deleted");
+
+        fctx->max_calls = PageGetMaxOffsetNumber(uargs->page);
+
+        /* Build a tuple descriptor for our result type */
+        if (get_call_result_type(fcinfo, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE)
+            elog(ERROR, "return type must be a row type");
+
+        fctx->attinmeta = TupleDescGetAttInMetadata(tupleDesc);
+
+        fctx->user_fctx = uargs;
+
+        MemoryContextSwitchTo(mctx);
+    }
+
+    fctx = SRF_PERCALL_SETUP();
+    uargs = fctx->user_fctx;
+
+    if (fctx->call_cntr < fctx->max_calls)
+    {
+        ItemId        id;
+        IndexTuple    itup;
+        int            j;
+        int            off;
+        int            dlen;
+        char       *dump;
+        char       *ptr;
+
+        id = PageGetItemId(uargs->page, uargs->offset);
+
+        if (!ItemIdIsValid(id))
+            elog(ERROR, "invalid ItemId");
+
+        itup = (IndexTuple) PageGetItem(uargs->page, id);
+
+        j = 0;
+        values[j] = palloc(32);
+        snprintf(values[j++], 32, "%d", uargs->offset);
+        values[j] = palloc(32);
+        snprintf(values[j++], 32, "(%u,%u)",
+                 BlockIdGetBlockNumber(&(itup->t_tid.ip_blkid)),
+                 itup->t_tid.ip_posid);
+        values[j] = palloc(32);
+        snprintf(values[j++], 32, "%d", (int) IndexTupleSize(itup));
+        values[j] = palloc(32);
+        snprintf(values[j++], 32, "%c", IndexTupleHasNulls(itup) ? 't' : 'f');
+        values[j] = palloc(32);
+        snprintf(values[j++], 32, "%c", IndexTupleHasVarwidths(itup) ? 't' : 'f');
+
+        ptr = (char *) itup + IndexInfoFindDataOffset(itup->t_info);
+        dlen = IndexTupleSize(itup) - IndexInfoFindDataOffset(itup->t_info);
+        dump = palloc0(dlen * 3 + 1);
+        values[j] = dump;
+        for (off = 0; off < dlen; off++)
+        {
+            if (off > 0)
+                *dump++ = ' ';
+            sprintf(dump, "%02x", *(ptr + off) & 0xff);
+            dump += 2;
+        }
+
+        tuple = BuildTupleFromCStrings(fctx->attinmeta, values);
+        result = HeapTupleGetDatum(tuple);
+
+        uargs->offset = uargs->offset + 1;
+
+        SRF_RETURN_NEXT(fctx, result);
+    }
+    else
+    {
+        pfree(uargs->page);
+        pfree(uargs);
+        SRF_RETURN_DONE(fctx);
+    }
+}
+
+
+/* ------------------------------------------------
+ * bt_metap()
+ *
+ * Get a btree's meta-page information
+ *
+ * Usage: SELECT * FROM bt_metap('t1_pkey')
+ * ------------------------------------------------
+ */
+Datum
+bt_metap(PG_FUNCTION_ARGS)
+{
+    text       *relname = PG_GETARG_TEXT_P(0);
+    Datum        result;
+    Relation    rel;
+    RangeVar   *relrv;
+    BTMetaPageData *metad;
+    TupleDesc    tupleDesc;
+    int            j;
+    char       *values[6];
+    Buffer        buffer;
+    Page        page;
+    HeapTuple    tuple;
+
+    if (!superuser())
+        ereport(ERROR,
+                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+                 (errmsg("must be superuser to use pageinspect functions"))));
+
+    relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
+    rel = relation_openrv(relrv, AccessShareLock);
+
+    if (!IS_INDEX(rel) || !IS_BTREE(rel))
+        elog(ERROR, "relation \"%s\" is not a btree index",
+             RelationGetRelationName(rel));
+
+    /*
+     * Reject attempts to read non-local temporary relations; we would be
+     * likely to get wrong data since we have no visibility into the owning
+     * session's local buffers.
+     */
+    if (RELATION_IS_OTHER_TEMP(rel))
+        ereport(ERROR,
+                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+                 errmsg("cannot access temporary tables of other sessions")));
+
+    buffer = ReadBuffer(rel, 0);
+    page = BufferGetPage(buffer);
+    metad = BTPageGetMeta(page);
+
+    /* Build a tuple descriptor for our result type */
+    if (get_call_result_type(fcinfo, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE)
+        elog(ERROR, "return type must be a row type");
+
+    j = 0;
+    values[j] = palloc(32);
+    snprintf(values[j++], 32, "%d", metad->btm_magic);
+    values[j] = palloc(32);
+    snprintf(values[j++], 32, "%d", metad->btm_version);
+    values[j] = palloc(32);
+    snprintf(values[j++], 32, "%d", metad->btm_root);
+    values[j] = palloc(32);
+    snprintf(values[j++], 32, "%d", metad->btm_level);
+    values[j] = palloc(32);
+    snprintf(values[j++], 32, "%d", metad->btm_fastroot);
+    values[j] = palloc(32);
+    snprintf(values[j++], 32, "%d", metad->btm_fastlevel);
+
+    tuple = BuildTupleFromCStrings(TupleDescGetAttInMetadata(tupleDesc),
+                                   values);
+
+    result = HeapTupleGetDatum(tuple);
+
+    ReleaseBuffer(buffer);
+
+    relation_close(rel, AccessShareLock);
+
+    PG_RETURN_DATUM(result);
+}
diff --git a/src/extension/pageinspect/fsmfuncs.c b/src/extension/pageinspect/fsmfuncs.c
new file mode 100644
index 0000000..45b2b9d
--- /dev/null
+++ b/src/extension/pageinspect/fsmfuncs.c
@@ -0,0 +1,59 @@
+/*-------------------------------------------------------------------------
+ *
+ * fsmfuncs.c
+ *      Functions to investigate FSM pages
+ *
+ * These functions are restricted to superusers for fear of introducing
+ * security holes if the input checking isn't as water-tight as it should
+ * be.  You'd need to be superuser to obtain a raw page image anyway, so
+ * there's hardly any use case for using these without superuser
+ * rights.
+ *
+ * Copyright (c) 2007-2011, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *      src/extension/pageinspect/fsmfuncs.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+#include "lib/stringinfo.h"
+#include "storage/fsm_internals.h"
+#include "utils/builtins.h"
+#include "miscadmin.h"
+#include "funcapi.h"
+
+Datum        fsm_page_contents(PG_FUNCTION_ARGS);
+
+/*
+ * Dumps the contents of a FSM page.
+ */
+PG_FUNCTION_INFO_V1(fsm_page_contents);
+
+Datum
+fsm_page_contents(PG_FUNCTION_ARGS)
+{
+    bytea       *raw_page = PG_GETARG_BYTEA_P(0);
+    StringInfoData sinfo;
+    FSMPage        fsmpage;
+    int            i;
+
+    if (!superuser())
+        ereport(ERROR,
+                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+                 (errmsg("must be superuser to use raw page functions"))));
+
+    fsmpage = (FSMPage) PageGetContents(VARDATA(raw_page));
+
+    initStringInfo(&sinfo);
+
+    for (i = 0; i < NodesPerPage; i++)
+    {
+        if (fsmpage->fp_nodes[i] != 0)
+            appendStringInfo(&sinfo, "%d: %d\n", i, fsmpage->fp_nodes[i]);
+    }
+    appendStringInfo(&sinfo, "fp_next_slot: %d\n", fsmpage->fp_next_slot);
+
+    PG_RETURN_TEXT_P(cstring_to_text(sinfo.data));
+}
diff --git a/src/extension/pageinspect/heapfuncs.c b/src/extension/pageinspect/heapfuncs.c
new file mode 100644
index 0000000..a9f95b4
--- /dev/null
+++ b/src/extension/pageinspect/heapfuncs.c
@@ -0,0 +1,230 @@
+/*-------------------------------------------------------------------------
+ *
+ * heapfuncs.c
+ *      Functions to investigate heap pages
+ *
+ * We check the input to these functions for corrupt pointers etc. that
+ * might cause crashes, but at the same time we try to print out as much
+ * information as possible, even if it's nonsense. That's because if a
+ * page is corrupt, we don't know why and how exactly it is corrupt, so we
+ * let the user judge it.
+ *
+ * These functions are restricted to superusers for fear of introducing
+ * security holes if the input checking isn't as water-tight as it should be.
+ * You'd need to be superuser to obtain a raw page image anyway, so
+ * there's hardly any use case for using these without superuser
+ * rights.
+ *
+ * Copyright (c) 2007-2011, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *      src/extension/pageinspect/heapfuncs.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "funcapi.h"
+#include "access/heapam.h"
+#include "access/transam.h"
+#include "catalog/namespace.h"
+#include "catalog/pg_type.h"
+#include "utils/builtins.h"
+#include "miscadmin.h"
+
+Datum        heap_page_items(PG_FUNCTION_ARGS);
+
+
+/*
+ * bits_to_text
+ *
+ * Converts a bits8-array of 'len' bits to a human-readable
+ * c-string representation.
+ */
+static char *
+bits_to_text(bits8 *bits, int len)
+{
+    int            i;
+    char       *str;
+
+    str = palloc(len + 1);
+
+    for (i = 0; i < len; i++)
+        str[i] = (bits[(i / 8)] & (1 << (i % 8))) ? '1' : '0';
+
+    str[i] = '\0';
+
+    return str;
+}
+
+
+/*
+ * heap_page_items
+ *
+ * Allows inspection of line pointers and tuple headers of a heap page.
+ */
+PG_FUNCTION_INFO_V1(heap_page_items);
+
+typedef struct heap_page_items_state
+{
+    TupleDesc    tupd;
+    Page        page;
+    uint16        offset;
+} heap_page_items_state;
+
+Datum
+heap_page_items(PG_FUNCTION_ARGS)
+{
+    bytea       *raw_page = PG_GETARG_BYTEA_P(0);
+    heap_page_items_state *inter_call_data = NULL;
+    FuncCallContext *fctx;
+    int            raw_page_size;
+
+    if (!superuser())
+        ereport(ERROR,
+                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+                 (errmsg("must be superuser to use raw page functions"))));
+
+    raw_page_size = VARSIZE(raw_page) - VARHDRSZ;
+
+    if (SRF_IS_FIRSTCALL())
+    {
+        TupleDesc    tupdesc;
+        MemoryContext mctx;
+
+        if (raw_page_size < SizeOfPageHeaderData)
+            ereport(ERROR,
+                    (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+                  errmsg("input page too small (%d bytes)", raw_page_size)));
+
+        fctx = SRF_FIRSTCALL_INIT();
+        mctx = MemoryContextSwitchTo(fctx->multi_call_memory_ctx);
+
+        inter_call_data = palloc(sizeof(heap_page_items_state));
+
+        /* Build a tuple descriptor for our result type */
+        if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+            elog(ERROR, "return type must be a row type");
+
+        inter_call_data->tupd = tupdesc;
+
+        inter_call_data->offset = FirstOffsetNumber;
+        inter_call_data->page = VARDATA(raw_page);
+
+        fctx->max_calls = PageGetMaxOffsetNumber(inter_call_data->page);
+        fctx->user_fctx = inter_call_data;
+
+        MemoryContextSwitchTo(mctx);
+    }
+
+    fctx = SRF_PERCALL_SETUP();
+    inter_call_data = fctx->user_fctx;
+
+    if (fctx->call_cntr < fctx->max_calls)
+    {
+        Page        page = inter_call_data->page;
+        HeapTuple    resultTuple;
+        Datum        result;
+        ItemId        id;
+        Datum        values[13];
+        bool        nulls[13];
+        uint16        lp_offset;
+        uint16        lp_flags;
+        uint16        lp_len;
+
+        memset(nulls, 0, sizeof(nulls));
+
+        /* Extract information from the line pointer */
+
+        id = PageGetItemId(page, inter_call_data->offset);
+
+        lp_offset = ItemIdGetOffset(id);
+        lp_flags = ItemIdGetFlags(id);
+        lp_len = ItemIdGetLength(id);
+
+        values[0] = UInt16GetDatum(inter_call_data->offset);
+        values[1] = UInt16GetDatum(lp_offset);
+        values[2] = UInt16GetDatum(lp_flags);
+        values[3] = UInt16GetDatum(lp_len);
+
+        /*
+         * We do just enough validity checking to make sure we don't reference
+         * data outside the page passed to us. The page could be corrupt in
+         * many other ways, but at least we won't crash.
+         */
+        if (ItemIdHasStorage(id) &&
+            lp_len >= sizeof(HeapTupleHeader) &&
+            lp_offset == MAXALIGN(lp_offset) &&
+            lp_offset + lp_len <= raw_page_size)
+        {
+            HeapTupleHeader tuphdr;
+            int            bits_len;
+
+            /* Extract information from the tuple header */
+
+            tuphdr = (HeapTupleHeader) PageGetItem(page, id);
+
+            values[4] = UInt32GetDatum(HeapTupleHeaderGetXmin(tuphdr));
+            values[5] = UInt32GetDatum(HeapTupleHeaderGetXmax(tuphdr));
+            values[6] = UInt32GetDatum(HeapTupleHeaderGetRawCommandId(tuphdr)); /* shared with xvac */
+            values[7] = PointerGetDatum(&tuphdr->t_ctid);
+            values[8] = UInt32GetDatum(tuphdr->t_infomask2);
+            values[9] = UInt32GetDatum(tuphdr->t_infomask);
+            values[10] = UInt8GetDatum(tuphdr->t_hoff);
+
+            /*
+             * We already checked that the item is completely within the
+             * raw page passed to us, with the length given in the line
+             * pointer.  Let's check that t_hoff doesn't point over lp_len,
+             * before using it to access t_bits and oid.
+             */
+            if (tuphdr->t_hoff >= sizeof(HeapTupleHeader) &&
+                tuphdr->t_hoff <= lp_len)
+            {
+                if (tuphdr->t_infomask & HEAP_HASNULL)
+                {
+                    bits_len = tuphdr->t_hoff -
+                        (((char *) tuphdr->t_bits) -((char *) tuphdr));
+
+                    values[11] = CStringGetTextDatum(
+                                 bits_to_text(tuphdr->t_bits, bits_len * 8));
+                }
+                else
+                    nulls[11] = true;
+
+                if (tuphdr->t_infomask & HEAP_HASOID)
+                    values[12] = HeapTupleHeaderGetOid(tuphdr);
+                else
+                    nulls[12] = true;
+            }
+            else
+            {
+                nulls[11] = true;
+                nulls[12] = true;
+            }
+        }
+        else
+        {
+            /*
+             * The line pointer is not used, or it's invalid. Set the rest of
+             * the fields to NULL
+             */
+            int            i;
+
+            for (i = 4; i <= 12; i++)
+                nulls[i] = true;
+        }
+
+        /* Build and return the result tuple. */
+        resultTuple = heap_form_tuple(inter_call_data->tupd, values, nulls);
+        result = HeapTupleGetDatum(resultTuple);
+
+        inter_call_data->offset++;
+
+        SRF_RETURN_NEXT(fctx, result);
+    }
+    else
+        SRF_RETURN_DONE(fctx);
+}
diff --git a/src/extension/pageinspect/pageinspect--1.0.sql b/src/extension/pageinspect/pageinspect--1.0.sql
new file mode 100644
index 0000000..cadcb4a
--- /dev/null
+++ b/src/extension/pageinspect/pageinspect--1.0.sql
@@ -0,0 +1,104 @@
+/* src/extension/pageinspect/pageinspect--1.0.sql */
+
+--
+-- get_raw_page()
+--
+CREATE FUNCTION get_raw_page(text, int4)
+RETURNS bytea
+AS 'MODULE_PATHNAME', 'get_raw_page'
+LANGUAGE C STRICT;
+
+CREATE FUNCTION get_raw_page(text, text, int4)
+RETURNS bytea
+AS 'MODULE_PATHNAME', 'get_raw_page_fork'
+LANGUAGE C STRICT;
+
+--
+-- page_header()
+--
+CREATE FUNCTION page_header(IN page bytea,
+    OUT lsn text,
+    OUT tli smallint,
+    OUT flags smallint,
+    OUT lower smallint,
+    OUT upper smallint,
+    OUT special smallint,
+    OUT pagesize smallint,
+    OUT version smallint,
+    OUT prune_xid xid)
+AS 'MODULE_PATHNAME', 'page_header'
+LANGUAGE C STRICT;
+
+--
+-- heap_page_items()
+--
+CREATE FUNCTION heap_page_items(IN page bytea,
+    OUT lp smallint,
+    OUT lp_off smallint,
+    OUT lp_flags smallint,
+    OUT lp_len smallint,
+    OUT t_xmin xid,
+    OUT t_xmax xid,
+    OUT t_field3 int4,
+    OUT t_ctid tid,
+    OUT t_infomask2 integer,
+    OUT t_infomask integer,
+    OUT t_hoff smallint,
+    OUT t_bits text,
+    OUT t_oid oid)
+RETURNS SETOF record
+AS 'MODULE_PATHNAME', 'heap_page_items'
+LANGUAGE C STRICT;
+
+--
+-- bt_metap()
+--
+CREATE FUNCTION bt_metap(IN relname text,
+    OUT magic int4,
+    OUT version int4,
+    OUT root int4,
+    OUT level int4,
+    OUT fastroot int4,
+    OUT fastlevel int4)
+AS 'MODULE_PATHNAME', 'bt_metap'
+LANGUAGE C STRICT;
+
+--
+-- bt_page_stats()
+--
+CREATE FUNCTION bt_page_stats(IN relname text, IN blkno int4,
+    OUT blkno int4,
+    OUT type "char",
+    OUT live_items int4,
+    OUT dead_items int4,
+    OUT avg_item_size int4,
+    OUT page_size int4,
+    OUT free_size int4,
+    OUT btpo_prev int4,
+    OUT btpo_next int4,
+    OUT btpo int4,
+    OUT btpo_flags int4)
+AS 'MODULE_PATHNAME', 'bt_page_stats'
+LANGUAGE C STRICT;
+
+--
+-- bt_page_items()
+--
+CREATE FUNCTION bt_page_items(IN relname text, IN blkno int4,
+    OUT itemoffset smallint,
+    OUT ctid tid,
+    OUT itemlen smallint,
+    OUT nulls bool,
+    OUT vars bool,
+    OUT data text)
+RETURNS SETOF record
+AS 'MODULE_PATHNAME', 'bt_page_items'
+LANGUAGE C STRICT;
+
+--
+-- fsm_page_contents()
+--
+CREATE FUNCTION fsm_page_contents(IN page bytea)
+RETURNS text
+AS 'MODULE_PATHNAME', 'fsm_page_contents'
+LANGUAGE C STRICT;
diff --git a/src/extension/pageinspect/pageinspect--unpackaged--1.0.sql b/src/extension/pageinspect/pageinspect--unpackaged--1.0.sql
new file mode 100644
index 0000000..9e9d8cf
--- /dev/null
+++ b/src/extension/pageinspect/pageinspect--unpackaged--1.0.sql
@@ -0,0 +1,28 @@
+/* src/extension/pageinspect/pageinspect--unpackaged--1.0.sql */
+
+DROP FUNCTION heap_page_items(bytea);
+CREATE FUNCTION heap_page_items(IN page bytea,
+    OUT lp smallint,
+    OUT lp_off smallint,
+    OUT lp_flags smallint,
+    OUT lp_len smallint,
+    OUT t_xmin xid,
+    OUT t_xmax xid,
+    OUT t_field3 int4,
+    OUT t_ctid tid,
+    OUT t_infomask2 integer,
+    OUT t_infomask integer,
+    OUT t_hoff smallint,
+    OUT t_bits text,
+    OUT t_oid oid)
+RETURNS SETOF record
+AS 'MODULE_PATHNAME', 'heap_page_items'
+LANGUAGE C STRICT;
+
+ALTER EXTENSION pageinspect ADD function get_raw_page(text,integer);
+ALTER EXTENSION pageinspect ADD function get_raw_page(text,text,integer);
+ALTER EXTENSION pageinspect ADD function page_header(bytea);
+ALTER EXTENSION pageinspect ADD function bt_metap(text);
+ALTER EXTENSION pageinspect ADD function bt_page_stats(text,integer);
+ALTER EXTENSION pageinspect ADD function bt_page_items(text,integer);
+ALTER EXTENSION pageinspect ADD function fsm_page_contents(bytea);
diff --git a/src/extension/pageinspect/pageinspect.control b/src/extension/pageinspect/pageinspect.control
new file mode 100644
index 0000000..f9da0e8
--- /dev/null
+++ b/src/extension/pageinspect/pageinspect.control
@@ -0,0 +1,5 @@
+# pageinspect extension
+comment = 'inspect the contents of database pages at a low level'
+default_version = '1.0'
+module_pathname = '$libdir/pageinspect'
+relocatable = true
diff --git a/src/extension/pageinspect/rawpage.c b/src/extension/pageinspect/rawpage.c
new file mode 100644
index 0000000..87a029f
--- /dev/null
+++ b/src/extension/pageinspect/rawpage.c
@@ -0,0 +1,232 @@
+/*-------------------------------------------------------------------------
+ *
+ * rawpage.c
+ *      Functions to extract a raw page as bytea and inspect it
+ *
+ * Access-method specific inspection functions are in separate files.
+ *
+ * Copyright (c) 2007-2011, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *      src/extension/pageinspect/rawpage.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/heapam.h"
+#include "access/transam.h"
+#include "catalog/catalog.h"
+#include "catalog/namespace.h"
+#include "catalog/pg_type.h"
+#include "fmgr.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "storage/bufmgr.h"
+#include "utils/builtins.h"
+
+PG_MODULE_MAGIC;
+
+Datum        get_raw_page(PG_FUNCTION_ARGS);
+Datum        get_raw_page_fork(PG_FUNCTION_ARGS);
+Datum        page_header(PG_FUNCTION_ARGS);
+
+static bytea *get_raw_page_internal(text *relname, ForkNumber forknum,
+                      BlockNumber blkno);
+
+
+/*
+ * get_raw_page
+ *
+ * Returns a copy of a page from shared buffers as a bytea
+ */
+PG_FUNCTION_INFO_V1(get_raw_page);
+
+Datum
+get_raw_page(PG_FUNCTION_ARGS)
+{
+    text       *relname = PG_GETARG_TEXT_P(0);
+    uint32        blkno = PG_GETARG_UINT32(1);
+    bytea       *raw_page;
+
+    /*
+     * We don't normally bother to check the number of arguments to a C
+     * function, but here it's needed for safety because early 8.4 beta
+     * releases mistakenly redefined get_raw_page() as taking three arguments.
+     */
+    if (PG_NARGS() != 2)
+        ereport(ERROR,
+                (errmsg("wrong number of arguments to get_raw_page()"),
+                 errhint("Run the updated pageinspect.sql script.")));
+
+    raw_page = get_raw_page_internal(relname, MAIN_FORKNUM, blkno);
+
+    PG_RETURN_BYTEA_P(raw_page);
+}
+
+/*
+ * get_raw_page_fork
+ *
+ * Same, for any fork
+ */
+PG_FUNCTION_INFO_V1(get_raw_page_fork);
+
+Datum
+get_raw_page_fork(PG_FUNCTION_ARGS)
+{
+    text       *relname = PG_GETARG_TEXT_P(0);
+    text       *forkname = PG_GETARG_TEXT_P(1);
+    uint32        blkno = PG_GETARG_UINT32(2);
+    bytea       *raw_page;
+    ForkNumber    forknum;
+
+    forknum = forkname_to_number(text_to_cstring(forkname));
+
+    raw_page = get_raw_page_internal(relname, forknum, blkno);
+
+    PG_RETURN_BYTEA_P(raw_page);
+}
+
+/*
+ * workhorse
+ */
+static bytea *
+get_raw_page_internal(text *relname, ForkNumber forknum, BlockNumber blkno)
+{
+    bytea       *raw_page;
+    RangeVar   *relrv;
+    Relation    rel;
+    char       *raw_page_data;
+    Buffer        buf;
+
+    if (!superuser())
+        ereport(ERROR,
+                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+                 (errmsg("must be superuser to use raw functions"))));
+
+    relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
+    rel = relation_openrv(relrv, AccessShareLock);
+
+    /* Check that this relation has storage */
+    if (rel->rd_rel->relkind == RELKIND_VIEW)
+        ereport(ERROR,
+                (errcode(ERRCODE_WRONG_OBJECT_TYPE),
+                 errmsg("cannot get raw page from view \"%s\"",
+                        RelationGetRelationName(rel))));
+    if (rel->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)
+        ereport(ERROR,
+                (errcode(ERRCODE_WRONG_OBJECT_TYPE),
+                 errmsg("cannot get raw page from composite type \"%s\"",
+                        RelationGetRelationName(rel))));
+    if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE)
+        ereport(ERROR,
+                (errcode(ERRCODE_WRONG_OBJECT_TYPE),
+                 errmsg("cannot get raw page from foreign table \"%s\"",
+                        RelationGetRelationName(rel))));
+
+    /*
+     * Reject attempts to read non-local temporary relations; we would be
+     * likely to get wrong data since we have no visibility into the owning
+     * session's local buffers.
+     */
+    if (RELATION_IS_OTHER_TEMP(rel))
+        ereport(ERROR,
+                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+                 errmsg("cannot access temporary tables of other sessions")));
+
+    if (blkno >= RelationGetNumberOfBlocks(rel))
+        elog(ERROR, "block number %u is out of range for relation \"%s\"",
+             blkno, RelationGetRelationName(rel));
+
+    /* Initialize buffer to copy to */
+    raw_page = (bytea *) palloc(BLCKSZ + VARHDRSZ);
+    SET_VARSIZE(raw_page, BLCKSZ + VARHDRSZ);
+    raw_page_data = VARDATA(raw_page);
+
+    /* Take a verbatim copy of the page */
+
+    buf = ReadBufferExtended(rel, forknum, blkno, RBM_NORMAL, NULL);
+    LockBuffer(buf, BUFFER_LOCK_SHARE);
+
+    memcpy(raw_page_data, BufferGetPage(buf), BLCKSZ);
+
+    LockBuffer(buf, BUFFER_LOCK_UNLOCK);
+    ReleaseBuffer(buf);
+
+    relation_close(rel, AccessShareLock);
+
+    return raw_page;
+}
+
+/*
+ * page_header
+ *
+ * Allows inspection of page header fields of a raw page
+ */
+
+PG_FUNCTION_INFO_V1(page_header);
+
+Datum
+page_header(PG_FUNCTION_ARGS)
+{
+    bytea       *raw_page = PG_GETARG_BYTEA_P(0);
+    int            raw_page_size;
+
+    TupleDesc    tupdesc;
+
+    Datum        result;
+    HeapTuple    tuple;
+    Datum        values[9];
+    bool        nulls[9];
+
+    PageHeader    page;
+    XLogRecPtr    lsn;
+    char        lsnchar[64];
+
+    if (!superuser())
+        ereport(ERROR,
+                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+                 (errmsg("must be superuser to use raw page functions"))));
+
+    raw_page_size = VARSIZE(raw_page) - VARHDRSZ;
+
+    /*
+     * Check that enough data was supplied, so that we don't try to access
+     * fields outside the supplied buffer.
+     */
+    if (raw_page_size < sizeof(PageHeaderData))
+        ereport(ERROR,
+                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+                 errmsg("input page too small (%d bytes)", raw_page_size)));
+
+    page = (PageHeader) VARDATA(raw_page);
+
+    /* Build a tuple descriptor for our result type */
+    if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+        elog(ERROR, "return type must be a row type");
+
+    /* Extract information from the page header */
+
+    lsn = PageGetLSN(page);
+    snprintf(lsnchar, sizeof(lsnchar), "%X/%X", lsn.xlogid, lsn.xrecoff);
+
+    values[0] = CStringGetTextDatum(lsnchar);
+    values[1] = UInt16GetDatum(PageGetTLI(page));
+    values[2] = UInt16GetDatum(page->pd_flags);
+    values[3] = UInt16GetDatum(page->pd_lower);
+    values[4] = UInt16GetDatum(page->pd_upper);
+    values[5] = UInt16GetDatum(page->pd_special);
+    values[6] = UInt16GetDatum(PageGetPageSize(page));
+    values[7] = UInt16GetDatum(PageGetPageLayoutVersion(page));
+    values[8] = TransactionIdGetDatum(page->pd_prune_xid);
+
+    /* Build and return the tuple. */
+
+    memset(nulls, 0, sizeof(nulls));
+
+    tuple = heap_form_tuple(tupdesc, values, nulls);
+    result = HeapTupleGetDatum(tuple);
+
+    PG_RETURN_DATUM(result);
+}
diff --git a/src/extension/pg_buffercache/Makefile b/src/extension/pg_buffercache/Makefile
new file mode 100644
index 0000000..e361592
--- /dev/null
+++ b/src/extension/pg_buffercache/Makefile
@@ -0,0 +1,19 @@
+# src/extension/pg_buffercache/Makefile
+
+MODULE_big = pg_buffercache
+OBJS = pg_buffercache_pages.o
+MODULEDIR = extension
+
+EXTENSION = pg_buffercache
+DATA = pg_buffercache--1.0.sql pg_buffercache--unpackaged--1.0.sql
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/extension/pg_buffercache
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/src/extension/extension-global.mk
+endif
diff --git a/src/extension/pg_buffercache/pg_buffercache--1.0.sql b/src/extension/pg_buffercache/pg_buffercache--1.0.sql
new file mode 100644
index 0000000..ceca6ae
--- /dev/null
+++ b/src/extension/pg_buffercache/pg_buffercache--1.0.sql
@@ -0,0 +1,17 @@
+/* src/extension/pg_buffercache/pg_buffercache--1.0.sql */
+
+-- Register the function.
+CREATE FUNCTION pg_buffercache_pages()
+RETURNS SETOF RECORD
+AS 'MODULE_PATHNAME', 'pg_buffercache_pages'
+LANGUAGE C;
+
+-- Create a view for convenient access.
+CREATE VIEW pg_buffercache AS
+    SELECT P.* FROM pg_buffercache_pages() AS P
+    (bufferid integer, relfilenode oid, reltablespace oid, reldatabase oid,
+     relforknumber int2, relblocknumber int8, isdirty bool, usagecount int2);
+
+-- Don't want these to be available to public.
+REVOKE ALL ON FUNCTION pg_buffercache_pages() FROM PUBLIC;
+REVOKE ALL ON pg_buffercache FROM PUBLIC;
diff --git a/src/extension/pg_buffercache/pg_buffercache--unpackaged--1.0.sql b/src/extension/pg_buffercache/pg_buffercache--unpackaged--1.0.sql
new file mode 100644
index 0000000..0cfa317
--- /dev/null
+++ b/src/extension/pg_buffercache/pg_buffercache--unpackaged--1.0.sql
@@ -0,0 +1,4 @@
+/* src/extension/pg_buffercache/pg_buffercache--unpackaged--1.0.sql */
+
+ALTER EXTENSION pg_buffercache ADD function pg_buffercache_pages();
+ALTER EXTENSION pg_buffercache ADD view pg_buffercache;
diff --git a/src/extension/pg_buffercache/pg_buffercache.control b/src/extension/pg_buffercache/pg_buffercache.control
new file mode 100644
index 0000000..709513c
--- /dev/null
+++ b/src/extension/pg_buffercache/pg_buffercache.control
@@ -0,0 +1,5 @@
+# pg_buffercache extension
+comment = 'examine the shared buffer cache'
+default_version = '1.0'
+module_pathname = '$libdir/pg_buffercache'
+relocatable = true
diff --git a/src/extension/pg_buffercache/pg_buffercache_pages.c b/src/extension/pg_buffercache/pg_buffercache_pages.c
new file mode 100644
index 0000000..a44610f
--- /dev/null
+++ b/src/extension/pg_buffercache/pg_buffercache_pages.c
@@ -0,0 +1,219 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_buffercache_pages.c
+ *      display some contents of the buffer cache
+ *
+ *      src/extension/pg_buffercache/pg_buffercache_pages.c
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include "access/heapam.h"
+#include "catalog/pg_type.h"
+#include "funcapi.h"
+#include "storage/buf_internals.h"
+#include "storage/bufmgr.h"
+#include "utils/relcache.h"
+
+
+#define NUM_BUFFERCACHE_PAGES_ELEM    8
+
+PG_MODULE_MAGIC;
+
+Datum        pg_buffercache_pages(PG_FUNCTION_ARGS);
+
+
+/*
+ * Record structure holding the cache data to be exposed.
+ */
+typedef struct
+{
+    uint32        bufferid;
+    Oid            relfilenode;
+    Oid            reltablespace;
+    Oid            reldatabase;
+    ForkNumber    forknum;
+    BlockNumber blocknum;
+    bool        isvalid;
+    bool        isdirty;
+    uint16        usagecount;
+} BufferCachePagesRec;
+
+
+/*
+ * Function context for data persisting over repeated calls.
+ */
+typedef struct
+{
+    TupleDesc    tupdesc;
+    BufferCachePagesRec *record;
+} BufferCachePagesContext;
+
+
+/*
+ * Function returning data from the shared buffer cache - buffer number,
+ * relation node/tablespace/database/blocknum and dirty indicator.
+ */
+PG_FUNCTION_INFO_V1(pg_buffercache_pages);
+
+Datum
+pg_buffercache_pages(PG_FUNCTION_ARGS)
+{
+    FuncCallContext *funcctx;
+    Datum        result;
+    MemoryContext oldcontext;
+    BufferCachePagesContext *fctx;        /* User function context. */
+    TupleDesc    tupledesc;
+    HeapTuple    tuple;
+
+    if (SRF_IS_FIRSTCALL())
+    {
+        int            i;
+        volatile BufferDesc *bufHdr;
+
+        funcctx = SRF_FIRSTCALL_INIT();
+
+        /* Switch context when allocating stuff to be used in later calls */
+        oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+        /* Create a user function context for cross-call persistence */
+        fctx = (BufferCachePagesContext *) palloc(sizeof(BufferCachePagesContext));
+
+        /* Construct a tuple descriptor for the result rows. */
+        tupledesc = CreateTemplateTupleDesc(NUM_BUFFERCACHE_PAGES_ELEM, false);
+        TupleDescInitEntry(tupledesc, (AttrNumber) 1, "bufferid",
+                           INT4OID, -1, 0);
+        TupleDescInitEntry(tupledesc, (AttrNumber) 2, "relfilenode",
+                           OIDOID, -1, 0);
+        TupleDescInitEntry(tupledesc, (AttrNumber) 3, "reltablespace",
+                           OIDOID, -1, 0);
+        TupleDescInitEntry(tupledesc, (AttrNumber) 4, "reldatabase",
+                           OIDOID, -1, 0);
+        TupleDescInitEntry(tupledesc, (AttrNumber) 5, "relforknumber",
+                           INT2OID, -1, 0);
+        TupleDescInitEntry(tupledesc, (AttrNumber) 6, "relblocknumber",
+                           INT8OID, -1, 0);
+        TupleDescInitEntry(tupledesc, (AttrNumber) 7, "isdirty",
+                           BOOLOID, -1, 0);
+        TupleDescInitEntry(tupledesc, (AttrNumber) 8, "usage_count",
+                           INT2OID, -1, 0);
+
+        fctx->tupdesc = BlessTupleDesc(tupledesc);
+
+        /* Allocate NBuffers worth of BufferCachePagesRec records. */
+        fctx->record = (BufferCachePagesRec *) palloc(sizeof(BufferCachePagesRec) * NBuffers);
+
+        /* Set max calls and remember the user function context. */
+        funcctx->max_calls = NBuffers;
+        funcctx->user_fctx = fctx;
+
+        /* Return to original context when allocating transient memory */
+        MemoryContextSwitchTo(oldcontext);
+
+        /*
+         * To get a consistent picture of the buffer state, we must lock all
+         * partitions of the buffer map.  Needless to say, this is horrible
+         * for concurrency.  Must grab locks in increasing order to avoid
+         * possible deadlocks.
+         */
+        for (i = 0; i < NUM_BUFFER_PARTITIONS; i++)
+            LWLockAcquire(FirstBufMappingLock + i, LW_SHARED);
+
+        /*
+         * Scan through all the buffers, saving the relevant fields in the
+         * fctx->record structure.
+         */
+        for (i = 0, bufHdr = BufferDescriptors; i < NBuffers; i++, bufHdr++)
+        {
+            /* Lock each buffer header before inspecting. */
+            LockBufHdr(bufHdr);
+
+            fctx->record[i].bufferid = BufferDescriptorGetBuffer(bufHdr);
+            fctx->record[i].relfilenode = bufHdr->tag.rnode.relNode;
+            fctx->record[i].reltablespace = bufHdr->tag.rnode.spcNode;
+            fctx->record[i].reldatabase = bufHdr->tag.rnode.dbNode;
+            fctx->record[i].forknum = bufHdr->tag.forkNum;
+            fctx->record[i].blocknum = bufHdr->tag.blockNum;
+            fctx->record[i].usagecount = bufHdr->usage_count;
+
+            if (bufHdr->flags & BM_DIRTY)
+                fctx->record[i].isdirty = true;
+            else
+                fctx->record[i].isdirty = false;
+
+            /* Note if the buffer is valid, and has storage created */
+            if ((bufHdr->flags & BM_VALID) && (bufHdr->flags & BM_TAG_VALID))
+                fctx->record[i].isvalid = true;
+            else
+                fctx->record[i].isvalid = false;
+
+            UnlockBufHdr(bufHdr);
+        }
+
+        /*
+         * And release locks.  We do this in reverse order for two reasons:
+         * (1) Anyone else who needs more than one of the locks will be trying
+         * to lock them in increasing order; we don't want to release the
+         * other process until it can get all the locks it needs. (2) This
+         * avoids O(N^2) behavior inside LWLockRelease.
+         */
+        for (i = NUM_BUFFER_PARTITIONS; --i >= 0;)
+            LWLockRelease(FirstBufMappingLock + i);
+    }
+
+    funcctx = SRF_PERCALL_SETUP();
+
+    /* Get the saved state */
+    fctx = funcctx->user_fctx;
+
+    if (funcctx->call_cntr < funcctx->max_calls)
+    {
+        uint32        i = funcctx->call_cntr;
+        Datum        values[NUM_BUFFERCACHE_PAGES_ELEM];
+        bool        nulls[NUM_BUFFERCACHE_PAGES_ELEM];
+
+        values[0] = Int32GetDatum(fctx->record[i].bufferid);
+        nulls[0] = false;
+
+        /*
+         * Set all fields except the bufferid to null if the buffer is unused
+         * or not valid.
+         */
+        if (fctx->record[i].blocknum == InvalidBlockNumber ||
+            fctx->record[i].isvalid == false)
+        {
+            nulls[1] = true;
+            nulls[2] = true;
+            nulls[3] = true;
+            nulls[4] = true;
+            nulls[5] = true;
+            nulls[6] = true;
+            nulls[7] = true;
+        }
+        else
+        {
+            values[1] = ObjectIdGetDatum(fctx->record[i].relfilenode);
+            nulls[1] = false;
+            values[2] = ObjectIdGetDatum(fctx->record[i].reltablespace);
+            nulls[2] = false;
+            values[3] = ObjectIdGetDatum(fctx->record[i].reldatabase);
+            nulls[3] = false;
+            values[4] = ObjectIdGetDatum(fctx->record[i].forknum);
+            nulls[4] = false;
+            values[5] = Int64GetDatum((int64) fctx->record[i].blocknum);
+            nulls[5] = false;
+            values[6] = BoolGetDatum(fctx->record[i].isdirty);
+            nulls[6] = false;
+            values[7] = Int16GetDatum(fctx->record[i].usagecount);
+            nulls[7] = false;
+        }
+
+        /* Build and return the tuple. */
+        tuple = heap_form_tuple(fctx->tupdesc, values, nulls);
+        result = HeapTupleGetDatum(tuple);
+
+        SRF_RETURN_NEXT(funcctx, result);
+    }
+    else
+        SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/extension/pg_freespacemap/Makefile b/src/extension/pg_freespacemap/Makefile
new file mode 100644
index 0000000..0ffe226
--- /dev/null
+++ b/src/extension/pg_freespacemap/Makefile
@@ -0,0 +1,19 @@
+# src/extension/pg_freespacemap/Makefile
+
+MODULE_big = pg_freespacemap
+OBJS = pg_freespacemap.o
+MODULEDIR = extension
+
+EXTENSION = pg_freespacemap
+DATA = pg_freespacemap--1.0.sql pg_freespacemap--unpackaged--1.0.sql
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/extension/pg_freespacemap
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/src/extension/extension-global.mk
+endif
diff --git a/src/extension/pg_freespacemap/pg_freespacemap--1.0.sql b/src/extension/pg_freespacemap/pg_freespacemap--1.0.sql
new file mode 100644
index 0000000..8188786
--- /dev/null
+++ b/src/extension/pg_freespacemap/pg_freespacemap--1.0.sql
@@ -0,0 +1,22 @@
+/* src/extension/pg_freespacemap/pg_freespacemap--1.0.sql */
+
+-- Register the C function.
+CREATE FUNCTION pg_freespace(regclass, bigint)
+RETURNS int2
+AS 'MODULE_PATHNAME', 'pg_freespace'
+LANGUAGE C STRICT;
+
+-- pg_freespace shows the recorded space avail at each block in a relation
+CREATE FUNCTION
+  pg_freespace(rel regclass, blkno OUT bigint, avail OUT int2)
+RETURNS SETOF RECORD
+AS $$
+  SELECT blkno, pg_freespace($1, blkno) AS avail
+  FROM generate_series(0, pg_relation_size($1) / current_setting('block_size')::bigint - 1) AS blkno;
+$$
+LANGUAGE SQL;
+
+
+-- Don't want these to be available to public.
+REVOKE ALL ON FUNCTION pg_freespace(regclass, bigint) FROM PUBLIC;
+REVOKE ALL ON FUNCTION pg_freespace(regclass) FROM PUBLIC;
diff --git a/src/extension/pg_freespacemap/pg_freespacemap--unpackaged--1.0.sql b/src/extension/pg_freespacemap/pg_freespacemap--unpackaged--1.0.sql
new file mode 100644
index 0000000..d2231ef
--- /dev/null
+++ b/src/extension/pg_freespacemap/pg_freespacemap--unpackaged--1.0.sql
@@ -0,0 +1,4 @@
+/* src/extension/pg_freespacemap/pg_freespacemap--unpackaged--1.0.sql */
+
+ALTER EXTENSION pg_freespacemap ADD function pg_freespace(regclass,bigint);
+ALTER EXTENSION pg_freespacemap ADD function pg_freespace(regclass);
diff --git a/src/extension/pg_freespacemap/pg_freespacemap.c b/src/extension/pg_freespacemap/pg_freespacemap.c
new file mode 100644
index 0000000..501da04
--- /dev/null
+++ b/src/extension/pg_freespacemap/pg_freespacemap.c
@@ -0,0 +1,46 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_freespacemap.c
+ *      display contents of a free space map
+ *
+ *      src/extension/pg_freespacemap/pg_freespacemap.c
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include "access/heapam.h"
+#include "funcapi.h"
+#include "storage/block.h"
+#include "storage/freespace.h"
+
+
+PG_MODULE_MAGIC;
+
+Datum        pg_freespace(PG_FUNCTION_ARGS);
+
+/*
+ * Returns the amount of free space on a given page, according to the
+ * free space map.
+ */
+PG_FUNCTION_INFO_V1(pg_freespace);
+
+Datum
+pg_freespace(PG_FUNCTION_ARGS)
+{
+    Oid            relid = PG_GETARG_OID(0);
+    int64        blkno = PG_GETARG_INT64(1);
+    int16        freespace;
+    Relation    rel;
+
+    rel = relation_open(relid, AccessShareLock);
+
+    if (blkno < 0 || blkno > MaxBlockNumber)
+        ereport(ERROR,
+                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+                 errmsg("invalid block number")));
+
+    freespace = GetRecordedFreeSpace(rel, blkno);
+
+    relation_close(rel, AccessShareLock);
+    PG_RETURN_INT16(freespace);
+}
diff --git a/src/extension/pg_freespacemap/pg_freespacemap.control b/src/extension/pg_freespacemap/pg_freespacemap.control
new file mode 100644
index 0000000..34b695f
--- /dev/null
+++ b/src/extension/pg_freespacemap/pg_freespacemap.control
@@ -0,0 +1,5 @@
+# pg_freespacemap extension
+comment = 'examine the free space map (FSM)'
+default_version = '1.0'
+module_pathname = '$libdir/pg_freespacemap'
+relocatable = true
diff --git a/src/extension/pg_stat_statements/Makefile b/src/extension/pg_stat_statements/Makefile
new file mode 100644
index 0000000..9cf3f99
--- /dev/null
+++ b/src/extension/pg_stat_statements/Makefile
@@ -0,0 +1,19 @@
+# src/extension/pg_stat_statements/Makefile
+
+MODULE_big = pg_stat_statements
+OBJS = pg_stat_statements.o
+MODULEDIR = extension
+
+EXTENSION = pg_stat_statements
+DATA = pg_stat_statements--1.0.sql pg_stat_statements--unpackaged--1.0.sql
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/extension/pg_stat_statements
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/src/extension/extension-global.mk
+endif
diff --git a/src/extension/pg_stat_statements/pg_stat_statements--1.0.sql b/src/extension/pg_stat_statements/pg_stat_statements--1.0.sql
new file mode 100644
index 0000000..41145e7
--- /dev/null
+++ b/src/extension/pg_stat_statements/pg_stat_statements--1.0.sql
@@ -0,0 +1,36 @@
+/* src/extension/pg_stat_statements/pg_stat_statements--1.0.sql */
+
+-- Register functions.
+CREATE FUNCTION pg_stat_statements_reset()
+RETURNS void
+AS 'MODULE_PATHNAME'
+LANGUAGE C;
+
+CREATE FUNCTION pg_stat_statements(
+    OUT userid oid,
+    OUT dbid oid,
+    OUT query text,
+    OUT calls int8,
+    OUT total_time float8,
+    OUT rows int8,
+    OUT shared_blks_hit int8,
+    OUT shared_blks_read int8,
+    OUT shared_blks_written int8,
+    OUT local_blks_hit int8,
+    OUT local_blks_read int8,
+    OUT local_blks_written int8,
+    OUT temp_blks_read int8,
+    OUT temp_blks_written int8
+)
+RETURNS SETOF record
+AS 'MODULE_PATHNAME'
+LANGUAGE C;
+
+-- Register a view on the function for ease of use.
+CREATE VIEW pg_stat_statements AS
+  SELECT * FROM pg_stat_statements();
+
+GRANT SELECT ON pg_stat_statements TO PUBLIC;
+
+-- Don't want this to be available to non-superusers.
+REVOKE ALL ON FUNCTION pg_stat_statements_reset() FROM PUBLIC;
diff --git a/src/extension/pg_stat_statements/pg_stat_statements--unpackaged--1.0.sql b/src/extension/pg_stat_statements/pg_stat_statements--unpackaged--1.0.sql
new file mode 100644
index 0000000..c8993b5
--- /dev/null
+++ b/src/extension/pg_stat_statements/pg_stat_statements--unpackaged--1.0.sql
@@ -0,0 +1,5 @@
+/* src/extension/pg_stat_statements/pg_stat_statements--unpackaged--1.0.sql */
+
+ALTER EXTENSION pg_stat_statements ADD function pg_stat_statements_reset();
+ALTER EXTENSION pg_stat_statements ADD function pg_stat_statements();
+ALTER EXTENSION pg_stat_statements ADD view pg_stat_statements;
diff --git a/src/extension/pg_stat_statements/pg_stat_statements.c b/src/extension/pg_stat_statements/pg_stat_statements.c
new file mode 100644
index 0000000..4ecd445
--- /dev/null
+++ b/src/extension/pg_stat_statements/pg_stat_statements.c
@@ -0,0 +1,1046 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_stat_statements.c
+ *        Track statement execution times across a whole database cluster.
+ *
+ * Note about locking issues: to create or delete an entry in the shared
+ * hashtable, one must hold pgss->lock exclusively.  Modifying any field
+ * in an entry except the counters requires the same.  To look up an entry,
+ * one must hold the lock shared.  To read or update the counters within
+ * an entry, one must hold the lock shared or exclusive (so the entry doesn't
+ * disappear!) and also take the entry's mutex spinlock.
+ *
+ *
+ * Copyright (c) 2008-2011, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *      src/extension/pg_stat_statements/pg_stat_statements.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+
+#include "access/hash.h"
+#include "catalog/pg_type.h"
+#include "executor/executor.h"
+#include "executor/instrument.h"
+#include "funcapi.h"
+#include "mb/pg_wchar.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "storage/spin.h"
+#include "tcop/utility.h"
+#include "utils/builtins.h"
+#include "utils/hsearch.h"
+#include "utils/guc.h"
+
+
+PG_MODULE_MAGIC;
+
+/* Location of stats file */
+#define PGSS_DUMP_FILE    "global/pg_stat_statements.stat"
+
+/* This constant defines the magic number in the stats file header */
+static const uint32 PGSS_FILE_HEADER = 0x20100108;
+
+/* XXX: Should USAGE_EXEC reflect execution time and/or buffer usage? */
+#define USAGE_EXEC(duration)    (1.0)
+#define USAGE_INIT                (1.0)    /* including initial planning */
+#define USAGE_DECREASE_FACTOR    (0.99)    /* decreased every entry_dealloc */
+#define USAGE_DEALLOC_PERCENT    5        /* free this % of entries at once */
+
+/*
+ * Hashtable key that defines the identity of a hashtable entry.  The
+ * hash comparators do not assume that the query string is null-terminated;
+ * this lets us search for an mbcliplen'd string without copying it first.
+ *
+ * Presently, the query encoding is fully determined by the source database
+ * and so we don't really need it to be in the key.  But that might not always
+ * be true. Anyway it's notationally convenient to pass it as part of the key.
+ */
+typedef struct pgssHashKey
+{
+    Oid            userid;            /* user OID */
+    Oid            dbid;            /* database OID */
+    int            encoding;        /* query encoding */
+    int            query_len;        /* # of valid bytes in query string */
+    const char *query_ptr;        /* query string proper */
+} pgssHashKey;
+
+/*
+ * The actual stats counters kept within pgssEntry.
+ */
+typedef struct Counters
+{
+    int64        calls;            /* # of times executed */
+    double        total_time;        /* total execution time in seconds */
+    int64        rows;            /* total # of retrieved or affected rows */
+    int64        shared_blks_hit;    /* # of shared buffer hits */
+    int64        shared_blks_read;        /* # of shared disk blocks read */
+    int64        shared_blks_written;    /* # of shared disk blocks written */
+    int64        local_blks_hit; /* # of local buffer hits */
+    int64        local_blks_read;    /* # of local disk blocks read */
+    int64        local_blks_written;        /* # of local disk blocks written */
+    int64        temp_blks_read; /* # of temp blocks read */
+    int64        temp_blks_written;        /* # of temp blocks written */
+    double        usage;            /* usage factor */
+} Counters;
+
+/*
+ * Statistics per statement
+ *
+ * NB: see the file read/write code before changing field order here.
+ */
+typedef struct pgssEntry
+{
+    pgssHashKey key;            /* hash key of entry - MUST BE FIRST */
+    Counters    counters;        /* the statistics for this query */
+    slock_t        mutex;            /* protects the counters only */
+    char        query[1];        /* VARIABLE LENGTH ARRAY - MUST BE LAST */
+    /* Note: the allocated length of query[] is actually pgss->query_size */
+} pgssEntry;
+
+/*
+ * Global shared state
+ */
+typedef struct pgssSharedState
+{
+    LWLockId    lock;            /* protects hashtable search/modification */
+    int            query_size;        /* max query length in bytes */
+} pgssSharedState;
+
+/*---- Local variables ----*/
+
+/* Current nesting depth of ExecutorRun calls */
+static int    nested_level = 0;
+
+/* Saved hook values in case of unload */
+static shmem_startup_hook_type prev_shmem_startup_hook = NULL;
+static ExecutorStart_hook_type prev_ExecutorStart = NULL;
+static ExecutorRun_hook_type prev_ExecutorRun = NULL;
+static ExecutorFinish_hook_type prev_ExecutorFinish = NULL;
+static ExecutorEnd_hook_type prev_ExecutorEnd = NULL;
+static ProcessUtility_hook_type prev_ProcessUtility = NULL;
+
+/* Links to shared memory state */
+static pgssSharedState *pgss = NULL;
+static HTAB *pgss_hash = NULL;
+
+/*---- GUC variables ----*/
+
+typedef enum
+{
+    PGSS_TRACK_NONE,            /* track no statements */
+    PGSS_TRACK_TOP,                /* only top level statements */
+    PGSS_TRACK_ALL                /* all statements, including nested ones */
+}    PGSSTrackLevel;
+
+static const struct config_enum_entry track_options[] =
+{
+    {"none", PGSS_TRACK_NONE, false},
+    {"top", PGSS_TRACK_TOP, false},
+    {"all", PGSS_TRACK_ALL, false},
+    {NULL, 0, false}
+};
+
+static int    pgss_max;            /* max # statements to track */
+static int    pgss_track;            /* tracking level */
+static bool pgss_track_utility; /* whether to track utility commands */
+static bool pgss_save;            /* whether to save stats across shutdown */
+
+
+#define pgss_enabled() \
+    (pgss_track == PGSS_TRACK_ALL || \
+    (pgss_track == PGSS_TRACK_TOP && nested_level == 0))
+
+/*---- Function declarations ----*/
+
+void        _PG_init(void);
+void        _PG_fini(void);
+
+Datum        pg_stat_statements_reset(PG_FUNCTION_ARGS);
+Datum        pg_stat_statements(PG_FUNCTION_ARGS);
+
+PG_FUNCTION_INFO_V1(pg_stat_statements_reset);
+PG_FUNCTION_INFO_V1(pg_stat_statements);
+
+static void pgss_shmem_startup(void);
+static void pgss_shmem_shutdown(int code, Datum arg);
+static void pgss_ExecutorStart(QueryDesc *queryDesc, int eflags);
+static void pgss_ExecutorRun(QueryDesc *queryDesc,
+                 ScanDirection direction,
+                 long count);
+static void pgss_ExecutorFinish(QueryDesc *queryDesc);
+static void pgss_ExecutorEnd(QueryDesc *queryDesc);
+static void pgss_ProcessUtility(Node *parsetree,
+              const char *queryString, ParamListInfo params, bool isTopLevel,
+                    DestReceiver *dest, char *completionTag);
+static uint32 pgss_hash_fn(const void *key, Size keysize);
+static int    pgss_match_fn(const void *key1, const void *key2, Size keysize);
+static void pgss_store(const char *query, double total_time, uint64 rows,
+           const BufferUsage *bufusage);
+static Size pgss_memsize(void);
+static pgssEntry *entry_alloc(pgssHashKey *key);
+static void entry_dealloc(void);
+static void entry_reset(void);
+
+
+/*
+ * Module load callback
+ */
+void
+_PG_init(void)
+{
+    /*
+     * In order to create our shared memory area, we have to be loaded via
+     * shared_preload_libraries.  If not, fall out without hooking into any of
+     * the main system.  (We don't throw error here because it seems useful to
+     * allow the pg_stat_statements functions to be created even when the
+     * module isn't active.  The functions must protect themselves against
+     * being called then, however.)
+     */
+    if (!process_shared_preload_libraries_in_progress)
+        return;
+
+    /*
+     * Define (or redefine) custom GUC variables.
+     */
+    DefineCustomIntVariable("pg_stat_statements.max",
+      "Sets the maximum number of statements tracked by pg_stat_statements.",
+                            NULL,
+                            &pgss_max,
+                            1000,
+                            100,
+                            INT_MAX,
+                            PGC_POSTMASTER,
+                            0,
+                            NULL,
+                            NULL,
+                            NULL);
+
+    DefineCustomEnumVariable("pg_stat_statements.track",
+               "Selects which statements are tracked by pg_stat_statements.",
+                             NULL,
+                             &pgss_track,
+                             PGSS_TRACK_TOP,
+                             track_options,
+                             PGC_SUSET,
+                             0,
+                             NULL,
+                             NULL,
+                             NULL);
+
+    DefineCustomBoolVariable("pg_stat_statements.track_utility",
+       "Selects whether utility commands are tracked by pg_stat_statements.",
+                             NULL,
+                             &pgss_track_utility,
+                             true,
+                             PGC_SUSET,
+                             0,
+                             NULL,
+                             NULL,
+                             NULL);
+
+    DefineCustomBoolVariable("pg_stat_statements.save",
+               "Save pg_stat_statements statistics across server shutdowns.",
+                             NULL,
+                             &pgss_save,
+                             true,
+                             PGC_SIGHUP,
+                             0,
+                             NULL,
+                             NULL,
+                             NULL);
+
+    EmitWarningsOnPlaceholders("pg_stat_statements");
+
+    /*
+     * Request additional shared resources.  (These are no-ops if we're not in
+     * the postmaster process.)  We'll allocate or attach to the shared
+     * resources in pgss_shmem_startup().
+     */
+    RequestAddinShmemSpace(pgss_memsize());
+    RequestAddinLWLocks(1);
+
+    /*
+     * Install hooks.
+     */
+    prev_shmem_startup_hook = shmem_startup_hook;
+    shmem_startup_hook = pgss_shmem_startup;
+    prev_ExecutorStart = ExecutorStart_hook;
+    ExecutorStart_hook = pgss_ExecutorStart;
+    prev_ExecutorRun = ExecutorRun_hook;
+    ExecutorRun_hook = pgss_ExecutorRun;
+    prev_ExecutorFinish = ExecutorFinish_hook;
+    ExecutorFinish_hook = pgss_ExecutorFinish;
+    prev_ExecutorEnd = ExecutorEnd_hook;
+    ExecutorEnd_hook = pgss_ExecutorEnd;
+    prev_ProcessUtility = ProcessUtility_hook;
+    ProcessUtility_hook = pgss_ProcessUtility;
+}
+
+/*
+ * Module unload callback
+ */
+void
+_PG_fini(void)
+{
+    /* Uninstall hooks. */
+    shmem_startup_hook = prev_shmem_startup_hook;
+    ExecutorStart_hook = prev_ExecutorStart;
+    ExecutorRun_hook = prev_ExecutorRun;
+    ExecutorFinish_hook = prev_ExecutorFinish;
+    ExecutorEnd_hook = prev_ExecutorEnd;
+    ProcessUtility_hook = prev_ProcessUtility;
+}
+
+/*
+ * shmem_startup hook: allocate or attach to shared memory,
+ * then load any pre-existing statistics from file.
+ */
+static void
+pgss_shmem_startup(void)
+{
+    bool        found;
+    HASHCTL        info;
+    FILE       *file;
+    uint32        header;
+    int32        num;
+    int32        i;
+    int            query_size;
+    int            buffer_size;
+    char       *buffer = NULL;
+
+    if (prev_shmem_startup_hook)
+        prev_shmem_startup_hook();
+
+    /* reset in case this is a restart within the postmaster */
+    pgss = NULL;
+    pgss_hash = NULL;
+
+    /*
+     * Create or attach to the shared memory state, including hash table
+     */
+    LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE);
+
+    pgss = ShmemInitStruct("pg_stat_statements",
+                           sizeof(pgssSharedState),
+                           &found);
+
+    if (!found)
+    {
+        /* First time through ... */
+        pgss->lock = LWLockAssign();
+        pgss->query_size = pgstat_track_activity_query_size;
+    }
+
+    /* Be sure everyone agrees on the hash table entry size */
+    query_size = pgss->query_size;
+
+    memset(&info, 0, sizeof(info));
+    info.keysize = sizeof(pgssHashKey);
+    info.entrysize = offsetof(pgssEntry, query) +query_size;
+    info.hash = pgss_hash_fn;
+    info.match = pgss_match_fn;
+    pgss_hash = ShmemInitHash("pg_stat_statements hash",
+                              pgss_max, pgss_max,
+                              &info,
+                              HASH_ELEM | HASH_FUNCTION | HASH_COMPARE);
+
+    LWLockRelease(AddinShmemInitLock);
+
+    /*
+     * If we're in the postmaster (or a standalone backend...), set up a shmem
+     * exit hook to dump the statistics to disk.
+     */
+    if (!IsUnderPostmaster)
+        on_shmem_exit(pgss_shmem_shutdown, (Datum) 0);
+
+    /*
+     * Attempt to load old statistics from the dump file, if this is the first
+     * time through and we weren't told not to.
+     */
+    if (found || !pgss_save)
+        return;
+
+    /*
+     * Note: we don't bother with locks here, because there should be no other
+     * processes running when this code is reached.
+     */
+    file = AllocateFile(PGSS_DUMP_FILE, PG_BINARY_R);
+    if (file == NULL)
+    {
+        if (errno == ENOENT)
+            return;                /* ignore not-found error */
+        goto error;
+    }
+
+    buffer_size = query_size;
+    buffer = (char *) palloc(buffer_size);
+
+    if (fread(&header, sizeof(uint32), 1, file) != 1 ||
+        header != PGSS_FILE_HEADER ||
+        fread(&num, sizeof(int32), 1, file) != 1)
+        goto error;
+
+    for (i = 0; i < num; i++)
+    {
+        pgssEntry    temp;
+        pgssEntry  *entry;
+
+        if (fread(&temp, offsetof(pgssEntry, mutex), 1, file) != 1)
+            goto error;
+
+        /* Encoding is the only field we can easily sanity-check */
+        if (!PG_VALID_BE_ENCODING(temp.key.encoding))
+            goto error;
+
+        /* Previous incarnation might have had a larger query_size */
+        if (temp.key.query_len >= buffer_size)
+        {
+            buffer = (char *) repalloc(buffer, temp.key.query_len + 1);
+            buffer_size = temp.key.query_len + 1;
+        }
+
+        if (fread(buffer, 1, temp.key.query_len, file) != temp.key.query_len)
+            goto error;
+        buffer[temp.key.query_len] = '\0';
+
+        /* Clip to available length if needed */
+        if (temp.key.query_len >= query_size)
+            temp.key.query_len = pg_encoding_mbcliplen(temp.key.encoding,
+                                                       buffer,
+                                                       temp.key.query_len,
+                                                       query_size - 1);
+        temp.key.query_ptr = buffer;
+
+        /* make the hashtable entry (discards old entries if too many) */
+        entry = entry_alloc(&temp.key);
+
+        /* copy in the actual stats */
+        entry->counters = temp.counters;
+    }
+
+    pfree(buffer);
+    FreeFile(file);
+    return;
+
+error:
+    ereport(LOG,
+            (errcode_for_file_access(),
+             errmsg("could not read pg_stat_statement file \"%s\": %m",
+                    PGSS_DUMP_FILE)));
+    if (buffer)
+        pfree(buffer);
+    if (file)
+        FreeFile(file);
+    /* If possible, throw away the bogus file; ignore any error */
+    unlink(PGSS_DUMP_FILE);
+}
+
+/*
+ * shmem_shutdown hook: Dump statistics into file.
+ *
+ * Note: we don't bother with acquiring lock, because there should be no
+ * other processes running when this is called.
+ */
+static void
+pgss_shmem_shutdown(int code, Datum arg)
+{
+    FILE       *file;
+    HASH_SEQ_STATUS hash_seq;
+    int32        num_entries;
+    pgssEntry  *entry;
+
+    /* Don't try to dump during a crash. */
+    if (code)
+        return;
+
+    /* Safety check ... shouldn't get here unless shmem is set up. */
+    if (!pgss || !pgss_hash)
+        return;
+
+    /* Don't dump if told not to. */
+    if (!pgss_save)
+        return;
+
+    file = AllocateFile(PGSS_DUMP_FILE, PG_BINARY_W);
+    if (file == NULL)
+        goto error;
+
+    if (fwrite(&PGSS_FILE_HEADER, sizeof(uint32), 1, file) != 1)
+        goto error;
+    num_entries = hash_get_num_entries(pgss_hash);
+    if (fwrite(&num_entries, sizeof(int32), 1, file) != 1)
+        goto error;
+
+    hash_seq_init(&hash_seq, pgss_hash);
+    while ((entry = hash_seq_search(&hash_seq)) != NULL)
+    {
+        int            len = entry->key.query_len;
+
+        if (fwrite(entry, offsetof(pgssEntry, mutex), 1, file) != 1 ||
+            fwrite(entry->query, 1, len, file) != len)
+            goto error;
+    }
+
+    if (FreeFile(file))
+    {
+        file = NULL;
+        goto error;
+    }
+
+    return;
+
+error:
+    ereport(LOG,
+            (errcode_for_file_access(),
+             errmsg("could not write pg_stat_statement file \"%s\": %m",
+                    PGSS_DUMP_FILE)));
+    if (file)
+        FreeFile(file);
+    unlink(PGSS_DUMP_FILE);
+}
+
+/*
+ * ExecutorStart hook: start up tracking if needed
+ */
+static void
+pgss_ExecutorStart(QueryDesc *queryDesc, int eflags)
+{
+    if (prev_ExecutorStart)
+        prev_ExecutorStart(queryDesc, eflags);
+    else
+        standard_ExecutorStart(queryDesc, eflags);
+
+    if (pgss_enabled())
+    {
+        /*
+         * Set up to track total elapsed time in ExecutorRun.  Make sure the
+         * space is allocated in the per-query context so it will go away at
+         * ExecutorEnd.
+         */
+        if (queryDesc->totaltime == NULL)
+        {
+            MemoryContext oldcxt;
+
+            oldcxt = MemoryContextSwitchTo(queryDesc->estate->es_query_cxt);
+            queryDesc->totaltime = InstrAlloc(1, INSTRUMENT_ALL);
+            MemoryContextSwitchTo(oldcxt);
+        }
+    }
+}
+
+/*
+ * ExecutorRun hook: all we need do is track nesting depth
+ */
+static void
+pgss_ExecutorRun(QueryDesc *queryDesc, ScanDirection direction, long count)
+{
+    nested_level++;
+    PG_TRY();
+    {
+        if (prev_ExecutorRun)
+            prev_ExecutorRun(queryDesc, direction, count);
+        else
+            standard_ExecutorRun(queryDesc, direction, count);
+        nested_level--;
+    }
+    PG_CATCH();
+    {
+        nested_level--;
+        PG_RE_THROW();
+    }
+    PG_END_TRY();
+}
+
+/*
+ * ExecutorFinish hook: all we need do is track nesting depth
+ */
+static void
+pgss_ExecutorFinish(QueryDesc *queryDesc)
+{
+    nested_level++;
+    PG_TRY();
+    {
+        if (prev_ExecutorFinish)
+            prev_ExecutorFinish(queryDesc);
+        else
+            standard_ExecutorFinish(queryDesc);
+        nested_level--;
+    }
+    PG_CATCH();
+    {
+        nested_level--;
+        PG_RE_THROW();
+    }
+    PG_END_TRY();
+}
+
+/*
+ * ExecutorEnd hook: store results if needed
+ */
+static void
+pgss_ExecutorEnd(QueryDesc *queryDesc)
+{
+    if (queryDesc->totaltime && pgss_enabled())
+    {
+        /*
+         * Make sure stats accumulation is done.  (Note: it's okay if several
+         * levels of hook all do this.)
+         */
+        InstrEndLoop(queryDesc->totaltime);
+
+        pgss_store(queryDesc->sourceText,
+                   queryDesc->totaltime->total,
+                   queryDesc->estate->es_processed,
+                   &queryDesc->totaltime->bufusage);
+    }
+
+    if (prev_ExecutorEnd)
+        prev_ExecutorEnd(queryDesc);
+    else
+        standard_ExecutorEnd(queryDesc);
+}
+
+/*
+ * ProcessUtility hook
+ */
+static void
+pgss_ProcessUtility(Node *parsetree, const char *queryString,
+                    ParamListInfo params, bool isTopLevel,
+                    DestReceiver *dest, char *completionTag)
+{
+    if (pgss_track_utility && pgss_enabled())
+    {
+        instr_time    start;
+        instr_time    duration;
+        uint64        rows = 0;
+        BufferUsage bufusage;
+
+        bufusage = pgBufferUsage;
+        INSTR_TIME_SET_CURRENT(start);
+
+        nested_level++;
+        PG_TRY();
+        {
+            if (prev_ProcessUtility)
+                prev_ProcessUtility(parsetree, queryString, params,
+                                    isTopLevel, dest, completionTag);
+            else
+                standard_ProcessUtility(parsetree, queryString, params,
+                                        isTopLevel, dest, completionTag);
+            nested_level--;
+        }
+        PG_CATCH();
+        {
+            nested_level--;
+            PG_RE_THROW();
+        }
+        PG_END_TRY();
+
+        INSTR_TIME_SET_CURRENT(duration);
+        INSTR_TIME_SUBTRACT(duration, start);
+
+        /* parse command tag to retrieve the number of affected rows. */
+        if (completionTag &&
+            sscanf(completionTag, "COPY " UINT64_FORMAT, &rows) != 1)
+            rows = 0;
+
+        /* calc differences of buffer counters. */
+        bufusage.shared_blks_hit =
+            pgBufferUsage.shared_blks_hit - bufusage.shared_blks_hit;
+        bufusage.shared_blks_read =
+            pgBufferUsage.shared_blks_read - bufusage.shared_blks_read;
+        bufusage.shared_blks_written =
+            pgBufferUsage.shared_blks_written - bufusage.shared_blks_written;
+        bufusage.local_blks_hit =
+            pgBufferUsage.local_blks_hit - bufusage.local_blks_hit;
+        bufusage.local_blks_read =
+            pgBufferUsage.local_blks_read - bufusage.local_blks_read;
+        bufusage.local_blks_written =
+            pgBufferUsage.local_blks_written - bufusage.local_blks_written;
+        bufusage.temp_blks_read =
+            pgBufferUsage.temp_blks_read - bufusage.temp_blks_read;
+        bufusage.temp_blks_written =
+            pgBufferUsage.temp_blks_written - bufusage.temp_blks_written;
+
+        pgss_store(queryString, INSTR_TIME_GET_DOUBLE(duration), rows,
+                   &bufusage);
+    }
+    else
+    {
+        if (prev_ProcessUtility)
+            prev_ProcessUtility(parsetree, queryString, params,
+                                isTopLevel, dest, completionTag);
+        else
+            standard_ProcessUtility(parsetree, queryString, params,
+                                    isTopLevel, dest, completionTag);
+    }
+}
+
+/*
+ * Calculate hash value for a key
+ */
+static uint32
+pgss_hash_fn(const void *key, Size keysize)
+{
+    const pgssHashKey *k = (const pgssHashKey *) key;
+
+    /* we don't bother to include encoding in the hash */
+    return hash_uint32((uint32) k->userid) ^
+        hash_uint32((uint32) k->dbid) ^
+        DatumGetUInt32(hash_any((const unsigned char *) k->query_ptr,
+                                k->query_len));
+}
+
+/*
+ * Compare two keys - zero means match
+ */
+static int
+pgss_match_fn(const void *key1, const void *key2, Size keysize)
+{
+    const pgssHashKey *k1 = (const pgssHashKey *) key1;
+    const pgssHashKey *k2 = (const pgssHashKey *) key2;
+
+    if (k1->userid == k2->userid &&
+        k1->dbid == k2->dbid &&
+        k1->encoding == k2->encoding &&
+        k1->query_len == k2->query_len &&
+        memcmp(k1->query_ptr, k2->query_ptr, k1->query_len) == 0)
+        return 0;
+    else
+        return 1;
+}
+
+/*
+ * Store some statistics for a statement.
+ */
+static void
+pgss_store(const char *query, double total_time, uint64 rows,
+           const BufferUsage *bufusage)
+{
+    pgssHashKey key;
+    double        usage;
+    pgssEntry  *entry;
+
+    Assert(query != NULL);
+
+    /* Safety check... */
+    if (!pgss || !pgss_hash)
+        return;
+
+    /* Set up key for hashtable search */
+    key.userid = GetUserId();
+    key.dbid = MyDatabaseId;
+    key.encoding = GetDatabaseEncoding();
+    key.query_len = strlen(query);
+    if (key.query_len >= pgss->query_size)
+        key.query_len = pg_encoding_mbcliplen(key.encoding,
+                                              query,
+                                              key.query_len,
+                                              pgss->query_size - 1);
+    key.query_ptr = query;
+
+    usage = USAGE_EXEC(duration);
+
+    /* Lookup the hash table entry with shared lock. */
+    LWLockAcquire(pgss->lock, LW_SHARED);
+
+    entry = (pgssEntry *) hash_search(pgss_hash, &key, HASH_FIND, NULL);
+    if (!entry)
+    {
+        /* Must acquire exclusive lock to add a new entry. */
+        LWLockRelease(pgss->lock);
+        LWLockAcquire(pgss->lock, LW_EXCLUSIVE);
+        entry = entry_alloc(&key);
+    }
+
+    /* Grab the spinlock while updating the counters. */
+    {
+        volatile pgssEntry *e = (volatile pgssEntry *) entry;
+
+        SpinLockAcquire(&e->mutex);
+        e->counters.calls += 1;
+        e->counters.total_time += total_time;
+        e->counters.rows += rows;
+        e->counters.shared_blks_hit += bufusage->shared_blks_hit;
+        e->counters.shared_blks_read += bufusage->shared_blks_read;
+        e->counters.shared_blks_written += bufusage->shared_blks_written;
+        e->counters.local_blks_hit += bufusage->local_blks_hit;
+        e->counters.local_blks_read += bufusage->local_blks_read;
+        e->counters.local_blks_written += bufusage->local_blks_written;
+        e->counters.temp_blks_read += bufusage->temp_blks_read;
+        e->counters.temp_blks_written += bufusage->temp_blks_written;
+        e->counters.usage += usage;
+        SpinLockRelease(&e->mutex);
+    }
+
+    LWLockRelease(pgss->lock);
+}
+
+/*
+ * Reset all statement statistics.
+ */
+Datum
+pg_stat_statements_reset(PG_FUNCTION_ARGS)
+{
+    if (!pgss || !pgss_hash)
+        ereport(ERROR,
+                (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+                 errmsg("pg_stat_statements must be loaded via shared_preload_libraries")));
+    entry_reset();
+    PG_RETURN_VOID();
+}
+
+#define PG_STAT_STATEMENTS_COLS        14
+
+/*
+ * Retrieve statement statistics.
+ */
+Datum
+pg_stat_statements(PG_FUNCTION_ARGS)
+{
+    ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
+    TupleDesc    tupdesc;
+    Tuplestorestate *tupstore;
+    MemoryContext per_query_ctx;
+    MemoryContext oldcontext;
+    Oid            userid = GetUserId();
+    bool        is_superuser = superuser();
+    HASH_SEQ_STATUS hash_seq;
+    pgssEntry  *entry;
+
+    if (!pgss || !pgss_hash)
+        ereport(ERROR,
+                (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+                 errmsg("pg_stat_statements must be loaded via shared_preload_libraries")));
+
+    /* check to see if caller supports us returning a tuplestore */
+    if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
+        ereport(ERROR,
+                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+                 errmsg("set-valued function called in context that cannot accept a set")));
+    if (!(rsinfo->allowedModes & SFRM_Materialize))
+        ereport(ERROR,
+                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+                 errmsg("materialize mode required, but it is not " \
+                        "allowed in this context")));
+
+    /* Build a tuple descriptor for our result type */
+    if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+        elog(ERROR, "return type must be a row type");
+
+    per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
+    oldcontext = MemoryContextSwitchTo(per_query_ctx);
+
+    tupstore = tuplestore_begin_heap(true, false, work_mem);
+    rsinfo->returnMode = SFRM_Materialize;
+    rsinfo->setResult = tupstore;
+    rsinfo->setDesc = tupdesc;
+
+    MemoryContextSwitchTo(oldcontext);
+
+    LWLockAcquire(pgss->lock, LW_SHARED);
+
+    hash_seq_init(&hash_seq, pgss_hash);
+    while ((entry = hash_seq_search(&hash_seq)) != NULL)
+    {
+        Datum        values[PG_STAT_STATEMENTS_COLS];
+        bool        nulls[PG_STAT_STATEMENTS_COLS];
+        int            i = 0;
+        Counters    tmp;
+
+        memset(values, 0, sizeof(values));
+        memset(nulls, 0, sizeof(nulls));
+
+        values[i++] = ObjectIdGetDatum(entry->key.userid);
+        values[i++] = ObjectIdGetDatum(entry->key.dbid);
+
+        if (is_superuser || entry->key.userid == userid)
+        {
+            char       *qstr;
+
+            qstr = (char *)
+                pg_do_encoding_conversion((unsigned char *) entry->query,
+                                          entry->key.query_len,
+                                          entry->key.encoding,
+                                          GetDatabaseEncoding());
+            values[i++] = CStringGetTextDatum(qstr);
+            if (qstr != entry->query)
+                pfree(qstr);
+        }
+        else
+            values[i++] = CStringGetTextDatum("<insufficient privilege>");
+
+        /* copy counters to a local variable to keep locking time short */
+        {
+            volatile pgssEntry *e = (volatile pgssEntry *) entry;
+
+            SpinLockAcquire(&e->mutex);
+            tmp = e->counters;
+            SpinLockRelease(&e->mutex);
+        }
+
+        values[i++] = Int64GetDatumFast(tmp.calls);
+        values[i++] = Float8GetDatumFast(tmp.total_time);
+        values[i++] = Int64GetDatumFast(tmp.rows);
+        values[i++] = Int64GetDatumFast(tmp.shared_blks_hit);
+        values[i++] = Int64GetDatumFast(tmp.shared_blks_read);
+        values[i++] = Int64GetDatumFast(tmp.shared_blks_written);
+        values[i++] = Int64GetDatumFast(tmp.local_blks_hit);
+        values[i++] = Int64GetDatumFast(tmp.local_blks_read);
+        values[i++] = Int64GetDatumFast(tmp.local_blks_written);
+        values[i++] = Int64GetDatumFast(tmp.temp_blks_read);
+        values[i++] = Int64GetDatumFast(tmp.temp_blks_written);
+
+        Assert(i == PG_STAT_STATEMENTS_COLS);
+
+        tuplestore_putvalues(tupstore, tupdesc, values, nulls);
+    }
+
+    LWLockRelease(pgss->lock);
+
+    /* clean up and return the tuplestore */
+    tuplestore_donestoring(tupstore);
+
+    return (Datum) 0;
+}
+
+/*
+ * Estimate shared memory space needed.
+ */
+static Size
+pgss_memsize(void)
+{
+    Size        size;
+    Size        entrysize;
+
+    size = MAXALIGN(sizeof(pgssSharedState));
+    entrysize = offsetof(pgssEntry, query) +pgstat_track_activity_query_size;
+    size = add_size(size, hash_estimate_size(pgss_max, entrysize));
+
+    return size;
+}
+
+/*
+ * Allocate a new hashtable entry.
+ * caller must hold an exclusive lock on pgss->lock
+ *
+ * Note: despite needing exclusive lock, it's not an error for the target
+ * entry to already exist.    This is because pgss_store releases and
+ * reacquires lock after failing to find a match; so someone else could
+ * have made the entry while we waited to get exclusive lock.
+ */
+static pgssEntry *
+entry_alloc(pgssHashKey *key)
+{
+    pgssEntry  *entry;
+    bool        found;
+
+    /* Caller must have clipped query properly */
+    Assert(key->query_len < pgss->query_size);
+
+    /* Make space if needed */
+    while (hash_get_num_entries(pgss_hash) >= pgss_max)
+        entry_dealloc();
+
+    /* Find or create an entry with desired hash code */
+    entry = (pgssEntry *) hash_search(pgss_hash, key, HASH_ENTER, &found);
+
+    if (!found)
+    {
+        /* New entry, initialize it */
+
+        /* dynahash tried to copy the key for us, but must fix query_ptr */
+        entry->key.query_ptr = entry->query;
+        /* reset the statistics */
+        memset(&entry->counters, 0, sizeof(Counters));
+        entry->counters.usage = USAGE_INIT;
+        /* re-initialize the mutex each time ... we assume no one using it */
+        SpinLockInit(&entry->mutex);
+        /* ... and don't forget the query text */
+        memcpy(entry->query, key->query_ptr, key->query_len);
+        entry->query[key->query_len] = '\0';
+    }
+
+    return entry;
+}
+
+/*
+ * qsort comparator for sorting into increasing usage order
+ */
+static int
+entry_cmp(const void *lhs, const void *rhs)
+{
+    double        l_usage = (*(const pgssEntry **) lhs)->counters.usage;
+    double        r_usage = (*(const pgssEntry **) rhs)->counters.usage;
+
+    if (l_usage < r_usage)
+        return -1;
+    else if (l_usage > r_usage)
+        return +1;
+    else
+        return 0;
+}
+
+/*
+ * Deallocate least used entries.
+ * Caller must hold an exclusive lock on pgss->lock.
+ */
+static void
+entry_dealloc(void)
+{
+    HASH_SEQ_STATUS hash_seq;
+    pgssEntry **entries;
+    pgssEntry  *entry;
+    int            nvictims;
+    int            i;
+
+    /* Sort entries by usage and deallocate USAGE_DEALLOC_PERCENT of them. */
+
+    entries = palloc(hash_get_num_entries(pgss_hash) * sizeof(pgssEntry *));
+
+    i = 0;
+    hash_seq_init(&hash_seq, pgss_hash);
+    while ((entry = hash_seq_search(&hash_seq)) != NULL)
+    {
+        entries[i++] = entry;
+        entry->counters.usage *= USAGE_DECREASE_FACTOR;
+    }
+
+    qsort(entries, i, sizeof(pgssEntry *), entry_cmp);
+    nvictims = Max(10, i * USAGE_DEALLOC_PERCENT / 100);
+    nvictims = Min(nvictims, i);
+
+    for (i = 0; i < nvictims; i++)
+    {
+        hash_search(pgss_hash, &entries[i]->key, HASH_REMOVE, NULL);
+    }
+
+    pfree(entries);
+}
+
+/*
+ * Release all entries.
+ */
+static void
+entry_reset(void)
+{
+    HASH_SEQ_STATUS hash_seq;
+    pgssEntry  *entry;
+
+    LWLockAcquire(pgss->lock, LW_EXCLUSIVE);
+
+    hash_seq_init(&hash_seq, pgss_hash);
+    while ((entry = hash_seq_search(&hash_seq)) != NULL)
+    {
+        hash_search(pgss_hash, &entry->key, HASH_REMOVE, NULL);
+    }
+
+    LWLockRelease(pgss->lock);
+}
diff --git a/src/extension/pg_stat_statements/pg_stat_statements.control b/src/extension/pg_stat_statements/pg_stat_statements.control
new file mode 100644
index 0000000..6f9a947
--- /dev/null
+++ b/src/extension/pg_stat_statements/pg_stat_statements.control
@@ -0,0 +1,5 @@
+# pg_stat_statements extension
+comment = 'track execution statistics of all SQL statements executed'
+default_version = '1.0'
+module_pathname = '$libdir/pg_stat_statements'
+relocatable = true
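
(Again, an illustrative sketch rather than part of the patch: pg_stat_statements still has to be preloaded, so CREATE EXTENSION alone isn't the whole story. The query assumes the view installed by pg_stat_statements--1.0.sql above.)

  -- with shared_preload_libraries = 'pg_stat_statements' in postgresql.conf
  -- and the server restarted:
  CREATE EXTENSION pg_stat_statements;
  SELECT query, calls, total_time, rows
    FROM pg_stat_statements
   ORDER BY total_time DESC
   LIMIT 5;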
diff --git a/src/extension/pgrowlocks/Makefile b/src/extension/pgrowlocks/Makefile
new file mode 100644
index 0000000..a4191fb
--- /dev/null
+++ b/src/extension/pgrowlocks/Makefile
@@ -0,0 +1,19 @@
+# src/extension/pgrowlocks/Makefile
+
+MODULE_big    = pgrowlocks
+OBJS        = pgrowlocks.o
+MODULEDIR   = extension
+
+EXTENSION = pgrowlocks
+DATA = pgrowlocks--1.0.sql pgrowlocks--unpackaged--1.0.sql
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/extension/pgrowlocks
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/src/extension/extension-global.mk
+endif
diff --git a/src/extension/pgrowlocks/pgrowlocks--1.0.sql b/src/extension/pgrowlocks/pgrowlocks--1.0.sql
new file mode 100644
index 0000000..0b60fdc
--- /dev/null
+++ b/src/extension/pgrowlocks/pgrowlocks--1.0.sql
@@ -0,0 +1,12 @@
+/* src/extension/pgrowlocks/pgrowlocks--1.0.sql */
+
+CREATE FUNCTION pgrowlocks(IN relname text,
+    OUT locked_row TID,        -- row TID
+    OUT lock_type TEXT,        -- lock type
+    OUT locker XID,        -- locking XID
+    OUT multi bool,        -- multi XID?
+    OUT xids xid[],        -- multi XIDs
+    OUT pids INTEGER[])        -- locker's process id
+RETURNS SETOF record
+AS 'MODULE_PATHNAME', 'pgrowlocks'
+LANGUAGE C STRICT;
diff --git a/src/extension/pgrowlocks/pgrowlocks--unpackaged--1.0.sql b/src/extension/pgrowlocks/pgrowlocks--unpackaged--1.0.sql
new file mode 100644
index 0000000..90d7088
--- /dev/null
+++ b/src/extension/pgrowlocks/pgrowlocks--unpackaged--1.0.sql
@@ -0,0 +1,3 @@
+/* src/extension/pgrowlocks/pgrowlocks--unpackaged--1.0.sql */
+
+ALTER EXTENSION pgrowlocks ADD function pgrowlocks(text);
diff --git a/src/extension/pgrowlocks/pgrowlocks.c b/src/extension/pgrowlocks/pgrowlocks.c
new file mode 100644
index 0000000..aa41491
--- /dev/null
+++ b/src/extension/pgrowlocks/pgrowlocks.c
@@ -0,0 +1,220 @@
+/*
+ * src/extension/pgrowlocks/pgrowlocks.c
+ *
+ * Copyright (c) 2005-2006    Tatsuo Ishii
+ *
+ * Permission to use, copy, modify, and distribute this software and
+ * its documentation for any purpose, without fee, and without a
+ * written agreement is hereby granted, provided that the above
+ * copyright notice and this paragraph and the following two
+ * paragraphs appear in all copies.
+ *
+ * IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT,
+ * INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
+ * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
+ * DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE.  THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS
+ * IS" BASIS, AND THE AUTHOR HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,
+ * SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
+ */
+
+#include "postgres.h"
+
+#include "access/heapam.h"
+#include "access/multixact.h"
+#include "access/relscan.h"
+#include "access/xact.h"
+#include "catalog/namespace.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "storage/bufmgr.h"
+#include "storage/procarray.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/tqual.h"
+
+
+PG_MODULE_MAGIC;
+
+PG_FUNCTION_INFO_V1(pgrowlocks);
+
+extern Datum pgrowlocks(PG_FUNCTION_ARGS);
+
+/* ----------
+ * pgrowlocks:
+ * returns tids of rows being locked
+ * ----------
+ */
+
+#define NCHARS 32
+
+typedef struct
+{
+    Relation    rel;
+    HeapScanDesc scan;
+    int            ncolumns;
+} MyData;
+
+Datum
+pgrowlocks(PG_FUNCTION_ARGS)
+{
+    FuncCallContext *funcctx;
+    HeapScanDesc scan;
+    HeapTuple    tuple;
+    TupleDesc    tupdesc;
+    AttInMetadata *attinmeta;
+    Datum        result;
+    MyData       *mydata;
+    Relation    rel;
+
+    if (SRF_IS_FIRSTCALL())
+    {
+        text       *relname;
+        RangeVar   *relrv;
+        MemoryContext oldcontext;
+        AclResult    aclresult;
+
+        funcctx = SRF_FIRSTCALL_INIT();
+        oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+        /* Build a tuple descriptor for our result type */
+        if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+            elog(ERROR, "return type must be a row type");
+
+        attinmeta = TupleDescGetAttInMetadata(tupdesc);
+        funcctx->attinmeta = attinmeta;
+
+        relname = PG_GETARG_TEXT_P(0);
+        relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
+        rel = heap_openrv(relrv, AccessShareLock);
+
+        /* check permissions: must have SELECT on table */
+        aclresult = pg_class_aclcheck(RelationGetRelid(rel), GetUserId(),
+                                      ACL_SELECT);
+        if (aclresult != ACLCHECK_OK)
+            aclcheck_error(aclresult, ACL_KIND_CLASS,
+                           RelationGetRelationName(rel));
+
+        scan = heap_beginscan(rel, SnapshotNow, 0, NULL);
+        mydata = palloc(sizeof(*mydata));
+        mydata->rel = rel;
+        mydata->scan = scan;
+        mydata->ncolumns = tupdesc->natts;
+        funcctx->user_fctx = mydata;
+
+        MemoryContextSwitchTo(oldcontext);
+    }
+
+    funcctx = SRF_PERCALL_SETUP();
+    attinmeta = funcctx->attinmeta;
+    mydata = (MyData *) funcctx->user_fctx;
+    scan = mydata->scan;
+
+    /* scan the relation */
+    while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+    {
+        /* must hold a buffer lock to call HeapTupleSatisfiesUpdate */
+        LockBuffer(scan->rs_cbuf, BUFFER_LOCK_SHARE);
+
+        if (HeapTupleSatisfiesUpdate(tuple->t_data,
+                                     GetCurrentCommandId(false),
+                                     scan->rs_cbuf) == HeapTupleBeingUpdated)
+        {
+
+            char      **values;
+            int            i;
+
+            values = (char **) palloc(mydata->ncolumns * sizeof(char *));
+
+            i = 0;
+            values[i++] = (char *) DirectFunctionCall1(tidout, PointerGetDatum(&tuple->t_self));
+
+            if (tuple->t_data->t_infomask & HEAP_XMAX_SHARED_LOCK)
+                values[i++] = pstrdup("Shared");
+            else
+                values[i++] = pstrdup("Exclusive");
+            values[i] = palloc(NCHARS * sizeof(char));
+            snprintf(values[i++], NCHARS, "%d", HeapTupleHeaderGetXmax(tuple->t_data));
+            if (tuple->t_data->t_infomask & HEAP_XMAX_IS_MULTI)
+            {
+                TransactionId *xids;
+                int            nxids;
+                int            j;
+                int            isValidXid = 0;        /* any valid xid ever exists? */
+
+                values[i++] = pstrdup("true");
+                nxids = GetMultiXactIdMembers(HeapTupleHeaderGetXmax(tuple->t_data), &xids);
+                if (nxids == -1)
+                {
+                    elog(ERROR, "GetMultiXactIdMembers returns error");
+                }
+
+                values[i] = palloc(NCHARS * nxids);
+                values[i + 1] = palloc(NCHARS * nxids);
+                strcpy(values[i], "{");
+                strcpy(values[i + 1], "{");
+
+                for (j = 0; j < nxids; j++)
+                {
+                    char        buf[NCHARS];
+
+                    if (TransactionIdIsInProgress(xids[j]))
+                    {
+                        if (isValidXid)
+                        {
+                            strcat(values[i], ",");
+                            strcat(values[i + 1], ",");
+                        }
+                        snprintf(buf, NCHARS, "%d", xids[j]);
+                        strcat(values[i], buf);
+                        snprintf(buf, NCHARS, "%d", BackendXidGetPid(xids[j]));
+                        strcat(values[i + 1], buf);
+
+                        isValidXid = 1;
+                    }
+                }
+
+                strcat(values[i], "}");
+                strcat(values[i + 1], "}");
+                i++;
+            }
+            else
+            {
+                values[i++] = pstrdup("false");
+                values[i] = palloc(NCHARS * sizeof(char));
+                snprintf(values[i++], NCHARS, "{%d}", HeapTupleHeaderGetXmax(tuple->t_data));
+
+                values[i] = palloc(NCHARS * sizeof(char));
+                snprintf(values[i++], NCHARS, "{%d}", BackendXidGetPid(HeapTupleHeaderGetXmax(tuple->t_data)));
+            }
+
+            LockBuffer(scan->rs_cbuf, BUFFER_LOCK_UNLOCK);
+
+            /* build a tuple */
+            tuple = BuildTupleFromCStrings(attinmeta, values);
+
+            /* make the tuple into a datum */
+            result = HeapTupleGetDatum(tuple);
+
+            /* Clean up */
+            for (i = 0; i < mydata->ncolumns; i++)
+                pfree(values[i]);
+            pfree(values);
+
+            SRF_RETURN_NEXT(funcctx, result);
+        }
+        else
+        {
+            LockBuffer(scan->rs_cbuf, BUFFER_LOCK_UNLOCK);
+        }
+    }
+
+    heap_endscan(scan);
+    heap_close(mydata->rel, AccessShareLock);
+
+    SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/extension/pgrowlocks/pgrowlocks.control b/src/extension/pgrowlocks/pgrowlocks.control
new file mode 100644
index 0000000..a6ba164
--- /dev/null
+++ b/src/extension/pgrowlocks/pgrowlocks.control
@@ -0,0 +1,5 @@
+# pgrowlocks extension
+comment = 'show row-level locking information'
+default_version = '1.0'
+module_pathname = '$libdir/pgrowlocks'
+relocatable = true
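
(Illustrative only; 'accounts' is a placeholder relation name.)

  CREATE EXTENSION pgrowlocks;
  -- TIDs, lock types and locker XIDs/PIDs of currently locked rows
  SELECT * FROM pgrowlocks('accounts');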
diff --git a/src/extension/pgstattuple/Makefile b/src/extension/pgstattuple/Makefile
new file mode 100644
index 0000000..296ca57
--- /dev/null
+++ b/src/extension/pgstattuple/Makefile
@@ -0,0 +1,19 @@
+# src/extension/pgstattuple/Makefile
+
+MODULE_big    = pgstattuple
+OBJS        = pgstattuple.o pgstatindex.o
+MODULEDIR   = extension
+
+EXTENSION = pgstattuple
+DATA = pgstattuple--1.0.sql pgstattuple--unpackaged--1.0.sql
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/extension/pgstattuple
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/src/extension/extension-global.mk
+endif
diff --git a/src/extension/pgstattuple/pgstatindex.c b/src/extension/pgstattuple/pgstatindex.c
new file mode 100644
index 0000000..77ca208
--- /dev/null
+++ b/src/extension/pgstattuple/pgstatindex.c
@@ -0,0 +1,282 @@
+/*
+ * src/extension/pgstattuple/pgstatindex.c
+ *
+ *
+ * pgstatindex
+ *
+ * Copyright (c) 2006 Satoshi Nagayasu <nagayasus@nttdata.co.jp>
+ *
+ * Permission to use, copy, modify, and distribute this software and
+ * its documentation for any purpose, without fee, and without a
+ * written agreement is hereby granted, provided that the above
+ * copyright notice and this paragraph and the following two
+ * paragraphs appear in all copies.
+ *
+ * IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT,
+ * INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
+ * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
+ * DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE.  THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS
+ * IS" BASIS, AND THE AUTHOR HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,
+ * SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
+ */
+
+#include "postgres.h"
+
+#include "access/heapam.h"
+#include "access/nbtree.h"
+#include "catalog/namespace.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "storage/bufmgr.h"
+#include "utils/builtins.h"
+
+
+extern Datum pgstatindex(PG_FUNCTION_ARGS);
+extern Datum pg_relpages(PG_FUNCTION_ARGS);
+
+PG_FUNCTION_INFO_V1(pgstatindex);
+PG_FUNCTION_INFO_V1(pg_relpages);
+
+#define IS_INDEX(r) ((r)->rd_rel->relkind == RELKIND_INDEX)
+#define IS_BTREE(r) ((r)->rd_rel->relam == BTREE_AM_OID)
+
+#define CHECK_PAGE_OFFSET_RANGE(pg, offnum) { \
+        if ( !(FirstOffsetNumber <= (offnum) && \
+                        (offnum) <= PageGetMaxOffsetNumber(pg)) ) \
+             elog(ERROR, "page offset number out of range"); }
+
+/* note: BlockNumber is unsigned, hence can't be negative */
+#define CHECK_RELATION_BLOCK_RANGE(rel, blkno) { \
+        if ( RelationGetNumberOfBlocks(rel) <= (BlockNumber) (blkno) ) \
+             elog(ERROR, "block number out of range"); }
+
+/* ------------------------------------------------
+ * A structure for a whole btree index statistics
+ * used by pgstatindex().
+ * ------------------------------------------------
+ */
+typedef struct BTIndexStat
+{
+    uint32        version;
+    uint32        level;
+    BlockNumber root_blkno;
+
+    uint64        root_pages;
+    uint64        internal_pages;
+    uint64        leaf_pages;
+    uint64        empty_pages;
+    uint64        deleted_pages;
+
+    uint64        max_avail;
+    uint64        free_space;
+
+    uint64        fragments;
+} BTIndexStat;
+
+/* ------------------------------------------------------
+ * pgstatindex()
+ *
+ * Usage: SELECT * FROM pgstatindex('t1_pkey');
+ * ------------------------------------------------------
+ */
+Datum
+pgstatindex(PG_FUNCTION_ARGS)
+{
+    text       *relname = PG_GETARG_TEXT_P(0);
+    Relation    rel;
+    RangeVar   *relrv;
+    Datum        result;
+    BlockNumber nblocks;
+    BlockNumber blkno;
+    BTIndexStat indexStat;
+
+    if (!superuser())
+        ereport(ERROR,
+                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+                 (errmsg("must be superuser to use pgstattuple functions"))));
+
+    relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
+    rel = relation_openrv(relrv, AccessShareLock);
+
+    if (!IS_INDEX(rel) || !IS_BTREE(rel))
+        elog(ERROR, "relation \"%s\" is not a btree index",
+             RelationGetRelationName(rel));
+
+    /*
+     * Reject attempts to read non-local temporary relations; we would be
+     * likely to get wrong data since we have no visibility into the owning
+     * session's local buffers.
+     */
+    if (RELATION_IS_OTHER_TEMP(rel))
+        ereport(ERROR,
+                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+                 errmsg("cannot access temporary tables of other sessions")));
+
+    /*
+     * Read metapage
+     */
+    {
+        Buffer        buffer = ReadBuffer(rel, 0);
+        Page        page = BufferGetPage(buffer);
+        BTMetaPageData *metad = BTPageGetMeta(page);
+
+        indexStat.version = metad->btm_version;
+        indexStat.level = metad->btm_level;
+        indexStat.root_blkno = metad->btm_root;
+
+        ReleaseBuffer(buffer);
+    }
+
+    /* -- init counters -- */
+    indexStat.root_pages = 0;
+    indexStat.internal_pages = 0;
+    indexStat.leaf_pages = 0;
+    indexStat.empty_pages = 0;
+    indexStat.deleted_pages = 0;
+
+    indexStat.max_avail = 0;
+    indexStat.free_space = 0;
+
+    indexStat.fragments = 0;
+
+    /*
+     * Scan all blocks except the metapage
+     */
+    nblocks = RelationGetNumberOfBlocks(rel);
+
+    for (blkno = 1; blkno < nblocks; blkno++)
+    {
+        Buffer        buffer;
+        Page        page;
+        BTPageOpaque opaque;
+
+        /* Read and lock buffer */
+        buffer = ReadBuffer(rel, blkno);
+        LockBuffer(buffer, BUFFER_LOCK_SHARE);
+
+        page = BufferGetPage(buffer);
+        opaque = (BTPageOpaque) PageGetSpecialPointer(page);
+
+        /* Determine page type, and update totals */
+
+        if (P_ISLEAF(opaque))
+        {
+            int            max_avail;
+
+            max_avail = BLCKSZ - (BLCKSZ - ((PageHeader) page)->pd_special + SizeOfPageHeaderData);
+            indexStat.max_avail += max_avail;
+            indexStat.free_space += PageGetFreeSpace(page);
+
+            indexStat.leaf_pages++;
+
+            /*
+             * If the next leaf is on an earlier block, that indicates
+             * fragmentation.
+             */
+            if (opaque->btpo_next != P_NONE && opaque->btpo_next < blkno)
+                indexStat.fragments++;
+        }
+        else if (P_ISDELETED(opaque))
+            indexStat.deleted_pages++;
+        else if (P_IGNORE(opaque))
+            indexStat.empty_pages++;
+        else if (P_ISROOT(opaque))
+            indexStat.root_pages++;
+        else
+            indexStat.internal_pages++;
+
+        /* Unlock and release buffer */
+        LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+        ReleaseBuffer(buffer);
+    }
+
+    relation_close(rel, AccessShareLock);
+
+    /*----------------------------
+     * Build a result tuple
+     *----------------------------
+     */
+    {
+        TupleDesc    tupleDesc;
+        int            j;
+        char       *values[10];
+        HeapTuple    tuple;
+
+        /* Build a tuple descriptor for our result type */
+        if (get_call_result_type(fcinfo, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE)
+            elog(ERROR, "return type must be a row type");
+
+        j = 0;
+        values[j] = palloc(32);
+        snprintf(values[j++], 32, "%d", indexStat.version);
+        values[j] = palloc(32);
+        snprintf(values[j++], 32, "%d", indexStat.level);
+        values[j] = palloc(32);
+        snprintf(values[j++], 32, INT64_FORMAT,
+                 (indexStat.root_pages +
+                  indexStat.leaf_pages +
+                  indexStat.internal_pages +
+                  indexStat.deleted_pages +
+                  indexStat.empty_pages) * BLCKSZ);
+        values[j] = palloc(32);
+        snprintf(values[j++], 32, "%u", indexStat.root_blkno);
+        values[j] = palloc(32);
+        snprintf(values[j++], 32, INT64_FORMAT, indexStat.internal_pages);
+        values[j] = palloc(32);
+        snprintf(values[j++], 32, INT64_FORMAT, indexStat.leaf_pages);
+        values[j] = palloc(32);
+        snprintf(values[j++], 32, INT64_FORMAT, indexStat.empty_pages);
+        values[j] = palloc(32);
+        snprintf(values[j++], 32, INT64_FORMAT, indexStat.deleted_pages);
+        values[j] = palloc(32);
+        snprintf(values[j++], 32, "%.2f", 100.0 - (double) indexStat.free_space / (double) indexStat.max_avail * 100.0);
+        values[j] = palloc(32);
+        snprintf(values[j++], 32, "%.2f", (double) indexStat.fragments / (double) indexStat.leaf_pages * 100.0);
+
+        tuple = BuildTupleFromCStrings(TupleDescGetAttInMetadata(tupleDesc),
+                                       values);
+
+        result = HeapTupleGetDatum(tuple);
+    }
+
+    PG_RETURN_DATUM(result);
+}
+
+/* --------------------------------------------------------
+ * pg_relpages()
+ *
+ * Get the number of pages of the table/index.
+ *
+ * Usage: SELECT pg_relpages('t1');
+ *          SELECT pg_relpages('t1_pkey');
+ * --------------------------------------------------------
+ */
+Datum
+pg_relpages(PG_FUNCTION_ARGS)
+{
+    text       *relname = PG_GETARG_TEXT_P(0);
+    int64        relpages;
+    Relation    rel;
+    RangeVar   *relrv;
+
+    if (!superuser())
+        ereport(ERROR,
+                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+                 (errmsg("must be superuser to use pgstattuple functions"))));
+
+    relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
+    rel = relation_openrv(relrv, AccessShareLock);
+
+    /* note: this will work OK on non-local temp tables */
+
+    relpages = RelationGetNumberOfBlocks(rel);
+
+    relation_close(rel, AccessShareLock);
+
+    PG_RETURN_INT64(relpages);
+}
diff --git a/src/extension/pgstattuple/pgstattuple--1.0.sql b/src/extension/pgstattuple/pgstattuple--1.0.sql
new file mode 100644
index 0000000..7b78905
--- /dev/null
+++ b/src/extension/pgstattuple/pgstattuple--1.0.sql
@@ -0,0 +1,46 @@
+/* src/extension/pgstattuple/pgstattuple--1.0.sql */
+
+CREATE FUNCTION pgstattuple(IN relname text,
+    OUT table_len BIGINT,        -- physical table length in bytes
+    OUT tuple_count BIGINT,        -- number of live tuples
+    OUT tuple_len BIGINT,        -- total tuples length in bytes
+    OUT tuple_percent FLOAT8,        -- live tuples in %
+    OUT dead_tuple_count BIGINT,    -- number of dead tuples
+    OUT dead_tuple_len BIGINT,        -- total dead tuples length in bytes
+    OUT dead_tuple_percent FLOAT8,    -- dead tuples in %
+    OUT free_space BIGINT,        -- free space in bytes
+    OUT free_percent FLOAT8)        -- free space in %
+AS 'MODULE_PATHNAME', 'pgstattuple'
+LANGUAGE C STRICT;
+
+CREATE FUNCTION pgstattuple(IN reloid oid,
+    OUT table_len BIGINT,        -- physical table length in bytes
+    OUT tuple_count BIGINT,        -- number of live tuples
+    OUT tuple_len BIGINT,        -- total tuples length in bytes
+    OUT tuple_percent FLOAT8,        -- live tuples in %
+    OUT dead_tuple_count BIGINT,    -- number of dead tuples
+    OUT dead_tuple_len BIGINT,        -- total dead tuples length in bytes
+    OUT dead_tuple_percent FLOAT8,    -- dead tuples in %
+    OUT free_space BIGINT,        -- free space in bytes
+    OUT free_percent FLOAT8)        -- free space in %
+AS 'MODULE_PATHNAME', 'pgstattuplebyid'
+LANGUAGE C STRICT;
+
+CREATE FUNCTION pgstatindex(IN relname text,
+    OUT version INT,
+    OUT tree_level INT,
+    OUT index_size BIGINT,
+    OUT root_block_no BIGINT,
+    OUT internal_pages BIGINT,
+    OUT leaf_pages BIGINT,
+    OUT empty_pages BIGINT,
+    OUT deleted_pages BIGINT,
+    OUT avg_leaf_density FLOAT8,
+    OUT leaf_fragmentation FLOAT8)
+AS 'MODULE_PATHNAME', 'pgstatindex'
+LANGUAGE C STRICT;
+
+CREATE FUNCTION pg_relpages(IN relname text)
+RETURNS BIGINT
+AS 'MODULE_PATHNAME', 'pg_relpages'
+LANGUAGE C STRICT;
diff --git a/src/extension/pgstattuple/pgstattuple--unpackaged--1.0.sql b/src/extension/pgstattuple/pgstattuple--unpackaged--1.0.sql
new file mode 100644
index 0000000..6a1474a
--- /dev/null
+++ b/src/extension/pgstattuple/pgstattuple--unpackaged--1.0.sql
@@ -0,0 +1,6 @@
+/* src/extension/pgstattuple/pgstattuple--unpackaged--1.0.sql */
+
+ALTER EXTENSION pgstattuple ADD function pgstattuple(text);
+ALTER EXTENSION pgstattuple ADD function pgstattuple(oid);
+ALTER EXTENSION pgstattuple ADD function pgstatindex(text);
+ALTER EXTENSION pgstattuple ADD function pg_relpages(text);
diff --git a/src/extension/pgstattuple/pgstattuple.c b/src/extension/pgstattuple/pgstattuple.c
new file mode 100644
index 0000000..76357ee
--- /dev/null
+++ b/src/extension/pgstattuple/pgstattuple.c
@@ -0,0 +1,518 @@
+/*
+ * src/extension/pgstattuple/pgstattuple.c
+ *
+ * Copyright (c) 2001,2002    Tatsuo Ishii
+ *
+ * Permission to use, copy, modify, and distribute this software and
+ * its documentation for any purpose, without fee, and without a
+ * written agreement is hereby granted, provided that the above
+ * copyright notice and this paragraph and the following two
+ * paragraphs appear in all copies.
+ *
+ * IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT,
+ * INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
+ * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
+ * DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE.  THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS
+ * IS" BASIS, AND THE AUTHOR HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,
+ * SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
+ */
+
+#include "postgres.h"
+
+#include "access/gist_private.h"
+#include "access/hash.h"
+#include "access/nbtree.h"
+#include "access/relscan.h"
+#include "catalog/namespace.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "storage/bufmgr.h"
+#include "storage/lmgr.h"
+#include "utils/builtins.h"
+#include "utils/tqual.h"
+
+
+PG_MODULE_MAGIC;
+
+PG_FUNCTION_INFO_V1(pgstattuple);
+PG_FUNCTION_INFO_V1(pgstattuplebyid);
+
+extern Datum pgstattuple(PG_FUNCTION_ARGS);
+extern Datum pgstattuplebyid(PG_FUNCTION_ARGS);
+
+/*
+ * struct pgstattuple_type
+ *
+ * tuple_percent, dead_tuple_percent and free_percent are computable,
+ * so not defined here.
+ */
+typedef struct pgstattuple_type
+{
+    uint64        table_len;
+    uint64        tuple_count;
+    uint64        tuple_len;
+    uint64        dead_tuple_count;
+    uint64        dead_tuple_len;
+    uint64        free_space;        /* free/reusable space in bytes */
+} pgstattuple_type;
+
+typedef void (*pgstat_page) (pgstattuple_type *, Relation, BlockNumber);
+
+static Datum build_pgstattuple_type(pgstattuple_type *stat,
+                       FunctionCallInfo fcinfo);
+static Datum pgstat_relation(Relation rel, FunctionCallInfo fcinfo);
+static Datum pgstat_heap(Relation rel, FunctionCallInfo fcinfo);
+static void pgstat_btree_page(pgstattuple_type *stat,
+                  Relation rel, BlockNumber blkno);
+static void pgstat_hash_page(pgstattuple_type *stat,
+                 Relation rel, BlockNumber blkno);
+static void pgstat_gist_page(pgstattuple_type *stat,
+                 Relation rel, BlockNumber blkno);
+static Datum pgstat_index(Relation rel, BlockNumber start,
+             pgstat_page pagefn, FunctionCallInfo fcinfo);
+static void pgstat_index_page(pgstattuple_type *stat, Page page,
+                  OffsetNumber minoff, OffsetNumber maxoff);
+
+/*
+ * build_pgstattuple_type -- build a pgstattuple_type tuple
+ */
+static Datum
+build_pgstattuple_type(pgstattuple_type *stat, FunctionCallInfo fcinfo)
+{
+#define NCOLUMNS    9
+#define NCHARS        32
+
+    HeapTuple    tuple;
+    char       *values[NCOLUMNS];
+    char        values_buf[NCOLUMNS][NCHARS];
+    int            i;
+    double        tuple_percent;
+    double        dead_tuple_percent;
+    double        free_percent;    /* free/reusable space in % */
+    TupleDesc    tupdesc;
+    AttInMetadata *attinmeta;
+
+    /* Build a tuple descriptor for our result type */
+    if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+        elog(ERROR, "return type must be a row type");
+
+    /*
+     * Generate attribute metadata needed later to produce tuples from raw C
+     * strings
+     */
+    attinmeta = TupleDescGetAttInMetadata(tupdesc);
+
+    if (stat->table_len == 0)
+    {
+        tuple_percent = 0.0;
+        dead_tuple_percent = 0.0;
+        free_percent = 0.0;
+    }
+    else
+    {
+        tuple_percent = 100.0 * stat->tuple_len / stat->table_len;
+        dead_tuple_percent = 100.0 * stat->dead_tuple_len / stat->table_len;
+        free_percent = 100.0 * stat->free_space / stat->table_len;
+    }
+
+    /*
+     * Prepare a values array for constructing the tuple. This should be an
+     * array of C strings which will be processed later by the appropriate
+     * "in" functions.
+     */
+    for (i = 0; i < NCOLUMNS; i++)
+        values[i] = values_buf[i];
+    i = 0;
+    snprintf(values[i++], NCHARS, INT64_FORMAT, stat->table_len);
+    snprintf(values[i++], NCHARS, INT64_FORMAT, stat->tuple_count);
+    snprintf(values[i++], NCHARS, INT64_FORMAT, stat->tuple_len);
+    snprintf(values[i++], NCHARS, "%.2f", tuple_percent);
+    snprintf(values[i++], NCHARS, INT64_FORMAT, stat->dead_tuple_count);
+    snprintf(values[i++], NCHARS, INT64_FORMAT, stat->dead_tuple_len);
+    snprintf(values[i++], NCHARS, "%.2f", dead_tuple_percent);
+    snprintf(values[i++], NCHARS, INT64_FORMAT, stat->free_space);
+    snprintf(values[i++], NCHARS, "%.2f", free_percent);
+
+    /* build a tuple */
+    tuple = BuildTupleFromCStrings(attinmeta, values);
+
+    /* make the tuple into a datum */
+    return HeapTupleGetDatum(tuple);
+}
+
+/* ----------
+ * pgstattuple:
+ * returns live/dead tuples info
+ *
+ * C FUNCTION definition
+ * pgstattuple(text) returns pgstattuple_type
+ * see pgstattuple.sql for pgstattuple_type
+ * ----------
+ */
+
+Datum
+pgstattuple(PG_FUNCTION_ARGS)
+{
+    text       *relname = PG_GETARG_TEXT_P(0);
+    RangeVar   *relrv;
+    Relation    rel;
+
+    if (!superuser())
+        ereport(ERROR,
+                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+                 (errmsg("must be superuser to use pgstattuple functions"))));
+
+    /* open relation */
+    relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
+    rel = relation_openrv(relrv, AccessShareLock);
+
+    PG_RETURN_DATUM(pgstat_relation(rel, fcinfo));
+}
+
+Datum
+pgstattuplebyid(PG_FUNCTION_ARGS)
+{
+    Oid            relid = PG_GETARG_OID(0);
+    Relation    rel;
+
+    if (!superuser())
+        ereport(ERROR,
+                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+                 (errmsg("must be superuser to use pgstattuple functions"))));
+
+    /* open relation */
+    rel = relation_open(relid, AccessShareLock);
+
+    PG_RETURN_DATUM(pgstat_relation(rel, fcinfo));
+}
+
+/*
+ * pgstat_relation
+ */
+static Datum
+pgstat_relation(Relation rel, FunctionCallInfo fcinfo)
+{
+    const char *err;
+
+    /*
+     * Reject attempts to read non-local temporary relations; we would be
+     * likely to get wrong data since we have no visibility into the owning
+     * session's local buffers.
+     */
+    if (RELATION_IS_OTHER_TEMP(rel))
+        ereport(ERROR,
+                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+                 errmsg("cannot access temporary tables of other sessions")));
+
+    switch (rel->rd_rel->relkind)
+    {
+        case RELKIND_RELATION:
+        case RELKIND_TOASTVALUE:
+        case RELKIND_UNCATALOGED:
+        case RELKIND_SEQUENCE:
+            return pgstat_heap(rel, fcinfo);
+        case RELKIND_INDEX:
+            switch (rel->rd_rel->relam)
+            {
+                case BTREE_AM_OID:
+                    return pgstat_index(rel, BTREE_METAPAGE + 1,
+                                        pgstat_btree_page, fcinfo);
+                case HASH_AM_OID:
+                    return pgstat_index(rel, HASH_METAPAGE + 1,
+                                        pgstat_hash_page, fcinfo);
+                case GIST_AM_OID:
+                    return pgstat_index(rel, GIST_ROOT_BLKNO + 1,
+                                        pgstat_gist_page, fcinfo);
+                case GIN_AM_OID:
+                    err = "gin index";
+                    break;
+                default:
+                    err = "unknown index";
+                    break;
+            }
+            break;
+        case RELKIND_VIEW:
+            err = "view";
+            break;
+        case RELKIND_COMPOSITE_TYPE:
+            err = "composite type";
+            break;
+        case RELKIND_FOREIGN_TABLE:
+            err = "foreign table";
+            break;
+        default:
+            err = "unknown";
+            break;
+    }
+
+    ereport(ERROR,
+            (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+             errmsg("\"%s\" (%s) is not supported",
+                    RelationGetRelationName(rel), err)));
+    return 0;                    /* should not happen */
+}
+
+/*
+ * pgstat_heap -- returns live/dead tuples info in a heap
+ */
+static Datum
+pgstat_heap(Relation rel, FunctionCallInfo fcinfo)
+{
+    HeapScanDesc scan;
+    HeapTuple    tuple;
+    BlockNumber nblocks;
+    BlockNumber block = 0;        /* next block to count free space in */
+    BlockNumber tupblock;
+    Buffer        buffer;
+    pgstattuple_type stat = {0};
+
+    /* Disable syncscan because we assume we scan from block zero upwards */
+    scan = heap_beginscan_strat(rel, SnapshotAny, 0, NULL, true, false);
+
+    nblocks = scan->rs_nblocks; /* # blocks to be scanned */
+
+    /* scan the relation */
+    while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+    {
+        CHECK_FOR_INTERRUPTS();
+
+        /* must hold a buffer lock to call HeapTupleSatisfiesVisibility */
+        LockBuffer(scan->rs_cbuf, BUFFER_LOCK_SHARE);
+
+        if (HeapTupleSatisfiesVisibility(tuple, SnapshotNow, scan->rs_cbuf))
+        {
+            stat.tuple_len += tuple->t_len;
+            stat.tuple_count++;
+        }
+        else
+        {
+            stat.dead_tuple_len += tuple->t_len;
+            stat.dead_tuple_count++;
+        }
+
+        LockBuffer(scan->rs_cbuf, BUFFER_LOCK_UNLOCK);
+
+        /*
+         * To avoid physically reading the table twice, try to do the
+         * free-space scan in parallel with the heap scan.    However,
+         * heap_getnext may find no tuples on a given page, so we cannot
+         * simply examine the pages returned by the heap scan.
+         */
+        tupblock = BlockIdGetBlockNumber(&tuple->t_self.ip_blkid);
+
+        while (block <= tupblock)
+        {
+            CHECK_FOR_INTERRUPTS();
+
+            buffer = ReadBuffer(rel, block);
+            LockBuffer(buffer, BUFFER_LOCK_SHARE);
+            stat.free_space += PageGetHeapFreeSpace((Page) BufferGetPage(buffer));
+            UnlockReleaseBuffer(buffer);
+            block++;
+        }
+    }
+    heap_endscan(scan);
+
+    while (block < nblocks)
+    {
+        CHECK_FOR_INTERRUPTS();
+
+        buffer = ReadBuffer(rel, block);
+        LockBuffer(buffer, BUFFER_LOCK_SHARE);
+        stat.free_space += PageGetHeapFreeSpace((Page) BufferGetPage(buffer));
+        UnlockReleaseBuffer(buffer);
+        block++;
+    }
+
+    relation_close(rel, AccessShareLock);
+
+    stat.table_len = (uint64) nblocks *BLCKSZ;
+
+    return build_pgstattuple_type(&stat, fcinfo);
+}
+
+/*
+ * pgstat_btree_page -- check tuples in a btree page
+ */
+static void
+pgstat_btree_page(pgstattuple_type *stat, Relation rel, BlockNumber blkno)
+{
+    Buffer        buf;
+    Page        page;
+
+    buf = ReadBuffer(rel, blkno);
+    LockBuffer(buf, BT_READ);
+    page = BufferGetPage(buf);
+
+    /* Page is valid, see what to do with it */
+    if (PageIsNew(page))
+    {
+        /* fully empty page */
+        stat->free_space += BLCKSZ;
+    }
+    else
+    {
+        BTPageOpaque opaque;
+
+        opaque = (BTPageOpaque) PageGetSpecialPointer(page);
+        if (opaque->btpo_flags & (BTP_DELETED | BTP_HALF_DEAD))
+        {
+            /* recyclable page */
+            stat->free_space += BLCKSZ;
+        }
+        else if (P_ISLEAF(opaque))
+        {
+            pgstat_index_page(stat, page, P_FIRSTDATAKEY(opaque),
+                              PageGetMaxOffsetNumber(page));
+        }
+        else
+        {
+            /* root or node */
+        }
+    }
+
+    _bt_relbuf(rel, buf);
+}
+
+/*
+ * pgstat_hash_page -- check tuples in a hash page
+ */
+static void
+pgstat_hash_page(pgstattuple_type *stat, Relation rel, BlockNumber blkno)
+{
+    Buffer        buf;
+    Page        page;
+
+    _hash_getlock(rel, blkno, HASH_SHARE);
+    buf = _hash_getbuf(rel, blkno, HASH_READ, 0);
+    page = BufferGetPage(buf);
+
+    if (PageGetSpecialSize(page) == MAXALIGN(sizeof(HashPageOpaqueData)))
+    {
+        HashPageOpaque opaque;
+
+        opaque = (HashPageOpaque) PageGetSpecialPointer(page);
+        switch (opaque->hasho_flag)
+        {
+            case LH_UNUSED_PAGE:
+                stat->free_space += BLCKSZ;
+                break;
+            case LH_BUCKET_PAGE:
+            case LH_OVERFLOW_PAGE:
+                pgstat_index_page(stat, page, FirstOffsetNumber,
+                                  PageGetMaxOffsetNumber(page));
+                break;
+            case LH_BITMAP_PAGE:
+            case LH_META_PAGE:
+            default:
+                break;
+        }
+    }
+    else
+    {
+        /* maybe corrupted */
+    }
+
+    _hash_relbuf(rel, buf);
+    _hash_droplock(rel, blkno, HASH_SHARE);
+}
+
+/*
+ * pgstat_gist_page -- check tuples in a gist page
+ */
+static void
+pgstat_gist_page(pgstattuple_type *stat, Relation rel, BlockNumber blkno)
+{
+    Buffer        buf;
+    Page        page;
+
+    buf = ReadBuffer(rel, blkno);
+    LockBuffer(buf, GIST_SHARE);
+    gistcheckpage(rel, buf);
+    page = BufferGetPage(buf);
+
+    if (GistPageIsLeaf(page))
+    {
+        pgstat_index_page(stat, page, FirstOffsetNumber,
+                          PageGetMaxOffsetNumber(page));
+    }
+    else
+    {
+        /* root or node */
+    }
+
+    UnlockReleaseBuffer(buf);
+}
+
+/*
+ * pgstat_index -- returns live/dead tuples info in a generic index
+ */
+static Datum
+pgstat_index(Relation rel, BlockNumber start, pgstat_page pagefn,
+             FunctionCallInfo fcinfo)
+{
+    BlockNumber nblocks;
+    BlockNumber blkno;
+    pgstattuple_type stat = {0};
+
+    blkno = start;
+    for (;;)
+    {
+        /* Get the current relation length */
+        LockRelationForExtension(rel, ExclusiveLock);
+        nblocks = RelationGetNumberOfBlocks(rel);
+        UnlockRelationForExtension(rel, ExclusiveLock);
+
+        /* Quit if we've scanned the whole relation */
+        if (blkno >= nblocks)
+        {
+            stat.table_len = (uint64) nblocks *BLCKSZ;
+
+            break;
+        }
+
+        for (; blkno < nblocks; blkno++)
+        {
+            CHECK_FOR_INTERRUPTS();
+
+            pagefn(&stat, rel, blkno);
+        }
+    }
+
+    relation_close(rel, AccessShareLock);
+
+    return build_pgstattuple_type(&stat, fcinfo);
+}
+
+/*
+ * pgstat_index_page -- for generic index page
+ */
+static void
+pgstat_index_page(pgstattuple_type *stat, Page page,
+                  OffsetNumber minoff, OffsetNumber maxoff)
+{
+    OffsetNumber i;
+
+    stat->free_space += PageGetFreeSpace(page);
+
+    for (i = minoff; i <= maxoff; i = OffsetNumberNext(i))
+    {
+        ItemId        itemid = PageGetItemId(page, i);
+
+        if (ItemIdIsDead(itemid))
+        {
+            stat->dead_tuple_count++;
+            stat->dead_tuple_len += ItemIdGetLength(itemid);
+        }
+        else
+        {
+            stat->tuple_count++;
+            stat->tuple_len += ItemIdGetLength(itemid);
+        }
+    }
+}
diff --git a/src/extension/pgstattuple/pgstattuple.control b/src/extension/pgstattuple/pgstattuple.control
new file mode 100644
index 0000000..7b5129b
--- /dev/null
+++ b/src/extension/pgstattuple/pgstattuple.control
@@ -0,0 +1,5 @@
+# pgstattuple extension
+comment = 'show tuple-level statistics'
+default_version = '1.0'
+module_pathname = '$libdir/pgstattuple'
+relocatable = true
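
A minimal usage sketch of the functions defined above (object names here are
only examples; all of these calls require superuser):

    CREATE EXTENSION pgstattuple;
    -- or, when upgrading an installation that already has the old functions:
    -- CREATE EXTENSION pgstattuple FROM unpackaged;

    SELECT * FROM pgstattuple('pg_catalog.pg_proc');   -- heap-level bloat figures
    SELECT * FROM pgstatindex('pg_proc_oid_index');    -- btree-level statistics
    SELECT pg_relpages('pg_catalog.pg_proc');          -- raw page count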

Re: Why not install pgstattuple by default?

From
Greg Smith
Date:
Greg Smith wrote:
> Attached is a second patch to move a number of extensions from 
> contrib/ to src/test/.  Extensions there are built by the default 
> build target, making installation of the postgresql-XX-contrib package 
> unnecessary for them to be available.

That was supposed to be contrib/ to src/extension; no idea where that
"test" bit came from.

-- 
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us




Re: Why not install pgstattuple by default?

From
Magnus Hagander
Date:
On Wed, May 18, 2011 at 10:25, Greg Smith <greg@2ndquadrant.com> wrote:
> Attached is a second patch to move a number of extensions from contrib/ to
> src/test/.  Extensions there are built by the default build target, making
> installation of the postgresql-XX-contrib package unnecessary for them to be
> available.

+1 in general on the concept :-)


> This request--making some of these additions available without the "contrib"
> name/package being involved--has popped up many times before, and it turns
> out to be really easy to resolve with the new extensions infrastructure.  I
> think it's even a reasonable change to consider applying now, between 9.1
> Beta 1 and Beta 2.  The documentation adjustments are the only serious bit
> left here that I've been able to find, the code changes here are all
> internal to the build process and easy.

Does this include regression tests? Or will they need some mods?

> I moved the following extensions:
>
> auto_explain pageinspect pg_buffercache pg_freespacemap pgrowlocks
> pg_stat_statements pgstattuple
>
> My criteria was picking extensions that:
>
> 1) Don't have any special dependencies
> 2) Are in contrib mainly because they don't need to be internal functions,
> not because their code quality is demo/early
> 3) Tend to be installed on a production server for troubleshooting problems,
> rather than being required by development.
> 4) Regularly pop up as necessary/helpful in production deployment

These seem like reasonable criteria.


> Some of my personal discussions of this topic have suggested that some other
> popular extensions like pgcrypto and hstore get converted too.  I think
> those all fail test (3), and I'm not actually sure where pgcrypto adds any
> special dependency/distribution issues were it to be moved to the main
> database package.  If this general idea catches on, a wider discussion of
> what else should get "promoted" to this extensions area would be
> appropriate.  The ones I picked seemed the easiest to justify by this
> criteria set.

pgcrypto would cause trouble for any builds *without* SSL. I don't
think any packagers do that, but people doing manual builds would
certainly get different results.


> Any packager who grabs the shared/postgresql/extension directory in 9.1,
> which I expect to be all of them, shouldn't need any changes to pick up this
> adjustment.  For example, pgstattuple installs these files:
>
> share/postgresql/extension/pgstattuple--1.0.sql
> share/postgresql/extension/pgstattuple--unpackaged--1.0.sql
> share/postgresql/extension/pgstattuple.control
>
> And these are the same locations they were already at.  The location of the
> source and which target built it is the change here, the result isn't any
> different.  This means that this change won't even break extensions already
> installed.
>
> Once the basic directory plumbing is in place, conversion of a single
> extension from contrib/ to src/test/ is trivial.  The diff view
>
> I did five of them in an hour once I figured out what was needed.  Easiest
> to view the changes at
> https://github.com/greg2ndQuadrant/postgres/commits/move-contrib , the patch
> file is huge because of all the renames.
> https://github.com/greg2ndQuadrant/postgres/commit/d647091b18c4448c5a582d423f8839ef0c717e91
> shows a good example of one conversion, which changes pg_freespacemap.  There are
> more changes to the comments listing the name of the file than to any code.
>  (Yes, I know there are some whitespace issues I introduced in the new
> Makefile, they should be fixed by a later commit in the series)

This is where the compare view rocks:

https://github.com/greg2ndQuadrant/postgres/compare/postgres:master...greg2ndQuadrant:move-contrib

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: Why not install pgstattuple by default?

From
Marko Kreen
Date:
On Wed, May 18, 2011 at 2:57 PM, Magnus Hagander <magnus@hagander.net> wrote:
> On Wed, May 18, 2011 at 10:25, Greg Smith <greg@2ndquadrant.com> wrote:
>> Some of my personal discussions of this topic have suggested that some other
>> popular extensions like pgcrypto and hstore get converted too.  I think
>> those all fail test (3), and I'm not actually sure where pgcrypto adds any
>> special dependency/distribution issues were it to be moved to the main
>> database package.  If this general idea catches on, a wider discussion of
>> what else should get "promoted" to this extensions area would be
>> appropriate.  The ones I picked seemed the easiest to justify by this
>> criteria set.
>
> pgcrypto would cause trouble for any builds *without* SSL. I don't
> think any packagers do that, but people doing manual builds would
> certainly get different results.

What kind of trouble?  It should work fine without SSL.

--
marko


Re: Why not install pgstattuple by default?

From
Magnus Hagander
Date:
On Wed, May 18, 2011 at 15:29, Marko Kreen <markokr@gmail.com> wrote:
> On Wed, May 18, 2011 at 2:57 PM, Magnus Hagander <magnus@hagander.net> wrote:
>> On Wed, May 18, 2011 at 10:25, Greg Smith <greg@2ndquadrant.com> wrote:
>>> Some of my personal discussions of this topic have suggested that some other
>>> popular extensions like pgcrypto and hstore get converted too.  I think
>>> those all fail test (3), and I'm not actually sure where pgcrypto adds any
>>> special dependency/distribution issues were it to be moved to the main
>>> database package.  If this general idea catches on, a wider discussion of
>>> what else should get "promoted" to this extensions area would be
>>> appropriate.  The ones I picked seemed the easiest to justify by this
>>> criteria set.
>>
>> pgcrypto would cause trouble for any builds *without* SSL. I don't
>> think any packagers do that, but people doing manual builds would
>> certainly get different results.
>
> What kind of trouble?  It should work fine without SSL.

Oh, you're right - it does. But does it provide different
functionality? Or does it actually do exactly the same stuff, just in
different ways?


--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: Why not install pgstattuple by default?

From
Greg Smith
Date:
Greg Smith wrote:
> Any packager who grabs the share/postgresql/extension directory in 
> 9.1, which I expect to be all of them, shouldn't need any changes to 
> pick up this adjustment.  For example, pgstattuple installs these files:
>
> share/postgresql/extension/pgstattuple--1.0.sql
> share/postgresql/extension/pgstattuple--unpackaged--1.0.sql
> share/postgresql/extension/pgstattuple.control
>
> And these are the same locations they were already at.

...and the bit I missed here is that there's a fourth file here:

lib/postgresql/pgstattuple.so

If you look at a 9.1 spec file, such as 
http://svn.pgrpms.org/browser/rpm/redhat/9.1/postgresql/EL-6/postgresql-9.1.spec 
, you'll find:

%files contrib
...
%{pgbaseinstdir}/lib/pgstattuple.so

Which *does* require a packager change to relocate it from the
postgresql-91-contrib package to the main server one.  So the theory that a
change here might happen without pushing a repackaging suggestion toward
packagers is busted.  This does highlight that some packaging guidelines
would be needed here to complete this work.

-- 
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us




Re: Why not install pgstattuple by default?

From
Marko Kreen
Date:
On Wed, May 18, 2011 at 3:37 PM, Magnus Hagander <magnus@hagander.net> wrote:
> On Wed, May 18, 2011 at 15:29, Marko Kreen <markokr@gmail.com> wrote:
>> On Wed, May 18, 2011 at 2:57 PM, Magnus Hagander <magnus@hagander.net> wrote:
>>> On Wed, May 18, 2011 at 10:25, Greg Smith <greg@2ndquadrant.com> wrote:
>>>> Some of my personal discussions of this topic have suggested that some other
>>>> popular extensions like pgcrypto and hstore get converted too.  I think
>>>> those all fail test (3), and I'm not actually sure where pgcrypto adds any
>>>> special dependency/distribution issues were it to be moved to the main
>>>> database package.  If this general idea catches on, a wider discussion of
>>>> what else should get "promoted" to this extensions area would be
>>>> appropriate.  The ones I picked seemed the easiest to justify by this
>>>> criteria set.
>>>
>>> pgcrypto would cause trouble for any builds *without* SSL. I don't
>>> think any packagers do that, but people doing manual builds would
>>> certainly get different results.
>>
>> What kind of trouble?  It should work fine without SSL.
>
> Oh, you're right - it does. But it does provide different
> functionalties? Or does it actually do exactly the same stuff, just in
> different ways?

Same stuff, assuming you use the recommended algorithms
(Blowfish, AES, MD5, SHA1, SHA2).

OpenSSL may bring in faster implementations, and additional
algorithms (ripemd160, 3des, cast5, twofish).
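
For instance (just an illustrative sketch), the usual calls behave the same
whether or not the build has OpenSSL:

    CREATE EXTENSION pgcrypto;
    SELECT encode(digest('some data', 'sha256'), 'hex');  -- built-in SHA2, no OpenSSL needed
    SELECT crypt('secret', gen_salt('bf'));               -- Blowfish-based crypt, also built in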

--
marko