Thread: Think I see a btree vacuuming bug

Think I see a btree vacuuming bug

From
Tom Lane
Date: Sat, 25 May 2002 14:21:52 -0400
If a VACUUM running concurrently with someone else's indexscan were to
delete the index tuple that the indexscan is currently stopped on, then
we'd get a failure when the indexscan resumes and tries to re-find its
place.  (This is the infamous "my bits moved right off the end of the
world" error condition.)  What is supposed to prevent that from
happening is that the indexscan retains a buffer pin (but not a read
lock) on the index page containing the tuple it's stopped on.  VACUUM
will not delete any tuple until it can get a "super exclusive" lock on
the page (cf. LockBufferForCleanup), and the pin prevents it from doing
so.
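
For concreteness, the two sides of that interlock look roughly like
this in terms of the buffer-manager calls (a simplified sketch, not
the actual nbtree code; error handling and the surrounding scan
logic are omitted):

    /* Indexscan side: stop on a tuple, drop the lock, keep the pin. */
    Buffer  buf = ReadBuffer(rel, blkno);    /* acquires a pin */

    LockBuffer(buf, BUFFER_LOCK_SHARE);      /* read lock to examine page */
    /* ... note which index tuple we are stopped on ... */
    LockBuffer(buf, BUFFER_LOCK_UNLOCK);     /* give up the lock; the pin
                                              * stays until ReleaseBuffer() */

    /* VACUUM side: cannot touch the page while anyone else has a pin. */
    LockBufferForCleanup(buf);               /* "super exclusive": waits for
                                              * exclusive lock AND sole pin */
    /* ... only now is it safe to delete index tuples here ... */
    LockBuffer(buf, BUFFER_LOCK_UNLOCK);
    ReleaseBuffer(buf);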

However: suppose that some other activity causes the index page to be
split while the indexscan is stopped, and that the tuple it's stopped
on gets relocated into the new righthand page of the pair.  Then the
indexscan is holding a pin on the wrong page --- not the one its tuple
is in.  It would then be possible for the VACUUM to arrive at the tuple
and delete it before the indexscan is resumed.

This is a pretty low-probability scenario, especially given the new
index-tuple-killing mechanism (which renders it less likely that an
indexscan will stop on a vacuum-able tuple).  But it could happen.

The only solution I've thought of is to make btbulkdelete acquire
"super exclusive" lock on *every* leaf page of the index as it scans,
rather than only locking the pages it actually needs to delete something
from.  And we'd need to tweak _bt_restscan to chain its pins (pin the
next page to the right before releasing pin on the previous page).
This would prevent a btbulkdelete scan from overtaking ordinary
indexscans, and thereby ensure that it couldn't arrive at the tuple
on which an indexscan is stopped, even with splitting.
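
The pin-chaining in _bt_restscan would look something like this
(an illustrative sketch only: the move-right loop and BTPageOpaque
are real nbtree concepts, but the variables are invented and the
search-the-page step is elided):

    for (;;)
    {
        /* ... search the current page for the tuple we stopped on;
         *     return once it is found ... */

        opaque = (BTPageOpaque) PageGetSpecialPointer(page);

        nextbuf = ReadBuffer(rel, opaque->btpo_next); /* pin next page FIRST */
        LockBuffer(buf, BUFFER_LOCK_UNLOCK);
        ReleaseBuffer(buf);                           /* only now drop old pin */

        buf = nextbuf;
        LockBuffer(buf, BUFFER_LOCK_SHARE);
        page = BufferGetPage(buf);
    }

Since there is never an instant when the scan holds no pin at all, a
btbulkdelete scan that takes the cleanup lock on every leaf page can
never get past the indexscan's position.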

I'm somewhat concerned that the more stringent locking will slow down
VACUUM a good deal when there's lots of concurrent activity, but I don't
see another answer.  Ideas anyone?
        regards, tom lane


Re: Think I see a btree vacuuming bug

From
"Christopher Kings-Lynne"
Date:
Well, given that vacuum does its work in the background now - I think you'll
be hard pressed to find a sys admin who'll vote for leaving it as is, no
matter how small the chance of corruption.

However - this isn't my area of expertise...

Chris

----- Original Message -----
From: "Tom Lane" <tgl@sss.pgh.pa.us>
To: <pgsql-hackers@postgresql.org>
Sent: Saturday, May 25, 2002 11:21 AM
Subject: [HACKERS] Think I see a btree vacuuming bug


> If a VACUUM running concurrently with someone else's indexscan were to
> delete the index tuple that the indexscan is currently stopped on, then
> we'd get a failure when the indexscan resumes and tries to re-find its
> place.
> [...]



Re: Think I see a btree vacuuming bug

From
Manfred Koizar
Date:
On Sat, 25 May 2002 14:21:52 -0400, Tom Lane <tgl@sss.pgh.pa.us>
wrote:
>I'm somewhat concerned that the more stringent locking will slow down
>VACUUM a good deal when there's lots of concurrent activity, but I don't
>see another answer.  Ideas anyone?

Ideas?  Always! :-)  Don't know if this one is so bright, but at least
we have something to vote on:

On leaf pages order index tuples by heap item pointer, if otherwise
equal.  In IndexScanDescData remember the whole index tuple (including
the heap item pointer) instead of ItemPointerData.  Then depending on
scan direction _bt_next() would look for the first index tuple greater
or less than currentItem respectively.
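
In effect the heap item pointer becomes a trailing, tie-breaking key
column.  A sketch of the comparison (a hypothetical helper, not an
existing routine; keycmp is the result of comparing the ordinary
index keys):

    static int
    compare_with_tid_tiebreak(int keycmp, ItemPointer a, ItemPointer b)
    {
        if (keycmp != 0)
            return keycmp;          /* ordinary index keys decide */

        /* keys are equal: fall back to heap block, then offset */
        if (ItemPointerGetBlockNumber(a) != ItemPointerGetBlockNumber(b))
            return (ItemPointerGetBlockNumber(a) <
                    ItemPointerGetBlockNumber(b)) ? -1 : 1;

        if (ItemPointerGetOffsetNumber(a) != ItemPointerGetOffsetNumber(b))
            return (ItemPointerGetOffsetNumber(a) <
                    ItemPointerGetOffsetNumber(b)) ? -1 : 1;

        return 0;                   /* same heap tuple */
    }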

Implications:
(+) higher concurrency: normal write locks
(+) robust: can always start from the root, if nothing else helps
(though I can't think of a case making this necessary)
(-) need heap item pointer in internal nodes (could partly be
compensated by omitting unused(?) t_tid.ip_posid)
(+) btinsert knows where to insert a new tuple, even if there are
lots of duplicates (no random())
(-) this could result in more half-empty leaf pages?
(+) dead index tuples can be removed on the fly
(?) ...

Servus
Manfred


Re: Think I see a btree vacuuming bug

From
Tom Lane
Date: Mon, 27 May 2002 13:48:43 -0400
Manfred Koizar <mkoi-pg@aon.at> writes:
> On leaf pages order index tuples by heap item pointer, if otherwise
> equal.  In IndexScanDescData remember the whole index tuple (including
> the heap item pointer) instead of ItemPointerData.  Then depending on
> scan direction _bt_next() would look for the first index tuple greater
> or less than currentItem respectively.

Doesn't help, I fear.  Finding your place again is only one part
of the problem.  The other part is being sure that VACUUM won't delete
the heap tuple before you get to it.  The interlock at the index stage
is partly a proxy to protect heap tuples that are about to be visited
by indexscans (ie, indexscan has read an index tuple but hasn't yet
acquired pin on the referenced heap page).
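
That is, the dangerous window is between reading the heap TID out of
an index tuple and pinning the heap page it points to, roughly (names
illustrative):

    /* still holding a pin (not a lock) on the index page */
    ItemPointerData htid = btitem->bti_itup.t_tid;  /* heap TID from index */

    /* <-- if VACUUM could remove the heap tuple at this instant, htid
     *     would lead us to a slot that no longer holds our tuple */

    hbuf = ReadBuffer(heaprel, ItemPointerGetBlockNumber(&htid));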

> (+) btinsert knows where to insert a new tuple, even if there are
> lots of duplicates (no random())

This is not a (+) but a (-), I think.  Given the current CVS tip
behavior it is better for a new tuple to be inserted at the front of
the series of matching keys --- in unique indexes this allows repeated
updates without degrading search time.  We are not currently exploiting
that as much as we should --- I suspect btree insertion should be more
willing to split pages than it now is.
        regards, tom lane


Re: Think I see a btree vacuuming bug

From
Manfred Koizar
Date:
On Mon, 27 May 2002 13:48:43 -0400, Tom Lane <tgl@sss.pgh.pa.us>
wrote:
>Manfred Koizar <mkoi-pg@aon.at> writes:
>> On leaf pages order index tuples by heap item pointer, if otherwise
>> equal.  [blah, blah, ...]
>
>Doesn't help, I fear.  Finding your place again is only one part
>of the problem.  The other part is being sure that VACUUM won't delete
>the heap tuple before you get to it.

Ok, throwing away this one.

Another idea:  As far as I understand, the problem arises from
"recent" page splits.  Let's store a transaction id in each leaf page.
On a page split the currently highest active xid (*not* the backend's
current xid) is stored into the new right page.  btbulkdelete has a
notion of "old" and "young" pages:  If the page xid is less than the
oldest active transaction, then the index page is considered old,
otherwise young.
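
The old/young test itself would be cheap; a sketch (page_xid and
oldest_active_xid are illustrative names, not existing fields):

    /*
     * "Old" means every transaction that was active when the page was
     * split has since ended, so no live indexscan can have had its
     * stopped-on tuple moved here by that split.
     */
    bool    page_is_old = TransactionIdPrecedes(page_xid,
                                                oldest_active_xid);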

"Old" pages can savely be treated like up to yesterday:  get a "super
exclusive" lock just on this page, do the changes, release the lock.

Whenever btbulkdelete is about to change a "young" index page, it has
to get "super exclusive" locks on all pages from the last preceding
"old" page (*) up to the current page.  It does not have to hold all
these locks at the same time, it just has to get the locks in
"non-overtaking" mode.

To avoid deadlocks it might be necessary to release the read lock held
on the current page, while approaching it with "super exclusive" locks
from the left.  Then the current page has to be rescanned.

(*) or the page following the last page we already had a "super
exclusive" lock on

Servus
Manfred


Re: Think I see a btree vacuuming bug

From
Bruce Momjian
Date:
Is this fixed, and if not, can I have some TODO text?

---------------------------------------------------------------------------

Tom Lane wrote:
> If a VACUUM running concurrently with someone else's indexscan were to
> delete the index tuple that the indexscan is currently stopped on, then
> we'd get a failure when the indexscan resumes and tries to re-find its
> place.
> [...]

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073


Re: Think I see a btree vacuuming bug

From
Tom Lane
Date:
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> Is this fixed, and if not, can I have some TODO text?

It's not fixed.  I'd like to fix it for 7.3, but I was hoping someone
would think of a better way to fix it than I did ...
        regards, tom lane



Re: Think I see a btree vacuuming bug

From
Bruce Momjian
Date:
Tom Lane wrote:
> If a VACUUM running concurrently with someone else's indexscan were to
> delete the index tuple that the indexscan is currently stopped on, then
> we'd get a failure when the indexscan resumes and tries to re-find its
> place.  (This is the infamous "my bits moved right off the end of the
> world" error condition.)  What is supposed to prevent that from

Certainly anything that makes this error message less likely is bad. 
Perhaps we need to hook up a random() call to output the error message
to balance this.  :-)

> happening is that the indexscan retains a buffer pin (but not a read
> lock) on the index page containing the tuple it's stopped on.  VACUUM
> will not delete any tuple until it can get a "super exclusive" lock on
> the page (cf. LockBufferForCleanup), and the pin prevents it from doing
> so.
> 
> However: suppose that some other activity causes the index page to be
> split while the indexscan is stopped, and that the tuple it's stopped
> on gets relocated into the new righthand page of the pair.  Then the
> indexscan is holding a pin on the wrong page --- not the one its tuple
> is in.  It would then be possible for the VACUUM to arrive at the tuple
> and delete it before the indexscan is resumed.

OK, let me see if I can summarize:
    Index scan stops on dead index tuple, holds pin
    Someone inserts/updates and the page splits
    Vacuum comes along and removes the index tuple

And the problem is that after the split, the index scan doesn't hold a
pin on the newly split page.

This seems more like a problem that the index scan pin doesn't prevent
it from losing the pin after a split.  Could we just block splits of
pages containing pins?  If the page splits, how does the index scan
find the new page to start again?  Could the index scan be made to
handle cases where the index tuple it was stopped on is gone?

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073


Re: Think I see a btree vacuuming bug

From
Tom Lane
Date:
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> Could we just block splits of
> pages containing pins?

That's not an improvement IMHO.  The objection to the fix I suggested is
that it makes it harder for VACUUM to make progress in the presence of
contention.  Replacing that with an approach that blocks foreground
processes from making progress is not better.

> If the page splits, how does the index scan
> find the new page to start again?

It moves right until it finds the tuple it was on.  That will either be
in the pinned page, or some page to its right.
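
The re-find test is an exact match on the heap TID remembered when
the scan paused, roughly (a sketch; the real check lives in
_bt_restscan):

    /* Does this leaf item reference the heap tuple we stopped on? */
    item = (BTItem) PageGetItem(page, PageGetItemId(page, offnum));

    if (ItemPointerEquals(&(item->bti_itup.t_tid), &target))
        break;              /* found our place; resume scanning here */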

> Could the index scan be made to
> handle cases where the index tuple it was stopped on is gone?

Don't see how.  With no equal keys, you could test each tuple you scan
over to see if it's > the expected key; but that would slow things down
tremendously I fear.  In any case it fails completely when there are
equal keys, since you could not tell where in a run of equal keys to
resume scanning.  You really have to find the exact index tuple you
stopped on, AFAICS.
        regards, tom lane


Re: Think I see a btree vacuuming bug

From
Bruce Momjian
Date:
Tom Lane wrote:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > Could we just block splits of
> > pages containing pins?
> 
> That's not an improvement IMHO.  The objection to the fix I suggested is
> that it makes it harder for VACUUM to make progress in the presence of
> contention.  Replacing that with an approach that blocks foreground
> processes from making progress is not better.

Yes.  Considering there are splits going on where backends are losing
their pins, it seems you have to either prevent the backends from losing
their pins, prevent the splits, or prevent vacuum from removing tuples
on split pages that hold pins.

Rather than having vacuum pin all the pages, could vacuum block in cases
where pins exist in pages that _could_ contain tuples caused by a recent
split, meaning there are pins in pre-split locations?


> > If the page splits, how does the index scan
> > find the new page to start again?
> 
> It moves right until it finds the tuple it was on.  That will either be
> in the pinned page, or some page to its right.
> 
> > Could the index scan be made to
> > handle cases where the index tuple it was stopped on is gone?
> 
> Don't see how.  With no equal keys, you could test each tuple you scan
> over to see if it's > the expected key; but that would slow things down
> tremendously I fear.  In any case it fails completely when there are
> equal keys, since you could not tell where in a run of equal keys to
> resume scanning.  You really have to find the exact index tuple you
> stopped on, AFAICS.

So it uses the tid to find the old spot.  Got it.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073


Re: Think I see a btree vacuuming bug

From
"Zeugswetter Andreas SB SD"
Date:
> > Could the index scan be made to
> > handle cases where the index tuple it was stopped on is gone?
>
> Don't see how.  With no equal keys, you could test each tuple you scan
> over to see if it's > the expected key; but that would slow things down
> tremendously I fear.  In any case it fails completely when there are
> equal keys, since you could not tell where in a run of equal keys to
> resume scanning.  You really have to find the exact index tuple you
> stopped on, AFAICS.

Won't it still point to the same heap page and slot?  That additional
info should be sufficient to find the exact index tuple.
And it usually won't be that far away, no?

Andreas


Re: Think I see a btree vacuuming bug

From
Tom Lane
Date:
"Zeugswetter Andreas SB SD" <ZeugswetterA@spardat.at> writes:
>>> Could the index scan be made to
>>> handle cases where the index tuple it was stopped on is gone?

> Won't it still point to the same heap page and slot?

Not if it's gone ...
        regards, tom lane


Re: Think I see a btree vacuuming bug

From
Bruce Momjian
Date:
Any status on this?  I know we talked about it but never came to a
good solution.  Is it TODO?

---------------------------------------------------------------------------

Tom Lane wrote:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > Is this fixed, and if not, can I have some TODO text?
> 
> It's not fixed.  I'd like to fix it for 7.3, but I was hoping someone
> would think of a better way to fix it than I did ...
> 
>             regards, tom lane

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073


Re: Think I see a btree vacuuming bug

From
Tom Lane
Date:
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> Any status on this?  I know we talked about it but never came to a
> good solution.  Is it TODO?

I think it's more like MUSTFIX ... but it's a bug, not a feature
addition, so it's fair game to work on in beta, no?
        regards, tom lane


Re: Think I see a btree vacuuming bug

From
Bruce Momjian
Date:
Tom Lane wrote:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > Any status on this?  I know we talked about it but never came to a
> > good solution.  Is it TODO?
> 
> I think it's more like MUSTFIX ... but it's a bug, not a feature
> addition, so it's fair game to work on in beta, no?

Yes.  I will add it to the open items list.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073


Re: Think I see a btree vacuuming bug

From
Bruce Momjian
Date:
Bruce Momjian wrote:
> Tom Lane wrote:
> > Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > > Any status on this?  I know we talked about it but never came to a
> > > good solution.  Is it TODO?
> > 
> > I think it's more like MUSTFIX ... but it's a bug, not a feature
> > addition, so it's fair game to work on in beta, no?
> 
> Yes.  I will add it to the open items list.

Actually, the open items list is pretty small now:

---------------------------------------------------------------------------

                             P O S T G R E S Q L
                         7 . 3  O P E N    I T E M S


Current at ftp://candle.pha.pa.us/pub/postgresql/open_items.

Source Code Changes
-------------------
Schema handling - ready? interfaces? client apps?
Drop column handling - ready for all clients, apps?
Allow easy display of usernames in a group (pg_hba.conf uses groups now)
Fix BeOS and QNX4 ports
Get bison upgrade on postgresql.org
Fix vacuum btree bug (Tom)

On Hold
-------
Point-in-time recovery
Win32 port
Security audit

Documentation Changes
---------------------
Document need to add permissions to loaded functions and languages
Move documentation to gborg for moved projects


-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073


Re: Think I see a btree vacuuming bug

From
Joe Conway
Date:
Bruce Momjian wrote:
> Allow easy display of usernames in a group (pg_hba.conf uses groups now)

How's this:

parts=# select * from pg_group ;
 groname | grosysid |    grolist
---------+----------+---------------
 grp     |      100 | {100,101,102}
 grp2    |      101 | {102}
(2 rows)

parts=# select usename,usesysid from pg_user;
 usename  | usesysid
----------+----------
 postgres |        1
 user1    |      100
 user2    |      101
 user3    |      102
(4 rows)

CREATE FUNCTION show_group(text) RETURNS SETOF text AS '
DECLARE
  loginname text;
  low int;
  high int;
BEGIN
  -- find the array bounds of this group''s grolist
  SELECT INTO low
    replace(split(array_dims(grolist),'':'',1),''['','''')::int
    FROM pg_group WHERE groname = $1;
  SELECT INTO high
    replace(split(array_dims(grolist),'':'',2),'']'','''')::int
    FROM pg_group WHERE groname = $1;

  -- look up the username for each member sysid (the groname filter
  -- keeps other groups'' grolist entries from being picked up)
  FOR i IN low..high LOOP
    SELECT INTO loginname s.usename
      FROM pg_shadow s join pg_group g
        on s.usesysid = g.grolist[i] and g.groname = $1;
    RETURN NEXT loginname;
  END LOOP;
  RETURN;
END;
' LANGUAGE 'plpgsql';

parts=# select * from show_group('grp');
 show_group
------------
 user1
 user2
 user3
(3 rows)

parts=# select * from show_group('grp2');
 show_group
------------
 user3
(1 row)


--Joe



Re: Think I see a btree vacuuming bug

From
Bruce Momjian
Date:
Yep, that's it!

---------------------------------------------------------------------------

Joe Conway wrote:
> Bruce Momjian wrote:
> > Allow easy display of usernames in a group (pg_hba.conf uses groups now)
> 
> How's this:
> [...]
> 
> --Joe

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073