Thread: Re: [PATCHES] GIN improvements

Re: [PATCHES] GIN improvements

From
Teodor Sigaev
Date:
Synced with current CVS HEAD, and posting to -hackers too because the -patches
list is close to closing.

http://www.sigaev.ru/misc/fast_insert_gin-0.7.gz
http://www.sigaev.ru/misc/multicolumn_gin-0.3.gz

--
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
                                                    WWW: http://www.sigaev.ru/

Multi-column GIN

From
"Heikki Linnakangas"
Date:
Dumb question:

What's the benefit of a multi-column GIN index over multiple
single-column GIN indexes?

--
   Heikki Linnakangas
   EnterpriseDB   http://www.enterprisedb.com

Re: [PATCHES] Multi-column GIN

From
Teodor Sigaev
Date:
 > What's the benefit of a multi-column GIN index over multiple
 > single-column GIN indexes?

Page 12 from presentation on PgCon
(http://www.sigaev.ru/gin/fastinsert_and_multicolumn_GIN.pdf):

Multicolumn index vs. 2 single-column indexes:

         multicolumn    2 single-column
Size:    539 Mb         538 Mb
Speed:   *1.885* ms     4.994 ms
Index:   ~340 s         ~200 s
Insert:  72 s/10000     66 s/10000



--
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
                                                    WWW: http://www.sigaev.ru/

Re: [PATCHES] Multi-column GIN

From
Oleg Bartunov
Date:
On Fri, 4 Jul 2008, Teodor Sigaev wrote:

>> What's the benefit of a multi-column GIN index over multiple
>> single-column GIN indexes?
>
> Page 12 from presentation on PgCon
> (http://www.sigaev.ru/gin/fastinsert_and_multicolumn_GIN.pdf):
>
> Multicolumn index vs.  2 single column indexes
>
> Size:    539 Mb        538 Mb
> Speed:   *1.885* ms    4.994 ms
> Index:   ~340 s        ~200 s
> Insert:  72 s/10000    66 s/10000

Well, another reason is index feature-completeness.

     Regards,
         Oleg
_____________________________________________________________
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

Re: [PATCHES] GIN improvements

From
Alvaro Herrera
Date:
Teodor Sigaev wrote:
> Synced with current CVS HEAD, and posting to -hackers too because the
> -patches list is close to closing.
>
> http://www.sigaev.ru/misc/multicolumn_gin-0.3.gz

I looked this over and it looks good in general.  I was only wondering
about single-column indexes -- we're storing attribute numbers there too,
right?  Would it be too difficult to strip them out in that case?

--
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

Re: [PATCHES] GIN improvements

From
Teodor Sigaev
Date:
> I looked this over and it looks good in general.  I was only wondering
> about single-column indexes -- we're storing attribute numbers there too,
> right?
No; the GinState->oneCol field tells GinFormTuple and
gin_index_getattr/gintuple_get_attrnum how things are actually stored.

A single-column index is binary-compatible with the current index format :)

--
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
                                                    WWW: http://www.sigaev.ru/

Re: [PATCHES] GIN improvements

From
Alvaro Herrera
Date:
Teodor Sigaev wrote:
>> I looked this over and it looks good in general.  I was only wondering
>> about single-column indexes -- we're storing attribute numbers there too,
>> right?
> No; the GinState->oneCol field tells GinFormTuple and
> gin_index_getattr/gintuple_get_attrnum how things are actually stored.
>
> A single-column index is binary-compatible with the current index format :)

Ah, neat!

--
Alvaro Herrera                                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

Re: [PATCHES] GIN improvements

From
Teodor Sigaev
Date:
>>> I looked this over and it looks good in general.
May I take it that the patch has passed review, and commit it?

--
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
                                                    WWW: http://www.sigaev.ru/

Re: [PATCHES] GIN improvements

From
Tom Lane
Date:
Teodor Sigaev <teodor@sigaev.ru> writes:
>> I looked this over and it looks good in general.

> May I take it that the patch has passed review, and commit it?

I'd still like to take a look.

            regards, tom lane

Re: [PATCHES] GIN improvements

From
Neil Conway
Date:
On Tue, 2008-07-08 at 14:51 -0400, Tom Lane wrote:
> I'd still like to take a look.

I was tasked with reviewing this for the current commit fest, although
so far I've just been working on grokking the rest of the GIN code. But
if you'd like to review it instead, that's fine with me.

-Neil



Re: [PATCHES] GIN improvements

From
Josh Berkus
Date:
Neil,

> I was tasked with reviewing this for the current commit fest, although
> so far I've just been working on grokking the rest of the GIN code. But
> if you'd like to review it instead, that's fine with me.

I have plenty of other stuff I could assign you if you're not needed on
GIN.

--
--Josh

Josh Berkus
PostgreSQL @ Sun
San Francisco

Re: [PATCHES] GIN improvements

From
Tom Lane
Date:
Teodor Sigaev <teodor@sigaev.ru> writes:
> http://www.sigaev.ru/misc/fast_insert_gin-0.7.gz
> http://www.sigaev.ru/misc/multicolumn_gin-0.3.gz

I've committed the multicolumn one with minor revisions (fix some poor
English in docs and comments, add regression-test coverage).  Do you
need more review of fast_insert yet?  It looked like a number of people
commented on it already.

            regards, tom lane

Re: [PATCHES] GIN improvements

From
Teodor Sigaev
Date:
> I've committed the multicolumn one with minor revisions (fix some poor
> English in docs and comments, add regression-test coverage).  Do you
Thank you very much.

> need more review of fast_insert yet?  It looked like a number of people
> commented on it already.
I need to modify it to synchronize with multicolumn GIN - both patches touch
the same pieces of code, and I didn't combine them into a single patch, to keep
review simple.

--
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
                                                    WWW: http://www.sigaev.ru/

Re: [PATCHES] GIN improvements

From
Teodor Sigaev
Date:
Updated: http://www.sigaev.ru/misc/fast_insert_gin-0.9.gz

> need more review of fast_insert yet?  It looked like a number of people
> commented on it already.

I'm still not clear on whether the suggested aminsertcleanup call is acceptable.




--
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
                                                    WWW: http://www.sigaev.ru/

Re: [PATCHES] GIN improvements

From
Tom Lane
Date:
Teodor Sigaev <teodor@sigaev.ru> writes:
> Updated: http://www.sigaev.ru/misc/fast_insert_gin-0.9.gz
> I'm still not clear on whether the suggested aminsertcleanup call is acceptable.

I started to look at this.  I don't understand why VACUUM does an insert
cleanup before starting to vacuum, but VACUUM FULL doesn't?

I don't particularly like the idea of adding aminsertcleanup calls
immediately before other AM operations such as ambulkdelete.  It seems
to me that those operations ought to include the cleanup subroutine
themselves, if they need it; they shouldn't depend on callers to get
this right.  Offhand it looks to me like the only new index AM call
needed is the one at vacuum startup, which tempts me to propose that
the new AM entry point should be called "amvacuumstartup", instead of
wiring in the assumption that what it's for is specifically cleanup
of insertions.

Comments?  I can make the change if you think it's okay --- I'm busy
cleaning up docs and comments at the moment.
        regards, tom lane


Re: [PATCHES] GIN improvements

From
Teodor Sigaev
Date:
> I started to look at this.  I don't understand why VACUUM does an insert
> cleanup before starting to vacuum, but VACUUM FULL doesn't?

Hmm. Maybe I missed something, but I don't understand where and what... I tried
to track all the places ambulkdelete is called. aminsertcleanup should be called
before any ambulkdelete, because ambulkdelete doesn't scan the pending list,
which can hold items to be deleted, and hence the index would keep item
pointers to absent tuples.

> needed is the one at vacuum startup, which tempts me to propose that
> the new AM entry point should be called "amvacuumstartup", instead of
> wiring in the assumption that what it's for is specifically cleanup
> of insertions.

That's possible, but then inserts into the index must be forbidden between
amvacuumstartup and the last call of ambulkdelete.


> 
> Comments?  I can make the change if you think it's okay --- I'm busy
> cleaning up docs and comments at the moment.


-- 
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
  WWW: http://www.sigaev.ru/
 


Re: [PATCHES] GIN improvements

From
Tom Lane
Date:
Teodor Sigaev <teodor@sigaev.ru> writes:
> It's close to trivial to revert this piece and add the cleanup call to
> ginbulkdelete/ginvacuumcleanup. Early versions did it that way.

Yeah, I think we should do it that way.

On reflection I don't think you even need the amvacuumstartup call,
because it is *not* safe to assume that an index cleanup operation
there will guarantee that vacuum won't try to remove pending tuples.
Remember that a tuple inserted by a transaction that later aborted
is DEAD and can be reclaimed instantly by VACUUM.  So while in the
case of VACUUM FULL it might be okay to call index_cleanup only
once, for regular VACUUM I think you really have to call it within
each bulkdelete operation.  There's probably no point in optimizing
it away in VACUUM FULL either, since surely it'll be fast to call
index_cleanup when there's nothing in the pending list?

>   - I thought about a statistics-based trigger for a separate call of
>     insertcleanup. The trigger should fire on massive insert/update events,
>     very similar to the trigger on massive deletes for ambulkdelete. I'm very
>     sorry, but I haven't done it yet, and I definitely need some help here.

Yeah, I was going to complain about that next :-).  Autovacuum isn't
going to trigger as a result of INSERT operations; somehow we have
to teach it what to do for GIN indexes.  I remember we discussed this
at PGCon but I don't think we decided exactly what to do...

> Do I revert that piece?

I've already made a number of changes to the patch; let me keep working
on it and send it back to you later.
        regards, tom lane


Re: [PATCHES] GIN improvements

From
Tom Lane
Date:
Teodor Sigaev <teodor@sigaev.ru> writes:
>> I started to look at this.  I don't understand why VACUUM does an insert
>> cleanup before starting to vacuum, but VACUUM FULL doesn't?

> Hmm. Maybe I missed something, but I don't understand where and what... I tried
> to track all the places ambulkdelete is called. aminsertcleanup should be called
> before any ambulkdelete, because ambulkdelete doesn't scan the pending list,
> which can hold items to be deleted, and hence the index would keep item
> pointers to absent tuples.

>> needed is the one at vacuum startup, which tempts me to propose that
>> the new AM entry point should be called "amvacuumstartup", instead of
>> wiring in the assumption that what it's for is specifically cleanup
>> of insertions.

> That's possible, but then inserts into the index must be forbidden between
> amvacuumstartup and the last call of ambulkdelete.

Well, if that is required to be true then this whole design is pretty
broken, because VACUUM doesn't hold any lock that would guarantee that
no insert happens between the two calls.  If we fold the two AM calls
into one call then it'd be okay for the index AM to take such a lock
transiently during the single index-cleanup-plus-bulkdelete call.

For VACUUM FULL there's no such issue because the whole table is locked,
but I still don't see any real point in having two successive index AM
calls when the AM could perfectly well do all the work in one call.

Maybe it'd be better if ambulkdelete *did* scan the pending list?
You'd still need at least page-level locking but perhaps not anything
stronger.
        regards, tom lane


Re: [PATCHES] GIN improvements

From
Teodor Sigaev
Date:
> Well, if that is required to be true then this whole design is pretty
> broken, because VACUUM doesn't hold any lock that would guarantee that
> no insert happens between the two calls.  If we fold the two AM calls
> into one call then it'd be okay for the index AM to take such a lock
> transiently during the single index-cleanup-plus-bulkdelete call.
Actually, a lock isn't needed. bulkdelete just should not try to remove item
pointers that haven't yet been "insertcleanuped". That's easy, because the
VacPageList is prepared before the insertcleanup call.


> Maybe it'd be better if ambulkdelete *did* scan the pending list?

I don't like that idea because it requires adding a lot of code (concurrent
deletion of pages in the list); it's much simpler to call insertcleanup inside
ginbulkdelete/ginvacuumcleanup.

> You'd still need at least page-level locking but perhaps not anything
> stronger.

It's close to trivial to revert this piece and add the cleanup call to
ginbulkdelete/ginvacuumcleanup. Early versions did it that way.
The reasons for the new variant were:
 - deciding whether insertcleanup needs to be called: the stats argument was
   used for that in both functions. If it's NULL, then call cleanup.
 - I thought about a statistics-based trigger for a separate call of
   insertcleanup. The trigger should fire on massive insert/update events,
   very similar to the trigger on massive deletes for ambulkdelete. I'm very
   sorry, but I haven't done it yet, and I definitely need some help here.
 

Do I revert that piece?

-- 
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
  WWW: http://www.sigaev.ru/
 


Re: [PATCHES] GIN improvements

From
Teodor Sigaev
Date:
> once, for regular VACUUM I think you really have to call it within
> each bulkdelete operation.  

Exactly what I did in the last patch.

> There's probably no point in optimizing
> it away in VACUUM FULL either, since surely it'll be fast to call
> index_cleanup when there's nothing in the pending list?

Sure, with an empty pending list insertcleanup will just lock/unlock the metapage.

> Yeah, I was going to complain about that next :-).  Autovacuum isn't
> going to trigger as a result of INSERT operations; somehow we have
> to teach it what to do for GIN indexes.  I remember we discussed this
> at PGCon but I don't think we decided exactly what to do...
So, maybe we just move the insertcleanup call into ginbulkdelete/ginvacuumcleanup,
but leave the aminsertcleanup field in pg_proc for the future.

> I've already made a number of changes to the patch; let me keep working
> on it and send it back to you later.
ok

-- 
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
  WWW: http://www.sigaev.ru/
 


Re: [PATCHES] GIN improvements

From
Tom Lane
Date:
Teodor Sigaev <teodor@sigaev.ru> writes:
> So, maybe we just move the insertcleanup call into ginbulkdelete/ginvacuumcleanup,
> but leave the aminsertcleanup field in pg_proc for the future.

I'd be inclined not to add the extra AM call if we aren't going to use
it now.  There's no very good reason to think that a definition we
settled on today would be exactly the right thing for whatever future
need might appear.  Better to wait till we have a concrete example to
design around.
        regards, tom lane


Re: [PATCHES] GIN improvements

From
Tom Lane
Date:
I wrote:
>> Yeah, I was going to complain about that next :-).  Autovacuum isn't
>> going to trigger as a result of INSERT operations; somehow we have
>> to teach it what to do for GIN indexes.  I remember we discussed this
>> at PGCon but I don't think we decided exactly what to do...

One simple idea is to call aminsertcleanup (probably renamed to
something else like amanalyzehook) during ANALYZE.  This seems a bit
grotty, but it has the very attractive property that we don't need to
give the autovacuum control logic any special knowledge about GIN
indexes.  Either inserts or updates will lead it to trigger either
auto-ANALYZE or auto-VACUUM, and either way GIN gets a cleanup
opportunity.

A possible argument against this is that if we later fix things so
that VACUUM and ANALYZE can happen concurrently on the same table,
amanalyzehook could get called concurrently with ambulkdelete or
other vacuum-support operations.  So the AM author would have to
take care to interlock that safely.  But this doesn't seem like
a big deal to me --- interlocks against regular inserts/updates
are probably a harder problem anyway.

Thoughts?
        regards, tom lane


Re: [PATCHES] GIN improvements

From
Tom Lane
Date:
Teodor Sigaev <teodor@sigaev.ru> writes:
> Updated: http://www.sigaev.ru/misc/fast_insert_gin-0.9.gz

Here is the GIN fast-insert patch back again.  Changes:

* Sync with CVS HEAD
* Clean up documentation and some of the code comments
* Fix up custom reloptions code
* Suppress some compiler warnings

I didn't get much further than that because I got discouraged after
looking at the locking issues around the pending-insertions list.
It's a mess:

* shiftList() holds an exclusive lock on metapage throughout its run,
which means that it's impossible for two of them to run concurrently.
So why bother with "concurrent deletion" detection?

* shiftList does LockBufferForCleanup, which means that it can be blocked
for an indefinitely long time by a concurrent scan, and since it's holding
exclusive lock on metapage no new scans or insertions can start meanwhile.
This is not only horrid from a performance standpoint but it very probably
can result in deadlocks --- which will be deadlocks on LWLocks and thus
not even detected by the system.

* GIN index scans release lock and pin on one pending-list page before
acquiring pin and lock on the next, which means there's a race condition:
shiftList could visit and delete the next page before we get to it,
because there's a window where we're holding no buffer lock at all.
I think this isn't fatal in itself, since presumably the data in the next
page has been moved into the main index and we can scan it later, but the
scan code isn't checking whether the page has been deleted out from under
it.

* It seems also possible that once a list page has been marked
GIN_DELETED, it could be re-used for some other purpose before a
scan-in-flight reaches it -- reused either as a regular index page or as a
new list page.  Neither case is being defended against.  The new-list-page
case might or might not be a problem.

* There is a bigger race condition, which is that after a scan has
returned a tuple from a pending page, vacuum could move the index entry
into the main index structure, and then that same scan could return that
same index entry a second time.  This is a no-no, and I don't see any easy
fix.

I haven't really finished reviewing this code, but I'm going to bounce it
back to you to see if you can solve the locking problems.  Unless that can
be made safe there is no point doing any more work on this patch.

            regards, tom lane



Re: [PATCHES] GIN improvements

From
Alvaro Herrera
Date:
Tom Lane wrote:
> Teodor Sigaev <teodor@sigaev.ru> writes:

> I didn't get much further than that because I got discouraged after
> looking at the locking issues around the pending-insertions list.
> It's a mess:

These are rather severe problems.  Maybe there's a better solution, but
perhaps it would be good enough to lock out concurrent access to the
index while the bulkinsert procedure is working.




-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.


Re: [PATCHES] GIN improvements

From
Tom Lane
Date:
Alvaro Herrera <alvherre@commandprompt.com> writes:
> Tom Lane wrote:
>> It's a mess:

> These are rather severe problems.  Maybe there's a better solution, but
> perhaps it would be good enough to lock out concurrent access to the
> index while the bulkinsert procedure is working.

Ugh...

The idea I was toying with was to not allow GIN scans to "stop" on
pending-insertion pages; rather, they should suck out all the matching
tuple IDs into backend-local memory as fast as they can, and then return
the TIDs to the caller one at a time from that internal array.  Then,
when the scan is later visiting the main part of the index, it could
check each matching TID against that array to see if it'd already
returned the TID.  (So it might be an idea to sort the TID array after
gathering it, to make those subsequent checks fast via binary search.)

This would cost in backend-local memory, of course, but hopefully not
very much.  The advantages are the elimination of the deadlock risk
from scan-blocks-insertcleanup-blocks-insert, and fixing the race
condition when a TID previously seen in the pending list is moved to
the main index.  There were still a number of locking issues to fix
but I think they're all relatively easy to deal with.
        regards, tom lane


Re: [PATCHES] GIN improvements

From
Teodor Sigaev
Date:
> * shiftList() holds an exclusive lock on metapage throughout its run,
> which means that it's impossible for two of them to run concurrently.
> So why bother with "concurrent deletion" detection?
Because the metapage is locked immediately before the shiftList call; while the
metapage is unlocked, another process could lock the metapage and execute
shiftList itself. So, when shiftList starts, it should check for an already
deleted page. If shiftList sees an already deleted page then it doesn't do
anything and reports that to the caller.

> * shiftList does LockBufferForCleanup, which means that it can be blocked
> for an indefinitely long time by a concurrent scan, and since it's holding
> exclusive lock on metapage no new scans or insertions can start meanwhile.
> This is not only horrid from a performance standpoint but it very probably
> can result in deadlocks --- which will be deadlocks on LWLocks and thus
> not even detected by the system.
Oops, I see a possible scenario: UPDATE tbl SET gin_indexed_field = ...
WHERE gin_indexed_field ... with a concurrent shiftList. Will fix. Thank you.

Nevertheless, shiftList should be fast in the typical scenario: it doesn't do
complicated work, it just marks as deleted pages that were already read before.

> * GIN index scans release lock and pin on one pending-list page before
> acquiring pin and lock on the next, which means there's a race condition:
> shiftList could visit and delete the next page before we get to it,
> because there's a window where we're holding no buffer lock at all.
Agree, will fix.

> * It seems also possible that once a list page has been marked
> GIN_DELETED, it could be re-used for some other purpose before a
> scan-in-flight reaches it -- reused either as a regular index page or as a
Impossible - because deletion runs from the head of the list, and so does a
scan. But deletion locks the metapage and locks pages for cleanup. So a scan
may start only from a not-yet-deleted page, and will go through the list ahead
of the deletion process.


> * There is a bigger race condition, which is that after a scan has
> returned a tuple from a pending page, vacuum could move the index entry
> into the main index structure, and then that same scan could return that
> same index entry a second time.  This is a no-no, and I don't see any easy
> fix.
Hmm, isn't that allowed for indexes? At least GiST has had this behaviour since
its birth.



-- 
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
  WWW: http://www.sigaev.ru/
 


Re: [PATCHES] GIN improvements

From
Tom Lane
Date:
Teodor Sigaev <teodor@sigaev.ru> writes:
>> * There is a bigger race condition, which is that after a scan has
>> returned a tuple from a pending page, vacuum could move the index entry
>> into the main index structure, and then that same scan could return that
>> same index entry a second time.  This is a no-no, and I don't see any easy
>> fix.

> Hmm, isn't that allowed for indexes? At least GiST has had this behaviour
> since its birth.

Really?  Then GiST needs to be fixed too.  Otherwise you risk having
queries return the same row twice.  A bitmap indexscan plan would mask
such an index bug ... but a plain indexscan won't.
        regards, tom lane


Re: [PATCHES] GIN improvements

From
Tom Lane
Date:
I wrote:
> Really?  Then GiST needs to be fixed too.  Otherwise you risk having
> queries return the same row twice.  A bitmap indexscan plan would mask
> such an index bug ... but a plain indexscan won't.

BTW, there's another issue I forgot about yesterday, which is that
the planner assumes that all index AMs work correctly for backwards
scan.  The place where the rubber meets the road here is that
if you DECLARE SCROLL CURSOR for a plan implemented as a plain
indexscan, then FETCH BACKWARDS is supposed to reliably generate
results consistent with previous FETCH FORWARDS, to wit the same
tuples in the reverse order.

We can assume that the query is using an MVCC snapshot, which means
that at the index level it's okay for the index to return newly-inserted
entries that weren't returned in the previous forward scan, or to not
return entries that were removed meanwhile by VACUUM.  But re-ordering
live tuples is bad news.

The idea of copying the pending-tuples list into local scan state would
make this work as expected as far as the proposed patch goes, but I'm
wondering whether the behavior isn't completely broken anyway by
operations such as page splits.  Do we need to change the planner to
assume that this only works nicely for btree?
        regards, tom lane


Re: [PATCHES] GIN improvements

From
Teodor Sigaev
Date:
> operations such as page splits.  Do we need to change the planner to
> assume that this only works nicely for btree?

It seems to me that direction (backward or forward) has meaning only for
indexes with amcanorder = true. With amcanorder = false, the result order is
arbitrary in either direction.

-- 
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
  WWW: http://www.sigaev.ru/
 


Re: [PATCHES] GIN improvements

From
Tom Lane
Date:
Teodor Sigaev <teodor@sigaev.ru> writes:
>> operations such as page splits.  Do we need to change the planner to
>> assume that this only works nicely for btree?

> It seems to me that direction (backward or forward) has meaning only for
> indexes with amcanorder = true. With amcanorder = false, the result order
> is arbitrary in either direction.

Well, no; amcanorder specifies that the index can return results that
are sorted according to some externally meaningful ordering.  The
question at hand is just whether the results of a single indexscan
are self-consistent.  That's a property that can reasonably be expected
to hold regardless of amcanorder; it does hold for hash indexes for
instance.  (In the case of hash we have to forbid splitting a bucket
that's actively being scanned in order to make it true.)
        regards, tom lane


Re: [PATCHES] GIN improvements

From
Teodor Sigaev
Date:
> queries return the same row twice.  A bitmap indexscan plan would mask
> such an index bug ... but a plain indexscan won't.

Fuh. :(  Well. Will fix.

GiST:
 - GiST already supports both scan directions in theory, but a page split may
   change the order between forward and backward scans (the user-defined
   pageSplit doesn't preserve the order of tuples). Holding off splits until
   the end of a scan would produce an unacceptable concurrency level.
 - GiST can return one item pointer twice. That's fixable by storing the
   content of the current page in memory instead of just keeping the page
   pinned. Will do (backpatches too).

GIN:
 - GIN doesn't support the backward scan direction and will not in the
   foreseeable future.
 - Right now GIN doesn't return the same item pointer twice, but with the
   current fast_insert patch it might. So, I suppose, to fix that it's enough
   to remember the item pointers returned from the pending list and use them
   as a filter for results from the regular structure. Will do.
-- 
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
  WWW: http://www.sigaev.ru/
 


Re: [PATCHES] GIN improvements

From
Tom Lane
Date:
Teodor Sigaev <teodor@sigaev.ru> writes:
> - GiST already supports both scan directions in theory, but a page split may
>   change the order between forward and backward scans (the user-defined
>   pageSplit doesn't preserve the order of tuples). Holding off splits until
>   the end of a scan would produce an unacceptable concurrency level.

> - GIN doesn't support the backward scan direction and will not in the
>   foreseeable future.

Okay.  I'll see about fixing the planner to not assume that GIST or GIN
indexscans are scrollable.

The cleanest way to do this is to introduce a new bool column in pg_am
rather than hard-wiring assumptions about which AMs can do it.  However
(a) that's not back-patchable and (b) it'll create a merge conflict with
your patch, if you're still going to add a new AM function column.
I think that aminsertcleanup per se isn't needed, but if we want an
"amanalyze" there'd still be a conflict.  Where are we on that?
        regards, tom lane


Re: [PATCHES] GIN improvements

From
Teodor Sigaev
Date:

> (a) that's not back-patchable and (b) it'll create a merge conflict with
> your patch, if you're still going to add a new AM function column.
> I think that aminsertcleanup per se isn't needed, but if we want an
> "amanalyze" there'd still be a conflict.  Where are we on that?

I'll revert the aminsertcleanup framework but leave the gininsertcleanup
function as is, because I won't have enough time before the end of summer -
I'd like to finalize the patch and fixes first.

-- 
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
  WWW: http://www.sigaev.ru/
 


Re: [PATCHES] GIN improvements

From
Teodor Sigaev
Date:
Reworked version of fast insertion patch for GIN.

>> * shiftList does LockBufferForCleanup, which means that it can be blocked
>> for an indefinitely long time by a concurrent scan, and since it's
>> holding
>> exclusive lock on metapage no new scans or insertions can start
>> meanwhile.
>> This is not only horrid from a performance standpoint but it very
>> probably
>> can result in deadlocks --- which will be deadlocks on LWLocks and thus
>> not even detected by the system.
> Oops, I see a possible scenario: UPDATE tbl SET gin_indexed_field = ...
> WHERE gin_indexed_field ... with a concurrent shiftList. Will fix. Thank
> you.
Fixed, see below

>> * GIN index scans release lock and pin on one pending-list page before
>> acquiring pin and lock on the next, which means there's a race condition:
>> shiftList could visit and delete the next page before we get to it,
>> because there's a window where we're holding no buffer lock at all.
> Agree, will fix.
Fixed

>> * There is a bigger race condition, which is that after a scan has
>> returned a tuple from a pending page, vacuum could move the index entry
>> into the main index structure, and then that same scan could return that
>> same index entry a second time.  This is a no-no, and I don't see any
>> easy
>> fix.
Fixed. A TIDBitmap is used for that, and for preventing the deadlock mentioned
above. The TIDBitmap collects the matched tuples from pending pages, and after
that it is used as a filter for results from the regular GIN scan.

The patch extends the TIDBitmap interface with two functions:

bool tbm_check_tuple(TIDBitmap *tbm, const ItemPointer tid);
    returns true if tid already exists in the bitmap
bool tbm_has_lossy(TIDBitmap *tbm);
    returns true if the bitmap has become lossy


Also, the sequential scan over a pending page is replaced with a binary search.
It's not a big win, but it should improve performance a little.
--
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
                                                    WWW: http://www.sigaev.ru/


Re: [PATCHES] GIN improvements

From
Heikki Linnakangas
Date:
There's a pretty fundamental issue with this patch, which is that while 
buffering the inserts in the "list pages" makes the inserts fast, all 
subsequent queries become slower until the tuples have been properly 
inserted into the index. I'm sure it's a good tradeoff in many cases, 
but there has got to be a limit to it. Currently, if you create an empty 
table, and load millions of tuples into it using INSERTs, the index 
degenerates into  just a pile of "fast" tuples that every query needs to 
grovel through. The situation will only be rectified at the next vacuum, 
but if there's no deletes or updates on the table, just inserts, 
autovacuum won't happen until the next anti-wraparound vacuum.

To make things worse, a query will fail if all the matching 
fast-inserted tuples don't fit in the non-lossy tid bitmap. That's 
another reason to limit the number of list pages; queries will start 
failing otherwise.

Yet another problem is that if so much work is offloaded to autovacuum, 
it can tie up autovacuum workers for a very long time. And the work can 
happen on an unfortunate time, when the system is busy, and affect other 
queries. There's no vacuum_delay_point()s in gininsertcleanup, so 
there's no way to throttle that work.

I think we need a hard limit on the number of list pages, before we can 
consider accepting this patch. After the limit is full, the next 
inserter can flush the list, inserting the tuples in the list into the 
tree, or just fall back to regular, slow, inserts.

--
   Heikki Linnakangas
   EnterpriseDB   http://www.enterprisedb.com


Re: [PATCHES] GIN improvements

From
Gregory Stark
Date:
Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> writes:

> I think we need a hard limit on the number of list pages, before we can
> consider accepting this patch. After the limit is full, the next inserter can
> flush the list, inserting the tuples in the list into the tree, or just fall
> back to regular, slow, inserts.

I do like the idea of having the work fall to vacuum though. Perhaps we need
some way for autovacuum to ask an access method what shape an index is in and
whether it needs vacuuming? Or more likely a separate command from vacuum that
specifically cleans up an index.

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's On-Demand Production Tuning


Re: [PATCHES] GIN improvements

From
Alvaro Herrera
Date:
Gregory Stark wrote:
> Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> writes:
> 
> > I think we need a hard limit on the number of list pages, before we can
> > consider accepting this patch. After the limit is full, the next inserter can
> > flush the list, inserting the tuples in the list into the tree, or just fall
> > back to regular, slow, inserts.
> 
> I do like the idea of having the work fall to vacuum though. Perhaps we need
> some way for autovacuum to ask an access method what shape an index is in and
> whether it needs vacuuming? Or more likely a separate command from vacuum that
> specifically cleans up an index.

Yeah, this is what we agreed to in Ottawa.  We need to collect some
different stats for the GIN indexes where this is active, and ensure
that autovacuum checks them.

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support


Re: [PATCHES] GIN improvements

From
Teodor Sigaev
Date:
> grovel through. The situation will only be rectified at the next vacuum, 
> but if there's no deletes or updates on the table, just inserts, 
> autovacuum won't happen until the next anti-wraparound vacuum.
There is no agreement on that; see
http://archives.postgresql.org/message-id/2818.1216753264@sss.pgh.pa.us

> Yet another problem is that if so much work is offloaded to autovacuum, 
> it can tie up autovacuum workers for a very long time. And the work can 
> happen on an unfortunate time, when the system is busy, and affect other 
> queries. There's no vacuum_delay_point()s in gininsertcleanup, so 
> there's no way to throttle that work.
Will insert.


> I think we need a hard limit on the number of list pages, before we can 
> consider accepting this patch. After the limit is full, the next 
> inserter can flush the list, inserting the tuples in the list into the 
> tree, or just fall back to regular, slow, inserts.

A hard limit is not a very good decision.
- If it triggers a flush when the limit is reached, then an insert or update
will sometimes take an unacceptable amount of time. A small limit is not very
helpful, and a large one makes the flush take a long time. Although if we
calculate the limit from the work_mem setting, it might be useful: a bulk
insert could then collect all pending pages in memory at once.
- Falling back to regular inserts will take a long time for an update of the
whole table, and that was one of the reasons for this patch. Users forget to
drop the GIN index before a global update, and the query runs forever.

-- 
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
                                                    WWW: http://www.sigaev.ru/


Re: [PATCHES] GIN improvements

From
Heikki Linnakangas
Date:
Teodor Sigaev wrote:
> - Falling back to regular insert will take long time for update of whole 
> table - and that was one of reasons of that patch. Users forget to drop 
> GIN index before a global update and query runs forever.

If *that* is a use case we're interested in, the incoming tuples could 
be accumulated in backend-private memory, and inserted into the index at 
commit. That would be a lot simpler, with no need to worry about 
concurrent inserts or vacuums.

--
   Heikki Linnakangas
   EnterpriseDB   http://www.enterprisedb.com


Re: [PATCHES] GIN improvements

From
Tom Lane
Date:
Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> writes:
> Teodor Sigaev wrote:
>> - Falling back to regular insert will take long time for update of whole 
>> table - and that was one of reasons of that patch. Users forget to drop 
>> GIN index before a global update and query runs forever.

> If *that* is a use case we're interested in, the incoming tuples could 
> be accumulated in backend-private memory, and inserted into the index at 
> commit. That would be a lot simpler, with no need to worry about 
> concurrent inserts or vacuums.

Doesn't work --- the index would yield wrong answers for later queries
in the same transaction.
        regards, tom lane


Re: [PATCHES] GIN improvements

From
Heikki Linnakangas
Date:
Tom Lane wrote:
> Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> writes:
>> If *that* is a use case we're interested in, the incoming tuples could 
>> be accumulated in backend-private memory, and inserted into the index at 
>> commit. That would be a lot simpler, with no need to worry about 
>> concurrent inserts or vacuums.
> 
> Doesn't work --- the index would yield wrong answers for later queries
> in the same transaction.

Queries would still need to check the backend-private list.

--
   Heikki Linnakangas
   EnterpriseDB   http://www.enterprisedb.com


Re: [PATCHES] GIN improvements

From
Greg Stark
Date:

On 3 Dec 2008, at 06:57 AM, Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> wrote:

> Tom Lane wrote:
>> Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> writes:
>>> If *that* is a use case we're interested in, the incoming tuples  
>>> could be accumulated in backend-private memory, and inserted into  
>>> the index at commit. That would be a lot simpler, with no need to  
>>> worry about concurrent inserts or vacuums.
>> Doesn't work --- the index would yield wrong answers for later  
>> queries
>> in the same transaction.
>
> Queries would still need to check the backend-private list.
>

More to the point -- at least if I'm guessing right about Tom's thoughts --
queries would still have to check the heap. That is, the backend-private
list would just be a proxy for buffered *index* tuples.

If we do this though it would be really nice to do it at a higher  
level than the indexam. If we could do it for any indexam that  
provides a kind of bulk insert method that would be great.

I'm just not sure how to support all the indexable operators for the  
various indexams on the local buffered list.

Incidentally buffering btree index inserts was originally Heikki's idea.




Re: [PATCHES] GIN improvements

From
Tom Lane
Date:
Greg Stark <greg.stark@enterprisedb.com> writes:
> If we do this though it would be really nice to do it at a higher  
> level than the indexam. If we could do it for any indexam that  
> provides a kind of bulk insert method that would be great.

> I'm just not sure how to support all the indexable operators for the  
> various indexams on the local buffered list.

In principle, just return all those TIDs marked "lossy, please recheck".
This is a bit brute-force but I'm not sure any useful optimization is
possible.
        regards, tom lane


Re: [PATCHES] GIN improvements

From
Heikki Linnakangas
Date:
Tom Lane wrote:
> Greg Stark <greg.stark@enterprisedb.com> writes:
>> If we do this though it would be really nice to do it at a higher  
>> level than the indexam. If we could do it for any indexam that  
>> provides a kind of bulk insert method that would be great.
> 
>> I'm just not sure how to support all the indexable operators for the  
>> various indexams on the local buffered list.
> 
> In principle, just return all those TIDs marked "lossy, please recheck".
> This is a bit brute-force but I'm not sure any useful optimization is
> possible.

You could flush the local buffer to the index whenever the index is 
queried. Not sure if it's better than returning them for recheck, though.

This wouldn't work for unique indexes, BTW, but that's not a problem for 
GIN.

--
   Heikki Linnakangas
   EnterpriseDB   http://www.enterprisedb.com


Re: [PATCHES] GIN improvements

From
Teodor Sigaev
Date:
Changes:
  - added vacuum_delay_point() in gininsertcleanup
  - added a trigger to run vacuum based on the number of tuples inserted
    since the last vacuum. That count represents the number of tuples
    really inserted into the index and is calculated as
    tuples_inserted + tuples_updated - tuples_hot_updated.
    The trigger fires when instuples > vac_base_thresh, because search
    time is linear in the number of pending pages (tuples).
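The arithmetic of that trigger condition can be sketched as follows (the
names follow this mail, not necessarily the patch's actual variables):

```c
/* Sketch of the autovacuum trigger condition described above.
 * All names mirror the mail and are illustrative. */
#include <stdbool.h>

static bool
gin_needs_autovacuum(long tuples_inserted, long tuples_updated,
                     long tuples_hot_updated, long vac_base_thresh)
{
    /* Tuples really inserted into the index since the last vacuum:
     * HOT updates don't create new index entries, so subtract them. */
    long instuples = tuples_inserted + tuples_updated - tuples_hot_updated;

    return instuples > vac_base_thresh;
}
```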
--
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
                                                    WWW: http://www.sigaev.ru/

Attachment

Re: [PATCHES] GIN improvements

From
Jeff Davis
Date:
On Fri, 2008-12-12 at 20:36 +0300, Teodor Sigaev wrote:
> Changes:
>   - added vacuum_delay_point() in gininsertcleanup
>   - added a trigger to run vacuum based on the number of tuples inserted
>     since the last vacuum. That count represents the number of tuples
>     really inserted into the index and is calculated as
>     tuples_inserted + tuples_updated - tuples_hot_updated.
>     The trigger fires when instuples > vac_base_thresh, because search
>     time is linear in the number of pending pages (tuples).

Hi,

Comments:

1.

You use something like the following in a few places:

START_CRIT_SECTION();
...
l = PageAddItem(...);
if (l == InvalidOffsetNumber)
    elog(ERROR, "failed to add item to index page in \"%s\"",
         RelationGetRelationName(index));

It's no use using ERROR, because it will turn into PANIC, which is
obviously unacceptable. It looks to me like those conditions can't
happen anyway, so it's probably better to add a comment explaining why
it can't happen, and Assert().

2. It appears to be properly triggering autovacuum when only inserts
happen, so I think that issue is solved.

3. Simple performance result with autovacuum off:

create table random(i int[]);
insert into random select ARRAY[(random() * 10)::int, (random() *
10)::int, (random() * 10)::int, (random() * 10)::int, (random() *
10)::int, (random() * 10)::int, (random() * 10)::int, (random() *
10)::int, (random() * 10)::int, (random() * 10)::int] from
generate_series(1, 1000000);

\timing on
drop table foo;
create table foo(i int[]);
create index foogin on foo using gin (i);
insert into foo select i from random;
vacuum foo;

Without patch:
  INSERT: 71s
  VACUUM:  2s
  total:  73s

With patch:
  INSERT: 33s
  VACUUM: 12s
  total:  45s

So, there is a performance advantage. This was just a quick test to make
sure the numbers matched my expectations.

4. Heikki mentioned:
http://archives.postgresql.org/pgsql-hackers/2008-11/msg01832.php

"To make things worse, a query will fail if all the matching
fast-inserted tuples don't fit in the non-lossy tid bitmap."

That issue still remains, correct? Is there a resolution to that?

5. I attached a newer version merged with HEAD.

6. You defined:

#define GinPageHasFullRow(page) ( GinPageGetOpaque(page)->flags & GIN_LIST_FULLROW )

But in many places you still do the same check without using that macro.
The macro has only one call site, so I suggest either removing the macro
entirely, or using it every place you check that flag.

7.  I don't understand this chunk of code:

ItemPointerData item = pos->item;

if ( scanGetCandidate(scan, pos) == false ||
     !ItemPointerEquals(&pos->item, &item) )
    elog(ERROR, "Could not process tuple");  /* XXX should not be here! */

How can (!ItemPointerEquals(&pos->item, &item)) ever happen?

And how can (scanGetCandidate(scan, pos) == false) ever happen? Should
that be an Assert() instead?

If those can happen during normal operation, then we need a better error
message there.

Regards,
    Jeff Davis

Attachment

Re: [PATCHES] GIN improvements

From
Teodor Sigaev
Date:
New version. Changes:
  - synced with current CVS
  - added all your changes
  - autovacuum will run if fast-update mode is turned on and the
    fresh-tuple trigger fires
  - gincostestimate now tries to calculate the cost of scanning the
    pending pages. gincostestimate sets disable_cost if it believes the
    tidbitmap will become lossy. To support this, tidbitmap has a new
    method: an estimate of the maximum number of tuples with guaranteed
    non-lossy mode.


> START_CRIT_SECTION();
> ...
> l = PageAddItem(...);
> if (l == InvalidOffsetNumber)
>     elog(ERROR, "failed to add item to index page in \"%s\"",
>          RelationGetRelationName(index));
>
> It's no use using ERROR, because it will turn into PANIC, which is
I did that similarly to other GIN/GiST call sites. BTW, btree directly
emits PANIC if PageAddItem fails.


>
> 4. Heikki mentioned:
> http://archives.postgresql.org/pgsql-hackers/2008-11/msg01832.php
>
> "To make things worse, a query will fail if all the matching
> fast-inserted tuples don't fit in the non-lossy tid bitmap."
>
> That issue still remains, correct? Is there a resolution to that?

Now gincostestimate can forbid an index scan via disable_cost (see Changes).
Of course, it doesn't prevent failure in the case of a large update (for
example), but it does in most cases. BTW, because the pending list is
scanned sequentially, the estimated cost grows quickly and an index scan
soon becomes non-optimal.

>
> 5. I attached a newer version merged with HEAD.
Thank you

> 6. You defined:
>
> #define GinPageHasFullRow(page) ( GinPageGetOpaque(page)->flags &
> GIN_LIST_FULLROW )
>
Fixed


> 7.  I don't understand this chunk of code:
>
> How can (!ItemPointerEquals(&pos->item, &item)) ever happen?
>
> And how can (scanGetCandidate(scan, pos) == false) ever happen? Should
> that be an Assert() instead?
>
> If those can happen during normal operation, then we need a better error
> message there.

It should be an Assert, but then the assert-enabled and assert-disabled
code would differ :(. In both cases scanGetCandidate() must be called, but
only in assert-enabled code do we need to check the return value and
pos->item.

--
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
                                                    WWW: http://www.sigaev.ru/


Attachment

Re: [PATCHES] GIN improvements

From
Jeff Davis
Date:
On Fri, 2009-01-16 at 15:39 +0300, Teodor Sigaev wrote:
> > START_CRIT_SECTION();
> > ...
> > l = PageAddItem(...);
> > if (l == InvalidOffsetNumber)
> >     elog(ERROR, "failed to add item to index page in \"%s\"",
> >          RelationGetRelationName(index));
> > 
> > It's no use using ERROR, because it will turn into PANIC, which is
> I did that similarly to other GIN/GiST call sites. BTW, btree directly
> emits PANIC if PageAddItem fails.
> 

I'd still prefer PANIC over an ERROR that will always turn into a PANIC.
I'll leave it as you did, though.

> > 
> > 4. Heikki mentioned:
> > http://archives.postgresql.org/pgsql-hackers/2008-11/msg01832.php
> > 
> > "To make things worse, a query will fail if all the matching 
> > fast-inserted tuples don't fit in the non-lossy tid bitmap."
> > 
> > That issue still remains, correct? Is there a resolution to that?
> 
> Now gincostestimate can forbid an index scan via disable_cost (see Changes).
> Of course, it doesn't prevent failure in the case of a large update (for
> example), but it does in most cases. BTW, because the pending list is
> scanned sequentially, the estimated cost grows quickly and an index scan
> soon becomes non-optimal.

Is this a 100% bulletproof solution, or is it still possible for a query
to fail due to the pending list? It relies on the stats collector, so
perhaps in rare cases it could still fail?

It might be surprising though, that after an UPDATE and before a VACUUM,
the gin index just stops working (if work_mem is too low). For many
use-cases, if GIN is not used, it's just as bad as the query failing,
because it would be so slow.

Can you explain why the tbm must not be lossy?

Also, can you clarify why a large update can cause a problem? In the
previous discussion, you suggested that it force normal index inserts
after a threshold based on work_mem:

http://archives.postgresql.org/pgsql-hackers/2008-12/msg00065.php

Regards,
    Jeff Davis



Re: [PATCHES] GIN improvements

From
Teodor Sigaev
Date:
Changes:
  Results of the pending list's scan are now placed directly into the
resulting tidbitmap. This saves cycles spent filtering results and reduces
memory usage. It also removes the need to check the tidbitmap for
lossiness.


> Is this a 100% bulletproof solution, or is it still possible for a query
> to fail due to the pending list? It relies on the stats collector, so
> perhaps in rare cases it could still fail?
Yes :(

> Can you explain why the tbm must not be lossy?

The problem with a lossy tbm has two aspects:
  - The amgettuple interface has no way to work with a page-wide result
    instead of an exact ItemPointer: amgettuple cannot return just a block
    number the way amgetbitmap can.
  - Because of a concurrent vacuum process: while we scan the pending list,
    its contents could be transferred into the regular index structure, and
    then we would find the same tuple twice. Again, amgettuple has no
    protection from that; only amgetbitmap does. So we need to filter the
    results from the regular GIN scan against the results from the pending
    list, and for that filtering we can't use a lossy tbm.

v0.21 prevents that failure in gingetbitmap, because now all results are
collected in a single resulting tidbitmap.



> Also, can you clarify why a large update can cause a problem? In the

If a query looks like
UPDATE tbl SET col=... WHERE col ..., and the planner chooses a GIN index
scan over col, then there is a chance that the pending list grows past the
non-lossy limit.


> previous discussion, you suggested that it force normal index inserts
> after a threshold based on work_mem:
>
> http://archives.postgresql.org/pgsql-hackers/2008-12/msg00065.php

I see only two guaranteed solutions to the problem:
- After the limit is reached, force normal index inserts. One motivation
for the patch was the frequent question from users: why is an update of a
whole table with a GIN index so slow? So this way would not resolve that
question.
- After the limit is reached, force cleanup of the pending list by calling
gininsertcleanup. Not very good, because users will sometimes see a huge
execution time for a simple insert, although users who run a huge update
should be satisfied.

I have difficulty choosing between them. It seems to me the second way is
better: if a user sees a very long insertion time, then (auto)vacuum on
his installation should be tweaked.


--
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
                                                    WWW: http://www.sigaev.ru/

Attachment

Re: [PATCHES] GIN improvements

From
Alvaro Herrera
Date:
Teodor Sigaev wrote:
> New version. Changes:
>  - synced with current CVS

I notice you added a fillfactor reloption in ginoptions, but does it
really make sense?  I recall removing it because the original code
contained a comment that says "this is here because default_reloptions
wants it, but it has no effect".

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support


Re: [PATCHES] GIN improvements

From
Teodor Sigaev
Date:
> I notice you added a fillfactor reloption in ginoptions, but does it
> really make sense?  I recall removing it because the original code
> contained a comment that says "this is here because default_reloptions
> wants it, but it has no effect".

I didn't change the recognition of the fillfactor value, although GIN
doesn't use it for now.

-- 
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
                                                    WWW: http://www.sigaev.ru/


Re: [PATCHES] GIN improvements

From
Alvaro Herrera
Date:
Teodor Sigaev wrote:
>> I notice you added a fillfactor reloption in ginoptions, but does it
>> really make sense?  I recall removing it because the original code
>> contained a comment that says "this is here because default_reloptions
>> wants it, but it has no effect".
>
> I didn't change a recognition of fillfactor value, although GIN doesn't 
> use it for now.

I suggest you take StdRdOptions out of the GinOptions struct, and leave
fillfactor out of ginoptions.  I don't think there's much point in
supporting options that don't actually do anything.  If the user tries
to set fillfactor for a gin index, he will get an error.  Which is a
good thing IMHO.

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support


Re: [PATCHES] GIN improvements

From
Tom Lane
Date:
Alvaro Herrera <alvherre@commandprompt.com> writes:
> Teodor Sigaev wrote:
>> I didn't change a recognition of fillfactor value, although GIN doesn't 
>> use it for now.

> I suggest you take StdRdOptions out of the GinOptions struct, and leave
> fillfactor out of ginoptions.  I don't think there's much point in
> supporting options that don't actually do anything.  If the user tries
> to set fillfactor for a gin index, he will get an error.  Which is a
> good thing IMHO.

+1 ... appearing to accept an option that doesn't really do anything is
likely to confuse users.  We didn't have much choice in the previous
incarnation of reloptions, but I think now we should throw errors when
we can.
        regards, tom lane


Re: [PATCHES] GIN improvements

From
Teodor Sigaev
Date:
> I suggest you take StdRdOptions out of the GinOptions struct, and leave
> fillfactor out of ginoptions.  I don't think there's much point in
> supporting options that don't actually do anything.  If the user tries
> to set fillfactor for a gin index, he will get an error.  Which is a
> good thing IMHO.
Oh, I see. Fixed.

--
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
                                                    WWW: http://www.sigaev.ru/

Attachment

Re: [PATCHES] GIN improvements

From
Jeff Davis
Date:
On Mon, 2009-01-19 at 19:53 +0300, Teodor Sigaev wrote:
> I see only two guaranteed solutions to the problem:
> - After the limit is reached, force normal index inserts. One motivation
> for the patch was the frequent question from users: why is an update of a
> whole table with a GIN index so slow? So this way would not resolve that
> question.
> - After the limit is reached, force cleanup of the pending list by calling
> gininsertcleanup. Not very good, because users will sometimes see a huge
> execution time for a simple insert, although users who run a huge update
> should be satisfied.
> 
> I have difficulty choosing between them. It seems to me the second way is
> better: if a user sees a very long insertion time, then (auto)vacuum on
> his installation should be tweaked.
> 
> 

I agree that the second solution sounds better to me.

With the new Visibility Map, it's more reasonable to run VACUUM more
often, so those that are inserting single tuples at a time should not
encounter the long insert time.

I'm still looking at the rest of the patch.

Regards,
    Jeff Davis



New pg_dump patch, --no-stats flag, disables sending to statistics collector

From
Bryce Nesbitt
Date:
This patch adds another flag to pg_dump, this time to disable statistics
collection.  This is useful if you don't want pg_dump activity to show
(or clutter) your normal statistics.  This may be appropriate for an
organization that regularly dumps a database for backup purposes, but
wants to analyze only the application's database use.

This is patched against CVS HEAD from today, though the code is quite
version independent.  This patch is unsolicited, and as far as I know
has not been discussed on the list prior to now.

Comments?

Index: pg_dump.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/bin/pg_dump/pg_dump.c,v
retrieving revision 1.514
diff -c -2 -r1.514 pg_dump.c
*** pg_dump.c    18 Jan 2009 20:44:45 -0000    1.514
--- pg_dump.c    20 Jan 2009 20:47:25 -0000
***************
*** 236,239 ****
--- 236,240 ----
      static int  outputNoTablespaces = 0;
      static int    use_setsessauth = 0;
+     static int  noStatsCollection = 0;

      static struct option long_options[] = {
***************
*** 278,281 ****
--- 279,283 ----
          {"role", required_argument, NULL, 3},
          {"use-set-session-authorization", no_argument, &use_setsessauth, 1},
+         {"no-stats", no_argument, &noStatsCollection, 1},

          {NULL, 0, NULL, 0}
***************
*** 430,433 ****
--- 432,437 ----
                  else if (strcmp(optarg, "no-tablespaces") == 0)
                      outputNoTablespaces = 1;
+                 else if (strcmp(optarg, "no-stats") == 0)
+                     noStatsCollection = 1;
                  else if (strcmp(optarg, "use-set-session-authorization") == 0)
                      use_setsessauth = 1;
***************
*** 613,616 ****
--- 617,629 ----
          do_sql_command(g_conn, "SET statement_timeout = 0");

+    /*
+     * Disable collection of statistics.  pg_dump's activity may be very different
+     * from what you are trying to analyze in the stats tables.
+     */
+     if( noStatsCollection ) {
+         do_sql_command(g_conn, "SET stats_block_level = false");
+         do_sql_command(g_conn, "SET stats_row_level   = false");
+     }
+
      /*
       * Start serializable transaction to dump consistent data.
***************
*** 833,836 ****
--- 846,850 ----
      printf(_("  -U, --username=NAME      connect as specified database user\n"));
      printf(_("  -W, --password           force password prompt (should happen automatically)\n"));
+     printf(_("  --no-stats               disable statistics collection (superuser only)\n"));

      printf(_("\nIf no database name is supplied, then the PGDATABASE environment\n"

Re: New pg_dump patch, --no-stats flag, disables sending to statistics collector

From
"Jaime Casanova"
Date:
On Tue, Jan 20, 2009 at 4:04 PM, Bryce Nesbitt <bryce2@obviously.com> wrote:
> This patch adds another flag to pg_dump, this time to disable statistics
> collection.  This is useful if you don't want pg_dump activity to show (or
> clutter) your normal statistics.  This may be appropriate for an
> organization that regularly dumps a database for backup purposes, but wants
> to analyze only the application's database use.
>

I haven't looked at the patch or its functional use... but a question
jumps to the top of my head: is there a reason not to make this the
default behaviour?

--
Atentamente,
Jaime Casanova
Soporte y capacitación de PostgreSQL
Asesoría y desarrollo de sistemas
Guayaquil - Ecuador
Cel. +59387171157


Re: New pg_dump patch, --no-stats flag, disables sending to statistics collector

From
Bruce Momjian
Date:
Jaime Casanova wrote:
> On Tue, Jan 20, 2009 at 4:04 PM, Bryce Nesbitt <bryce2@obviously.com> wrote:
> > This patch adds another flag to pg_dump, this time to disable statistics
> > collection.  This is useful if you don't want pg_dump activity to show (or
> > clutter) your normal statistics.  This may be appropriate for an
> > organization that regularly dumps a database for backup purposes, but wants
> > to analyze only the application's database use.
> >
> 
> I haven't looked at the patch or its functional use... but a question
> jumps to the top of my head: is there a reason not to make this the
> default behaviour?

If this is a generally desired feature (and I question that), I think we
need a more general solution.

--
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +


On Tue, 20 Jan 2009, Bryce Nesbitt wrote:

> This patch adds another flag to pg_dump, this time to disable statistics 
> collection.

You can pass session parameters to anything that uses the standard libpq 
library using PGOPTIONS.  See 
http://www.postgresql.org/docs/8.3/static/config-setting.html for a 
sample.  I suspect that something like:

PGOPTIONS='-c stats_block_level=false -c stats_row_level=false' pg_dump

would do the same thing as your patch without having to touch the code.

That's a pretty obscure bit of information though, and it would be 
worthwhile to update the documentation suggesting such a syntax because I 
think this would be handy for a lot of people.  I was already planning to 
do that for another use case (pgbench) once the 8.4 work here shifts from 
development to testing and I have some more time for writing.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD


Re: New pg_dump patch, --no-stats flag, disables sending to statistics collector

From
Bryce Nesbitt
Date:
Bruce Momjian wrote:
> Jaime Casanova wrote:
>   
>> I haven't looked at the patch or its functional use... but a question
>> jumps to the top of my head: is there a reason not to make this the
>> default behaviour?
>>     
> If this is a generally desired feature (and I question that), I think we
> need a more general solution.
>   
I'm not a big fan of flags, preferring good defaults.  But I was not 
bold enough to suggest this as a new default, as someone would probably 
want the opposite flag.  If you're measuring total server load (rather 
than analyzing an application), you may want to see pg_dump activity.

As for a "general" solution: one could add the ability to inject 
arbitrary sql just prior to a dump run.  That would let someone roll 
their own by injecting "SET stats_block_level = false", or make any 
other arbitrary settings changes.

Or one might slice the statistics collector by  role or user (so your 
'backup' role would keep a separate tally).

On the other hand, the flag's advantage is simplicity and directness.


Bruce,

> If this is a generally desired feature (and I question that), I think we
> need a more general solution.

I'd argue that it is generally desired (or some convenient workaround) 
but not urgently so.  I'd also argue that if we're going to have a 
--no-stats flag, it should exist for the other client utilities as well; 
if I don't want pg_dump showing up, I probably don't want vacuum showing 
up, or various other already-debugged maintenance routines.

I'd suggest putting this into the first patch review for 8.5.

--Josh



Josh Berkus <josh@agliodbs.com> writes:
> Bruce,
>> If this is a generally desired feature (and I question that), I think we
>> need a more general solution.

> I'd argue that it is generally desired (or some convenient workaround) 
> but not urgently so.

One person asking for it does not make it "generally desired".  I think
that the use-case is more than adequately served already by using
PGOPTIONS, or by running pg_dump under a user id that has the
appropriate GUC settings applied via ALTER USER.
        regards, tom lane


Re: New pg_dump patch, --no-stats flag, disables sending to statistics collector

From
"Joshua D. Drake"
Date:
On Tue, 2009-01-20 at 18:37 -0500, Tom Lane wrote:
> Josh Berkus <josh@agliodbs.com> writes:
> > Bruce,
> >> If this is a generally desired feature (and I question that), I think we
> >> need a more general solution.
> 
> > I'd argue that it is generally desired (or some convenient workaround) 
> > but not urgently so.
> 
> One person asking for it does not make it "generally desired".  I think
> that the use-case is more than adequately served already by using
> PGOPTIONS, or by running pg_dump under a user id that has the
> appropriate GUC settings applied via ALTER USER.

+1

Sincerely,

Joshua D. Drake


> 
>             regards, tom lane
> 
-- 
PostgreSQL - XMPP: jdrake@jabber.postgresql.org
Consulting, Development, Support, Training - 503-667-4564
http://www.commandprompt.com/ - The PostgreSQL Company, serving since 1997



Re: New pg_dump patch, --no-stats flag, disables sending to statistics collector

From
Bryce Nesbitt
Date:
Josh Berkus wrote:
> I'd argue that it is generally desired (or some convenient workaround) 
> but not urgently so.  I'd also argue that if we're going to have a 
> --no-stats flag, it should exist for the other client utilities as 
> well; if I don't want pg_dump showing up, I probably don't want vacuum 
> showing up, or various other already-debugged maintenance routines.
>
> I'd suggest putting this into the first patch review for 8.5.
>
> --Josh

As pg_dumpall calls pg_dump, I think this is covered, or at least 
coverable. As for vacuum, I've never seen that activity in the stats. 
Can you supply a more specific scenario where routine maintenance is 
harmfully cluttering the stats tables?  A specific utility that needs 
attention?


For this feature I'm not so sure about "generally desired" -- I'll bet 
most people don't even think about this.  The question is, among those 
who DO think about it, what's the best behavior?  Can it be argued that 
excluding pg_dump is "generally desirable" for the average use case?

If there is not enough demand for a dedicated flag, I may submit a man 
page patch documenting the Do-It-Yourself solution proposed by Greg 
Smith, or the one proposed by Tom Lane.
                     G'day                     -Bryce

PS: Note that no respondent on the pgsql users' lists thought excluding 
pg_dump was even possible -- so that at least argues for the desirability 
of instructional material :-).


Re: [PATCHES] GIN improvements

From
Teodor Sigaev
Date:
>> - after limit is reached, force cleanup of pending list by calling
>> gininsertcleanup. Not very good, because users sometimes will see a huge
>> execution time of simple insert. Although users who runs a huge update should be
>> satisfied.
>>
>> I have difficulties in a choice of way. Seems to me, the better will be second
>> way: if user gets very long time of insertion then (auto)vacuum of his
>> installation should tweaked.
> I agree that the second solution sounds better to me.


Done. Now GIN counts the number of pending tuples and pages and stores them
on the metapage. Index cleanup can start during a normal insertion in two cases:
- the number of pending tuples is too high to guarantee a non-lossy tidbitmap
- the pending pages' content doesn't fit into work_mem.
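For illustration, the two cleanup triggers above can be sketched as a single predicate over the metapage counters. This is a hypothetical sketch with made-up names (`PendingStats`, `pending_cleanup_needed`); the actual patch reads the real metapage fields inside GIN's insert path:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical mirror of the counters the patch stores on the metapage. */
typedef struct PendingStats
{
    int64_t nPendingTuples;   /* tuples queued in the pending list */
    int64_t pendingListBytes; /* total size of pending-list pages */
} PendingStats;

/* Returns true when an insertion should trigger pending-list cleanup. */
static bool
pending_cleanup_needed(const PendingStats *stats,
                       int64_t maxNonLossyTids, /* tidbitmap non-lossy capacity */
                       int64_t workMemBytes)    /* work_mem, in bytes */
{
    /* Case 1: too many pending tuples to guarantee a non-lossy tidbitmap. */
    if (stats->nPendingTuples > maxNonLossyTids)
        return true;

    /* Case 2: the pending pages' content no longer fits into work_mem. */
    if (stats->pendingListBytes > workMemBytes)
        return true;

    return false;
}
```

Either condition alone is enough to start cleanup during a normal insert, which is why a single large update can occasionally pay the full cleanup cost.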

BTW, gincostestimate could use that information for cost estimation, but is
opening the index and reading the metapage in amcostestimate acceptable?



--
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
                                                    WWW: http://www.sigaev.ru/

Attachment

Re: [PATCHES] GIN improvements

From
Jeff Davis
Date:
On Wed, 2009-01-21 at 15:06 +0300, Teodor Sigaev wrote:
> Done. Now GIN counts the number of pending tuples and pages and stores them
> on the metapage. Index cleanup can start during a normal insertion in two cases:
> - the number of pending tuples is too high to guarantee a non-lossy tidbitmap
> - the pending pages' content doesn't fit into work_mem.

Great, thanks. I will take a look at this version tonight.

Because time is short, I will mark it as "Ready for committer review"
now. I think all of the major issues have been addressed, and I'll just
be looking at the code and testing it.

> BTW, gincostestimate could use that information for cost estimation, but is
> opening the index and reading the metapage in amcostestimate acceptable?

That sounds reasonable to me. I think that's what the index-specific
cost estimators are for. Do you expect a performance impact?

Regards,Jeff Davis



Re: [PATCHES] GIN improvements

From
Teodor Sigaev
Date:
>> BTW, gincostestimate could use that information for cost estimation, but is
>> opening the index and reading the metapage in amcostestimate acceptable?
>
> That sounds reasonable to me. I think that's what the index-specific
> cost estimators are for.

Done.

> Do you expect a performance impact?

I'm worried about that and will test tomorrow. But the statistics from the index are exact.

--
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
                                                    WWW: http://www.sigaev.ru/

Attachment

Re: [PATCHES] GIN improvements

From
Teodor Sigaev
Date:
I'm very sorry, but v0.24 has a silly bug with an uninitialized value :(.
A new version is attached.

--
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
                                                    WWW: http://www.sigaev.ru/

Attachment

Re: [PATCHES] GIN improvements

From
Tom Lane
Date:
Teodor Sigaev <teodor@sigaev.ru> writes:
> I'm very sorry, but v0.24 has a silly bug with an uninitialized value :(.
> A new version is attached.

I looked at this a little bit --- it needs proofreading ("VACUUME"?).

Do we really need an additional column in pgstat table entries in
order to store something that looks like it can be derived from the
other columns?  The stats tables are way too big already.

Also, I really think it's a pretty bad idea to make index cost
estimation depend on the current state of the index's pending list
--- that state seems far too transient to base plan choices on.
It's particularly got to be nuts to turn off indexscans entirely
if the pending list is "too full".  Having some lossy pages might
not be great but I don't believe it can be so bad that you should
go to a seqscan all the time.
        regards, tom lane


Re: [PATCHES] GIN improvements

From
Teodor Sigaev
Date:
> I looked at this a little bit --- it needs proofreading ("VACUUME"?).
Sorry, the "VACUUME" typo is fixed.

>
> Do we really need an additional column in pgstat table entries in
> order to store something that looks like it can be derived from the
> other columns?  The stats tables are way too big already.

It's not derived, because vacuum resets n_inserted_tuples to zero, but it
doesn't reset tuples_inserted, tuples_updated, or tuples_hot_updated. So
n_inserted_tuples is only calculable from the other columns until the first
vacuum occurs.
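A minimal sketch of that asymmetry (hypothetical struct and function names; the real counters live in the pgstat table entries): after the first vacuum, the cumulative counters keep growing while the vacuum-reset counter restarts, so no expression over the cumulative columns alone can recover it.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical mirror of the two kinds of counters discussed above. */
typedef struct TabStats
{
    int64_t tuples_inserted;   /* cumulative; never reset by vacuum */
    int64_t n_inserted_tuples; /* reset to zero by each vacuum */
} TabStats;

static void
record_insert(TabStats *s, int64_t n)
{
    /* Both counters advance together on insert... */
    s->tuples_inserted += n;
    s->n_inserted_tuples += n;
}

static void
vacuum_reset(TabStats *s)
{
    /* ...but vacuum resets only the "fresh inserts" counter. */
    s->n_inserted_tuples = 0;
}
```

Before the first vacuum the two counters are equal; afterwards they permanently diverge, which is the reason the extra pgstat column is not redundant.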



> Also, I really think it's a pretty bad idea to make index cost
> estimation depend on the current state of the index's pending list
> --- that state seems far too transient to base plan choices on.

I asked about that earlier. v0.23 used the statistics data by calling
pg_stat_get_fresh_inserted_tuples(), so I have reverted to that approach.
It's also possible to add pending list information to IndexOptInfo, if that's
acceptable.

> It's particularly got to be nuts to turn off indexscans entirely
> if the pending list is "too full".  Having some lossy pages might
> not be great but I don't believe it can be so bad that you should
> go to a seqscan all the time.

It's impossible to return a "lossy page" via the amgettuple interface. With
amgetbitmap, though, it works well: GIN will not emit an error even if the
bitmap becomes lossy.

In the attached version, gincostestimate will disable the index scan if the
estimated number of matched tuples in the pending list is greater than the
non-lossy limit of the tidbitmap. That estimate is the product of
indexSelectivity and the number of tuples in the pending list.
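The decision just described can be sketched as follows. This is an illustrative function with hypothetical names; the real gincostestimate works by inflating the path cost rather than returning a flag:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of the disable rule described above: estimate how
 * many pending-list tuples the scan would match, and compare that against
 * the tidbitmap's non-lossy capacity. */
static bool
pending_list_disables_indexscan(double indexSelectivity, /* fraction 0..1 */
                                double nPendingTuples,
                                double nonLossyLimit)
{
    /* Estimated matches = selectivity times pending-list size. */
    double estMatches = indexSelectivity * nPendingTuples;

    return estMatches > nonLossyLimit;
}
```

This makes concrete Tom's objection: the outcome flips as nPendingTuples grows between inserts, so a cached plan chosen on one side of the threshold can be wrong moments later.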

--
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
                                                    WWW: http://www.sigaev.ru/

Attachment

Re: [PATCHES] GIN improvements

From
Jeff Davis
Date:
On Mon, 2009-02-02 at 20:38 -0500, Tom Lane wrote:
> Also, I really think it's a pretty bad idea to make index cost
> estimation depend on the current state of the index's pending list
> --- that state seems far too transient to base plan choices on.

I'm confused by this. Don't we want to base the plan choice on the most
current data, even if it is transient?

Regards,Jeff Davis



Re: [PATCHES] GIN improvements

From
Robert Haas
Date:
On Wed, Feb 4, 2009 at 1:39 PM, Jeff Davis <pgsql@j-davis.com> wrote:
> On Mon, 2009-02-02 at 20:38 -0500, Tom Lane wrote:
>> Also, I really think it's a pretty bad idea to make index cost
>> estimation depend on the current state of the index's pending list
>> --- that state seems far too transient to base plan choices on.
>
> I'm confused by this. Don't we want to base the plan choice on the most
> current data, even if it is transient?
>
> Regards,
>        Jeff Davis

Well, there's nothing to force that plan to be invalidated when the
state of the pending list changes, is there?

...Robert


Re: [PATCHES] GIN improvements

From
Jeff Davis
Date:
On Wed, 2009-02-04 at 14:40 -0500, Robert Haas wrote:
> Well, there's nothing to force that plan to be invalidated when the
> state of the pending list changes, is there?
> 

Would it be unreasonable to invalidate cached plans during the pending
list cleanup?

Anyway, it just strikes me as strange to expect a plan to be a good plan
for very long. Can you think of an example where we applied this rule
before?

Regards,Jeff Davis



Re: [PATCHES] GIN improvements

From
Robert Haas
Date:
On Wed, Feb 4, 2009 at 4:23 PM, Jeff Davis <pgsql@j-davis.com> wrote:
> On Wed, 2009-02-04 at 14:40 -0500, Robert Haas wrote:
>> Well, there's nothing to force that plan to be invalidated when the
>> state of the pending list changes, is there?
>>
>
> Would it be unreasonable to invalidate cached plans during the pending
> list cleanup?
>
> Anyway, it just strikes me as strange to expect a plan to be a good plan
> for very long. Can you think of an example where we applied this rule
> before?

Well, I am not an expert on this topic.

But, plans for prepared statements and statements within PL/pgsql
functions are cached for the lifetime of the session, which in some
situations could be quite long.

I would think that invalidating significantly more often would be bad
for performance.

...Robert


Re: [PATCHES] GIN improvements

From
Tom Lane
Date:
Jeff Davis <pgsql@j-davis.com> writes:
> On Wed, 2009-02-04 at 14:40 -0500, Robert Haas wrote:
>> Well, there's nothing to force that plan to be invalidated when the
>> state of the pending list changes, is there?

> Would it be unreasonable to invalidate cached plans during the pending
> list cleanup?

If the pending list cleanup is done by VACUUM then such an invalidation
already happens (VACUUM forces it after updating pg_class.reltuples/
relpages).  What's bothering me is the lack of any reasonable mechanism
for invalidating plans in the other direction, ie when the list grows
past the threshold where this code wants to turn off indexscans.  Since
the threshold depends on parameters that can vary across sessions, you'd
more or less have to send a global invalidation after every addition to
the list, in case that addition put it over the threshold in some other
session's view.  That's unreasonably often, in my book.

Also, as mentioned earlier, I'm pretty down on the idea of a threshold
where indexscans suddenly turn off entirely; that's not my idea of how
the planner ought to work.

But the real bottom line is: if autovacuum is working properly, it
should clean up the index before the list ever gets to the point where
it'd be sane to turn off indexscans.  So I don't see why we need to hack
the planner for this at all.  If any hacking is needed, it should be
in the direction of making sure autovacuum puts sufficient priority
on this task.
        regards, tom lane


Re: [PATCHES] GIN improvements

From
Teodor Sigaev
Date:
> But the real bottom line is: if autovacuum is working properly, it
> should clean up the index before the list ever gets to the point where
> it'd be sane to turn off indexscans.  So I don't see why we need to hack
> the planner for this at all.  If any hacking is needed, it should be
> in the direction of making sure autovacuum puts sufficient priority
> on this task.

Autovacuum will start if a table has a GIN index with fastupdate=on and the
number of inserted tuples since the last vacuum exceeds
autovacuum_vacuum_threshold.
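A minimal sketch of that trigger condition (hypothetical function and parameter names; the real check sits inside autovacuum's per-relation scheduling logic):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch of the autovacuum trigger described above: a table
 * qualifies when it carries a GIN index with fastupdate=on and enough
 * tuples have been inserted since the last vacuum. */
static bool
gin_pending_autovacuum_needed(bool hasGinFastupdateIndex,
                              int64_t insertedSinceVacuum,
                              int64_t autovacuumVacuumThreshold)
{
    return hasGinFastupdateIndex &&
           insertedSinceVacuum > autovacuumVacuumThreshold;
}
```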
-- 
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
                                                    WWW: http://www.sigaev.ru/