Thread: GiST indexes and concurrency (tsearch2)

GiST indexes and concurrency (tsearch2)

From: "Marinos J. Yannikos"

Hi,

according to
http://www.postgresql.org/docs/8.0/interactive/limitations.html,
concurrent access to GiST indexes isn't possible at the moment. I
haven't read the thesis mentioned there, but I presume that concurrent
read access is also impossible. Is there any workaround for this, esp.
if the index is usually only read and not written to?

It seems to be a big problem with tsearch2, when multiple clients are
hammering the db (we have a quad opteron box here that stays 75% idle
despite an apachebench with concurrency 10 stressing the php script that
uses tsearch2, with practically no disk accesses)

Regards,
  Marinos
--
Dipl.-Ing. Marinos Yannikos, CEO
Preisvergleich Internet Services AG
Obere Donaustraße 63/2, A-1020 Wien
Tel./Fax: (+431) 5811609-52/-55

Re: GiST indexes and concurrency (tsearch2)

From: Tom Lane

"Marinos J. Yannikos" <mjy@geizhals.at> writes:
> according to
> http://www.postgresql.org/docs/8.0/interactive/limitations.html,
> concurrent access to GiST indexes isn't possible at the moment. I
> haven't read the thesis mentioned there, but I presume that concurrent
> read access is also impossible.

You presume wrong ...

            regards, tom lane

Re: GiST indexes and concurrency (tsearch2)

From: Christopher Kings-Lynne

> It seems to be a big problem with tsearch2, when multiple clients are
> hammering the db (we have a quad opteron box here that stays 75% idle
> despite an apachebench with concurrency 10 stressing the php script that
> uses tsearch2, with practically no disk accesses)

Concurrency with READs is fine - but you can only have one WRITE going
at once.
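
If you want to rule out a stray writer blocking your readers, a rough
sketch of a check (pg_locks is a standard system view; adapt as needed)
is to look for ungranted lock requests while the benchmark is running:

  SELECT l.pid, l.mode, c.relname
    FROM pg_locks l LEFT JOIN pg_class c ON c.oid = l.relation
   WHERE NOT l.granted;

If that stays empty while the queries are slow, the stall is probably
not a table- or index-level lock conflict.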

Chris

Re: GiST indexes and concurrency (tsearch2)

From: Oleg Bartunov

On Thu, 3 Feb 2005, Marinos J. Yannikos wrote:

> Hi,
>
> according to http://www.postgresql.org/docs/8.0/interactive/limitations.html,
> concurrent access to GiST indexes isn't possible at the moment. I haven't
> read the thesis mentioned there, but I presume that concurrent read access is
> also impossible. Is there any workaround for this, esp. if the index is
> usually only read and not written to?

There should be no problem with READ access.

>
> It seems to be a big problem with tsearch2, when multiple clients are
> hammering the db (we have a quad opteron box here that stays 75% idle despite
> an apachebench with concurrency 10 stressing the php script that uses
> tsearch2, with practically no disk accesses)

I'd like to see some details: version, query, explain analyze.

>
> Regards,
> Marinos
>

     Regards,
         Oleg
_____________________________________________________________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83

Re: GiST indexes and concurrency (tsearch2)

From: "Marinos J. Yannikos"

Oleg Bartunov wrote:
> On Thu, 3 Feb 2005, Marinos J. Yannikos wrote:
>> concurrent access to GiST indexes isn't possible at the moment. I [...]
>
> There should be no problem with READ access.

OK, thanks everyone (perhaps it would make sense to clarify this in the
manual).

> I'd like to see some details: version, query, explain analyze.

8.0.0

Query while the box is idle:

explain analyze select count(*) from fr_offer o, fr_merchant m where
idxfti @@ to_tsquery('ranz & mc') and eur >= 70 and m.m_id=o.m_id;

 Aggregate  (cost=2197.48..2197.48 rows=1 width=0) (actual time=88.052..88.054 rows=1 loops=1)
   ->  Merge Join  (cost=2157.42..2196.32 rows=461 width=0) (actual time=88.012..88.033 rows=3 loops=1)
         Merge Cond: ("outer".m_id = "inner".m_id)
         ->  Index Scan using fr_merchant_pkey on fr_merchant m  (cost=0.00..29.97 rows=810 width=4) (actual time=0.041..1.233 rows=523 loops=1)
         ->  Sort  (cost=2157.42..2158.57 rows=461 width=4) (actual time=85.779..85.783 rows=3 loops=1)
               Sort Key: o.m_id
               ->  Index Scan using idxfti_idx on fr_offer o  (cost=0.00..2137.02 rows=461 width=4) (actual time=77.957..85.754 rows=3 loops=1)
                     Index Cond: (idxfti @@ '\'ranz\' & \'mc\''::tsquery)
                     Filter: (eur >= 70::double precision)
 Total runtime: 88.131 ms

now, while using apachebench (-c10), "top" says this:

Cpu0  : 15.3% us, 10.0% sy,  0.0% ni, 74.7% id,  0.0% wa,  0.0% hi,  0.0% si
Cpu1  : 13.3% us, 11.6% sy,  0.0% ni, 75.1% id,  0.0% wa,  0.0% hi,  0.0% si
Cpu2  : 16.9% us,  9.6% sy,  0.0% ni, 73.4% id,  0.0% wa,  0.0% hi,  0.0% si
Cpu3  : 18.7% us, 14.0% sy,  0.0% ni, 67.0% id,  0.0% wa,  0.0% hi,  0.3% si

(this is with shared_buffers = 2000; a larger setting makes almost no
difference for overall performance: although according to "top" system
time goes to ~0 and user time to ~25%, the system still stays 70-75% idle)

vmstat:

 r  b   swpd    free   buff    cache  si   so    bi    bo   in    cs us sy id wa
 2  0      0 8654316  64908  4177136   0    0    56    35  279   286  5  1 94  0
 2  0      0 8646188  64908  4177136   0    0     0     0 1156  2982 15 10 75  0
 2  0      0 8658412  64908  4177136   0    0     0     0 1358  3098 19 11 70  0
 1  0      0 8646508  64908  4177136   0    0     0   104 1145  2070 13 12 75  0

so the script's execution speed is apparently not limited by the CPUs.

The query execution times go up like this while apachebench is running
(and the system is 75% idle):

 Aggregate  (cost=2197.48..2197.48 rows=1 width=0) (actual time=952.661..952.663 rows=1 loops=1)
   ->  Merge Join  (cost=2157.42..2196.32 rows=461 width=0) (actual time=952.621..952.641 rows=3 loops=1)
         Merge Cond: ("outer".m_id = "inner".m_id)
         ->  Index Scan using fr_merchant_pkey on fr_merchant m  (cost=0.00..29.97 rows=810 width=4) (actual time=2.078..3.338 rows=523 loops=1)
         ->  Sort  (cost=2157.42..2158.57 rows=461 width=4) (actual time=948.345..948.348 rows=3 loops=1)
               Sort Key: o.m_id
               ->  Index Scan using idxfti_idx on fr_offer o  (cost=0.00..2137.02 rows=461 width=4) (actual time=875.643..948.301 rows=3 loops=1)
                     Index Cond: (idxfti @@ '\'ranz\' & \'mc\''::tsquery)
                     Filter: (eur >= 70::double precision)
 Total runtime: 952.764 ms

I can't seem to find out where the bottleneck is, but it doesn't seem to
be CPU or disk. "top" shows that postgres processes are frequently in
this state:

   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  WCHAN     COMMAND
  6701 postgres  16   0  204m  58m  56m S  9.3  0.2   0:06.96 semtimedo postmaste
                                                              ^^^^^^^^^

Any hints are appreciated...

Regards,
  Marinos
--
Dipl.-Ing. Marinos Yannikos, CEO
Preisvergleich Internet Services AG
Obere Donaustraße 63/2, A-1020 Wien
Tel./Fax: (+431) 5811609-52/-55

Re: GiST indexes and concurrency (tsearch2)

From: PFC

    Do you have anything performing any updates or inserts to this table,
even if it does not update the GiST column, even if it does not update
anything?
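
One quick way to check (a sketch only - it assumes the stats collector is
enabled, and uses the table names from the plan posted earlier) is to watch
the write counters in pg_stat_user_tables across a benchmark run:

  SELECT relname, n_tup_ins, n_tup_upd, n_tup_del
    FROM pg_stat_user_tables
   WHERE relname IN ('fr_offer', 'fr_merchant');

If those numbers move while the supposedly read-only load is running,
something is writing to the tables after all.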

Re: GiST indexes and concurrency (tsearch2)

From: Oleg Bartunov

Marinos,

what if you construct an "apachebench & Co"-free script and see if
the issue still exists? There could be many issues not connected
to postgresql and tsearch2.

Oleg

On Thu, 3 Feb 2005, Marinos J. Yannikos wrote:

> [...]

     Regards,
         Oleg
_____________________________________________________________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83

Re: GiST indexes and concurrency (tsearch2)

From: "Marinos J. Yannikos"

Oleg Bartunov wrote:
> Marinos,
>
> what if you construct an "apachebench & Co"-free script and see if
> the issue still exists? There could be many issues not connected
> to postgresql and tsearch2.
>

Yes, the problem persists - I wrote a small Perl script that forks 10
child processes and executes the same queries in parallel without any
php/apachebench involved:

--- 8< ---
#!/usr/bin/perl
use DBI;
$n=10;
$nq=100;
$sql="select count(*) from fr_offer o, fr_merchant m where idxfti @@ to_tsquery('ranz & mc') and eur >= 70 and m.m_id=o.m_id;";

sub reaper { my $waitedpid = wait; $running--; $SIG{CHLD} = \&reaper; }
$SIG{CHLD} = \&reaper;

for $i (1..$n)
{
    if (fork() > 0) { $running++; }
    else
    {
        my $dbh = DBI->connect('dbi:Pg:host=daedalus;dbname=<censored>',
                               'root', '', { AutoCommit => 1 }) || die "!db";
        for my $j (1..$nq)
        {
            my $sth = $dbh->prepare($sql);
            $r = $sth->execute() or print STDERR $dbh->errstr();
        }
        exit 0;
    }
}
while ($running > 0)
{
    sleep 1;
    print "Running: $running\n";
}
--- >8 ---

Result (now with shared_buffers = 20000, hence less system and more user
time):

Cpu0  : 25.1% us,  0.0% sy,  0.0% ni, 74.9% id,  0.0% wa,  0.0% hi,  0.0% si
Cpu1  : 18.3% us,  0.0% sy,  0.0% ni, 81.7% id,  0.0% wa,  0.0% hi,  0.0% si
Cpu2  : 27.8% us,  0.3% sy,  0.0% ni, 71.9% id,  0.0% wa,  0.0% hi,  0.0% si
Cpu3  : 23.5% us,  0.3% sy,  0.0% ni, 75.9% id,  0.0% wa,  0.0% hi,  0.3% si

   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  WCHAN     COMMAND
  7571 postgres  16   0  204m  62m  61m R 10.6  0.2   0:01.97 -         postmaste
  7583 postgres  16   0  204m  62m  61m S  9.6  0.2   0:02.06 semtimedo postmaste
  7586 postgres  16   0  204m  62m  61m S  9.6  0.2   0:02.00 semtimedo postmaste
  7575 postgres  16   0  204m  62m  61m S  9.3  0.2   0:02.12 semtimedo postmaste
  7578 postgres  16   0  204m  62m  61m R  9.3  0.2   0:02.05 -         postmaste

i.e., virtually no difference. With 1000 queries and 10 in parallel, the
apachebench run takes 60.674 seconds and the Perl script 59.392 seconds.

Regards,
  Marinos
--
Dipl.-Ing. Marinos Yannikos, CEO
Preisvergleich Internet Services AG
Obere Donaustraße 63/2, A-1020 Wien
Tel./Fax: (+431) 5811609-52/-55

Re: GiST indexes and concurrency (tsearch2)

From: Tom Lane

"Marinos J. Yannikos" <mjy@geizhals.at> writes:
> I can't seem to find out where the bottleneck is, but it doesn't seem to
> be CPU or disk. "top" shows that postgres processes are frequently in
> this state:

>   6701 postgres  16   0  204m  58m  56m S  9.3  0.2   0:06.96 semtimedo
>                                                               ^^^^^^^^^

What's the platform exactly (hardware and OS)?

            regards, tom lane

Re: GiST indexes and concurrency (tsearch2)

From: "Marinos J. Yannikos"

Tom Lane wrote:
> What's the platform exactly (hardware and OS)?

Hardware: http://www.appro.com/product/server_1142h.asp
- SCSI version, 2 x 146GB 10k rpm disks in software RAID-1
- 32GB RAM

OS: Linux 2.6.10-rc3, x86_64, Debian GNU/Linux distribution

- CONFIG_K8_NUMA is currently turned off (no change, but now all CPUs
have ~25% load, previously one was 100% busy and the others idle)

- CONFIG_GART_IOMMU=y (but no change, tried both settings)
[other kernel options didn't seem to be relevant for tweaking at the
moment, mostly they're "safe defaults"]

The PostgreSQL data directory is on an ext2 filesystem.

Regards,
  Marinos
--
Dipl.-Ing. Marinos Yannikos, CEO
Preisvergleich Internet Services AG
Obere Donaustrasse 63, A-1020 Wien
Tel./Fax: (+431) 5811609-52/-55

Re: GiST indexes and concurrency (tsearch2)

From: Oleg Bartunov

On Thu, 3 Feb 2005, Tom Lane wrote:

> "Marinos J. Yannikos" <mjy@geizhals.at> writes:
>> I can't seem to find out where the bottleneck is, but it doesn't seem to
>> be CPU or disk. "top" shows that postgres processes are frequently in
>> this state:
>
>>   6701 postgres  16   0  204m  58m  56m S  9.3  0.2   0:06.96 semtimedo
>>                                                               ^^^^^^^^^
>
> What's the platform exactly (hardware and OS)?
>

It should be 'semtimedop' (the WCHAN name is truncated in top's display).


>             regards, tom lane
>

     Regards,
         Oleg
_____________________________________________________________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83

Re: GiST indexes and concurrency (tsearch2)

From: Marinos Yannikos

Oleg Bartunov wrote:
> Marinos,
>
> what if you construct an "apachebench & Co"-free script and see if
> the issue still exists? There could be many issues not connected
> to postgresql and tsearch2.
>

Some more things I tried:
- data directory on ramdisk (tmpfs) - no effect
- database connections either over Unix domain sockets or TCP - no effect
- CLUSTER on the GiST index (command sketched below) - approx. 20% faster
queries, but CPU usage still hovers around 25% (75% idle)
- preemptible kernel - no effect
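
For reference, the CLUSTER run was along these lines (index and table names
as in the plan posted earlier; note that CLUSTER rewrites the table in index
order and holds an exclusive lock while it runs):

  CLUSTER idxfti_idx ON fr_offer;
  ANALYZE fr_offer;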

This is really baffling me, it looks like a kernel issue of some sort
(I'm only guessing though). I found this old posting:
http://archives.postgresql.org/pgsql-general/2001-12/msg00836.php - is
this still applicable? I don't see an unusually high number of context
switches, but the processes seem to be spending some time in
"semtimedop" (even though the TAS assembly macros are definitely being
compiled-in).

If you are interested, I can probably provide an account on one of our
identically configured boxes by Monday afternoon (GMT+1) with the same
database and benchmarking utility.

Regards,
  Marinos

Re: GiST indexes and concurrency (tsearch2)

From: Tom Lane

Marinos Yannikos <mjy@geizhals.at> writes:
> This is really baffling me, it looks like a kernel issue of some sort
> (I'm only guessing though). I found this old posting:
> http://archives.postgresql.org/pgsql-general/2001-12/msg00836.php - is
> this still applicable?

That seems to be an early report of what we now recognize as the
"context swap storm" problem, and no we don't have a solution yet.
I'm not completely convinced that you're seeing the same thing,
but if you're seeing a whole lot of semops then it could well be.

I set up a test case consisting of two backends running the same
tsearch2 query over and over --- nothing fancy, just one of the ones
from the tsearch2 regression test:
SELECT count(*) FROM test_tsvector WHERE a @@ to_tsquery('345&qwerty');
I used gdb to set breakpoints at PGSemaphoreLock and PGSemaphoreTryLock,
which are the only two functions that can possibly block on a semop
call.  On a single-processor machine, I saw maybe one hit every couple
of seconds, all coming from contention for the BufMgrLock or sometimes
the LockMgrLock.  So unless I've missed something, there's not anything
in tsearch2 or gist per se that is causing lock conflicts.  You said
you're testing a quad-processor machine, so it could be that you're
seeing the same lock contention issues that we've been trying to figure
out for the past year ...

            regards, tom lane

Re: GiST indexes and concurrency (tsearch2)

From: Tom Lane

Marinos Yannikos <mjy@geizhals.at> writes:
> Some more things I tried:

You might try the attached patch (which I just applied to HEAD).
It cuts down the number of acquisitions of the BufMgrLock by merging
adjacent bufmgr calls during a GIST index search.  I'm not hugely
hopeful that this will help, since I did something similar to btree
last spring without much improvement for context swap storms involving
btree searches ... but it seems worth trying.

            regards, tom lane

*** src/backend/access/gist/gistget.c.orig    Fri Dec 31 17:45:27 2004
--- src/backend/access/gist/gistget.c    Sat Feb  5 14:19:52 2005
***************
*** 60,69 ****
      BlockNumber blk;
      IndexTuple    it;

      b = ReadBuffer(s->indexRelation, GISTP_ROOT);
      p = BufferGetPage(b);
      po = (GISTPageOpaque) PageGetSpecialPointer(p);
-     so = (GISTScanOpaque) s->opaque;

      for (;;)
      {
--- 60,70 ----
      BlockNumber blk;
      IndexTuple    it;

+     so = (GISTScanOpaque) s->opaque;
+
      b = ReadBuffer(s->indexRelation, GISTP_ROOT);
      p = BufferGetPage(b);
      po = (GISTPageOpaque) PageGetSpecialPointer(p);

      for (;;)
      {
***************
*** 75,86 ****

          while (n < FirstOffsetNumber || n > maxoff)
          {
!             ReleaseBuffer(b);
!             if (so->s_stack == NULL)
                  return false;

!             stk = so->s_stack;
!             b = ReadBuffer(s->indexRelation, stk->gs_blk);
              p = BufferGetPage(b);
              po = (GISTPageOpaque) PageGetSpecialPointer(p);
              maxoff = PageGetMaxOffsetNumber(p);
--- 76,89 ----

          while (n < FirstOffsetNumber || n > maxoff)
          {
!             stk = so->s_stack;
!             if (stk == NULL)
!             {
!                 ReleaseBuffer(b);
                  return false;
+             }

!             b = ReleaseAndReadBuffer(b, s->indexRelation, stk->gs_blk);
              p = BufferGetPage(b);
              po = (GISTPageOpaque) PageGetSpecialPointer(p);
              maxoff = PageGetMaxOffsetNumber(p);
***************
*** 89,94 ****
--- 92,98 ----
                  n = OffsetNumberPrev(stk->gs_child);
              else
                  n = OffsetNumberNext(stk->gs_child);
+
              so->s_stack = stk->gs_parent;
              pfree(stk);

***************
*** 116,123 ****
              it = (IndexTuple) PageGetItem(p, PageGetItemId(p, n));
              blk = ItemPointerGetBlockNumber(&(it->t_tid));

!             ReleaseBuffer(b);
!             b = ReadBuffer(s->indexRelation, blk);
              p = BufferGetPage(b);
              po = (GISTPageOpaque) PageGetSpecialPointer(p);
          }
--- 120,126 ----
              it = (IndexTuple) PageGetItem(p, PageGetItemId(p, n));
              blk = ItemPointerGetBlockNumber(&(it->t_tid));

!             b = ReleaseAndReadBuffer(b, s->indexRelation, blk);
              p = BufferGetPage(b);
              po = (GISTPageOpaque) PageGetSpecialPointer(p);
          }
***************
*** 137,142 ****
--- 140,147 ----
      BlockNumber blk;
      IndexTuple    it;

+     so = (GISTScanOpaque) s->opaque;
+
      blk = ItemPointerGetBlockNumber(&(s->currentItemData));
      n = ItemPointerGetOffsetNumber(&(s->currentItemData));

***************
*** 148,154 ****
      b = ReadBuffer(s->indexRelation, blk);
      p = BufferGetPage(b);
      po = (GISTPageOpaque) PageGetSpecialPointer(p);
-     so = (GISTScanOpaque) s->opaque;

      for (;;)
      {
--- 153,158 ----
***************
*** 157,176 ****

          while (n < FirstOffsetNumber || n > maxoff)
          {
!             ReleaseBuffer(b);
!             if (so->s_stack == NULL)
                  return false;

!             stk = so->s_stack;
!             b = ReadBuffer(s->indexRelation, stk->gs_blk);
              p = BufferGetPage(b);
-             maxoff = PageGetMaxOffsetNumber(p);
              po = (GISTPageOpaque) PageGetSpecialPointer(p);

              if (ScanDirectionIsBackward(dir))
                  n = OffsetNumberPrev(stk->gs_child);
              else
                  n = OffsetNumberNext(stk->gs_child);
              so->s_stack = stk->gs_parent;
              pfree(stk);

--- 161,183 ----

          while (n < FirstOffsetNumber || n > maxoff)
          {
!             stk = so->s_stack;
!             if (stk == NULL)
!             {
!                 ReleaseBuffer(b);
                  return false;
+             }

!             b = ReleaseAndReadBuffer(b, s->indexRelation, stk->gs_blk);
              p = BufferGetPage(b);
              po = (GISTPageOpaque) PageGetSpecialPointer(p);
+             maxoff = PageGetMaxOffsetNumber(p);

              if (ScanDirectionIsBackward(dir))
                  n = OffsetNumberPrev(stk->gs_child);
              else
                  n = OffsetNumberNext(stk->gs_child);
+
              so->s_stack = stk->gs_parent;
              pfree(stk);

***************
*** 198,205 ****
              it = (IndexTuple) PageGetItem(p, PageGetItemId(p, n));
              blk = ItemPointerGetBlockNumber(&(it->t_tid));

!             ReleaseBuffer(b);
!             b = ReadBuffer(s->indexRelation, blk);
              p = BufferGetPage(b);
              po = (GISTPageOpaque) PageGetSpecialPointer(p);

--- 205,211 ----
              it = (IndexTuple) PageGetItem(p, PageGetItemId(p, n));
              blk = ItemPointerGetBlockNumber(&(it->t_tid));

!             b = ReleaseAndReadBuffer(b, s->indexRelation, blk);
              p = BufferGetPage(b);
              po = (GISTPageOpaque) PageGetSpecialPointer(p);


Re: GiST indexes and concurrency (tsearch2)

From: "Marinos J. Yannikos"

Tom Lane wrote:
> You might try the attached patch (which I just applied to HEAD).
> It cuts down the number of acquisitions of the BufMgrLock by merging
> adjacent bufmgr calls during a GIST index search.  [...]

Thanks - I applied it successfully against 8.0.0, but it didn't seem to
have a noticeable effect. I'm still seeing more or less exactly 25% CPU
usage by postgres processes and identical query times (measured with the
Perl script I posted earlier).

Regards,
  Marinos
--
Dipl.-Ing. Marinos Yannikos, CEO
Preisvergleich Internet Services AG
Obere Donaustrasse 63, A-1020 Wien
Tel./Fax: (+431) 5811609-52/-55

Re: GiST indexes and concurrency (tsearch2)

From: "Marinos J. Yannikos"

Tom Lane wrote:
> I'm not completely convinced that you're seeing the same thing,
> but if you're seeing a whole lot of semops then it could well be.

I'm seeing ~280 semops/second with spinlocks enabled and ~80k
semops/second (> 4 million for 100 queries) with --disable-spinlocks,
which increases total run time by only ~20%. In both cases, CPU usage
stays around 25%, which is a bit odd.

> [...]You said
> you're testing a quad-processor machine, so it could be that you're
> seeing the same lock contention issues that we've been trying to figure
> out for the past year ...

Are those issues specific to a particular platform (only x86/Linux?) or
is it a problem with SMP systems in general? I guess I'll be following
the current discussion on -hackers closely...

Regards,
  Marinos

Re: GiST indexes and concurrency (tsearch2)

From: Neil Conway

On Sat, 2005-02-05 at 14:42 -0500, Tom Lane wrote:
> Marinos Yannikos <mjy@geizhals.at> writes:
> > Some more things I tried:
>
> You might try the attached patch (which I just applied to HEAD).
> It cuts down the number of acquisitions of the BufMgrLock by merging
> adjacent bufmgr calls during a GIST index search.

I'm not sure it will help much either, but there is more low-hanging
fruit in this area: GiST currently does a ReadBuffer() for each tuple
produced by the index scan, which is grossly inefficient. I recently
applied a patch to change rtree to keep a pin on the scan's current
buffer in between invocations of the index scan API (which is how btree
and hash already work), and it improved performance by about 10%
(according to contrib/rtree_gist's benchmark). I've made similar changes
for GiST, but unfortunately it is part of a larger GiST improvement
patch that I haven't had a chance to commit to 8.1 yet:

http://archives.postgresql.org/pgsql-patches/2004-11/msg00144.php

I'll try and get this cleaned up for application to HEAD next week.

-Neil