Re: gin performance issue.

From: Marc Mamin
Subject: Re: gin performance issue.
Date: ,
In response to: Re: gin performance issue.  (Tom Lane)
Responses: Re: gin performance issue.  (Jeff Janes)
List: pgsql-performance

> -----Original Message-----
> From: Tom Lane [mailto:]
> Sent: Friday, 5 February 2016 16:07

> >
> > Postgres Version 9.3.10 (Linux)
> >
> > Hello,
> > this is a large daily table that only gets bulk inserts (200-400 per day) with no updates.
> > After rebuilding the whole table, the Bitmap Index Scan on
> > r_20160204_ix_toprid falls under 1 second (from 800)
> >
> > Fastupdate is using the default, but autovacuum is disabled on that
> > table which contains 30 Mio rows.

> Pre-9.5, it's a pretty bad idea to disable autovacuum on a GIN index,
> because then the "pending list" only gets flushed when it exceeds
> work_mem.  (Obviously, using a large work_mem setting makes this
> worse.)
>             regards, tom lane
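
For context, the pending-list mechanics Tom describes can be sketched as follows. The index name is taken from the thread; the table name is a placeholder, and whether these commands fit a given workload depends on insert and query patterns:

```sql
-- Pre-9.5 options for keeping the GIN pending list under control.

-- Turn off fastupdate so new entries go directly into the main index
-- structure (slower per insert, but no pending list to scan at query time):
ALTER INDEX r_20160204_ix_toprid SET (fastupdate = off);

-- Or flush what has already accumulated: a plain VACUUM empties the
-- pending list as a side effect (table name is a placeholder):
VACUUM r_20160204;
```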

Knowing what the problem is doesn't really help here:

- autovacuum will not run, as these are insert-only tables
- according to this post, auto-analyze would also do the job
  (it seems this information is missing in the doc),

  but sadly it doesn't trigger in our case either, as we run manual ANALYZE calls during the data processing that immediately follows the imports.
  Manual VACUUM is just too expensive here.

  Hence, disabling fastupdate seems to be our only option.

  I hope this problem will help push the 9.5 upgrade up our todo list :)

  Ideally, we would then like to flush the pending list unconditionally after the imports.
  I guess we could achieve something similar by modifying the analyze scale factor and gin_pending_list_limit
  before/after the (bulk) imports, but having the possibility to flush it via SQL would be better.
  Is this a reasonable feature wish?
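
  A sketch of the before/after-import tuning mentioned above, assuming a 9.5 server (gin_pending_list_limit is new in 9.5, so this does not apply to the 9.3.10 instance in question; the values are illustrative):

```sql
-- Around a bulk import on 9.5: let the pending list grow during the COPY,
-- then shrink the limit so the very next insert flushes it.
SET gin_pending_list_limit = '256MB';  -- illustrative value, set before the import
-- ... bulk COPY into the daily table ...
SET gin_pending_list_limit = '64kB';   -- small limit so the next insert triggers a flush
```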

  And a last question: how does the index update work with bulk (COPY) inserts?
  - without a pending list: is it like a per-row trigger, or is the index taken care of afterwards?
  - with small pending lists: is there a concurrency problem, or can both tasks cleanly work in parallel?

  best regards,

  Marc Mamin
