Re: [GENERAL] Improve PostGIS performance with 62 million rows? - Mailing list pgsql-general

From Kevin Grittner
Subject Re: [GENERAL] Improve PostGIS performance with 62 million rows?
Msg-id CACjxUsNOmjoHrMjJNmMR+Hso2oHRCr1qosSa6xDmdMB9q-V6VA@mail.gmail.com
In response to Re: [GENERAL] Improve PostGIS performance with 62 million rows?  (Israel Brewster <israel@ravnalaska.net>)
Responses Re: [GENERAL] Improve PostGIS performance with 62 million rows?
List pgsql-general
On Mon, Jan 9, 2017 at 11:49 AM, Israel Brewster <israel@ravnalaska.net> wrote:

> [load of new data]

>  Limit  (cost=354643835.82..354643835.83 rows=1 width=9) (actual
> time=225998.319..225998.320 rows=1 loops=1)

> [...] I ran the query again [...]

>  Limit  (cost=354643835.82..354643835.83 rows=1 width=9) (actual
> time=9636.165..9636.166 rows=1 loops=1)

> So from four minutes on the first run to around 9 1/2 seconds on the second.
> Presumably this difference is due to caching?

It is likely to be, at least in part.  Did you run VACUUM on the
data before the first run?  If not, hint bits may be another part
of it.  Hint bits record on each tuple whether the inserting
transaction committed; until they are set, every visibility check
has to consult the commit log.  So the first access to each page
after the bulk load does that extra work and dirties the page to
set the hint bits, causing a page rewrite.
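
A minimal sketch of both points (the table name mytable is made
up here; substitute your own):

    -- Set hint bits (and refresh planner stats) once, right after
    -- the bulk load, so the first queries don't pay that cost
    -- page by page:
    VACUUM ANALYZE mytable;

    -- BUFFERS reports shared hit (pages found in cache) vs. read
    -- (pages fetched from the OS/disk); comparing those counts
    -- between the two runs shows how much of the speedup is caching:
    EXPLAIN (ANALYZE, BUFFERS)
      SELECT count(*) FROM mytable;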

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

