Partitions for large spatial data - Mailing list pgsql-novice

From oisin
Subject Partitions for large spatial data
Msg-id 1362078373063-5746959.post@n5.nabble.com
List pgsql-novice
Hi,

I have an existing PostgreSQL 8.4/PostGIS 1.4 database hosting Ordnance
Survey's MasterMap, a large spatial data set containing around 440M
features. The database serves a bespoke ETL application in which users
search for single properties and extract a surrounding mapbase of, say, 75m
radius. Performance is just about acceptable. However, I will increasingly
need to extract larger map extracts, in which case performance currently
degrades quickly.

I am at the same time migrating the database to another server and upgrading
to PostgreSQL 9.1.8 and PostGIS 2.

Currently I have structured the database such that features of different
types are in different tables (8 feature types), and these are split further
by OS grid square (100km x 100km) across different schemas. The client
application then writes VIEWs, with UNIONs as required, to retrieve the
required map extract.
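For context, the current layout looks roughly like this (the schema, table,
and column names here are illustrative, not the real ones):

```sql
-- One schema per 100km OS grid square, one table per feature type.
CREATE SCHEMA sq_tq;
CREATE TABLE sq_tq.topographic_area (
    gid  serial PRIMARY KEY,
    geom geometry
);

CREATE SCHEMA sq_tl;
CREATE TABLE sq_tl.topographic_area (
    gid  serial PRIMARY KEY,
    geom geometry
);

-- The client then builds a view over whichever squares an extract touches:
CREATE VIEW extract_area AS
SELECT gid, geom FROM sq_tq.topographic_area
UNION ALL
SELECT gid, geom FROM sq_tl.topographic_area;
```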

I am wondering whether a more elegant solution than multiple schemas would be
to use PARTITIONS - elegant in terms of ease of maintenance, loading map
updates, etc., and query speed.

Apart from map updates, the database is read-only. I am working on a 4-core
Xeon 3GHz Windows 2003 x64 server with 8GB of RAM.

If I do use partitioning I face a potential problem in that the partitioning
constraint would most easily be grid squares of say 20km each, but this
would mean some features would appear in two or more partitions. Is this
possible?
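To make the question concrete, here is a sketch of what I have in mind, using
the inheritance-based partitioning available in 9.1 (table and column names
are made up; in this sketch each feature is assigned to exactly one 20km cell,
e.g. by its bounding-box centroid, so the CHECK constraints stay
non-overlapping):

```sql
-- Parent table; cell_x/cell_y are 20km grid cell indexes computed at load time.
CREATE TABLE features (
    gid     serial,
    cell_x  integer NOT NULL,
    cell_y  integer NOT NULL,
    geom    geometry
);

-- One child table per 20km cell, with a non-overlapping CHECK constraint.
CREATE TABLE features_0_0 (
    CHECK (cell_x = 0 AND cell_y = 0)
) INHERITS (features);

CREATE INDEX features_0_0_geom_idx ON features_0_0 USING gist (geom);

-- Queries must filter on the partition key for constraint exclusion to
-- prune the other children:
SET constraint_exclusion = partition;
SELECT *
FROM features
WHERE cell_x = 0 AND cell_y = 0
  AND ST_DWithin(geom,
                 ST_SetSRID(ST_MakePoint(437500, 115500), 27700),
                 75);
```

My worry is the alternative: if a feature that straddles a cell boundary were
copied into both children, the constraints would overlap and queries would
return duplicates.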

Any advice gratefully received.

Regards O
