Inserts in 'big' table slowing down the database - Mailing list pgsql-performance

From Stefan Keller
Subject Inserts in 'big' table slowing down the database
Msg-id CAFcOn2_W6v_vqwomCf6DEtXk=N8iB5WU-7XEdcYgX-VFUE32+Q@mail.gmail.com
List pgsql-performance
Hi,

I'm having performance issues with a simple table containing 'Nodes'
(points) from OpenStreetMap:

  CREATE TABLE nodes (
      id bigint PRIMARY KEY,
      user_name text NOT NULL,
      tstamp timestamp without time zone NOT NULL,
      geom GEOMETRY(POINT, 4326)
  );
  CREATE INDEX idx_nodes_geom ON nodes USING gist (geom);

The number of rows grows steadily and will soon reach one billion
(1'000'000'000), hence the bigint id.
Hourly batches of inserts (plus updates and deletes) are steadily
slowing down the database (PostgreSQL 9.1).
Before resorting to non-durable settings [1], I'd like to know what
options I have for tuning while keeping the database in production:
cluster the index? partition the table? use tablespaces? reduce the
physical block size?
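For partitioning, as far as I understand it on 9.1 that would mean table
inheritance plus an insert trigger, roughly like the sketch below (partition
boundaries, names, and the router function are illustrative; also, the
PRIMARY KEY is not inherited, so each child table would need its own
constraint and indexes):

```sql
-- Illustrative sketch: inheritance-based partitioning by id range,
-- the mechanism available in PostgreSQL 9.1 (no declarative partitioning).
CREATE TABLE nodes_p0 (
    CHECK (id >= 0 AND id < 500000000)
) INHERITS (nodes);
CREATE TABLE nodes_p1 (
    CHECK (id >= 500000000 AND id < 1000000000)
) INHERITS (nodes);

-- Each child needs its own indexes; a per-partition GiST index stays
-- smaller, which should keep per-insert index maintenance bounded.
CREATE INDEX idx_nodes_p0_geom ON nodes_p0 USING gist (geom);
CREATE INDEX idx_nodes_p1_geom ON nodes_p1 USING gist (geom);

-- Route inserts on the parent table to the matching child.
CREATE OR REPLACE FUNCTION nodes_insert_router() RETURNS trigger AS $$
BEGIN
    IF NEW.id < 500000000 THEN
        INSERT INTO nodes_p0 VALUES (NEW.*);
    ELSE
        INSERT INTO nodes_p1 VALUES (NEW.*);
    END IF;
    RETURN NULL;  -- suppress the insert into the parent itself
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER nodes_insert_trg
    BEFORE INSERT ON nodes
    FOR EACH ROW EXECUTE PROCEDURE nodes_insert_router();
```

Is that the kind of setup that would help here, given that updates and
deletes also hit the table every hour?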

Stefan

[1] http://www.postgresql.org/docs/9.1/static/non-durability.html

