PostgreSQL as a local in-memory cache - Mailing list pgsql-performance

From jgardner@jonathangardner.net
Subject PostgreSQL as a local in-memory cache
Date
Msg-id cb0fb58c-9134-4314-a1d0-08fc39f911a6@40g2000pry.googlegroups.com
Responses Re: PostgreSQL as a local in-memory cache
List pgsql-performance
We have a rather unusual need for a local, in-memory cache. It will
store data aggregated from other sources. Generating the data takes
only a few minutes, and it is updated often. Expensive queries of
arbitrary complexity will be run against it at a fairly high rate.
We're looking for high concurrency and reasonable performance
throughout.

The entire data set is roughly 20 MB in size. We've tried Carbonado in
front of Sleepycat JE, only to discover that it chokes at fairly low
concurrency and that Carbonado's rule-based optimizer is wholly
insufficient for our needs. We've also tried Carbonado's Map
Repository, which suffers from the same problems.

I've since moved the backend database to a local PostgreSQL instance,
hoping to take advantage of PostgreSQL's superior performance at high
concurrency. Of course, at the default settings, it performs quite
poorly compared to the Map Repository and Sleepycat JE.

My question is: how can I configure the database to run as quickly as
possible if I don't care about data consistency or durability? The
data is updated so often, and can be reproduced so rapidly, that if
the server crashes or random particles from space mess up memory we'd
just restart the machine and move on.

I've never configured PostgreSQL to work like this and I thought maybe
someone here had some ideas on a good approach to this.
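To make the question concrete, the durability-related settings I'd expect to relax look something like the sketch below. The parameter names are standard PostgreSQL GUCs; the values are guesses for a ~20 MB data set, not tested recommendations:

```ini
# postgresql.conf -- sketch of "throw durability away" settings
fsync = off                 # don't force WAL writes to disk; crash = rebuild
synchronous_commit = off    # commits return before WAL is flushed
full_page_writes = off      # torn pages don't matter if we rebuild after a crash
shared_buffers = 64MB       # comfortably larger than the ~20 MB working set
```

Whether that's the right set of knobs, and what else matters at high concurrency, is exactly what I'm hoping someone can tell me.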
