Re: Running PostgreSQL as fast as possible no matter the consequences - Mailing list pgsql-performance

From: Klaus Ita
Subject: Re: Running PostgreSQL as fast as possible no matter the consequences
Date:
Msg-id: AANLkTikJZipXKC23P6Bzz_2NQ0ySmwRBF3vYYMW_XX9U@mail.gmail.com
In response to: Re: Running PostgreSQL as fast as possible no matter the consequences ("Lello, Nick" <nick.lello@rentrakmail.com>)
List: pgsql-performance

Use a replicated setup?

On Nov 8, 2010 4:21 PM, "Lello, Nick" <nick.lello@rentrakmail.com> wrote:

How about either:-

a)   Size the buffer pool (shared_buffers) so all your data fits into it.

b)   Use a RAM-based filesystem (i.e. a memory disk or SSD) for the
data storage [a memory disk will be faster] with a smaller pool.
Your seed data should be a copy of the datastore on the disk filesystem;
at startup, copy the storage files from the physical disk to the memory
filesystem (a sketch of this copy step follows after the list).

A bigger gain can probably be had if you have a tightly controlled
suite of queries that will be run against the database and you can
spend the time to tune each one to ensure it performs no sequential
scans (i.e. every query uses index lookups).
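
One way to verify that, sketched with psql against a hypothetical orders
table and customer_id index; the database and object names are illustrative:

    # Index the column the query filters on (example schema, not from the thread)
    psql -d mydb -c "CREATE INDEX idx_orders_customer ON orders (customer_id);"

    # EXPLAIN ANALYZE shows the plan actually used; any "Seq Scan" node
    # means that query still needs an index or a rewrite
    psql -d mydb -c "EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;"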



On 5 November 2010 11:32, A B <gentosaker@gmail.com> wrote:
>>> If you just wanted PostgreSQL to g...
--


Nick Lello | Web Architect
o +1 503.284.7581 x418 / +44 (0) 8433309374 | m +44 (0) 7917 138319
Email: nick.lello at rentrak.com
RENTRAK | www.rentrak.com | NASDAQ: RENT


