Re: Using postgresql in situation with high write/read ratio - Mailing list pgsql-general

From Thom Brown
Subject Re: Using postgresql in situation with high write/read ratio
Date
Msg-id AANLkTikkTsGv9=uZRMNCWVqfVoVJChn4DW_CPs=mzLm8@mail.gmail.com
In response to Using postgresql in situation with high write/read ratio  (Odd Man <valodzka@gmail.com>)
List pgsql-general
On 12 August 2010 21:09, Odd Man <valodzka@gmail.com> wrote:
> Hi,
>
> In my current project we have unusual (at least for me) conditions for a
> relational db, namely:
> * a high write/read ratio (writes come from bulk data updates/inserts
> every couple of minutes or so)
> * losing some recent part of the data (the last hour, for example) is OK;
> it can easily be restored
>
> The first version of the app, which used plain updates, took too long, so
> it was replaced with version two, which uses partitions, truncate, copy
> and a daily cleanup of old data. It works reasonably fast with the current
> amount of data, but this amount will grow, so I'm looking for possible
> optimisations.
>
> The main idea I have (apart from switching to some non-relational db) is
> to tell postgres to do more work in memory and rely less on fsync and
> similar operations. For example, I am thinking of setting up a partition
> on an in-memory tablespace and using it for that data. The main problem
> here is that the amount of data is big and only part of it is updated
> really frequently.
>
> Are there any ideas or best practices for such conditions?
>

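As for the partition-and-reload approach you describe, a rough sketch of
that pattern (table and file names here are only illustrative, using
inheritance-based partitioning) would be:

    -- parent table plus a "hot" child partition
    CREATE TABLE measurements (ts timestamptz, value numeric);
    CREATE TABLE measurements_current () INHERITS (measurements);

    -- on each bulk load: wipe and reload the frequently-updated partition
    TRUNCATE measurements_current;
    COPY measurements_current FROM '/path/to/batch.csv' WITH CSV;

    -- once daily: drop whichever child tables hold data you no longer need
    -- e.g. DROP TABLE measurements_2010_08_11;
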
You can set synchronous_commit to "off".  Commits then return without
waiting for the WAL to be flushed to disk, so a crash can lose the most
recent transactions, which you've said is acceptable.
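
For example (a minimal sketch; the setting can go in postgresql.conf or be
limited to the bulk-load session or transaction):

    # postgresql.conf - cluster-wide default
    synchronous_commit = off

    -- or only for the loading connection:
    SET synchronous_commit TO off;

    -- or only for a single transaction:
    BEGIN;
    SET LOCAL synchronous_commit TO off;
    -- bulk COPY / INSERT here
    COMMIT;

The SET LOCAL form keeps every other transaction fully synchronous, which
limits the window of potential loss to just your bulk loads.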

--
Thom Brown
Registered Linux user: #516935
