Re: optimizing db for small table with tons of updates - Mailing list pgsql-performance

From Josh Berkus
Subject Re: optimizing db for small table with tons of updates
Date
Msg-id 200604031129.42194.josh@agliodbs.com
In response to optimizing db for small table with tons of updates  (Kenji Morishige <kenjim@juniper.net>)
List pgsql-performance
Kenji,

> We used to use MySQL for these tools and we never had any issues, but I
> believe the transactional nature of Postgres is adding overhead to this
> problem.

You're correct.

> Are there any table options that enable
> the table contents to be maintained in RAM only, or have delayed writes
> for this particular table?

No.  That's not really the right solution anyway; if you want
non-transactional data, why not just use a flat file?  Or Memcached?

Possible solutions:
1) if the data is non-transactional, consider using pgmemcached.
2) if you want to maintain transactions, use a combination of autovacuum
and vacuum delay to do more-or-less continuous low-level vacuuming of the
table.  Postgres 8.1 makes this much easier to manage.
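For illustration, a minimal sketch of the relevant 8.1 settings for option 2
(the values are placeholders you would need to tune for your own update rate
and hardware, not recommendations):

```
# postgresql.conf (PostgreSQL 8.1) -- example only, tune for your workload

# autovacuum in 8.1 requires row-level stats collection
stats_start_collector = on
stats_row_level = on

autovacuum = on
autovacuum_naptime = 60              # seconds between autovacuum checks
autovacuum_vacuum_threshold = 500    # min dead tuples before a vacuum
autovacuum_vacuum_scale_factor = 0.2 # plus this fraction of the table

# vacuum delay: sleep periodically so the vacuum I/O is spread out
autovacuum_vacuum_cost_delay = 10    # milliseconds
autovacuum_vacuum_cost_limit = 200
```

Per-table thresholds can also be set by inserting a row into the
pg_autovacuum system catalog, which is useful when only one small,
heavily-updated table needs aggressive vacuuming.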

--
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco
