Hi Scott
2012/2/26 Scott Marlowe <scott.marlowe@gmail.com>:
> On Sun, Feb 26, 2012 at 1:11 PM, Stefan Keller <sfkeller@gmail.com> wrote:
>
>> So to me the bottom line is that PG already has reduced overhead, at
>> least for issue #2 and perhaps for #4.
>> The remaining issues of in-memory optimization (#2) and replication (#3),
>> together with high availability, still need to be investigated in PG.
>
> Yeah, the real "problem" pg has to deal with is that it writes to
> disk, and expects that to provide durability, while voltdb (Mike's db
> project) writes to multiple machines in memory and expects that to be
> durable. No way a disk subsystem is gonna compete with an in memory
> cluster for performance.
That's the point where I'd like to ask for ideas on how to extend PG
to manage "in-memory tables"!
To me it's obvious that memory keeps getting cheaper, while PG is
still designed with low memory in mind.
In my particular scenario I can even set durability aside, since I
write once and read a thousand times. My main problem is heavy geometry
calculations on geospatial data (like the ST_Relate or ST_Intersection
functions), which I expect to run close to the data and in memory. I
don't want PG to push table rows out to disk just to free memory
beforehand (because of the "low memory assumption").
-Stefan