Re: SAN, clustering, MPI, Backplane Re: Postgresql on SAN - Mailing list pgsql-hackers

From: Gaetano Mendola
Subject: Re: SAN, clustering, MPI, Backplane Re: Postgresql on SAN
Date:
Msg-id: 40F31A12.4010107@bigfoot.com
In response to: Re: SAN, clustering, MPI, Backplane Re: Postgresql on SAN  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
Tom Lane wrote:

> Andrew Piskorski <atp@piskorski.com> writes:
> 
>>Another thing I've been wondering about, but haven't been able to find
>>any discussion of:
>>Just how closely tied is PostgreSQL to its use of shared memory?
> 
> 
> Pretty damn closely.  You would not be happy with the performance of
> anything that tried to insert a network communication layer into access
> to what we think of as shared memory.
> 
> For a datapoint, check the list archives for discussions a few months
> ago about performance with multiple Xeons.  We were seeing significant
> performance degradation simply because the communications architecture
> for multiple Xeon chips on one motherboard is badly designed :-(
> The particular issue we were able to document was cache-line swapping
> for spinlock variables, but AFAICS the issue would not go away even
> if we had a magic zero-overhead locking mechanism: the Xeons would
> still suck, because of contention for access to the shared variables
> that the spinlocks are protecting.
> 
> OpenMosix is in the category of "does not work, and would be unusably
> slow if it did work" ... AFAIK any similar design would have the same
> problem.
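
To picture the cache-line traffic Tom describes, here is a toy C sketch (pthreads plus
C11 atomics; the thread and iteration counts are arbitrary and it is not PostgreSQL
code): several threads spin on one shared lock word protecting a counter, so the cache
line holding them bounces between CPUs on every acquisition.

/* Toy sketch of spinlock cache-line ping-pong (not PostgreSQL code;
 * thread and iteration counts are arbitrary).
 * Build: gcc -O2 -pthread spin_demo.c -o spin_demo
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NTHREADS 4
#define NITERS   1000000

/* The lock word and the data it protects typically end up on the same
 * cache line, so every acquisition drags the line to the acquiring CPU. */
static atomic_flag lock = ATOMIC_FLAG_INIT;
static long counter = 0;

static void *worker(void *arg)
{
    (void) arg;
    for (int i = 0; i < NITERS; i++)
    {
        /* each failed test-and-set still steals the cache line */
        while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
            ;                        /* busy-wait */
        counter++;                   /* the shared variable being protected */
        atomic_flag_clear_explicit(&lock, memory_order_release);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];

    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);

    printf("counter = %ld (expected %d)\n", counter, NTHREADS * NITERS);
    return 0;
}

Even with a zero-cost lock, the line holding "counter" still has to migrate to whichever
CPU touches it, which is the contention for shared variables that survives any magic
locking mechanism.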

However, it would be nice if the postmaster were not as selfish as it is now (two
postmasters are not able to work on the same shared memory segment); with that
restriction lifted, projects like Cashmere ( www.cs.rochester.edu/research/cashmere/ ) or
this one: www.tu-chemnitz.de/informatik/HomePages/RA/projects/VIA_SCI/via_sci_hardware.html

would make it possible to run a single database managed by one postmaster per node in a
distributed architecture.

I saw this hardware working at CeBIT some years ago, and it is possible to set up
any kind of topology: linear, triangular, cube, hypercube. Basically, each node
shares part of its local RAM in order to create one big shared memory segment, and that
shared memory is managed "without kernel intervention".
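
To make "working on the same shared memory segment" concrete at the System V level the
postmaster uses, here is a small sketch (the key and size are made up, and this is not
how the postmaster actually sets up or guards its segment): any process presenting the
same key attaches to the same segment; a layer like Cashmere or the SCI hardware above
essentially extends that illusion across nodes, backing one segment with RAM spread over
the machines.

/* Sketch: two processes sharing one System V segment by agreeing on a key.
 * DEMO_KEY and DEMO_SIZE are made up for illustration only.
 * Build: gcc shm_demo.c -o shm_demo
 * Run "./shm_demo hello" in one shell, then "./shm_demo" in another.
 */
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdio.h>

#define DEMO_KEY  ((key_t) 0x47617461)   /* arbitrary key both sides agree on */
#define DEMO_SIZE (1024 * 1024)          /* 1 MB, illustration only */

int main(int argc, char **argv)
{
    /* Creates the segment if it does not exist; otherwise just finds it. */
    int shmid = shmget(DEMO_KEY, DEMO_SIZE, IPC_CREAT | 0600);
    if (shmid < 0)
    {
        perror("shmget");
        return 1;
    }

    char *base = shmat(shmid, NULL, 0);   /* map it into this process */
    if (base == (char *) -1)
    {
        perror("shmat");
        return 1;
    }

    if (argc > 1)
        snprintf(base, DEMO_SIZE, "%s", argv[1]);    /* writer */
    else
        printf("segment contains: \"%s\"\n", base);  /* reader */

    shmdt(base);
    return 0;
}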



Regards
Gaetano Mendola





