Thread: PostgreSQL on Sun Server
My client has a Sun server, an Enterprise 4500 (64-bit).
He proposes that I install PostgreSQL on this server instead of buying a Dell PowerEdge 4600 (32-bit).
What are the differences, in terms of installation, operation, and maintenance, between running PostgreSQL on a 64-bit server and on a 32-bit server?
Is the difference only at the OS level (Red Hat), or does it also affect the database?
Thank you for your help and experience.
GH
On Fri, 23 May 2003, Guillaume Houssay wrote:

> My client has a Sun server, an Enterprise 4500 (64-bit).
>
> He proposes that I install PostgreSQL on this server instead of buying a
> Dell PowerEdge 4600 (32-bit).

Depending on how many CPUs it has and so forth, the E4500 may not be much faster, and may even be slower. But it should be rock-solid reliable; the E series is nearly bulletproof hardware.

> What are the differences, in terms of installation, operation, and
> maintenance, between running PostgreSQL on a 64-bit server and on a
> 32-bit server?

None, really. You may as well set --enable-integer-datetimes when compiling, since there should be no great performance penalty for using 64-bit values for datetimes. Other than that, no great difference. Note that being on 64-bit hardware means you can likely have much more shared buffer memory than on x86 hardware, where you're limited to ~2 GB.

> Is the difference only at the OS level (Red Hat), or does it also affect
> the database?

Mostly the OS. I know Red Hat has dropped their SPARC line, but there is a project out there (can't recall the name, but you can Google for it) that 'ports' Red Hat's releases to SPARC hardware. Debian maintains a SPARC port if you want to use Debian.
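[Editor's note: for reference, a minimal sketch of the build step being described, assuming a source build of PostgreSQL 7.3 from the unpacked tarball; the installation prefix is only an example:]

    ./configure --prefix=/usr/local/pgsql --enable-integer-datetimes
    make
    make install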
On Fri, 23 May 2003 17:01, scott.marlowe wrote:

> None, really. You may as well set --enable-integer-datetimes when
> compiling, since there should be no great performance penalty for
> using 64-bit values for datetimes.
>
> Other than that, no great difference. Note that being on 64-bit
> hardware means you can likely have much more shared buffer memory than
> on x86 hardware, where you're limited to ~2 GB.

Does this mean the ~2 GB limit no longer applies to total shared memory? Or is it per application?

> > Is the difference only at the OS level (Red Hat), or does it also
> > affect the database?
>
> Mostly the OS. I know Red Hat has dropped their SPARC line, but there
> is a project out there (can't recall the name, but you can Google for
> it) that 'ports' Red Hat's releases to SPARC hardware.

http://auroralinux.org/

> Debian maintains a SPARC port if you want to use Debian.

Here we are starting to use Debian, but only because one of the admins has switched to it. And you know how fundamentalist they are! :-)

Regards... :-)
--
Why use just any relational database when you can use PostgreSQL?
-----------------------------------------------------------------
Martín Marqués                  | mmarques@unl.edu.ar
Programmer, Administrator, DBA  | Centro de Telematica
                                  Universidad Nacional del Litoral
-----------------------------------------------------------------
On Sat, 24 May 2003, Martin Marques wrote:

> On Fri, 23 May 2003 17:01, scott.marlowe wrote:
> >
> > None, really. You may as well set --enable-integer-datetimes when
> > compiling, since there should be no great performance penalty for
> > using 64-bit values for datetimes.
> >
> > Other than that, no great difference. Note that being on 64-bit
> > hardware means you can likely have much more shared buffer memory than
> > on x86 hardware, where you're limited to ~2 GB.
>
> Does this mean the ~2 GB limit no longer applies to total shared memory?
> Or is it per application?

Correct. The maximum shared memory segment on 64-bit hardware is larger than any amount of RAM currently installable. I'm pretty sure the limit is so large that the overhead of handling a large segment would become a problem long before you'd be able to hit a hard limit.

> > > Is the difference only at the OS level (Red Hat), or does it also
> > > affect the database?
> >
> > Mostly the OS. I know Red Hat has dropped their SPARC line, but there
> > is a project out there (can't recall the name, but you can Google for
> > it) that 'ports' Red Hat's releases to SPARC hardware.
>
> http://auroralinux.org/

Thanks for the link; I was looking for it and couldn't find it just a day or two after seeing it.

> > Debian maintains a SPARC port if you want to use Debian.
>
> Here we are starting to use Debian, but only because one of the admins
> has switched to it. And you know how fundamentalist they are! :-)

Oh yes. Right up there with some of the "Solaris on x86" folks I've met.
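[Editor's note: to make the limits concrete, a hedged illustration of the two settings involved; the values below are assumptions for illustration, not tuning advice. On Solaris the SysV shared memory ceiling is raised in /etc/system, and PostgreSQL's shared_buffers then determines how much of it the postmaster actually requests:]

    # /etc/system (Solaris) -- allow shared memory segments up to ~4 GB (example value)
    set shmsys:shminfo_shmmax=4294967295

    # postgresql.conf -- shared_buffers is counted in 8 kB buffers,
    # so 262144 buffers is roughly 2 GB (example value)
    shared_buffers = 262144

Changes to /etc/system take effect only after a reboot, and the postmaster must be restarted after changing shared_buffers.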
I am trying to divide the buffer pool into 5 parts, which I call clusters (different from PostgreSQL database clusters). What I want to do is create a mapping from relations to these clusters: one relation belongs to exactly one cluster, and a cluster can obviously contain more than one relation. I use a hash table to do this, similar to the buf_table.c file. Right now I don't know how many relations there are in total, so I have just initialized it to 100. The key is the relation and the data is the cluster information. Can anyone see any problems with this approach? Feedback will be greatly appreciated.

P.S. I am aware that, because of OS caching, this may not produce any improvement.

thanks
nailah
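[Editor's note: since the post mentions modelling this on buf_table.c, here is a minimal standalone C sketch of the idea: a small chained hash table keyed by relation OID whose entries carry a cluster number. The names RelClusterEntry, rel_cluster_assign, and rel_cluster_lookup are illustrative only, and this deliberately does not use PostgreSQL's actual dynahash API:]

    #include <stdio.h>
    #include <stdlib.h>

    #define NUM_CLUSTERS 5      /* the buffer pool is split into 5 "clusters" */
    #define HASH_SIZE    101    /* initial guess; the real relation count is unknown */

    typedef unsigned int Oid;   /* stand-in for a relation identifier */

    typedef struct RelClusterEntry
    {
        Oid         relid;      /* key: the relation */
        int         cluster;    /* data: which cluster it maps to */
        struct RelClusterEntry *next;
    } RelClusterEntry;

    static RelClusterEntry *rel_cluster_table[HASH_SIZE];

    /* Insert or update the relation -> cluster mapping. */
    static void
    rel_cluster_assign(Oid relid, int cluster)
    {
        unsigned int bucket = relid % HASH_SIZE;
        RelClusterEntry *e;

        for (e = rel_cluster_table[bucket]; e != NULL; e = e->next)
        {
            if (e->relid == relid)
            {
                e->cluster = cluster;
                return;
            }
        }
        e = malloc(sizeof(RelClusterEntry));
        e->relid = relid;
        e->cluster = cluster;
        e->next = rel_cluster_table[bucket];
        rel_cluster_table[bucket] = e;
    }

    /* Return the cluster a relation belongs to, or -1 if it is unmapped. */
    static int
    rel_cluster_lookup(Oid relid)
    {
        unsigned int bucket = relid % HASH_SIZE;
        RelClusterEntry *e;

        for (e = rel_cluster_table[bucket]; e != NULL; e = e->next)
            if (e->relid == relid)
                return e->cluster;
        return -1;
    }

    int
    main(void)
    {
        rel_cluster_assign(16384, 2);   /* relation 16384 lives in cluster 2 */
        rel_cluster_assign(16390, 2);   /* a cluster can hold many relations */
        rel_cluster_assign(16401, 4);

        printf("relation 16384 -> cluster %d\n", rel_cluster_lookup(16384));
        printf("relation 16401 -> cluster %d\n", rel_cluster_lookup(16401));
        printf("relation 99999 -> cluster %d\n", rel_cluster_lookup(99999));
        return 0;
    }

Because the buckets chain, the initial size (101 slots here) is only a performance guess rather than a hard cap, so not knowing the total relation count up front is not a correctness problem.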
On Tue, May 27, 2003 at 09:05:07AM -0600, scott.marlowe wrote:

> Correct. The maximum shared memory segment on 64-bit hardware is larger
> than any amount of RAM currently installable. I'm pretty sure the limit
> is so large that the overhead of handling a large segment would become a
> problem long before you'd be able to hit a hard limit.

Given that, in my experience, 1 GB of shared buffer space totally tanked performance on Solaris 7, the limit is already so large that handling a large segment is a problem ;-)

A

----
Andrew Sullivan                         204-4141 Yonge Street
Liberty RMS                             Toronto, Ontario Canada
<andrew@libertyrms.info>                M2P 2A8
                                        +1 416 646 3304 x110
On Tue, 27 May 2003, Andrew Sullivan wrote:

> On Tue, May 27, 2003 at 09:05:07AM -0600, scott.marlowe wrote:
> > Correct. The maximum shared memory segment on 64-bit hardware is larger
> > than any amount of RAM currently installable. I'm pretty sure the limit
> > is so large that the overhead of handling a large segment would become
> > a problem long before you'd be able to hit a hard limit.
>
> Given that, in my experience, 1 GB of shared buffer space totally tanked
> performance on Solaris 7, the limit is already so large that handling a
> large segment is a problem ;-)

I wonder if that's a performance issue with Solaris and shared memory that isn't a problem with BSD or Linux on SPARC hardware?
Hi,

Having some problems. I installed 7.3.2 and got as far as running the server in the background. I am not sure if what I am doing next is correct. I cd into the pgsql/bin folder and run createdb mydb. Here's my error message:

    createdb mydb
    ld.so.1: ./psql: fatal: libgcc_s.so.1: open failed: No such file or directory
    Killed
    createdb: database creation failed

Can someone explain to me why it is doing this and what I can do to fix it?

thanks
On Tue, May 27, 2003 at 12:07:06PM -0600, scott.marlowe wrote:

> I wonder if that's a performance issue with Solaris and shared memory
> that isn't a problem with BSD or Linux on SPARC hardware?

I wonder that, also. I haven't had a chance to try it out.

The problem in our case is that we couldn't use Linux or BSD anyway, because we need the 8- and 10-way scalability we have, and we need the cleverness about disabling processors, &c., that's built into Solaris. So even if Solaris is the problem, we're stuck.

The file system buffers, on the other hand, are pretty fast, so it's not too big a penalty.

One thing that is very interesting about all of this is that the large shared buffers only exact their performance penalty over time. My hypothesis is that it has something to do with expiring buffers. In one test I performed, I set the buffer to 1 GB. I then did a bunch of work on a data set that was close to 1 GB. Speedy. But when I finally went over 1 GB, everything slowed to a crawl. This makes me believe that the problem is in the way records are added to or expired from the buffer.

It was only one test, mind: I didn't have time to repeat it. So it's just a bit of gossip and not a result.

A

--
Andrew Sullivan <andrew@libertyrms.info>
It looks as if your local or postgres user doesn't have a $LD_LIBRARY_PATH that includes libgcc_s.so.1. Find the library on your system, and modify the user's profile to make sure they have it in $LD_LIBRARY_PATH. If you are running on the same system that you built PostgreSQL on, the library is there somewhere....

Nailah Ogeer wrote:

> Hi,
> Having some problems. I installed 7.3.2 and got as far as running the
> server in the background. I am not sure if what I am doing next is
> correct. I cd into the pgsql/bin folder and run createdb mydb. Here's
> my error message:
>
> createdb mydb
> ld.so.1: ./psql: fatal: libgcc_s.so.1: open failed: No such file or
> directory
> Killed
> createdb: database creation failed

--
P. J. "Josh" Rovero                     Sonalysts, Inc.
Email: rovero@sonalysts.com             www.sonalysts.com
215 Parkway North                       Work: (860)326-3671 or 442-4355
Waterford CT 06385
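[Editor's note: a hedged example of how that might look on Solaris; the path where the library turns up is an assumption here, so substitute whatever find actually reports on the system:]

    # locate the library the runtime linker cannot find
    find / -name libgcc_s.so.1 2>/dev/null

    # then, in the postgres user's profile (e.g. ~/.profile),
    # assuming the library was found under /usr/local/lib:
    LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
    export LD_LIBRARY_PATH

After re-sourcing the profile (or logging in again), createdb mydb should be able to load libgcc_s.so.1.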
On Tue, 27 May 2003, Andrew Sullivan wrote:

> On Tue, May 27, 2003 at 12:07:06PM -0600, scott.marlowe wrote:
> > I wonder if that's a performance issue with Solaris and shared memory
> > that isn't a problem with BSD or Linux on SPARC hardware?
>
> I wonder that, also. I haven't had a chance to try it out.
>
> The problem in our case is that we couldn't use Linux or BSD anyway,
> because we need the 8- and 10-way scalability we have, and we need
> the cleverness about disabling processors, &c., that's built into
> Solaris. So even if Solaris is the problem, we're stuck.
>
> The file system buffers, on the other hand, are pretty fast, so it's
> not too big a penalty.
>
> One thing that is very interesting about all of this is that the
> large shared buffers only exact their performance penalty over time.
> My hypothesis is that it has something to do with expiring buffers.
> In one test I performed, I set the buffer to 1 GB. I then did a bunch
> of work on a data set that was close to 1 GB. Speedy. But when I
> finally went over 1 GB, everything slowed to a crawl. This makes me
> believe that the problem is in the way records are added to or
> expired from the buffer.
>
> It was only one test, mind: I didn't have time to repeat it. So it's
> just a bit of gossip and not a result.

Thanks for the gossip. I tested PostgreSQL with ~1 GB of shared space under RH 7.2 and found that while it WAS slower than, say, with 512 MB, it was only slower when you weren't using more than 512 MB. If I was doing something larger than 512 MB and smaller than 1 GB, using 1 GB of shared memory was still faster, but not by much. It didn't seem to slow down a lot when going over the shared_buffers maximum, but we don't have any really huge datasets, so I didn't test all that thoroughly at that setting, as the performance gain wasn't all that great for large sets, and it was noticeably slower for medium-size datasets.

I just think SysV shared memory isn't built for handling large amounts of memory.

I could stand out here at the fence and chat all day... :-)
* scott.marlowe (scott.marlowe@ihs.com) wrote:

> On Fri, 23 May 2003, Guillaume Houssay wrote:
>
> Mostly the OS. I know Red Hat has dropped their SPARC line, but there is
> a project out there (can't recall the name, but you can Google for it)
> that 'ports' Red Hat's releases to SPARC hardware.

Red Hat SPARC became Aurora Linux. I'm currently running it on both 32- and 64-bit systems without problems.

dan
On Tue, May 27, 2003 at 02:08:12PM -0600, scott.marlowe wrote:

> I just think SysV shared memory isn't built for handling large amounts
> of memory.

That's sort of my feeling, too, and it corresponds with something Tom Lane said recently on -performance (where I guess this oughta move): the OS people have spent years and (in some cases) millions of dollars optimising the filesystem buffer algorithms, and his one attempt to alter the PostgreSQL buffer management algorithm was unsuccessful, so there is bound to be some point of diminishing returns in increasing the postmaster's shared buffers, and that point is likely to show up sooner rather than later.

> I could stand out here at the fence and chat all day... :-)

Me, too, but the fields keep a-growin'.

A

--
Andrew Sullivan <andrew@libertyrms.info>
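[Editor's note: to make that reasoning concrete, a hedged sketch of the configuration it points toward on a box with, say, 2 GB of RAM: keep shared_buffers modest and let the kernel's filesystem cache do most of the caching, telling the planner about that cache via effective_cache_size. The numbers are illustrative assumptions, not tuning advice; in the 7.3-era configuration both settings are counted in 8 kB pages:]

    # postgresql.conf (example values)
    shared_buffers = 16384           # ~128 MB managed by the postmaster itself
    effective_cache_size = 131072    # ~1 GB expected to sit in the OS filesystem cache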
On 27 May 2003 at 15:05, Andrew Sullivan wrote:

> On Tue, May 27, 2003 at 12:07:06PM -0600, scott.marlowe wrote:
> > I wonder if that's a performance issue with Solaris and shared memory
> > that isn't a problem with BSD or Linux on SPARC hardware?
>
> I wonder that, also. I haven't had a chance to try it out.
>
> The problem in our case is that we couldn't use Linux or BSD anyway,
> because we need the 8- and 10-way scalability we have, and we need
> the cleverness about disabling processors, &c., that's built into
> Solaris. So even if Solaris is the problem, we're stuck.

I have no experience with Solaris or SMP machines, so this is just a hypothesis..

You may still be able to run Linux on an 8- or 10-way SPARC. With the O(1) scheduler patches available for the 2.4 series, it should scale much more gracefully than before. It may not touch Solaris in scalability and fine-grained resource control, like disabling a processor, but all in all it might outperform Solaris. At least it doesn't hurt to try if you can spare the time and resources.

Many people say that Linux outperforms Solaris on 1-way/2-way machines. If you could find out what happens at 8-way, it would be great.. :-)

Bye
Shridhar

--
H. L. Mencken's Law: Those who can -- do. Those who can't -- teach.
Martin's Extension: Those who cannot teach -- administrate.
Andrew Sullivan wrote:

> One thing that is very interesting about all of this is that the
> large shared buffers only exact their performance penalty over time.
> My hypothesis is that it has something to do with expiring buffers.
> In one test I performed, I set the buffer to 1 GB. I then did a bunch
> of work on a data set that was close to 1 GB. Speedy. But when I
> finally went over 1 GB, everything slowed to a crawl. This makes me
> believe that the problem is in the way records are added to or
> expired from the buffer.
>
> It was only one test, mind: I didn't have time to repeat it. So it's
> just a bit of gossip and not a result.

Interesting. We encountered a similar issue, but on Red Hat Advanced Server 2.1. Specs:

Linux 2.4.9-e.3smp (Red Hat's tweaks on Linux 2.4.9)
Dual P4 Xeon 2.4 GHz
1.5 GB of RAM
shared_buffers = 32768 (about 256 MB)
PostgreSQL 7.3.2 compiled from source
Never used swap

Performance seems speedy initially, but after a few days, or after some large data migration and processing operations, things slow to a crawl, especially on INSERTs, with little disk or CPU activity. Did the usual dead-chicken waving: VACUUM, ANALYZE, REINDEX, dump & restore, restart PostgreSQL. No luck. Reboot, and everything's back to normal. Annoying.

It's a repeatable phenomenon, though we can't figure out the cause: the old-ish OS, shared memory fragmentation(?), or just plain bad luck. We've kept some vmstat and other details. Bug me if anyone's interested.

--
Linux homer 2.4.18-14 #1 Wed Sep 4 13:35:50 EDT 2002 i686 i686 i386 GNU/Linux
 2:59pm  up 153 days, 5:46, 11 users,  load average: 3.81, 4.73, 5.65