Re: Question on Opteron performance - Mailing list pgsql-general

From nw@codon.com
Subject Re: Question on Opteron performance
Date
Msg-id 20040309070921.9507.qmail@codon.com
In response to Question on Opteron performance  ("Steve Wolfe" <nw@codon.com>)
Responses Re: Question on Opteron performance
List pgsql-general
>>   Right now, we're using a dual 2.8GHz Xeon with 3 gigs of memory, and run
>> without fsync() enabled.  Between disk cache and shared buffers, the disk
>> system isn't an issue - vmstat shows that the disk I/O is nearly always at
>> zero, with the occasional blips of activity rarely being more than a few
>> hundred kilobytes.

> You do know that turning off fsync() means your data will all get
> trashed if you get an OS crash or power problem or H/W crash or ...

  But of course.  : )

  I've been running production servers with fsync() disabled for about
four years now, without a problem.  On the semi-production machine where
that sort of thing is allowed to happen, even abnormal power outages
haven't produced any data corruption in the few times they've occurred.
Of course, I do realize that sooner or later it may catch up to me and
bite me in the butt.  Because of that, I do have recovery/contingency
plans in place!
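
  For reference, the relevant knob and a quick way to double-check it look
roughly like this (exact syntax may vary a bit between PostgreSQL versions,
and the usual data-loss caveats apply):

    # postgresql.conf -- trade crash safety for speed; only sane if you
    # can rebuild or restore the data after an OS crash or power failure
    fsync = false

    -- and from psql, to confirm the running value:
    SHOW fsync;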

> Is this true? Did they really double the size of the memory bus, or is
> it a case of 4 CPUs fighting for the same memory bandwidth that 2 had
> before?

  As another person pointed out, the Opterons are NUMA-style machines.
Each CPU has its own memory controller, so each time you add another CPU,
you're also adding more memory bandwidth.  This is how some of the "bigger"
machines (like Suns) have been doing it for some time.

  In benchmarking various systems with our real-world data (4-way P3 Xeon,
dual Athlons, dual P4 Xeons), I've found that adding CPU cycles tends to
increase performance linearly, and only in small increments.  Increasing
the memory bandwidth, however, is what seems to produce the large
performance improvements.

  In fact, while the dual Athlon smoked the 4xP3 Xeon machine, it was still
very limited by the "measly" 266 MHz, 64-bit memory subsystem.  When we'd
max out the throughput, the CPUs usually weren't doing a whole lot other
than waiting for memory.  With double the memory bandwidth, the P4 Xeons
seem able to keep their CPUs a bit busier than the Athlons could.  If I'm
wrong about the shared-buffer limitation, and PostgreSQL's design lends
itself well to the Opteron's memory architecture, then a 4-way Opteron
with more than 4 times the memory bandwidth should definitely be good
for what ails us.

>>  If anyone has done tests with PostgreSQL on 2- vs. 4-way machines under
>> heavy load (many simultaneous connections), I would greatly appreciate
>> hearing about the results.

> What sort of load is "heavy load" to you?

  If I recall from today's loads, we were getting about 50 queries per
second from the pool of front-end servers.  Obviously, whether 50 queries
per second is "heavy" depends on the type of queries, but these were enough
to push the 5-minute system load up into the 0.8 range.  In our application,
once we exceed a system load of about 0.9, we start seeing enough slowdown
that it does become noticeable.  Not always very significant, but noticeable.
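
  (That figure is just a rough estimate, by the way.  Something along these
lines against the stats views gives a transactions-per-second number, which
is close to queries per second if most queries run as single-statement
transactions; it assumes the stats collector is enabled:)

    -- sample this twice, N seconds apart, and divide the difference by N
    SELECT sum(xact_commit + xact_rollback) AS xacts
      FROM pg_stat_database;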

steve
