Thread: hardware for a server

From
A B
Date:
Hello.

It's time to get new hardware for a server that will run both
PostgreSQL and Apache.
The workload will be similar to that of a standard "PHP forum"
(mostly SELECTs, plus logging of what has been read).

The model I'm looking at right now is

2x Xeon E5520 2.26 GHz 8 MB (8 cores in total)
24 GB 1066 MHz DDR3 ECC (or more)

When it comes to a RAID controller I have the choice of:

3Ware SAS 9690SA-8i 512 MB BBU
Adaptec SAS Raid 5805  256 MB BBU
LSI MegaRaid SAS 8708 128 MB BBU

Any advice/experience on which RAID controller to pick? The 3ware has
the most memory, and I've read some good reviews of that one.
The OS will be CentOS 5.4.

When it comes to hard drives, I think my best option is to use

* 4 discs (raid 10)  for the database

and

* 4 discs (raid 10) for OS, xlog and other data (images,avatars etc.)
or more likely:
* 2 discs (raid 1) for OS/xlog and
* 2 discs (raid 1) for "other data"


When it comes to choosing the actual discs, I guess these would be
appropriate:
"other data": Barracuda ES.2 1000 GB (SATA), to get a good GB/$ ratio
OS/xlog: Barracuda ES.2 500 GB (SAS)
DB: Cheetah 15K.6 146 GB (SAS) (the 300 GB would be better if I can
find some more money)

This of course gives me a headache when it comes to keeping spare discs.

The other option would be to put OS/xlog and DB on Barracuda ES.2 500
GB (SAS). I have no idea what that means for performance. A lot /
barely noticeable?

Any comments, advice on this kind of setup?

Re: hardware for a server

From
Scott Marlowe
Date:
On Sat, Mar 13, 2010 at 1:34 AM, A B <gentosaker@gmail.com> wrote:
> Hello.
>
> It's time to get new hardware for a server that will run both
> PostgreSQL and Apache.
> The workload will be similar to that of a standard "PHP forum"
> (mostly SELECTs, plus logging of what has been read).
>
> The model I'm looking at right now is
>
> 2x Xeon E5520 2.26 GHz 8 MB (8 cores in total)
> 24 GB 1066 MHz DDR3 ECC (or more)
>
> When it comes to a RAID controller I have the choice of:
>
> 3Ware SAS 9690SA-8i 512 MB BBU
> Adaptec SAS Raid 5805  256 MB BBU
> LSI MegaRaid SAS 8708 128 MB BBU
>
> Any advice/experience on what raid controller to pick? The 3ware has
> the most memory and I've read some good reviews on that one.
> The OS will be CentOS 5.4

I can't comment on any of those particular controllers.  There's a
benchmark here:

http://www.tomshardware.com/reviews/adaptec-serial-controllers,1806-11.html


> When it comes to hard drives, I think my best option is to use
>
> * 4 discs (raid 10)  for the database
>
> and
>
> * 4 discs (raid 10) for OS, xlog and other data (images,avatars etc.)
> or more likely:
> * 2 discs (raid 1) for OS/xlog and
> * 2 discs  (raid1) for "other data"

Leaving out 2 drives for other data is kind of a waste of spindles.
You can network mount terabytes from a file server full of SATA drives
for much less.

Also...

> When it comes to choosing the actual discs, I guess these would be
> appropriate:
> "other data": Barracuda ES.2 1000 GB (SATA), to get a good GB/$ ratio
> OS/xlog: Barracuda ES.2 500 GB (SAS)
> DB: Cheetah 15K.6 146 GB (SAS) (the 300 GB would be better if I can
> find some more money)

Mixing SATA and SAS drives on the same controller can be problematic.
Some controllers don't behave well when you mix and match.

I'd suggest building an 8 disk RAID-10 and a single mirror + 6 disk
RAID-10 and testing both configurations.  If you need more storage,
look at 300G SAS drives.  Your two bottlenecks are likely to be random
write IO ops and / or CPU horsepower, depending on how your web
app is built. You can always buy another $1500 box with hot CPUs and a
pair of big SATA drives if you need more CPU horsepower, but beefing
up IO is a lot harder once your db server is in place.

> This of course gives me a headache when it comes to keeping spare discs.
>
> The other option would be to put OS/xlog and DB on Barracuda ES.2 500
> GB (SAS). I have no idea what that means for performance. A lot /
> barely noticeable?

The price difference between 500 GB and 1 TB drives is now too small
to bother mixing the two; just get 1 TB.  And I can't really recommend
Seagate ES.2 or 7200.11 drives right now with the failure rates I've
been seeing.

> Any comments, advice on this kind of setup?

If you've got a lot of reads going on be sure to toss memcached into
this equation.

Re: hardware for a server

From
Vick Khera
Date:
On Sat, Mar 13, 2010 at 3:34 AM, A B <gentosaker@gmail.com> wrote:
> 3Ware SAS 9690SA-8i 512 MB BBU
> Adaptec SAS Raid 5805  256 MB BBU
> LSI MegaRaid SAS 8708 128 MB BBU
>

When faced with the choice of Adaptec vs. anything else, I choose
anything else.  When faced with the choice of LSI vs anything else, I
look really hard for a reason not to choose LSI.  I usually choose
LSI.  See if the LSI has upgradable RAM.  If so, then bump the RAM to
the max.  I have heard good things of 3Ware too, but have never used
them.  We just put a new LSI SAS controller into our main office
mail/file server and it is crazy fast running ZFS in FreeBSD.  I don't
recall the exact model number, though.

As for your "2 disks for other data" you'd be better off putting xlog
on the dedicated pair, and using the OS pair for the "other data".
That way the xlog drive will never have to seek, and your writes will
be as fast as possible.

As for drive choice, just pick the fastest you can get that are big
enough for your data.

Re: hardware for a server

From
Greg Smith
Date:
A B wrote:
> 3Ware SAS 9690SA-8i 512 MB BBU
> Adaptec SAS Raid 5805  256 MB BBU
> LSI MegaRaid SAS 8708 128 MB BBU
>
> When it comes to choosing the actual discs, I guess these would be
> appropriate:
> "other data": Barracuda ES.2 1000 GB (SATA), to get a good GB/$ ratio
> OS/xlog: Barracuda ES.2 500 GB (SAS)
> DB: Cheetah 15K.6 146 GB (SAS) (the 300 GB would be better if I can
> find some more money)
>

Here are some things not to do, to help narrow this down.

Don't mix SAS and SATA; vendors will tell you it works, but it's
extremely painful when it doesn't, and that happens sometimes.

Don't put SAS drives on a 3ware controller.  They say that works now,
but they haven't really gotten it right yet--their controllers are still
only good with SATA drives.

Don't put an Adaptec card into a Linux system.  They don't take that OS
nearly as seriously as your other choices here.

Don't mix drive capacities unless absolutely necessary for budget
reasons, because you'll hate life the minute you're trying to recover
from your first disaster and have minimal flexibility for rebuilding any
arrays because of that choice.  This is also a secondary reason not to
mix SAS and SATA.

Do not mix the xlog and OS disks onto the same drive.  That defeats the
purpose of having a separate xlog disk in the first place, particularly
because on a Linux system every xlog write is going to require flushing
every OS write in the pipeline to commit.  Ditto for mixing the xlog and
all this image/avatar stuff, if those are producing large writes too.
You'll be better off mixing the xlog with the database disks--at least
then you'll be clogging the RAID card's write cache with mostly database
writes when things get congested, rather than having either sitting
blocked behind OS or "other data" writes that must get flushed first.

Given what you've said about your budget here, I suspect that you're
heading toward either 3ware or LSI and all SATA drives.  I wouldn't
expect that big of a performance difference between the two with only 8
drives on there.  If you had 24, the 3ware controller would likely turn
into the bottleneck, and if this was an all SAS system the LSI one would
also be the only sensible choice.  (Make sure you get the right battery
included with whatever controller you pick)

In your situation, I'd probably get a pair of TB drives for the OS and
"other data", just to get them all out of the way in one place to fight
with each other, then use whatever budget is left over to get the best
performing drives you can to create a 6-disk RAID10 for the database +
xlog.  If all six of those can only be a smaller drive instead after
that, that's not such a bad combination--you can always grab a larger
capacity drive as your spare and then put it anywhere in the array in an
emergency.

Mind you, that's said from the perspective of a database person.  If
your image data has to be high performance, too, maybe an even 4/4 split
between OS+data and DB+xlog would make more sense for your app.

--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com   www.2ndQuadrant.us


Re: hardware for a server

From
A B
Date:
> Don't put SAS drives on a 3ware controller.  They say that works now, but
> they haven't really gotten it right yet--their controllers are still only
> good with SATA drives.

How bad will it be with SAS drives? Is there so little performance
gain with 3ware+SAS?


Scott Marlowe stated in an earlier reply that Seagate's ES.2 disks are
not very good, which would leave the SAS Cheetah discs with the LSI
card. The LSI card is down to 128 MB of memory.
But is LSI+SAS still the clear winner over 3ware with 512 MB and SAS/SATA?


> In your situation, I'd probably get a pair of TB drives for the OS and
> "other data", just to get them all out of the way on one place to fight with
> each other, then use whatever budget is leftover to get the best performing
> drives you can to create a 6-disk RAID10 for the database + xlog.  If all
> six of those can only be a smaller drive instead after that, that's not such
> a bad combination--you can always grab a larger capacity drive as your spare
> and then put it anywhere in the array in an emergency.

> Don't mix SAS and SATA; vendors will tell you it works, but it's
> extremely painful when it doesn't, and that happens sometimes.

Does that also forbid the case where you create two raid arrays, let's
say a raid-1 with only SATA discs (huge discs) and a raid-10 with
only SAS drives? (as in your example with the 2/6 split)
There are internal SATA controllers, so I don't have to bother the RAID
card with a pair of SATA drives, but I'd prefer to use the BBU for
all the drives. Data loss is not funny.

Re: hardware for a server

From
Bruce Momjian
Date:
Greg Smith wrote:
> Given what you've said about your budget here, I suspect that you're
> heading toward either 3ware or LSI and all SATA drives.  I wouldn't
> expect that big of a performance difference between the two with only 8
> drives on there.  If you had 24, the 3ware controller would likely turn
> into the bottleneck, and if this was an all SAS system the LSI one would
> also be the only sensible choice.  (Make sure you get the right battery
> included with whatever controller you pick)

Is this documented somewhere, like on our wiki?  It seems we have a
clear consensus on this, and we should document it.

--
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do

Re: hardware for a server

From
Vick Khera
Date:
On Mon, Mar 15, 2010 at 8:55 AM, Greg Smith <greg@2ndquadrant.com> wrote:
> Don't put an Adaptec card into a Linux system.  They don't take that OS
> nearly as seriously as your other choices here.
>

Interesting... same advice goes for Adaptec + FreeBSD.  I guess
Adaptec + !Windows == bad?

Daylight savings time confusion

From
"Rob Richardson"
Date:
Greetings!

Our database monitors the progression of steel coils through the
annealing process.  The times for each step are recorded in wallclock
time (US eastern time zone for this customer) and in UTC time.  During
standard time, the difference will be 5 hours, and during daylight
savings time the difference will be 4 hours.

I just looked at the record for a charge for which heating started just
after 9:00 Saturday night, less than 3 hours before the change to
daylight savings time.  The UTC time stored for this event is six hours
later!

The function that writes these times first stores the UTC time in a
variable named UTCTimestamp:

    select into UTCTimestamp current_timestamp at time zone 'UTC';

Then, later in the function, the two times get written into the record
(along with some other stuff):

            update charge set
                status = ChargeStatus,
                fire_date = current_timestamp,
                fire_date_utc = UTCTimestamp,
                fire_user = FurnaceTender,
                updated_by = UserId,
                updated_date = current_timestamp
            where charge = ChargeNum;

Can someone explain why fire_date is 2010-03-13 21:39:51.744 and
fire_date_utc is 2010-03-14 03:39:51.744 for this record?

There is another charge that began firing five and a half hours before
the DST switch.  The difference between its fire_date and fire_date_utc
times is five hours, as expected.

RobR

Re: hardware for a server

From
John R Pierce
Date:
Vick Khera wrote:
> Interesting... same advice goes for Adaptec + FreeBSD.  I guess
> Adaptec + !Windows == bad?
>

I've never liked Adaptec, Windows or not.   Bunches of their
too-popular SCSI cards, even way back in the old days of 10-20 MB/sec
SCSI, had sketchy electrical specs on the SCSI bus and were way too
sensitive to cabling and termination, while the LSI stuff was just a
lot more robust.   I'd take a 53C875 or whatever over a 2940W any day.

I've always been an LSI fan, as far back as when they were NCR Tolerant,
then later Symbios.   And the AMI Megaraid line, too.



Re: Daylight savings time confusion

From
Justin Graf
Date:
On 3/15/2010 2:40 PM, Rob Richardson wrote:
> Greetings!
>
> Our database monitors the progression of steel coils through the
> annealing process.  The times for each step are recorded in wallclock
> time (US eastern time zone for this customer) and in UTC time.  During
> standard time, the difference will be 5 hours, and during daylight
> savings time the difference will be 4 hours.
>
> I just looked at the record for a charge for which heating started just
> after 9:00 Saturday night, less than 3 hours before the change to
> daylight savings time.  The UTC time stored for this event is six hours
> later!
>
> The function that writes these times first stores the UTC time in a
> variable named UTCTimestamp:
>
>      select into UTCTimestamp current_timestamp at time zone 'UTC';
>
> Then, later in the function, the two times get written into the record
> (along with some other stuff):
>
>              update charge set
> status=ChargeStatus,fire_date=current_timestamp,
> fire_date_utc=UTCTimestamp, fire_user=FurnaceTender,
> updated_by=UserId,updated_date=current_timestamp where charge=ChargeNum;
>
> Can someone explain why fire_date is 2010-03-13 21:39:51.744 and
> fire_date_utc is 2010-03-14 03:39:51.744 for this record?
>
> There is another charge that began firing five and a half hours before
> the DST switch.  The difference between its fire_date and fire_date_utc
> times is five hours, as expected.
>
> RobR
>

My first thought is that the server is using libraries that don't know
DST now begins three weeks earlier than it once did, so its clock is
all confused.

I would check the time on the PostgreSQL server, making sure it reads
out correctly.  The query below was run on PG 8.4 on Windows 2008 Server:

Select current_timestamp, current_timestamp at time zone 'UTC';
"2010-03-15 16:43:11.382-04";"2010-03-15 20:43:11.382"




Re: Daylight savings time confusion

From
"Rob Richardson"
Date:
Thanks for the try, Justin, but that doesn't seem to be the problem.
The query generates the same results on my customer's machine.  Besides,
I think your theory would only hold up if there were two machines
involved.  There aren't.

RobR

Re: Daylight savings time confusion

From
Steve Crawford
Date:
Rob Richardson wrote:
> Greetings!
>
> ...
> I just looked at the record for a charge for which heating started just
> after 9:00 Saturday night, less than 3 hours before the change to
> daylight savings time.  The UTC time stored for this event is six hours
> later!
>
> The function that writes these times first stores the UTC time in a
> variable named UTCTimestamp:
>
>     select into UTCTimestamp current_timestamp at time zone 'UTC';
>
> Then, later in the function, the two times get written into the record
> (along with some other stuff):
>
>             update charge set
> status=ChargeStatus,fire_date=current_timestamp,
> fire_date_utc=UTCTimestamp, fire_user=FurnaceTender,
> updated_by=UserId,updated_date=current_timestamp where charge=ChargeNum;
>
> Can someone explain why fire_date is 2010-03-13 21:39:51.744 and
> fire_date_utc is 2010-03-14 03:39:51.744 for this record?
>
> There is another charge that began firing five and a half hours before
> the DST switch.  The difference between its fire_date and fire_date_utc
> times is five hours, as expected....
>


I think you are shooting yourself in the foot with the different
timestamp columns. Time is time is time and you only need one column to
represent it. I think the problems were masked until the time-zone
change. (Trust me, I'm having my own fun today. Try "date -d yesterday"
between midnight and 1am the day after springing forward and you get the
11pm hour Saturday, but "date -d '0015 2010-03-15 -1 day'" gives fifteen
minutes past midnight on the 14th.)

It is a bit difficult to trace everything without seeing your full
functions and column types but I believe that the first issue is that
when you specify the timezone, the result does not include the time-zone
offset (timestamp without tz). Note that there is no -00 (or +00) and
there isn't one regardless of zone:

select now(),now() at time zone 'UTC' as utc, now() at time zone
'America/New_York' as ny;
-[ RECORD 1 ]-------------------------
now | 2010-03-15 15:34:52.3342-07
utc | 2010-03-15 22:34:52.3342
ny  | 2010-03-15 18:34:52.3342

Now see what happens if you run:
select current_timestamp, (select current_timestamp at time zone
'UTC')::timestamptz ;
-[ RECORD 1 ]------------------------------
now         | 2010-03-15 15:39:44.594979-07
timestamptz | 2010-03-15 22:39:44.594979-07

Two timestamptz columns offset by 7 hours. (Really offset - they are
both displayed in Pacific Daylight Time).

The second issue is that depending on which of your columns/variables
are with or without the zone information and how you do your
calculations, you could easily end up with a situation where your
current time is Standard so your program "knows" the correct offset to
be 5 hours which you add to a 9pm timestamptz. Given the missing hour,
9pm plus 5 hours gets you to 3am. But if you are mixing and matching
timestamps with and without time zone, you are in for some interesting
problems.

Finally, beware that time handling has been updated across PG versions.
For example, "select now() - '1 day'::interval" works differently in
7.4 (if run early Monday after a time change you will end up with late
Saturday) than in 8.4 (you get the current time of day on Sunday). So if
you take the difference between those two timestamps, in 7.4 it is 24
hours but in 8.4 it is 23 hours.
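
That 8.4-style behavior can be checked directly. The sketch below is
not from the thread; it assumes a recent server with TimeZone set to
America/New_York, using the 2010-03-14 spring-forward as the example gap:

```sql
set timezone = 'America/New_York';

-- Subtracting '1 day' does calendar arithmetic in the session time zone
-- (wall-clock time is preserved), while '24 hours' subtracts an absolute
-- duration.  Across the spring-forward gap the two differ by an hour.
select timestamptz '2010-03-15 01:00' - interval '1 day'    as same_clock_time,
       timestamptz '2010-03-15 01:00' - interval '24 hours' as one_rotation;
--  same_clock_time | 2010-03-14 01:00:00-05
--  one_rotation    | 2010-03-14 00:00:00-05
```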

A better approach is to store the fully-qualified timestamp in a single
column of type timestamptz instead of duplicated columns that are
supposed to represent the same point in time (but different zones). Then
display that one column in whatever timezone(s) you want:
select
    now() as local,
    now() at time zone 'America/New_York' as eastern,
    now() at time zone 'CST6CDT' as central,
    now() at time zone 'Chile/Continental' as chile,
    now() at time zone 'Africa/Addis_Ababa' as ethiopia;
-[ RECORD 1 ]---------------------------
local    | 2010-03-15 15:47:01.644575-07
eastern  | 2010-03-15 18:47:01.644575
central  | 2010-03-15 17:47:01.644575
chile    | 2010-03-15 18:47:01.644575
ethiopia | 2010-03-16 01:47:01.644575

Cheers,
Steve


Re: Daylight savings time confusion

From
Adrian Klaver
Date:
On 03/15/2010 12:40 PM, Rob Richardson wrote:
> Greetings!
>
> Our database monitors the progression of steel coils through the
> annealing process.  The times for each step are recorded in wallclock
> time (US eastern time zone for this customer) and in UTC time.  During
> standard time, the difference will be 5 hours, and during daylight
> savings time the difference will be 4 hours.
>
> I just looked at the record for a charge for which heating started just
> after 9:00 Saturday night, less than 3 hours before the change to
> daylight savings time.  The UTC time stored for this event is six hours
> later!

First, the time change occurs at 2:00 am Sunday morning, which is five
hours after 9:00 pm Saturday. Second, the timestamps below show a start
time of 39 minutes after 9, which is a little more than just after :)
Are you sure about the time?

>
> The function that writes these times first stores the UTC time in a
> variable named UTCTimestamp:
>
>      select into UTCTimestamp current_timestamp at time zone 'UTC';
>
> Then, later in the function, the two times get written into the record
> (along with some other stuff):
>
>              update charge set
> status=ChargeStatus,fire_date=current_timestamp,
> fire_date_utc=UTCTimestamp, fire_user=FurnaceTender,
> updated_by=UserId,updated_date=current_timestamp where charge=ChargeNum;
>
> Can someone explain why fire_date is 2010-03-13 21:39:51.744 and
> fire_date_utc is 2010-03-14 03:39:51.744 for this record?

Some other process updated either field?

>
> There is another charge that began firing five and a half hours before
> the DST switch.  The difference between its fire_date and fire_date_utc
> times is five hours, as expected.
>
> RobR
>


--
Adrian Klaver
adrian.klaver@gmail.com

Re: Daylight savings time confusion

From
Tom Lane
Date:
"Rob Richardson" <Rob.Richardson@rad-con.com> writes:
> Our database monitors the progression of steel coils through the
> annealing process.  The times for each step are recorded in wallclock
> time (US eastern time zone for this customer) and in UTC time.  During
> standard time, the difference will be 5 hours, and during daylight
> savings time the difference will be 4 hours.

It seems to me that you're not entirely understanding how timestamps
work in Postgres.  The above is quite unnecessary, and the way that
you're computing the data to store looks wrong too.  I think the
problem is that you are inserting unnecessary (and incorrect)
conversions because of sloppiness about data types.

You started with (to simplify matters) 9:39pm last Saturday:

# select '2010-03-13 21:39 EST'::timestamptz;
      timestamptz
------------------------
 2010-03-13 21:39:00-05
(1 row)

Now what this is under the hood is a *UTC time*.  The fact that I
entered it as a time with respect to EST zone doesn't change that;
it got rotated to UTC internally.  The display as EST doesn't change
it either; that's because the internal value is rotated back to my
TimeZone setting (EST5EDT) for display.  So the actual internal
value is equivalent to 2010-03-14 02:39:00 UTC.  (In your problem
case, that was what you got from current_timestamp, but we can
experiment with this manually entered value instead.)

You then did this:

>     select into UTCTimestamp current_timestamp at time zone 'UTC';

What the AT TIME ZONE expression produces is a timestamp WITHOUT time
zone value, which will be '2010-03-14 02:39:00' without reference to
any particular time zone:

# select '2010-03-13 21:39 EST'::timestamptz at time zone 'UTC';
      timezone
---------------------
 2010-03-14 02:39:00
(1 row)

Now at this point I have to guess, since you didn't show us the declared
data types of any of the variables involved, but I'm going to guess that
the local variable UTCTimestamp is declared as timestamp WITH time zone
(timestamptz) whereas the fire_date and fire_date_utc columns are
timestamp WITHOUT time zone.  Since the result of the AT TIME ZONE
construct is timestamp WITHOUT time zone, it will have to be converted
to timestamp WITH time zone to be stored into UTCTimestamp.  And since
the value has no attached time zone, the conversion process will assume
that it's relative to the zone specified by TimeZone.  So that means
it's interpreted as 2010-03-14 02:39:00 in EST5EDT.  And there's a bit
of a problem with that: since we jumped from 02:00 to 03:00 local time,
there *was* no instant when a properly functioning clock would have read
02:39 local time.  You could make an argument for throwing an error
here, but what the timestamp input routine actually does is to assume
that local standard time was meant.  So the result is the equivalent
of 07:39 UTC (five-hour offset from the given time).  If I do this by
hand I get

# select '2010-03-14 02:39:00'::timestamptz;
      timestamptz
------------------------
 2010-03-14 03:39:00-04
(1 row)

The display is 03:39 EDT, which is what an east-coast clock would
actually have read at 07:39 UTC.  Remember that the internal value
is just UTC; the rotation to 03:39 is an I/O or conversion behavior.

And then lastly you stored this value into a timestamp WITHOUT time zone
column.  That means it gets rotated to the TimeZone zone, as if for
display.  So what went into the fire_date_utc column is '2010-03-14
03:39:00', sans any identifying information that would have clarified
what this was supposed to mean.

Meanwhile, your fire_date column was set directly from current_timestamp
without any intermediate shenanigans, so what it got was 02:39 UTC
rotated just once to local time, producing 21:39 of the previous day as
expected.

If my guesses are correct, then the minimum change to avoid this type
of problem in the future is to change UTCTimestamp to be declared as
timestamp WITHOUT time zone, so that you don't get two extra zone
rotations in there.  However, I would strongly suggest that you rethink
how you're storing the data altogether.  Two columns that represent the
identical item of information is not good database design according to
any theory I've ever heard.  What I'd store is a single fire_date column
that is of type timestamp with time zone and is just assigned directly
from current_timestamp without any funny business.  Internally it is UTC
and completely unambiguous.  Subsequently you can read it out in any
time zone you want, either by setting TimeZone appropriately or by using
the AT TIME ZONE construct to do a one-time conversion.

            regards, tom lane
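
Both fixes Tom suggests can be sketched in a few lines of SQL (the
charge_demo table below is a hypothetical stand-in for the real charge
table, not from the original message):

```sql
-- Minimal fix: declare the plpgsql variable WITHOUT time zone, so the
-- result of AT TIME ZONE 'UTC' is stored with no extra zone rotation:
--     UTCTimestamp timestamp without time zone;

-- Better fix: keep a single, unambiguous timestamptz column.
create table charge_demo (
    charge    integer primary key,
    fire_date timestamp with time zone  -- held internally as UTC
);

insert into charge_demo values (1, current_timestamp);

-- Convert on output instead of storing duplicate columns:
select fire_date,                                  -- rendered in the session TimeZone
       fire_date at time zone 'UTC'     as fire_date_utc,
       fire_date at time zone 'EST5EDT' as fire_date_eastern
from charge_demo;
```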

Re: hardware for a server

From
Greg Smith
Date:
John R Pierce wrote:
> Vick Khera wrote:
>> Interesting... same advice goes for Adaptec + FreeBSD.  I guess
>> Adaptec + !Windows == bad?
>>
>
> i've never liked adaptec, windows or not.

Yeah, I was trying to be as nice as possible since I don't run Windows
anymore, and for all I know their cards are fine on that OS.  For me, I
just simplify to "Adaptec == bad", but can only tell you specifically
why that is for Linux.

> always been an LSI fan, as far back as when they were NCR Tolerant,
> then later Symbios.   And the AMI Megaraid line, too.

I used to hesitate to recommend them only because their performance used
to lag relative to some of the alternatives--they were more the
"reliable but a bit slower" choice in earlier times.  This is no longer
the case.

--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com   www.2ndQuadrant.us


Re: hardware for a server

From
Greg Smith
Date:
Bruce Momjian wrote:
> Greg Smith wrote:
>
>> Given what you've said about your budget here, I suspect that you're
>> heading toward either 3ware or LSI and all SATA drives.  I wouldn't
>> expect that big of a performance difference between the two with only 8
>> drives on there.  If you had 24, the 3ware controller would likely turn
>> into the bottleneck, and if this was an all SAS system the LSI one would
>> also be the only sensible choice.  (Make sure you get the right battery
>> included with whatever controller you pick)
>>
>
> Is this documented somewhere, like on our wiki?  It seems we have a
> clear consensus on this and we should document this.
>

The documentation we do have on the wiki in this area is out of date:
http://wiki.postgresql.org/wiki/SCSI_vs._IDE/SATA_Disks

I just finished testing a bunch of LSI cards recently enough that I
haven't yet gotten around to fixing the outdated info there about that company.

--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com   www.2ndQuadrant.us


Re: hardware for a server

From
Greg Smith
Date:
A B wrote:
>> Don't put SAS drives on a 3ware controller.  They say that works now, but
>> they haven't really gotten it right yet--their controllers are still only
>> good with SATA drives.
>>
>
> How bad will it be with SAS drives? Is there so little performance
> gain with 3ware+SAS?
>

The concern isn't performance.  I did a SAS+3ware install recently and
was struck by how the SAS support seemed bolted on without being
fully integrated.  My concerns are more related to whether they've
really handled all the possible drive failure cases, given that they are
missing basics like http://www.3ware.com/KB/article.aspx?id=15456

> Scott Marlowe stated in an earlier reply that Seagate's ES.2 disks are
> not very good, which would leave the SAS Cheetah discs with the LSI
> card. The LSI card is down to 128 MB of memory.
> But is LSI+SAS still the clear winner over 3ware with 512 MB and SAS/SATA?
>

I think it will be too close to predict which will work better for your
application, and that you'd be better off thinking in terms of your
storage and monitoring needs instead of stressing over the possible
performance difference between these two options.  Those are the two
reasonable paths here, and I don't believe they lead to such
dramatically different places at the end from a performance perspective
that speed should be the only thing factoring into how to make that
decision now.

>> Don't mix SAS and SATA; vendors will tell you it works, but it's
>> extremely painful when it doesn't, and that happens sometimes.
>>
>
> Does that also forbid the case where you create two raid arrays, let's
> say a raid-1 with only SATA discs (huge discs) and a raid-10 with
> only SAS drives? (as in your example with the 2/6 split)
> There are internal SATA controllers so I don't have to bother the Raid
> card with a pair of SATA drives, but I'd prefer to use the BBU  for
> all the drives.

The situation I never have satisfying results with involves mixing SAS
and SATA drives on the same controller; I assumed you'd be using them in
separate RAID arrays, which doesn't change that opinion.

Write caches typically work only against drives connected directly to
that controller.  You could easily split the OS drive out onto your
internal SATA controllers.  However, I think you'll be disappointed with
the results, because software RAID-1 for the boot drive in particular
is more difficult to manage and to recover from failures with.

--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com   www.2ndQuadrant.us


Re: hardware for a server

From
Bruce Momjian
Date:
Greg Smith wrote:
> Bruce Momjian wrote:
> > Greg Smith wrote:
> >
> >> Given what you've said about your budget here, I suspect that you're
> >> heading toward either 3ware or LSI and all SATA drives.  I wouldn't
> >> expect that big of a performance difference between the two with only 8
> >> drives on there.  If you had 24, the 3ware controller would likely turn
> >> into the bottleneck, and if this was an all SAS system the LSI one would
> >> also be the only sensible choice.  (Make sure you get the right battery
> >> included with whatever controller you pick)
> >>
> >
> > Is this documented somewhere, like on our wiki?  It seems we have a
> > clear consensus on this and we should document this.
> >
>
> The documentation we do have on the wiki in this area is out of date:
> http://wiki.postgresql.org/wiki/SCSI_vs._IDE/SATA_Disks
>
> I just finished testing a bunch of LSI cards recently enough that I
> haven't gotten to fixing the outdated info on there yet about that company.

I am thinking we should point to that wiki from our official docs so
people find that information easily.

--
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do

Re: Daylight savings time confusion

From
Alvaro Herrera
Date:
Tom Lane wrote:

> If my guesses are correct, then the minimum change to avoid this type
> of problem in the future is to change UTCTimestamp to be declared as
> timestamp WITHOUT time zone, so that you don't get two extra zone
> rotations in there.  However, I would strongly suggest that you rethink
> how you're storing the data altogether.  Two columns that represent the
> identical item of information is not good database design according to
> any theory I've ever heard.  What I'd store is a single fire_date column
> that is of type timestamp with time zone and is just assigned directly
> from current_timestamp without any funny business.  Internally it is UTC
> and completely unambiguous.  Subsequently you can read it out in any
> time zone you want, either by setting TimeZone appropriately or by using
> the AT TIME ZONE construct to do a one-time conversion.

And possibly store the original timezone as a separate column, if that
information is of any value.
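
As a concrete sketch of what Tom and Alvaro describe (table and column names here are illustrative, not from the original schema):

```sql
-- A single timestamptz column; internally stored as UTC, so unambiguous.
CREATE TABLE events (
    event_id   serial PRIMARY KEY,
    fire_date  timestamp with time zone NOT NULL DEFAULT current_timestamp,
    orig_zone  text  -- optional: the original timezone, if that's worth keeping
);

-- Read it back in any zone, either via the session setting...
SET TimeZone = 'America/New_York';
SELECT fire_date FROM events;

-- ...or with a one-time conversion using AT TIME ZONE:
SELECT fire_date AT TIME ZONE 'UTC' FROM events;
```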

--
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

Re: Daylight savings time confusion

From
"Rob Richardson"
Date:
Tom,

You said, "It seems to me that you're not entirely understanding how
timestamps work in Postgres."  That is an understatement!

Thank you very much for your explanation.  I have forwarded it to the
other members of my development group, with my suggestion that we follow
your ideas for future projects.  I am not sure how easy it will be to
retrofit existing projects, but I am sure it should be done.

One question:  We have customers all over the world.  It would be best
if we could rely on the operating system (usually Windows Server 2003)
to tell us what time zone we're in, rather than asking for a specific
timezone when we want to know a wallclock time.  Is that possible?  If
not, it's not that big a deal because our database includes a table
named system_info that contains a single record describing the
customer's environment.  We could just add a timezone field to that
table.  But how would we do that?  What data type should that column
have, and what would a query look like that converts a time from UTC to
local time based on that field?

As I was typing that question, I think I came up with the answer:  the
question is irrelevant.  The reason for having a field to store times in
UTC is so that intervals between times can be calculated without
worrying about daylight savings time.  But Postgres will take the
timezone into account when calculating intervals, so there is no reason
at all to store a UTC version of the time.
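
For example (zone chosen for illustration; the 2010 US DST transition fell on
March 14):

```sql
SET TimeZone = 'America/New_York';
-- timestamptz subtraction happens in absolute (UTC) time, so the
-- "spring forward" hour is accounted for automatically:
SELECT timestamptz '2010-03-15 12:00' - timestamptz '2010-03-10 12:00';
-- 4 days 23:00:00, not a flat 5 days
```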

And, as you pointed out, storing the same value twice is horrible
database design.

RobR

Re: Daylight savings time confusion

From
Alban Hertroys
Date:
On 22 Mar 2010, at 14:08, Rob Richardson wrote:

> One question:  We have customers all over the world.  It would be best
> if we could rely on the operating system (usually Windows Server 2003)
> to tell us what time zone we're in, rather than asking for a specific
> timezone when we want to know a wallclock time.  Is that possible?  If

Usually that timezone is set in the client program that connects to the database. If that program lives on a central
location instead of at your customers' then you may be able to determine their timezone from the client they are using
upstream and pass it along to the database server.

For example, web browsers often pass along what timezone they're connecting from, so you may be able to set the "client
timezone" based on that information.
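
For instance, if the web tier detects the browser's zone (the zone name here is just an example), it can pass it straight through:

```sql
-- Per-connection: all timestamptz values are rendered in this zone
SET TimeZone = 'Europe/Stockholm';

-- Per-query alternative, leaving the session setting alone:
SELECT now() AT TIME ZONE 'Europe/Stockholm';
```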

A drawback of storing a client's timezone at the server is that you would be wrong if they are connecting from another
location than they usually do, for example while at a conference in a different country. If you leave determining the
timezone up to them you can't ever be wrong ;)

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.

