Re: Using high speed swap to improve performance? - Mailing list pgsql-performance

From Christiaan Willemsen
Subject Re: Using high speed swap to improve performance?
Date
Msg-id vmime.4bbae50e.61d6.7eb43efb6b92d0bd@yoda.dhcp.tecnocon.com
In response to Using high speed swap to improve performance?  (Christiaan Willemsen <cwillemsen@technocon.com>)
List pgsql-performance

Hi Scott,

That sounds like a useful thing to do, but the big advantage of the SAN is that if the physical machine goes down, I can quickly start up a virtual machine using the same database files to act as a fallback. It will have less memory and fewer CPUs, but it will do fine for some time.

So if I put fast tables on local storage, I lose those tables when the machine goes down.

Putting indexes there, however, might be interesting. What will PostgreSQL do when it is started on the backup machine and finds that the index files are missing? Will it recreate those files, will it panic and not start at all, or can we just manually reindex?
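For reference, a hypothetical recovery sketch (database, table, and index names are illustrative, not from this thread). As far as I know, PostgreSQL will start without the tablespace, but queries that touch the missing indexes will error, so recovery means rebuilding them on surviving storage:

```shell
# After failing over to a machine that lacks the local tablespace:
# the server should start, but queries using the missing indexes will
# fail. Drop and recreate them on storage the backup machine can see.
psql -d mydb <<'SQL'
DROP INDEX IF EXISTS orders_customer_idx;          -- name is illustrative
CREATE INDEX orders_customer_idx
    ON orders (customer_id) TABLESPACE pg_default; -- back on SAN storage
SQL
```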

Kind regards,

Christiaan
-----Original message-----
From: Scott Marlowe <scott.marlowe@gmail.com>
Sent: Sun 04-04-2010 23:08
To: Christiaan Willemsen <cwillemsen@technocon.com>;
CC: pgsql-performance@postgresql.org;
Subject: Re: [PERFORM] Using high speed swap to improve performance?

On Fri, Apr 2, 2010 at 1:15 PM, Christiaan Willemsen
<cwillemsen@technocon.com> wrote:
> Hi there,
>
> About a year ago we setup a machine with sixteen 15k disk spindles on
> Solaris using ZFS. Now that Oracle has taken over Sun and is closing up Solaris,
> we want to move away (we are more familiar with Linux anyway).
>
> So the plan is to move to Linux and put the data on a SAN using iSCSI (two
> or four network interfaces). This however leaves us with 16 very nice
> disks doing nothing. Sounds like a waste. If we were to use Solaris,
> ZFS would have a solution: use them as L2ARC. But there is no Linux filesystem
> with those features (ZFS on FUSE is not really an option).
>
> So I was thinking: Why not make a big fat array using 14 disks (raid 1, 10
> or 5), and make this a big and fast swap disk. Latency will be lower than
> what the SAN can provide, throughput will also be better, and it will relieve
> the SAN of a lot of read IOPS.
>
> So I could create a 1TB swap disk, and add it to the OS next to the 64GB
> of memory. Then I can set Postgres to use more than the RAM size so it will
> start swapping. It would appear to Postgres that the complete database fits
> in memory. The question is: will this do any good? And if so, what
> will happen?
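For context, a minimal sketch of the setup the quoted mail describes (device names and disk count are hypothetical; this needs root and real hardware):

```shell
# Build a RAID-10 array from the spare local disks and use it as
# high-priority swap (device names are illustrative).
mdadm --create /dev/md0 --level=10 --raid-devices=14 /dev/sd[b-o]
mkswap /dev/md0
swapon -p 10 /dev/md0   # higher priority than any SAN-backed swap
```

Whether letting the kernel page PostgreSQL's memory to this array actually helps is exactly the open question in the thread; the swap device would be managed by the kernel's LRU, not by PostgreSQL's buffer manager.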

I'd make a couple of RAID-10 arrays out of them and use those for heavily
used tables and/or indexes etc...
