Using high speed swap to improve performance? - Mailing list pgsql-performance

Hi there,


About a year ago we set up a machine with sixteen 15k disk spindles on Solaris using ZFS. Now that Oracle has taken over Sun and is closing down Solaris, we want to move away (we are more familiar with Linux anyway).


So the plan is to move to Linux and put the data on a SAN using iSCSI (over two or four network interfaces). This however leaves us with 16 very nice disks doing nothing, which sounds like a waste. If we were staying on Solaris, ZFS would have a solution: use them as L2ARC. But there is no Linux filesystem with that feature (ZFS on FUSE is not really an option).


So I was thinking: why not build a big, fast array out of 14 of the disks (RAID 1, 10 or 5) and use it as a huge swap device? Latency will be lower than the SAN can provide, throughput will also be better, and it would relieve the SAN of a lot of read IOPS.

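Roughly what I have in mind on the Linux side (the device names and RAID level below are just an example, not a final choice):

    # Build one RAID 10 array out of 14 of the disks (device names are placeholders)
    mdadm --create /dev/md0 --level=10 --raid-devices=14 /dev/sd[b-o]

    # Turn the array into swap and enable it with a high priority,
    # so the kernel prefers it over any other swap
    mkswap /dev/md0
    swapon --priority 10 /dev/md0

    # Make it persistent across reboots
    echo '/dev/md0 none swap sw,pri=10 0 0' >> /etc/fstab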

So I could create a 1TB swap device and give it to the OS alongside the 64GB of RAM. Then I can configure Postgres to use more memory than the machine physically has, so it starts swapping to the fast array; to Postgres it would appear that the complete database fits in memory. The question is: will this do any good? And if so, what will happen?

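On the Postgres side I was thinking of something along these lines in postgresql.conf (the numbers are just placeholders to illustrate the idea, not tuned values):

    # Deliberately sized well beyond the 64GB of physical RAM, so the
    # "extra" buffer cache spills onto the fast local swap array
    shared_buffers = 512GB

    # Tell the planner the whole ~1TB database is effectively cached
    effective_cache_size = 1000GB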

Kind regards,


Christiaan
