Re: Re: [External] Re: Separate volumes - Mailing list pgsql-sql

From Bruce Momjian
Subject Re: Re: [External] Re: Separate volumes
Date
Msg-id 20200410210154.GC24987@momjian.us
In response to Re: Re: [External] Re: Separate volumes  (Erik Brandsberg <erik@heimdalldata.com>)
Responses Re: Re: [External] Re: Separate volumes  (Erik Brandsberg <erik@heimdalldata.com>)
List pgsql-sql
On Fri, Apr 10, 2020 at 04:52:06PM -0400, Erik Brandsberg wrote:
> A modern filesystem can help avoid even this complexity.  As an example, I am
> managing one PG setup that is self-hosted on an AWS EC2 instance, with 16TB of
> raw storage.  The bulk of that storage is in st1, the cheapest rotating-disk
> capacity available in EBS, but it uses ZFS as the filesystem (with
> compression, so realistically about 35TB of logical data).  The instance type
> is a z1d.metal, which has two 900GB NVMe drives; these have been divided to
> provide swap space as well as ZFS read and write caching.  This setup has
> largely offset the slow performance of the st1 disks and kept performance
> usable (most of the data is legacy and rarely used).  I'm a big fan of keeping
> the DB configuration simple, as it is way too easy to overlook tuning of a
> tablespace for an index, causing performance problems, while if you keep it
> auto-tuning at the filesystem level, it "just works".
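A layout like the one Erik describes might be sketched as below. This is an illustrative assumption, not his actual configuration: the device names (/dev/xvdf, /dev/nvme1n1p2, etc.), pool name, and partition scheme are all hypothetical, and a real deployment would size and mirror the vdevs to taste.

```shell
# Hypothetical sketch: bulk capacity on st1 EBS volumes, with local NVMe
# partitions used as ZFS write log (SLOG) and read cache (L2ARC).
# Device names and sizes are assumptions, not taken from the post.
zpool create tank /dev/xvdf /dev/xvdg \
    log mirror /dev/nvme1n1p2 /dev/nvme2n1p2 \
    cache /dev/nvme1n1p3 /dev/nvme2n1p3

# Compression is how ~16TB of raw st1 can hold ~35TB of logical data.
zfs set compression=lz4 tank

# Often recommended for PostgreSQL on ZFS: match the 8kB database page size.
zfs set recordsize=8k tank

zfs create -o mountpoint=/var/lib/postgresql tank/pgdata
```

The remaining NVMe partitions (e.g. /dev/nvme1n1p1) would be enabled as swap with mkswap/swapon, matching the "divided to provide swap space" part of the setup.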

You are saying the cloud automatically moves data between the fast and
slow storage?  I know many NAS systems do this, but I have also seen
problems when NAS systems guess wrong.

-- 
  Bruce Momjian  <bruce@momjian.us>        https://momjian.us
  EnterpriseDB                             https://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +


