Re: Configuration - Mailing list pgsql-performance

From: Filip Rembiałkowski
Subject: Re: Configuration
Date:
Msg-id: CAP_rww=FWC5FZB=e46qMsfVon79TSFF=X994F9k0W3yW05LQjQ@mail.gmail.com
In response to: Configuration (sugnathi hai <suganhai@yahoo.com>)
List: pgsql-performance
On Mon, Jun 1, 2020 at 2:09 PM sugnathi hai <suganhai@yahoo.com> wrote:

> In PgTune, I can get configuration changes based on RAM, disk, and number of connections.
>
> But if we want to recommend RAM, disk, and number of connections based on DB size, is there any calculation for that?
>
> For example, for a 1 TB database, how much RAM and disk space is required for good performance?
>
> The DB size will increase by 20 GB per day.
> Frequent deletes and inserts will happen.

The database size by itself is not enough to support any sensible
RAM recommendation. Connection count and usage patterns are
critical.

There are 1 TB databases that work really well with as little as
40 GB of RAM, if the number of connections is limited, all queries
are index-based, and the active data set is fairly small.
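One rough way to check how small the active data set really is, is
the buffer cache hit ratio. A minimal sketch against the standard
pg_stat_database view (the usual rule of thumb, not a hard limit, is
that an OLTP workload wants this near 99%):

    -- Buffer cache hit ratio for the current database; a ratio well
    -- below ~99% on an OLTP system hints that the active data set
    -- does not fit in shared_buffers / OS cache.
    SELECT datname,
           blks_hit,
           blks_read,
           round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 2)
               AS cache_hit_pct
    FROM pg_stat_database
    WHERE datname = current_database();

Note that these counters are cumulative since the last statistics
reset, so measure over a representative period of activity.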

On the other hand, if you have many connections and non-indexed
access, you might need 10x or 20x more RAM for sustained
performance.
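To see whether non-indexed access is actually happening, the
cumulative statistics views help. A quick heuristic against
pg_stat_user_tables (seq_scan > idx_scan is only a starting point;
small tables are often scanned sequentially on purpose):

    -- Tables where sequential scans outnumber index scans;
    -- large tables near the top are candidates for indexing work.
    SELECT relname,
           seq_scan,
           idx_scan,
           n_live_tup
    FROM pg_stat_user_tables
    WHERE seq_scan > coalesce(idx_scan, 0)
    ORDER BY seq_scan DESC
    LIMIT 10;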

That's why the PgTune configurator requires you to enter RAM,
connection count, and a DB access-pattern class (OLTP/Web/DWH).
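For illustration only (these are generic rules of thumb, not
PgTune's actual output): given inputs like 64 GB of RAM, 200
connections, and an OLTP pattern, such a tool derives settings along
these lines:

    -- Hypothetical example values for RAM=64GB, connections=200, OLTP.
    ALTER SYSTEM SET shared_buffers = '16GB';        -- ~25% of RAM; needs restart
    ALTER SYSTEM SET effective_cache_size = '48GB';  -- ~75% of RAM; planner hint only
    ALTER SYSTEM SET work_mem = '20MB';              -- per sort/hash node, per query
    ALTER SYSTEM SET maintenance_work_mem = '2GB';   -- VACUUM, CREATE INDEX
    ALTER SYSTEM SET max_connections = '200';        -- needs restart

The point is that none of these follow from database size alone;
every one depends on RAM, connection count, or workload.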

Anyway, what PgTune gives is just an approximate "blind guess"
recommendation. If auto-configuration were easy, we would have had
it in core Postgres a long time ago. It would be nice to have a
configuration advisor based on active data set size... but I doubt
it will be created, for several reasons: first, it would still be a
"blind guess"; second, the current version of PgTune is not that
nice for contributors (fairly ugly JS code).


