Thread: Qualifying use of separate TABLESPACES (performance/functionality)

Qualifying use of separate TABLESPACES (performance/functionality)

From
Imndlf8r@gmx.com
Date:
Hi,

I'm designing an appliance (think: 24/7/365/decades) in which Postgres
will act as a "skin" on the sole persistent store.  I.e., there is no
"filesystem" visible to clients; *all* persistent (and temporary/shared)
data is presented through Postgres.

There are three different types of data maintained in the RDBMS:
- static (firmware images, "reference" data, appliance configuration, etc)
- evolving (largely static but slowly evolving)
- transient (temp tables/joins, etc. built by cooperating groups of clients)

As this is an appliance, cost is important -- along with availability.
It's not acceptable to plan on a hardware upgrade in anything less than
a decade (or more).

I would, ideally, like to tailor the media used for each type of (above)
data.  E.g., QLC SSDs for static data, SLC SSDs (or HDDs) for evolving
and RAMdisks for transient.
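For concreteness, the split I have in mind would look something like the
following (mount points, tablespace names, and table names are all
placeholders, not a worked design):

```sql
-- One tablespace per media type; LOCATION must be an existing,
-- empty directory owned by the postgres user.
CREATE TABLESPACE static_data   LOCATION '/mnt/qlc_ssd/pg';  -- QLC SSD
CREATE TABLESPACE evolving_data LOCATION '/mnt/slc_ssd/pg';  -- SLC SSD or HDD
CREATE TABLESPACE scratch       LOCATION '/mnt/ramdisk/pg';  -- RAM-backed

-- Temporary tables/sort files can be steered onto the RAM-backed
-- tablespace via the temp_tablespaces setting:
SET temp_tablespaces = 'scratch';

-- Individual tables are then pinned at creation time:
CREATE TABLE firmware_images (
    id    integer PRIMARY KEY,
    image bytea
) TABLESPACE static_data;
```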

Can I do this?  And, would my "expectations" for the types of accesses
in each tablespace be intuitive?  E.g., could I expect no/few WRITES
to the tablespace with the static data if I never call for it to be
explicitly updated??  (IIRC, Oracle allows a tablespace to be qualified
as "READ-ONLY" and implemented on true R/O media)

Or, does Postgres expect to be able to access any media however it wants
(i.e., R/w), regardless of the expected access patterns of the data stored
there?



Re: Qualifying use of separate TABLESPACES (performance/functionality)

From
Christophe Pettus
Date:

> On Jul 16, 2024, at 21:45, Imndlf8r@gmx.com wrote:
> Or, does Postgres expect to be able to access any media however it wants
> (i.e., R/w), regardless of the expected access patterns of the data stored
> there?

Well, yes and no.

PostgreSQL will not respond well to having media that is literally read-only in the sense that a write operation to it
will fail.  At some point, it will need to (for example) vacuum the tables, and that will mean writes.  That being
said, if the only things in a tablespace are tables (and their indexes) that are written once and then never again, you
won't be constantly getting writes to them.  You may want to do a VACUUM (ANALYZE, FREEZE) on the tables in those
tablespaces once the data is loaded.
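For example, as a one-time step after the initial bulk load (table names here
are placeholders):

```sql
-- FREEZE marks every row as frozen, so autovacuum has no
-- wraparound-driven reason to rewrite these pages later;
-- ANALYZE captures planner statistics at the same time.
VACUUM (ANALYZE, FREEZE) firmware_images;
VACUUM (ANALYZE, FREEZE) reference_data;
```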

PostgreSQL will be generating WAL as you do data-modifying operations, so that should be aimed at storage that has very
low write fatigue.  Be very cautious about using a RAM disk for anything, though, unless you are *very* confident the
battery backup on it is 100% reliable.  PostgreSQL isn't designed to handle recovery from having the WAL just disappear
out from under it on a crash.


Re: Qualifying use of separate TABLESPACES (performance/functionality)

From
Imndlf8r@gmx.com
Date:
On 7/16/2024 9:57 PM, Christophe Pettus wrote:
>> On Jul 16, 2024, at 21:45, Imndlf8r@gmx.com wrote: Or, does Postgres
>> expect to be able to access any media however it wants (i.e., R/w),
>> regardless of the expected access patterns of the data stored there?
>
> Well, yes and no.
>
> PostgreSQL will not respond well to having media that is literally read only
> in the sense that a write operation to it will fail.

Picking a nit here: if the write() never returns an error, then Postgres
never (?) knows that the write "didn't happen" (as expected).

If Postgres is writing the same data that ALREADY EXISTS, then the
physical medium could be immutable and Postgres would never know (see
next comment).

> At some point, it will
> need to (for example) vacuum the tables, and that will means writes.

But, if nothing has ever been *written* (UPDATE) to that tablespace, what
value would VACUUM have?

>  That
> being said, if the only thing in a tablespace are tables (and their indexes)
> that are written once then never again, you won't be constantly getting
> writes to them.

Are there ever any "anonymous"/temporary objects that might be created
alongside these?

> You may want to do a VACUUM (ANALYZE, FREEZE) on the tables
> in those tablespaces once the data is loaded.

Ideally, I would like to find a way to "import" a preexisting table, index,
etc. without literally having to have two copies (the "original" being
read and the copy being *created*).  Having the original fill that role
*immutably* assures the customer that the original data hasn't been
altered in any way, even during its "initialization" within the RDBMS.

[It also avoids the issue of ever needing two *physical* copies of the data]
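(For flat files, something like file_fdw seems to come close -- it presents an
existing file as a read-only foreign table without copying it into heap
storage -- though as I understand it that doesn't cover native tables or
indexes.  A sketch, with hypothetical paths and column names:)

```sql
-- file_fdw exposes an on-disk file as a read-only foreign table;
-- the file itself is never copied into PostgreSQL's own storage.
CREATE EXTENSION file_fdw;
CREATE SERVER static_files FOREIGN DATA WRAPPER file_fdw;
CREATE FOREIGN TABLE reference_data (
    key   text,
    value text
) SERVER static_files
  OPTIONS (filename '/mnt/qlc_ssd/reference.csv', format 'csv');
```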

> PostgreSQL will be generating WAL as you do data-modifying operations, so
> that should be aimed at storage that very low write fatigue.

Yes, but there is also the need to safeguard against read-disturb-induced
errors.  Using MLC/TLC/QLC/PLC devices on the "static" portions of the
data puts them at higher risk.  The FLASH controller needs to rewrite
errored pages, even if the application isn't explicitly altering the data.

I.e., "read only" can be interpreted on several different layers:
- the application never calls for a write (but the DBMS silently does)
- the DBMS never calls for a write (but the memory controller silently does)
- the physical memory is truly immutable

The last being the ideal.

> Be very
> cautious about using a RAM disk for anything, though, unless you are *very*
> confident the battery backup on it is 100% reliable.  PostgreSQL isn't
> designed to handle recovery from having the WAL just disappear out from
> under it on a crash.

The server is battery backed.  As long as the server can "fix things" before
the battery is exhausted, then all is well.  (Imagine the case of a physical
disk dying; what recourse, there?)

My recovery strategy is to checkpoint the database periodically so the most
recent snapshot can be reloaded.
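Roughly along these lines (assuming some external mechanism captures the
filesystem snapshot between the backup start/stop calls):

```sql
-- Flush dirty buffers, then bracket an external filesystem
-- snapshot of the data directory:
CHECKPOINT;
SELECT pg_backup_start('periodic-snapshot');  -- PG 15+; pg_start_backup() on older releases
-- ... external snapshot taken here ...
SELECT pg_backup_stop();
```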

As I said, it's an appliance.  You don't have a DBA attending to the
database in your vehicle's navigation system (the system would be deemed
too costly if it had to support such an "accessory"  :> ).  The DBMS is
expected to be as reliable as the underlying data store.  It abstracts
references to the data in the store and ensures applications can't create
"bad" data (strict typing, constraints, triggers, etc.).