Poverty Partitioning, include indexes - Mailing list pgsql-admin

From Simon Liesenfeld
Subject Poverty Partitioning, include indexes
Date
Msg-id 1102548183.19607699.1540963534074@mail.yahoo.com
Responses Re: Poverty Partitioning, include indexes
List pgsql-admin
Hi all,

after decades I had a look at Postgres again last week, prompted by V11.
It is quite impressive what the Postgres team has achieved since then.
The performance of recursive CTEs in particular is superb,
especially in combination with window functions such as lag, lead and row_number.

Before diving into the new partitioning methods, which I assume impose constraints on the data model used,
I observed that a table is stored as a file, e.g. data/base/1234/5678. As soon as the file size exceeds
1 GB, further files are created: 5678.1 for the next gigabyte, 5678.2 for another, and so on.
My CTE test table had 10 of them.
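
In case it helps to reproduce this, I located the segment files roughly like so (the table name cte_test is only a stand-in for my test table):

    -- cte_test is a stand-in for my test table
    SELECT pg_relation_filepath('cte_test');
    -- returns e.g. base/1234/5678; the extra 1 GB segments appear next to
    -- that file in the data directory as 5678.1, 5678.2, and so on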

I shut down the pg-server, moved the segments with odd extension numbers to another disk, and created
symbolic links in data/base/1234/ pointing to the moved files.
After restarting the pg-server I was surprised that everything still worked well, and I think I noticed some speedup of my query. I assume that is because pg reads the data of the files in parallel, and two disks can shuffle more data in the same time than one.
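
For completeness, the steps were roughly the following (the second mount point and directory are only illustrative):

    # server stopped beforehand; /mnt/disk2 is only an example mount point
    cd $PGDATA/base/1234
    mv 5678.1 /mnt/disk2/pgseg/5678.1
    ln -s /mnt/disk2/pgseg/5678.1 5678.1
    # repeated for 5678.3, 5678.5 and the other odd-numbered segments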

Is this a recommendable practice? Is there supported syntax in Postgres to do this via CREATE TABLE or ALTER TABLE?
Can somebody confirm the performance gain, and does it improve further with more disks?

Oh, I spotted another highlight in V11: the INCLUDE (covering) indexes.
Queries on tables with plenty of attributes actually speed up by roughly the ratio
number of all attributes / number of selected attributes
if an INCLUDE index is built on the selected attributes plus the PK,
with small index files and no impact on the data model.
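
To make that concrete (table and column names are invented), the kind of covering index I mean looks like this:

    -- wide_tbl, id, a and b are invented names; id is the PK
    CREATE INDEX wide_tbl_cover_idx ON wide_tbl (id) INCLUDE (a, b);
    -- a query selecting only the covered columns can use an index-only scan:
    SELECT a, b FROM wide_tbl WHERE id BETWEEN 1000 AND 2000;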
