Re: Millions of tables - Mailing list pgsql-performance

From Stuart Bishop
Subject Re: Millions of tables
Date
Msg-id CADmi=6NOA+YbzBTuFmmuQ7Kw61YkAPpYS14w9McjpPivwz-p7g@mail.gmail.com
In response to Re: Millions of tables  (Greg Spiegelberg <gspiegelberg@gmail.com>)
List pgsql-performance


On 26 September 2016 at 20:51, Greg Spiegelberg <gspiegelberg@gmail.com> wrote:

An alternative, if you exhaust or don't trust the other options: use a foreign data wrapper to access your own custom storage. You expose a single table at the PG level, and shard the data yourself into 8 bazillion separate stores underneath, in whatever structure suits your read and write operations (maybe reusing an embedded db engine, an ordered flat file+log+index, whatever).


However, even 8 bazillion FDWs may cause an "overflow" of relations, at the cost of having an efficient storage engine act more like a traffic cop.  In such a case, I would opt to put that logic in the app and access the true storage directly rather than go through FDWs.

I mean one FDW table, which shards internally to 8 bazillion stores on disk. It holds the sharding key, can calculate exactly which store(s) need to be hit, and returns the rows, so to PostgreSQL it looks like one big table with 1.3 trillion rows. And if it doesn't do that in 30ms, you get to blame yourself :)
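As a rough illustration of the idea (not any real FDW API — in practice you'd implement this behind something like Multicorn or a C-level FDW), the core is just deterministic key-to-store routing, so a point read touches exactly one small store. All names here (`N_SHARDS`, `shard_for`, `ShardedStore`) are hypothetical, and dicts stand in for the on-disk stores:

```python
# Hypothetical sketch of the sharding arithmetic a single FDW table
# could use internally; dicts stand in for 8 bazillion on-disk stores.
import hashlib

N_SHARDS = 4096  # stand-in for the real store count

def shard_for(key: str) -> int:
    """Map a sharding key deterministically to exactly one store."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % N_SHARDS

class ShardedStore:
    """One logical table backed by many small per-shard stores."""

    def __init__(self):
        self.shards = [dict() for _ in range(N_SHARDS)]

    def insert(self, key, row):
        self.shards[shard_for(key)][key] = row

    def lookup(self, key):
        # Only the one store holding this key is touched, which is
        # what keeps point reads fast regardless of total row count.
        return self.shards[shard_for(key)].get(key)
```

A qual on the sharding key lets the wrapper compute `shard_for(key)` up front and skip every other store entirely; a query without that qual degenerates to scanning all shards.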


--
