Many thanks for your prompt reply, David. Allow me then to restate the questions, hoping that this better fits this mailing list.
MobilityDB is a time-series extension to PostgreSQL/PostGIS in which time-varying attributes (e.g., the gear or GPS location of a car) are semantically grouped into "units" (e.g., a trip of a car) and stored as temporal functions, e.g., a set of pairs (integer, timestamptz) for the gear (a temporal integer) or a set of triples (lon, lat, timestamptz) for the GPS location (a temporal point). All temporal types are stored using extended storage, e.g.,
CREATE TYPE tint (
internallength = variable,
[...]
storage = extended,
alignment = double,
[...]
);
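For concreteness, a single temporal value groups all the timestamped readings of one trip; a temporal integer literal looks roughly like this (timestamps and values are illustrative):

```sql
-- One tint value holding the gear changes of a whole trip
SELECT tint '[1@2020-01-01 08:00:00, 2@2020-01-01 08:00:10, 3@2020-01-01 08:00:20]';
```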
When ingesting mobility (IoT) data into MobilityDB, we receive very wide (2K attributes), high-frequency (every tenth of a second) records in flat format (e.g., CSV), and we need to store them in PostgreSQL tables using MobilityDB temporal types. In this scenario, the values of these temporal types can be very wide (on average 30K timestamped pairs/triples per trip).
As David suggested, this goes beyond the "traditional" usage of PostgreSQL. My questions are therefore:
* What is the suggested strategy for splitting these 2K attributes into vertically partitioned tables linked by the primary key (e.g., trip number in the example above)? Are there any limitations/best practices on the number/size of TOASTed attributes that a table should contain?
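To illustrate what I mean by vertical partitioning (table, column, and attribute names below are purely hypothetical), the 2K attributes would be split across several narrower tables sharing the trip key:

```sql
-- Core table: one row per trip, holding the trajectory
CREATE TABLE trip_core (
  trip_id    bigint PRIMARY KEY,
  vehicle_id bigint,
  trajectory tgeompoint      -- temporal point: the GPS track
);

-- One of several "satellite" tables, each holding a subset
-- of the remaining temporal attributes
CREATE TABLE trip_engine (
  trip_id bigint PRIMARY KEY REFERENCES trip_core (trip_id),
  gear    tint,              -- temporal integer
  rpm     tfloat             -- temporal float
  -- ... more engine-related temporal attributes
);
```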
* For each partitioned table containing N TOASTed attributes, given the above requirements, are there any limitations/best practices regarding the use of extended storage versus an alternative such as external?
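For reference, what I have in mind is switching individual columns from the default EXTENDED strategy (compress, then store out of line) to EXTERNAL (store out of line without compression), e.g.:

```sql
-- trip_engine/gear are hypothetical names from the partitioning
-- example above; SET STORAGE only affects newly stored values
ALTER TABLE trip_engine ALTER COLUMN gear SET STORAGE EXTERNAL;
```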
Many thanks for your insight.
Esteban