Guillaume Perréal wrote:
>
> I've got a big table of measures like that:
>
> CREATE TABLE measures (
>     stationCode varchar(8),
>     "when" timestamp,
>     value float,
>     quality char,
>     PRIMARY KEY (stationCode, "when")
> );
>
> Each station (identified by stationCode) could have up to 10**6 measures, so I
> want to split it into smaller tables to improve performance:
>
> CREATE TABLE measures<1st stationCode> (
>     "when" timestamp,
>     value float,
>     quality char,
>     PRIMARY KEY ("when")
> );
> CREATE TABLE measures<2nd stationCode> (
> ... and so on.
>
> Is there a way to handle that using the SQL and PL/pgSQL languages?
Pretty straightforward to do that with perl/DBI, or even bash & friends. But I recall that PL/pgSQL has major
problems with CREATE TABLE and other DDL statements.
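That said, if your version's PL/pgSQL supports EXECUTE on a dynamically built string (it appeared around 7.1),
something like the sketch below might work. This is only a sketch under that assumption; the function name
split_measures() and the measures_<code> naming scheme are mine, not anything from your post, and it assumes
your station codes are safe to embed in a table name:

CREATE FUNCTION split_measures() RETURNS integer AS '
DECLARE
    st RECORD;
BEGIN
    FOR st IN SELECT DISTINCT stationCode FROM measures LOOP
        -- create one table per station; assumes stationCode
        -- values are legal inside an identifier
        EXECUTE ''CREATE TABLE measures_'' || st.stationCode
            || '' ("when" timestamp, value float, quality char, ''
            || ''PRIMARY KEY ("when"))'';
        -- copy the rows for that station into its new table
        EXECUTE ''INSERT INTO measures_'' || st.stationCode
            || '' SELECT "when", value, quality FROM measures ''
            || ''WHERE stationCode = '' || quote_literal(st.stationCode);
    END LOOP;
    RETURN 1;
END;
' LANGUAGE 'plpgsql';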
BTW, I suspect your easiest significant performance improvement would come from adding an integer primary key for
stationCode rather than a varchar key, e.g.,
CREATE TABLE stations (
    id serial PRIMARY KEY,  -- or just integer
    code varchar(8),
    ...
);
CREATE TABLE measures (
    stationId integer,
    "when" timestamp,  -- "when" is a reserved word, so it needs quoting
    value float,
    quality char,
    PRIMARY KEY (stationId, "when")
);
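Populating and querying that would look something like the following. The table name measures_old and the
station code 'ABC' are made up for illustration:

-- fill stations from the codes already present
-- (assuming the original table was renamed to measures_old)
INSERT INTO stations (code)
SELECT DISTINCT stationCode FROM measures_old;

-- lookups then join through the small stations table instead of
-- comparing a varchar key on every one of the ~10**6 rows
SELECT m."when", m.value, m.quality
FROM measures m, stations s
WHERE s.code = 'ABC' AND m.stationId = s.id
ORDER BY m."when";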
If you try this, I'd be curious to hear the results. Your approach is surely faster, though by how much I don't know.
It just looks like a bit of a headache to manage, that's all.
Regards,
Ed Loehr