Re: Very large tables - Mailing list pgsql-general

From Simon Riggs
Subject Re: Very large tables
Date
Msg-id 1227894374.20796.228.camel@hp_dx2400_1
In response to Very large tables  ("William Temperley" <willtemperley@gmail.com>)
Responses Re: Very large tables  ("William Temperley" <willtemperley@gmail.com>)
List pgsql-general
On Fri, 2008-11-28 at 15:40 +0000, William Temperley wrote:
> Hi all
>
> Has anyone any experience with very large tables?
>
> I've been asked to store a grid of 1.5 million geographical locations,
> fine. However, associated with each point are 288 months, and
> associated with each month are 500 float values (a distribution
> curve), i.e. 1,500,000 * 288 * 500 = 216 billion values :).
>
> So a 216 billion row table is probably out of the question. I was
> considering storing the 500 floats as bytea.
>
> This means I'll need a table something like this:
>
> grid_point_id | month_id | distribution_curve
> (int4)        | (int2)   | (bytea?)
> --------------+----------+-------------------

I would look carefully at the number of bits required for each float
value. 4 bytes is the default, but you may be able to use fewer bits
than that, rather than relying on the default compression scheme
working in your favour. Custom datatypes are often good for this kind
of thing.

Not sure it matters what the month_id datatype is.

Everything else depends upon the usage characteristics. You may want to
consider using table or server partitioning also.
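A quick back-of-envelope estimate (my own arithmetic based on the figures
quoted above, not from the thread) shows why partitioning is worth
considering even with one bytea per (point, month) row:

```python
grid_points = 1_500_000
months = 288
floats_per_curve = 500

rows = grid_points * months              # one row per point per month
bytes_per_curve = floats_per_curve * 4   # 2000 bytes at float4 precision
payload_gb = rows * bytes_per_curve / 1e9

print(rows)        # 432000000 rows
print(payload_gb)  # 864.0 GB of raw curve data, before row overhead
```

Partitioning by month_id, for example, would split that into 288 partitions
of ~1.5 million rows (~3 GB of payload) each, which is far more manageable
for maintenance and pruning.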

--
 Simon Riggs           www.2ndQuadrant.com
 PostgreSQL Training, Services and Support

