Re: Very large tables - Mailing list pgsql-general

From Grzegorz Jaśkiewicz
Subject Re: Very large tables
Date
Msg-id 2f4958ff0811280803o6849d83fjaeed7a4b69aa8586@mail.gmail.com
In response to Re: Very large tables  (Alvaro Herrera <alvherre@commandprompt.com>)
Responses Re: Very large tables  ("William Temperley" <willtemperley@gmail.com>)
List pgsql-general


On Fri, Nov 28, 2008 at 3:48 PM, Alvaro Herrera <alvherre@commandprompt.com> wrote:
William Temperley wrote:

> I've been asked to store a grid of 1.5 million geographical locations,
> fine. However, associated with each point are 288 months, and
> associated with each month are 500 float values (a distribution
> curve), i.e. 1,500,000 * 288 * 500 = 216 billion values :).
>
> So a 216 billion row table is probably out of the question. I was
> considering storing the 500 floats as bytea.

What about a float array, float[]?
You seriously don't want to use bytea to store anything when a matching datatype exists in your database of choice.
Also, consider partitioning it :)
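
A minimal sketch of the float[]-column-plus-partitioning idea (table and column names here are invented for illustration; the inheritance-based partitioning is what PostgreSQL 8.x offers):

    CREATE TABLE dist_curve (
        location_id integer  NOT NULL,
        month       integer  NOT NULL,   -- 1..288
        curve       float8[] NOT NULL,   -- the 500-value distribution curve
        PRIMARY KEY (location_id, month)
    );

    -- one child table per year of months, used with constraint_exclusion = on
    CREATE TABLE dist_curve_y01 (
        CHECK (month BETWEEN 1 AND 12)
    ) INHERITS (dist_curve);

That keeps the row count at 1,500,000 * 288 = ~432 million instead of 216 billion.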

Try to follow the rules of normalization; with that sort of data, the less storage space used the better :)
And I would look for a machine with rather fast RAID storage :) (and a spacious one, too).
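
Rough numbers for the hardware point: 216 billion float8 values come to about 216e9 * 8 bytes ≈ 1.7 TB of raw float data before any row headers or indexes, and each 500-value array is ~4 kB, so the arrays will likely be stored out of line via TOAST.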



--
GJ
