Re: how to handle a big table for data log - Mailing list pgsql-performance

From Greg Spiegelberg
Subject Re: how to handle a big table for data log
Date
Msg-id AANLkTikCsVPR5836FXk0quc4K-PogbxPd4yfhrig4HvP@mail.gmail.com
In response to Re: how to handle a big table for data log  (kuopo <spkuo@cs.nctu.edu.tw>)
List pgsql-performance
On Tue, Jul 20, 2010 at 9:51 PM, kuopo <spkuo@cs.nctu.edu.tw> wrote:
Let me make my problem clearer. The requirement is to log data from a set of objects consistently. For example, the object may be a mobile phone that reports its location every 30s. To record its historical trace, I create a table like:
CREATE TABLE log_table
(
  id integer NOT NULL,
  data_type integer NOT NULL,
  data_value double precision,
  ts timestamp with time zone NOT NULL,
  CONSTRAINT log_table_pkey PRIMARY KEY (id, data_type, ts)
);
In my location log example, the field data_type could be longitude or latitude.
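
If I follow, each 30s report becomes two rows (one per data_type), and a trace is reassembled with a self-join. A minimal sketch against that schema, assuming data_type 1 = longitude and 2 = latitude (those codes are my guess, not from your post):

-- One report from phone 42 becomes two rows, one per data_type,
-- sharing the same timestamp.
-- Assumption: data_type 1 = longitude, 2 = latitude.
INSERT INTO log_table (id, data_type, data_value, ts) VALUES
  (42, 1, -104.99, '2010-07-20 21:51:00+00'),
  (42, 2,   39.74, '2010-07-20 21:51:00+00');

-- Reconstruct the historical trace of phone 42 by pairing the
-- longitude row with the latitude row taken at the same instant.
SELECT lon.ts,
       lon.data_value AS longitude,
       lat.data_value AS latitude
FROM log_table lon
JOIN log_table lat ON lat.id = lon.id AND lat.ts = lon.ts
WHERE lon.id = 42
  AND lon.data_type = 1
  AND lat.data_type = 2
ORDER BY lon.ts;

Every trace query has to pair rows like this, and the table grows at two rows per object per 30s, which is why the table gets big quickly.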


Many moons ago I witnessed GridSQL in action managing a massive log table. From memory, the configuration was 4 database servers holding a cumulative 500M+ records, and queries were running in under 5 ms. May be worth a look.

http://www.enterprisedb.com/community/projects/gridsql.do

Greg
