Re: designing time dimension for star schema - Mailing list pgsql-general

From Mark Wong
Subject Re: designing time dimension for star schema
Date
Msg-id CAE+TzGr8Pvu7dCR2KgssCHrTiLRzk-2u5qjvM_GN-3RzyrHnsQ@mail.gmail.com
In response to designing time dimension for star schema  (Mark Wong <markwkm@gmail.com>)
List pgsql-general
On Mon, Feb 10, 2014 at 8:45 AM, Mark Wong <markwkm@gmail.com> wrote:
> Hello everybody,
>
> I was wondering if anyone had any experiences they can share when
> designing the time dimension for a star schema and the like.  I'm
> curious about how well it would work to use a timestamp for the
> attribute key, as opposed to a surrogate key, and populating the time
> dimension with triggers on insert to the fact tables.  This is
> something that would have data streaming in (as opposed to bulk
> loading) and I think we want time granularity to the minute.

Hello everybody,

I did a simple experiment and just wanted to share.  Hopefully this
wasn't too simple.  On a 72GB 15K RPM 2.5" drive, I measured how long
it took to insert (committing after each insert) 100,000 rows into
three tables: one with a bigint column, one with a timestamp with
time zone column, and one with a timestamp with time zone column plus
an insert trigger.  The bigint and plain timestamp tables each took
about 10 minutes to insert 100,000 rows, and adding the trigger
increased that to about 11 minutes.
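
For reference, the kind of trigger-maintained time dimension being
discussed might look roughly like this (a minimal sketch; the table,
column, and function names are made up for illustration):

CREATE TABLE time_dim (
    time_key timestamptz PRIMARY KEY,  -- one row per minute
    year     smallint NOT NULL,
    month    smallint NOT NULL,
    day      smallint NOT NULL,
    hour     smallint NOT NULL,
    minute   smallint NOT NULL
);

CREATE TABLE fact_events (
    event_time timestamptz NOT NULL,  -- joins to time_dim.time_key
                                      -- via date_trunc('minute', ...)
    measure    bigint
);

-- Populate the dimension as rows stream into the fact table.
CREATE OR REPLACE FUNCTION fact_events_time_dim() RETURNS trigger AS $$
DECLARE
    t timestamptz;
BEGIN
    t := date_trunc('minute', NEW.event_time);
    -- Add the minute to the dimension if it is not already there
    -- (concurrent inserts of the same minute are not handled in
    -- this sketch).
    INSERT INTO time_dim (time_key, year, month, day, hour, minute)
    SELECT t,
           EXTRACT(year FROM t), EXTRACT(month FROM t),
           EXTRACT(day FROM t), EXTRACT(hour FROM t),
           EXTRACT(minute FROM t)
    WHERE NOT EXISTS (SELECT 1 FROM time_dim WHERE time_key = t);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER fact_events_time_dim
    BEFORE INSERT ON fact_events
    FOR EACH ROW EXECUTE PROCEDURE fact_events_time_dim();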

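As for the test itself, one way to set it up would be along these
lines (again only a sketch; the table names and the use of pgbench
are assumptions, not necessarily what was actually run):

-- One narrow table per case; the third gets the trigger from above.
CREATE TABLE t_bigint  (val bigint);
CREATE TABLE t_ts      (val timestamptz);
CREATE TABLE t_ts_trig (event_time timestamptz);

CREATE TRIGGER t_ts_trig_time_dim
    BEFORE INSERT ON t_ts_trig
    FOR EACH ROW EXECUTE PROCEDURE fact_events_time_dim();

Each case can then be driven with 100,000 single-row transactions
using a one-line custom pgbench script, e.g. insert_ts_trig.sql
containing just

    INSERT INTO t_ts_trig (event_time) VALUES (now());

and run as

    pgbench -n -c 1 -t 100000 -f insert_ts_trig.sql dbname
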
Regards,
Mark

