table design strategy - Mailing list pgsql-general

From Johnson, Shaunn
Subject table design strategy
Date
Msg-id 73309C2FDD95D11192E60008C7B1D5BB0452E162@snt452.corp.bcbsm.com
List pgsql-general
Howdy:
 
Running Postgres 7.1.3 on RedHat 7.2, kernel 2.4.7-10.
 
My question is about table design strategies ...
 
I have imported a table, broken up in such a fashion that
there are now 8 tables in Postgres.  For example: table_north,
table_south, table_east and table_west originally came from
a single table called 'table_direction' in another database.
On top of that, the tables span years, so I have tables called
'table_north_2000' and 'table_north_2001', 'table_south_2000'
and 'table_south_2001', and so on ...
 
Now that I have all 8 parts, I was thinking I'd like to:
 
* create indices on the matching columns in each table
* create a view that combines all 8 tables into ONE table again
  (a sketch follows this list)
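 
Something like this is what I have in mind (just a sketch;
'member_id' stands in for whatever column I actually query on,
and I'm assuming all 8 tables have identical column lists):
 
  -- index the lookup column in each piece (repeat for all 8)
  CREATE INDEX idx_north_2000_member ON table_north_2000 (member_id);
  CREATE INDEX idx_north_2001_member ON table_north_2001 (member_id);
  -- ... and so on for south, east and west ...
 
  -- one view that presents the 8 tables as a single table again
  CREATE VIEW table_direction AS
      SELECT * FROM table_north_2000
      UNION ALL SELECT * FROM table_north_2001
      UNION ALL SELECT * FROM table_south_2000
      UNION ALL SELECT * FROM table_south_2001
      UNION ALL SELECT * FROM table_east_2000
      UNION ALL SELECT * FROM table_east_2001
      UNION ALL SELECT * FROM table_west_2000
      UNION ALL SELECT * FROM table_west_2001;
 
(UNION ALL rather than UNION, so Postgres doesn't have to do a
huge sort to remove duplicates that can't exist anyway.)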
 
I don't want to merge all of the tables back into ONE physical
table, because every month or so I'll have to import a fresh
copy of the 2001 tables ... and because they are pretty big,
pulling everything at once is simply not an option.  Of course,
I'll have to start pulling 2002 data soon ...
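 
The view approach would keep that refresh cheap -- I'd only touch
the 2001 tables and leave the rest alone.  Roughly (a sketch; the
file path is made up, and COPY FROM a server-side file needs a
superuser):
 
  BEGIN;
  DELETE FROM table_north_2001;                     -- empty just this piece
  COPY table_north_2001 FROM '/tmp/north_2001.dat'; -- load the fresh extract
  COMMIT;
  VACUUM ANALYZE table_north_2001;                  -- reclaim space, update stats
 
When 2002 arrives, I'd create the table_*_2002 tables and
re-create the view with the new tables added.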
 
I'm looking for creative suggestions.  Otherwise, I think I'll have to
go the route I stated above ... it just seems like it's going to be
painful ...
 
Thanks!
 
PS: Has anyone had a chance to test a Data Model / database
structure modeling tool (for creating pretty pictures and relational
info / documentation)?
 
-X
-----Original Message-----
From: Johnson, Shaunn
Sent: Monday, March 25, 2002 4:15 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] file size issue?

--I think you've answered at least half of my question,
Andrew.
 
--I'd like to figure out whether Postgres reaches a point where
it will no longer index or vacuum a table based on its size (your
answer tells me 'No' - it will continue until it is done, splitting
each table into 1 GB segments).
 
--And if THAT is true, then why am I getting failures when
I vacuum or index a table just after it reaches 2 GB?
 
--And if it's an OS (or any other) problem, how can I rule
Postgres out?
 
--Thanks!
 
-X
 
 
[snip]
 
 
> Has anyone seen if it is a problem with the OS or with the way
> Postgres handles large files (or, if I should compile it again
> with some new options).
 
 
What do you mean by "Postgres handles large files"?  The filesize
problem isn't related to the size of your table, because Postgres
splits its files at 1 GB.
If it were an output problem (say, dumping a table to a single
file) you could hit a filesize limit, but you said you were
vacuuming.
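 
You can see the 1 GB segmenting on disk yourself, for what it's
worth (a sketch; the table name is just an example):
 
  -- find the on-disk file name for a given table
  SELECT relfilenode FROM pg_class WHERE relname = 'table_north_2001';
 
The table is stored under the database directory as that number,
with overflow segments named <relfilenode>.1, <relfilenode>.2,
and so on, each capped at 1 GB.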
 
 
A

[snip]
