Hi, just a general design question; I'm wondering how Postgres would
handle either approach.
I have a gob of information (400,000+ records) on individual accounts. I
need to store all of their personal information (name, address, etc.) as
well as all of their more dynamic company information (last purchase,
times ordered, etc.).
One: All of their dynamic information can be rebuilt from other tables,
but it will be called upon rather frequently, so keeping a redundant copy
to avoid rebuilding it on every call seems acceptable to me (smack me if
I'm wrong).
Two: There is a strict one-to-one relationship between an account
(personal information) and that account's account information (makes
sense, eh?). But does it make sense to keep this information in the same
table or to break it up? I estimate about 20 fields in each of two
separate tables, or 40 in one big one. The personal information will
almost always be index-searched by name or zipcode, whereas the other
information they (the proverbial they) will probably want sorted in
weirdass ways that the design was never intended for. Basically, it will
be subjected to more sequential scans than something with close to half a
million records should be. My basic question ends up being: does Postgres
handle sequential scans better across tables with fewer fields? Is there
any performance gain from separating this into two tables?
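Roughly the two layouts I'm weighing, with made-up column names standing
in for the real ones:

    -- Layout A: everything in one wide table (~40 columns).
    CREATE TABLE account (
        account_id    integer PRIMARY KEY,
        name          text,
        address       text,
        zipcode       text,
        -- ... rest of the personal columns ...
        last_purchase date,
        times_ordered integer
        -- ... rest of the dynamic columns ...
    );

    -- Layout B: split one-to-one, joined on account_id.
    CREATE TABLE account_personal (
        account_id integer PRIMARY KEY,
        name       text,
        address    text,
        zipcode    text
        -- ... rest of the personal columns ...
    );

    CREATE TABLE account_stats (
        account_id    integer PRIMARY KEY
                      REFERENCES account_personal (account_id),
        last_purchase date,
        times_ordered integer
        -- ... rest of the dynamic columns ...
    );

    -- Indexes for the name/zipcode lookups.
    CREATE INDEX account_personal_name_idx ON account_personal (name);
    CREATE INDEX account_personal_zip_idx  ON account_personal (zipcode);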
Thanks for any hints you could give me.
.jtp