Thread: How many fields in a table are too many
Hi,

Just a quick question, not unrelated to my previous question, which I don't think will get answered. I have a table with 13 fields. Is that too many fields for one table?

Thanks,

Mark
Generally speaking, no, not at all. Probably the question you should be asking yourself is: "Is there a way I could move some of these fields into other tables?" But, depending upon your application, putting your database schema into some esoteric normal form may be overkill.
> I have a table with 13 fields. Is that too many fields for one table?

Mathematically, a binary relation represents the relevant cross product of two domains, x and f(x):

squares table:

 x | f(x)
---+-----
 0 |  0
 1 |  1
 2 |  4
 3 |  9
 ...

The relational model of data just leverages this, along with predicate logic, to guarantee logical consistency, and extends the traditional view of a relation to n dimensions.

So, in your relation (table), you should have a field which represents the "x". The "x" should have a unique index associated with it. After all, it wouldn't make sense in the above example to have two records for "x". The other non-key fields should represent the f(x), g(x), h(x). IOW, they should be dependent upon the key, the whole key, and nothing but the key. If that is not the case, you have some normalizing to do...

Hope that helps,

Mike Mascari
mascarm@mascari.com
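A minimal SQL sketch of that last rule (table and column names are invented here, not taken from the thread): in the first version a non-key column depends on another non-key column rather than on the key, so the fix is to split it out into its own table.

    -- Violates "nothing but the key": dept_location is a fact about the
    -- department, not about the employee identified by emp_id.
    CREATE TABLE employee_denorm (
        emp_id         integer PRIMARY KEY,  -- the "x": at most one row per key value
        emp_name       text NOT NULL,        -- f(x)
        dept           text NOT NULL,        -- g(x)
        dept_location  text                  -- depends on dept, not on emp_id
    );

    -- Normalized: every non-key column depends only on its own table's key.
    CREATE TABLE department (
        dept      text PRIMARY KEY,
        location  text
    );

    CREATE TABLE employee (
        emp_id    integer PRIMARY KEY,
        emp_name  text NOT NULL,
        dept      text REFERENCES department
    );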
> I have a table with 13 fields. Is that
> too many fields for one table.
> Mark

Thirteen? No way. I've got you beat with 21:

person_id
place_id
title_id
last_name
first_name
middle_name
gender_id
birth_year
birth_month
birth_day
deceased_year
deceased_month
deceased_day
marital_status_id
education_level
e_mail_address
social_security_no
last_update_date
updated_by
username
photograph

~B
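For illustration only, here is one way a table like that might be declared; the column types and the serial key are guesses, not taken from the post.

    CREATE TABLE person (
        person_id           serial PRIMARY KEY,
        place_id            integer,
        title_id            integer,
        last_name           text,
        first_name          text,
        middle_name         text,
        gender_id           integer,
        birth_year          integer,
        birth_month         integer,
        birth_day           integer,
        deceased_year       integer,
        deceased_month      integer,
        deceased_day        integer,
        marital_status_id   integer,
        education_level     text,
        e_mail_address      text,
        social_security_no  text,
        last_update_date    timestamp,
        updated_by          text,
        username            text,
        photograph          bytea
    );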
Following are the limitations of PostgreSQL:

Maximum size for a database:           unlimited (4 TB databases exist)
Maximum size for a table:              16 TB on all operating systems
Maximum size for a row:                1.6 TB
Maximum size for a field:              1 GB
Maximum number of rows in a table:     unlimited
Maximum number of columns in a table:  250 - 1600, depending on column types
Maximum number of indexes on a table:  unlimited

Of course, these are not actually unlimited, but limited to available disk space and memory/swap space. Performance may suffer when these values get unusually large.

The maximum table size of 16 TB does not require large file support from the operating system. Large tables are stored as multiple 1 GB files, so file system size limits are not important.

The maximum table size and maximum number of columns can be increased if the default block size is increased to 32k.

Regards,
Kallol.
On Thu, Jun 26, 2003 at 03:17:12AM -0400, btober@seaworthysys.com wrote:
> > I have a table with 13 fields. Is that
> > too many fields for one table.
>
> Thirteen? No way. I've got you beat with 21:

Pfft! Is *that* all? I've got a table with 116 fields. Very soon we'll be upgrading to 7.3 and we can then merrily DROP COLUMN half of them. Until then...

The original developers didn't really have a concept of storing different info in different tables.

--
Martijn van Oosterhout <kleptog@svana.org>   http://svana.org/kleptog/
> On Thu, Jun 26, 2003 at 03:17:12AM -0400, btober@seaworthysys.com wrote:
>> Thirteen? No way. I've got you beat with 21:
>
> Pfft! Is *that* all? I've got a table with 116 fields.

I *knew* a number of these responses would be forthcoming... :)

~Berend Tober
On 26 Jun 2003 at 3:44, btober@seaworthysys.com wrote:
>> Pfft! Is *that* all? I've got a table with 116 fields.
> I *knew* a number of these responses would be forthcoming... :)

I remember somebody posting a database schema with 500 fields.

Don't worry as long as it works. When you have problems, post it on performance. First advice you would get is to normalize it..:-)

Bye
Shridhar
On Thu, Jun 26, 2003 at 01:02:06 -0400, Mike Mascari <mascarm@mascari.com> wrote:
> Mathematically, a binary relation represents the relevant cross
> product of two domains, x and f(x):

A binary relation is a subset of the cross product of two sets. This is more general than using x and f(x), since this implies some restrictions on the subset (namely that there is only one tuple with a specific member of x in the x part of the tuple).
> The original developers didn't really have a concept of storing different
> info in different tables.

That kind of stuff drives me nuts. Where do people get their CS degrees? It took me less than 2 days to teach our ARTISTS how to construct fully-normalized tables (it's a long story as to why I was doing this, but you get the picture).

Sorry for the rant. Sometimes I can't help it.

Jon
On Thursday 26 June 2003 12:44 am, btober@seaworthysys.com wrote:
>> Pfft! Is *that* all? I've got a table with 116 fields.
> I *knew* a number of these responses would be forthcoming... :)

Of course they would. :)

As long as we are playing "whose is biggest", I have one with 900+ attributes (normalized), but there is a big warning - if you have a query that returns hundreds of columns it will be very, very slow. Slow as in tens of seconds to do a "select * from fattable" when fattable has <1000 records.

Tom Lane looked at the profile data I sent and had the issue added to the TODO list some time back. Check the archives for "Terrible performance on wide selects" if you are interested. I believe the problem is still on the TODO list under the category of "caching".

Cheers,
Steve
> As long as we are playing "whose is biggest", I have one with 900+
> attributes (normalized) but there is a big warning - if you have a
> query that returns hundreds of columns it will be very, very slow.
> Slow as in tens of seconds to do a "select * from fattable" when
> fattable has <1000 records.

Is the SELECT * the only circumstance? That is, if you specify a small number of columns, does the response improve even though the table actually has that large number of columns but is only being asked to supply a column-limited result set? What about when you limit the rows but not the columns with a WHERE clause? And of course the last case, when you limit both rows and columns?

~Berend Tober
<btober@seaworthysys.com> writes:
>> As long as we are playing "whose is biggest", I have one with 900+
>> attributes (normalized) but there is a big warning - if you have a
>> query that returns hundreds of columns it will be very, very slow.

> Is the SELECT * the only circumstance? That is, if you specify a small
> number of columns, does the response improve even though the table
> actually has that large number of columns but is only being asked to
> supply a column-limited result set?

IIRC, the worst problems that Steve's profile exposed were associated with large numbers of columns in a SELECT result --- there are some doubly nested loops that take time O(N^2) in the number of columns. But I would not be surprised if some of those loops get invoked on the underlying table, too, depending on what your query looks like exactly.

This is all eminently fixable, it's just a matter of someone finding some round tuits ... for most people it doesn't seem like a high-priority problem, since you won't notice it till you get into the hundreds of columns ...

			regards, tom lane
Added to TODO:

	* Improve performance for queries with many columns

We already have an item for tables with many columns.

--
Bruce Momjian   http://candle.pha.pa.us
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> Added to TODO:
> 	* Improve performance for queries with many columns
> We already have an item for tables with many columns.

That one's a duplicate then.

			regards, tom lane
Is the issue of many columns in a tuple the same issue as a SELECT having many columns?

--
Bruce Momjian   http://candle.pha.pa.us
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> Is the issue of many columns in a tuple the same issue as a SELECT
> having many columns?

I believe all the same inefficiencies need to be fixed whichever way you look at it. Probably "many columns in SELECT" is the more accurate description though.

			regards, tom lane
There is an article on normalisation at IBM developerWorks for all those who need a primer on table normalisation.

I learnt this from, would you believe, "dBase Mac" by Jim Heid, when I was one of the two users in the world running that version in 1988-1989 =:-D

I recommend "Database Application Programming with Linux" (Wiley). There are some good table design chapters, even for non-Linux people.

Good database design helps make PostgreSQL run fast.

Cheers

Tony Grant

--
www.tgds.net - Library management software toolkit, redhat linux on Sony Vaio C1XD, Dreamweaver MX with Tomcat and PostgreSQL
On Thursday 26 June 2003 1:03 pm, btober@seaworthysys.com wrote:
<snip>
> Is the SELECT * the only circumstance? That is, if you specify a
> small number of columns, does the response improve even though the
> table actually has that large number of columns but is only being
> asked to supply a column-limited result set? What about when you
> limit the rows but not the columns with a WHERE clause? And of
> course the last case, when you limit both rows and columns?

It's the number of columns in the result, not the width of the table. Speed is also substantially influenced by whether the returned columns are fixed or variable width (variable is slower) and by where the variable columns appear (earlier is worse).

Selecting a single column from a wide table, or even array element 1000 from an array, is fast. For example, on my slow machine selecting array element 600 from an array with a limit of 100 rows takes 0.02 seconds, but selecting elements 1-600 takes 20 seconds, as does selecting element 1 600 times (select a[1],a[1],a[1]...). Whether you select the first, middle or last field/array element does not impact the query speed much.

You can see the effect with a 2-column table: create table foo (i int, t text) and add some data. On my test (1000 rows):

Single int field:        select i from foo                            10ms
Single text field:       select t from foo                            10ms
Int field 600 times:     select i,i,i,i... (600 times) from foo     2400ms
Text field 600 times:    select t,t,t,t... (600 times) from foo     6500ms
599 ints and a text:     select i,i,i,i... (599 times), t from foo  2500ms
Text and then 599 ints:  select t,i,i,i... (599 times) from foo     6400ms

Cheers,
Steve
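A rough sketch of how such a test might be set up; generate_series() is an assumption here (it only appeared in later PostgreSQL releases), on an older server you would populate the table from a script or a plpgsql loop instead, and the exact timings will of course vary by machine and version.

    -- Two-column test table, as in the post.
    CREATE TABLE foo (i int, t text);

    -- Load 1000 rows (generate_series assumed available).
    INSERT INTO foo
        SELECT g, 'some text ' || g::text
        FROM generate_series(1, 1000) AS g;

    -- In psql, enable timing and compare a narrow result with a wide one:
    --   \timing
    --   SELECT i FROM foo;                          -- one column: fast
    --   SELECT i, i, i, ... (600 copies) FROM foo;  -- wide target list: slow
    --   SELECT t, t, t, ... (600 copies) FROM foo;  -- variable-width columns: slowest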
> I believe all the same inefficiencies need to be fixed whichever
> way you look at it. Probably "many columns in SELECT" is the more
> accurate description though.

Definitely SELECT, see my response to Berend (I responded to him before I saw this post).

Cheers,
Steve
Tom Lane wrote:
> I believe all the same inefficiencies need to be fixed whichever
> way you look at it. Probably "many columns in SELECT" is the more
> accurate description though.

Together with the recent discussions about attribute reordering, it'd make sense, if we have a "presentation order" different from the actual physical tuple layout, to put all variable length fields at the end of the physical layout. This would give a better utilization of attribute offset caching.

Don't know though, if this counts for much of the suffering.

Jan
Is this a TODO?

	When columns can be reordered, move varlena columns to the end?

--
Bruce Momjian   http://candle.pha.pa.us
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> Is this a TODO?
> 	When columns can be reordered, move varlena columns to the end?

Yeah, if it's not in there already. I remember seeing some Berkeley-era comments speculating that this would be a good thing to do. Not sure if they're still in the code or not.

I believe that really what you want is fixed-width NOT NULL columns first, then fixed-width nullable, then var-width. This ordering will improve the odds of actually being able to use the fixed offset for a particular column.

			regards, tom lane
On Sat, Jun 28, 2003 at 01:43:32PM -0400, Tom Lane wrote:
> I believe that really what you want is fixed-width NOT NULL columns
> first, then fixed-width nullable, then var-width. This ordering will
> improve the odds of actually being able to use the fixed offset for
> a particular column.

This means that when you do ALTER TABLE ... SET/DROP NOT NULL the table may have to be completely rewritten? Altering the physical order is sure to provoke a rewrite, but I think people will expect a change in constraints to be a fast operation.

--
Alvaro Herrera (<alvherre[a]dcc.uchile.cl>)
Tom Lane wrote:
> I believe that really what you want is fixed-width NOT NULL columns
> first, then fixed-width nullable, then var-width. This ordering will
> improve the odds of actually being able to use the fixed offset for
> a particular column.

Added to TODO:

	* Automatically place fixed-width, NOT NULL columns first in a table

I like the idea of putting NOT NULL before NULL columns. However, I am not sure how useful NOT NULL is in practice because there are lots of columns that don't specify NOT NULL but have mostly nulls or mostly non-nulls, which kills our caching --- what I was hoping to do some day was to cache the null bitmask and offsets of the previous tuple and use those if the new tuple has the same null bitmask as the previous tuple.

--
Bruce Momjian   http://candle.pha.pa.us
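Until something like that TODO item exists, the ordering can be applied by hand when a table is created. A sketch with invented column names, following the layout described above (fixed-width NOT NULL columns first, then fixed-width nullable ones, then variable-width ones):

    CREATE TABLE event (
        event_id     integer   NOT NULL,  -- fixed width, never null: offset stays constant
        occurred_at  timestamp NOT NULL,  -- fixed width, never null
        severity     integer,             -- fixed width, but nullable
        source       text,                -- variable-width columns go last
        details      text
    );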
Alvaro Herrera wrote:
> This means that when you do ALTER TABLE ... SET/DROP NOT NULL the table
> may have to be completely rewritten? Altering the physical order is
> sure to provoke a rewrite, but I think people will expect a change in
> constraints to be a fast operation.

I don't think we would change any physical ordering with ALTER TABLE --- just do it when the table is created, if possible.

--
Bruce Momjian   http://candle.pha.pa.us
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> Alvaro Herrera wrote:
>> This means that when you do ALTER TABLE ... SET/DROP NOT NULL the table
>> may have to be completely rewritten?

> I don't think we would change any physical ordering with ALTER TABLE ---
> just do it when the table is created, if possible.

Right, I was just thinking of this as an optimization we'd apply during initial table creation. Renumbering columns later on would be a huge mess, far more work than the optimization is worth.

			regards, tom lane
Alvaro Herrera wrote:
> This means that when you do ALTER TABLE ... SET/DROP NOT NULL the table
> may have to be completely rewritten? Altering the physical order is
> sure to provoke a rewrite, but I think people will expect a change in
> constraints to be a fast operation.

I said "... table starts with ...", I said nothing about physically reordering an existing table. All we were talking about is uncoupling the order in which attributes of a relation appear when automatically expanded from "SELECT * ...". Currently, they are tied to the order of the physical tuple layout, which in turn is the result of the original order at CREATE TABLE, plus added columns at the end and dropped ones taken out.

Jan
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> However, I am not sure how useful NOT NULL is in practice because there
> are lots of columns that don't specify NOT NULL but have mostly nulls or
> mostly non-nulls, which kills our caching --- what I was hoping to do
> some day was to cache the null bitmask and offsets of the previous tuple
> and use those if the new tuple has the same null bitmask as the previous
> tuple.

We already cache fairly effectively in cases where there are no nulls. I'm not sure it's worth trying to do something with the idea that two adjacent tuples might have nulls in the same places.

			regards, tom lane
Tom Lane wrote:
> We already cache fairly effectively in cases where there are no nulls.
> I'm not sure it's worth trying to do something with the idea that two
> adjacent tuples might have nulls in the same places.

I was thinking of trying it and seeing how often it would be a win, because right now, when we hit a null, our cache is dead.

--
Bruce Momjian   http://candle.pha.pa.us
On Sat, Jun 28, 2003 at 02:21:57PM -0400, Tom Lane wrote:
<snip>
> Right, I was just thinking of this as an optimization we'd apply during
> initial table creation. Renumbering columns later on would be a huge
> mess, far more work than the optimization is worth.

Perhaps it could be done on VACUUM?

cu
--
Enrico Weigelt == metux ITS
Re: attribute ordering

The column presentation issue should maybe be cross-referenced with the problem with rowtypes where the referenced table has dropped columns. This is the one where, in plpgsql, if you do a select * into a rowtype of a table with dropped columns, you get a mismatch on the types. Implementing the presentation ordering will probably give a solution to the rowtype problem.

elein

On Saturday 28 June 2003 08:05, Jan Wieck wrote:
> Together with the recent discussions about attribute reordering, it'd
> make sense, if we have a "presentation order" different from the actual
> physical tuple layout, to put all variable length fields at the end of
> the physical layout. This would give a better utilization of attribute
> offset caching.

--
elein@varlena.com   Database Consulting   www.varlena.com
PostgreSQL General Bits   http://www.varlena.com/GeneralBits/
Hi Mark,

actually there is a simple way to determine this: create an entity relationship model, try to build a 3rd normal form, look at what the users/customers are really doing, not only what data they are dealing with, and finally break up the 3rd NF again.

A table with too many columns is as bad as a database with too many tables. If you can't keep the overview, something went wrong. Sometimes you can tune the database by tuning the data structure. It's good if you end up with tables that only have a few columns. But why not twenty, if you need them? That's ok. And if you have significantly more than this number, you should have a good reason; then it's ok as well.

What I think is more important is that you take a close look at the nature of your data. Treat a number like a number and a date like a date.

Ciao,
Toni

MT wrote:
> I have a table with 13 fields. Is that too many fields for one table?
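On the "treat a date like a date" point, a small sketch with made-up table names: three loose integer columns (like the birth_year/birth_month/birth_day split seen earlier in the thread) versus one properly typed column.

    -- Three loosely typed columns: nothing stops a birth date of 1999-02-31.
    CREATE TABLE person_loose (
        person_id    integer PRIMARY KEY,
        birth_year   integer,
        birth_month  integer,
        birth_day    integer
    );

    -- One date column: validation, comparisons and date arithmetic come for free.
    CREATE TABLE person_typed (
        person_id   integer PRIMARY KEY,
        birth_date  date
    );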
elein wrote:
> The column presentation issue should maybe be cross-referenced with
> the problem with rowtypes where the referenced table has dropped
> columns. This is the one where, in plpgsql, if you do a select * into
> a rowtype of a table with dropped columns, you get a mismatch on the
> types. Implementing the presentation ordering will probably give a
> solution to the rowtype problem.

Recompiling the function should be totally sufficient. That you mention this as a problem indicates that you have self-modifying code or are developing on your production system. Which of them is it?

Jan
On Thu, Jun 26, 2003 at 01:19:09PM +0530, Shridhar Daithankar wrote:
> I remember somebody posting a database schema with 500 fields.
>
> Don't worry as long as it works. When you have problems, post it on
> performance. First advice you would get is to normalize it..:-)

Do normal form classifications go into negative numbers?