Thread: One or more tables?
Hi.

For example, say I have a large user "base" (over 1 million users), and every user does a lot of inserting/updating of data. Would it be better to create a separate table for each user, or to have one big table with all the data (the tables would of course have the same columns)? How do you cope with this kind of thing?

1. example (one enormous table):
tablename (id, user_id, datetime, some_data)

2. example (a large number of tables):
tablename_user_id (id, datetime, some_data)

Thank you.

Kind regards,
Rok
On 12/02/07 07:35, rokj wrote:
> Hi.
>
> For an example let me say that I have a big (over 1 million) user
> "base". Then every user does a lot of inserting/updating of data.
> Would it be better to create different tables for insert/updating for
> every user or would it be better just to have one big table with all
> data (tables would have of course the same columns, ...). How do you
> cope with this kind of things?
>
> 1.example (1 enormous table)
> tablename (id, user_id, datetime, some_data)
>
> 2. example (a big number of tables)
> tablename_user_id( id, datetime, some_data)

This should help you decide how to design your tables. 3NF is as far as you really need to go.

http://en.wikipedia.org/wiki/Data_normalization
http://pubs.logicalexpressions.com/Pub0009/LPMArticle.asp?ID=88

--
Ron Johnson, Jr.
Jefferson LA USA
%SYSTEM-F-FISH, my hovercraft is full of eels
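As a rough sketch of the single-table design (option 1 from the question), using the hypothetical names from the thread: keeping all users in one table stays in normal form, and an index on user_id keeps per-user access fast even at millions of rows.

```sql
-- Option 1: one table for all users (names taken from the thread's example).
CREATE TABLE tablename (
    id        serial PRIMARY KEY,
    user_id   integer NOT NULL,
    datetime  timestamp NOT NULL DEFAULT now(),
    some_data text
);

-- An index on user_id makes per-user SELECTs and UPDATEs cheap,
-- which is what the per-user-table scheme was trying to buy.
CREATE INDEX tablename_user_id_idx ON tablename (user_id);
```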
On Dec 2, 2007 6:35 PM, rokj <rjaklic@gmail.com> wrote:
> Hi.
>
> For an example let me say that I have a big (over 1 million) user
> "base". Then every user does a lot of inserting/updating of data.
> Would it be better to create different tables for insert/updating for
> every user or would it be better just to have one big table with all
> data (tables would have of course the same columns, ...). How do you
> cope with this kind of things?
>
> 1.example (1 enormous table)
> tablename (id, user_id, datetime, some_data)
>
> 2. example (a big number of tables)
> tablename_user_id( id, datetime, some_data)
There isn't enough information in the email, but instead of creating a separate table for every user, you could use one table partitioned on userid. That would, however, add maintenance overhead whenever you add a new user.
> Thank you.
>
> Kind regards,
> Rok
--
Usama Munir Dar http://linkedin.com/in/usamadar
Consultant Architect
Cell:+92 321 5020666
Skype: usamadar
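Usama's suggestion of one table partitioned on userid could be sketched with the inheritance-based partitioning PostgreSQL offered at the time; the table and column names below are hypothetical, following the thread's examples:

```sql
-- Parent table: holds no rows itself; queries go through it.
CREATE TABLE user_data (
    id        serial,
    user_id   integer NOT NULL,
    datetime  timestamp NOT NULL,
    some_data text
);

-- One child partition per user; the CHECK constraint lets the planner
-- skip irrelevant partitions. This extra CREATE TABLE is exactly the
-- maintenance overhead incurred whenever a new user is added.
CREATE TABLE user_data_42 (
    CHECK ( user_id = 42 )
) INHERITS (user_data);
```

Inserts would additionally need a rule or trigger on user_data to route each row into the right child table.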
On 12/02/07 14:58, Usama Dar wrote:
> On Dec 2, 2007 6:35 PM, rokj <rjaklic@gmail.com> wrote:
>
>> Hi.
>>
>> For an example let me say that I have a big (over 1 million) user
>> "base". Then every user does a lot of inserting/updating of data.
>> Would it be better to create different tables for insert/updating for
>> every user or would it be better just to have one big table with all
>> data (tables would have of course the same columns, ...). How do you
>> cope with this kind of things?
>>
>> 1.example (1 enormous table)
>> tablename (id, user_id, datetime, some_data)
>>
>> 2. example (a big number of tables)
>> tablename_user_id( id, datetime, some_data)
>
> Although there isn't enough information in the email, but instead of
> creating a separate table for every user, you could use one table,
> partitioned on userid, that would, however, add a maint overhead whenever
> you add a new user.

Cluster by *range* of user ids, and preallocate some number of tablespaces.

--
Ron Johnson, Jr.
Jefferson LA USA
%SYSTEM-F-FISH, my hovercraft is full of eels
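Ron's refinement of partitioning by *range* of user ids with preallocated tablespaces might look like the sketch below; the partition bounds and the tablespace names ts0/ts1 are made up for illustration, and a parent table user_data with the thread's columns is assumed to exist:

```sql
-- Assumes a parent table:
--   CREATE TABLE user_data (id serial, user_id integer NOT NULL,
--                           datetime timestamp NOT NULL, some_data text);
-- and preallocated tablespaces ts0, ts1, ... on separate storage.

-- Each child covers a range of user ids, so adding a user requires
-- no new DDL until a range fills up.
CREATE TABLE user_data_p0 (
    CHECK ( user_id >= 0 AND user_id < 100000 )
) INHERITS (user_data) TABLESPACE ts0;

CREATE TABLE user_data_p1 (
    CHECK ( user_id >= 100000 AND user_id < 200000 )
) INHERITS (user_data) TABLESPACE ts1;
```

Compared with one partition per user, this bounds the number of tables up front while still spreading I/O across tablespaces.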
On 3 dec., 01:18, ron.l.john...@cox.net (Ron Johnson) wrote:
> On 12/02/07 14:58, Usama Dar wrote:
>> On Dec 2, 2007 6:35 PM, rokj <rjak...@gmail.com> wrote:
>>
>>> Hi.
>>>
>>> For an example let me say that I have a big (over 1 million) user
>>> "base". Then every user does a lot of inserting/updating of data.
>>> Would it be better to create different tables for insert/updating for
>>> every user or would it be better just to have one big table with all
>>> data (tables would have of course the same columns, ...). How do you
>>> cope with this kind of things?
>>>
>>> 1.example (1 enormous table)
>>> tablename (id, user_id, datetime, some_data)
>>>
>>> 2. example (a big number of tables)
>>> tablename_user_id( id, datetime, some_data)
>>
>> Although there isn't enough information in the email, but instead of
>> creating a separate table for every user, you could use one table,
>> partitioned on userid, that would, however, add a maint overhead whenever
>> you add a new user.
>
> Cluster by *range* of user ids, and preallocate some number of
> tablespaces.

I was just looking at http://www.postgresql.org/docs/8.2/interactive/ddl-partitioning.html, which is what you were talking about and what I was looking for. So I would create tables like:

CREATE TABLE measurement (
    city_id    int not null,
    logdate    date not null,
    peaktemp   int,
    unitsales  int
);

CREATE TABLE measurement_y2004m02 (
    CHECK ( logdate >= DATE '2004-02-01' AND logdate < DATE '2004-03-01' )
) INHERITS (measurement);
...

and then do a SELECT with:

SET constraint_exclusion = on;
SELECT count(*) FROM measurement WHERE logdate >= DATE '2004-02-01';

Personally I think this is a really powerful thing, since it saves a lot of resources, especially in big "environments".

Regards,
Rok
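One way to confirm that constraint exclusion is actually pruning partitions as Rok describes is to inspect the query plan; this is a sketch assuming the measurement tables from the PostgreSQL 8.2 partitioning documentation quoted above:

```sql
SET constraint_exclusion = on;

-- The plan should show only the parent and the matching child
-- (measurement_y2004m02) being scanned, not every partition.
EXPLAIN SELECT count(*) FROM measurement
WHERE logdate >= DATE '2004-02-01' AND logdate < DATE '2004-03-01';
```

If other children still appear in the plan, their CHECK constraints don't let the planner exclude them for this WHERE clause.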