Thread: Inserting values into a variable table
I am creating a function for inserting values into a table.
The problem is the tables have various names, depending on the userid.
i.e.
email_inbox_a608ac4e-5215-4758-905e-78cfb809aebc
is for one user, while…
email_inbox_c86hce32-4758-905e-gfkk439d83jd
is for another.
I am trying, in my SQL, to pass the table name as a parameter,
i.e.
select * from $1 where messageid = $2
but the compiler is not accepting it. What is recommended?
Thank you.
> I am creating a function for inserting values into a table.
> The problem is the tables have various names, depending on the userid.
> email_inbox_a608ac4e-5215-4758-905e-78cfb809aebc
> is for one user, while
> email_inbox_c86hce32-4758-905e-gfkk439d83jd
> is for another.
> I am trying, in my SQL, to pass the table name as a parameter,
> select * from $1 where messageid = $2
> but the compiler is not accepting it. What is recommended?

Is this a question about MySQL?

Regards,
Richard Broersma Jr.
On Sat, 25 Nov 2006, Greg Quinn wrote:
> I am creating a function for inserting values into a table.
> The problem is the tables have various names, depending on the userid.
> i.e.
> email_inbox_a608ac4e-5215-4758-905e-78cfb809aebc
> is for one user, while
> email_inbox_c86hce32-4758-905e-gfkk439d83jd
> is for another.
> I am trying, in my SQL, to pass the table name as a parameter,
> i.e.
> select * from $1 where messageid = $2
> but the compiler is not accepting it. What is recommended?

You can't use arguments directly as a replacement for the table name in
SQL or PL/pgSQL functions. In PL/pgSQL, you can build up a query string
and use EXECUTE. You may also consider whether there's another way to lay
out the data that might be easier to work with.
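For example, a minimal sketch of the EXECUTE approach (the function name
and column list here are hypothetical, not from the original post):

    -- Hypothetical sketch: fetch one message from a caller-supplied table.
    -- quote_ident() double-quotes the table name (needed here, since these
    -- names contain dashes); quote_literal() safely embeds the value.
    CREATE OR REPLACE FUNCTION get_message(tablename text, msgid integer)
    RETURNS SETOF record AS $$
    DECLARE
        r record;
    BEGIN
        FOR r IN EXECUTE 'SELECT * FROM ' || quote_ident(tablename)
                      || ' WHERE messageid = ' || quote_literal(msgid::text)
        LOOP
            RETURN NEXT r;
        END LOOP;
        RETURN;
    END;
    $$ LANGUAGE plpgsql;

Because the result type is RECORD, the caller supplies a column list
matching the table (the columns shown are made up):

    SELECT * FROM get_message(
        'email_inbox_a608ac4e-5215-4758-905e-78cfb809aebc', 42)
        AS t(messageid integer, subject text);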
On 11/26/06, Stephan Szabo <sszabo@megazone.bigpanda.com> wrote:
> You may consider seeing whether or not there's another way to lay out
> the data as well that might be easier to work with.

I'm with Stephan on this one - I'd put all mail in one table and use the
"random string" that denominates the inbox to select the messages on a
per-user basis. That makes the code to handle things much cleaner, and I
can't see a security (or other) benefit in the separate tables.

Cheers
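That is, the inbox GUID becomes a column value instead of part of a table
name, and the per-user select then needs no dynamic SQL at all (a sketch;
the table and column names are made up for illustration):

    -- One shared table for all users; the per-user "random string"
    -- is an ordinary, indexable column.
    SELECT *
      FROM email_inbox
     WHERE inbox_id = 'a608ac4e-5215-4758-905e-78cfb809aebc'
       AND messageid = 42;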
Well, I am writing an email client where data will be stored on both the
client and server. I have a table that stores all the message headers, and
a table that stores the entire source for every message (including
encoding for attachments etc.).

Every time a user clicks on a mail folder, it pulls their message headers
from the headers table. Every time a user clicks on a message, it needs to
pull the message body etc. from the message source table.

Now as you can imagine, on the server side, if you have 100 users and all
their message source sitting in one big table, it can slow down read
operations because of all the disk I/O.

Previously, I was using MySQL, and placing all the users' data into
separate tables gave me a huge performance increase.

I'm not sure if PostgreSQL will handle this better. But my main concern
over this matter is the problem with disk I/O on one big table.
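For reference, that two-table split carries over directly to a shared
layout with the inbox GUID as a column (a hypothetical sketch; all names
are made up, and indexing is discussed further down the thread):

    -- Headers shown in the folder view, one row per message.
    CREATE TABLE email_headers (
        inbox_id  text    NOT NULL,  -- per-user inbox GUID
        messageid integer NOT NULL,
        header    text
    );

    -- Full message source, fetched only when a message is opened.
    CREATE TABLE message_source (
        inbox_id  text    NOT NULL,
        messageid integer NOT NULL,
        body      text               -- includes encoded attachments
    );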
On Sun, 26 Nov 2006, Greg Quinn wrote:
> Now as you can imagine, on the server side, if you have 100 users and
> all their message source sitting in one big table, it can slow down
> read operations because of all the disk I/O.
>
> Previously, I was using MySQL, and placing all the users' data into
> separate tables gave me a huge performance increase.

That makes sense, although table partitioning might be a good fit for
what you're trying to do. I don't have enough personal experience with it
to give you a good sense of all of the restrictions, but you might want
to peruse the docs:

http://www.postgresql.org/docs/8.1/interactive/ddl-partitioning.html
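On 8.1 that page describes partitioning via inheritance and CHECK
constraints. Roughly (a hypothetical sketch reusing the made-up
message_source table from above, with one partition per inbox):

    -- Each child table carries a CHECK constraint, so with
    -- constraint_exclusion = on the planner can skip the other children.
    CREATE TABLE message_source_a608 (
        CHECK (inbox_id = 'a608ac4e-5215-4758-905e-78cfb809aebc')
    ) INHERITS (message_source);

    CREATE TABLE message_source_c86h (
        CHECK (inbox_id = 'c86hce32-4758-905e-gfkk439d83jd')
    ) INHERITS (message_source);

    -- A query against the parent then only scans the matching child:
    SET constraint_exclusion = on;
    SELECT body FROM message_source
     WHERE inbox_id = 'a608ac4e-5215-4758-905e-78cfb809aebc'
       AND messageid = 42;

One partition per user likely gets unwieldy beyond a handful of users,
which is the overhead concern raised later in the thread.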
On 11/26/06, Greg Quinn <greg@officium.co.za> wrote:
> Every time a user clicks on a mail folder, it pulls their message
> headers from the headers table. Every time a user clicks on a message,
> it needs to pull the message body etc. from the message source table.

OK.

> Now as you can imagine, on the server side, if you have 100 users and
> all their message source sitting in one big table, it can slow down
> read operations because of all the disk I/O.

OK, I have a problem with this one intellectually. As far as I'm
concerned, I'd expect higher latency and bigger head movement if the
heads have to dash back and forth over a larger area of disk to get to
individual tables than when manipulating a single bit of data.

> Previously, I was using MySQL, and placing all the users' data into
> separate tables gave me a huge performance increase.
>
> I'm not sure if PostgreSQL will handle this better. But my main concern
> over this matter is the problem with disk I/O on one big table.

Me neither - I wouldn't think it makes a difference for the better, but
to be sure I'd benchmark it, with the same data volume once in one big
table and once in disparate tables (and see what indexing does in both
cases).

Cheers,
Andrej
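Such a benchmark is easy to run from psql (a sketch; the shared-table
names are the hypothetical ones from earlier in the thread, the per-user
table name is from the original post):

    -- \timing prints elapsed time per query; EXPLAIN ANALYZE shows the
    -- actual plan and run time for each layout.
    \timing

    EXPLAIN ANALYZE
    SELECT body FROM message_source
     WHERE inbox_id = 'a608ac4e-5215-4758-905e-78cfb809aebc'
       AND messageid = 42;

    EXPLAIN ANALYZE
    SELECT * FROM "email_inbox_a608ac4e-5215-4758-905e-78cfb809aebc"
     WHERE messageid = 42;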
Stephan Szabo <sszabo@megazone.bigpanda.com> writes:
> On Sun, 26 Nov 2006, Greg Quinn wrote:
>> Now as you can imagine, on the server side, if you have 100 users and
>> all their message source sitting in one big table, it can slow down
>> read operations because of all the disk I/O.
>>
>> Previously, I was using MySQL, and placing all the users' data into
>> separate tables gave me a huge performance increase.

> That makes sense, although table partitioning might be a good fit for
> what you're trying to do.

To be blunt, this sounds like you didn't have your tables properly
indexed.

			regards, tom lane
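For the hypothetical shared-table layout sketched above, "properly
indexed" would mean something like:

    -- Folder view: all headers for one inbox.
    CREATE INDEX email_headers_inbox_idx
        ON email_headers (inbox_id);

    -- Message fetch: one row out of the big source table. With this
    -- index, the table's total size matters far less for read latency.
    CREATE INDEX message_source_inbox_msg_idx
        ON message_source (inbox_id, messageid);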
On Mon, 27 Nov 2006 06:20:50 +1300, "Andrej Ricnik-Bay"
<andrej.groups@gmail.com> wrote:
> Me neither - I wouldn't think it makes a difference for the better, but
> to be sure I'd benchmark it, with the same data volume once in one big
> table and once in disparate tables (and see what indexing does in both
> cases).

Look out for file fragmentation too!

If the volume where you store the data does not have a *large* free area,
any big file will be fragmented and the head starts jumping like mad.
Before benchmarking I would shut down the server and then defrag the
disk.

Bo Berglund
On 11/27/06, Bo Berglund <bo.berglund@telia.com> wrote:
> Look out for file fragmentation too!
> If the volume where you store the data does not have a *large* free
> area, any big file will be fragmented and the head starts jumping like
> mad. Before benchmarking I would shut down the server and then defrag
> the disk.

I wouldn't; my logic tells me that disparate tables will cause MORE
head movement than a single (well-indexed) table ... and my Linux file
systems a) don't fragment to begin with and b) don't have any
defragmentation tools for that reason. ;}

Cheers,
Andrej
On Mon, 27 Nov 2006 09:39:57 +1300, "Andrej Ricnik-Bay"
<andrej.groups@gmail.com> wrote:
> I wouldn't; my logic tells me that disparate tables will cause MORE
> head movement than a single (well-indexed) table ... and my Linux file
> systems a) don't fragment to begin with and b) don't have any
> defragmentation tools for that reason. ;}

Oops, I mistakenly looked at this from my limited Windows world...
Of course you are right; PG is far more common on Linux than on Windows,
and comments regarding performance should be mainly geared towards Linux,
I guess. :-)

Bo Berglund
Partitioning isn't a "magic performance bullet"... it only works if you
can define a clear access pattern, and then use it to segregate
frequently-accessed data from infrequently-accessed data. In the case of
email, this *might* make sense if you have some users that have a lot of
email and rarely access it, but even then the partitioning overhead could
become an issue...

I'd say design it the simple way first, and see how it performs. If
performance isn't good enough, then it'd be time to start looking at
other options. In this case, partial indexes might make more sense than
partitioning the table. Storing the GUID more efficiently wouldn't be a
bad plan either. But don't add that complexity until you know you need
to.

On Nov 26, 2006, at 1:53 AM, Greg Quinn wrote:
> Now as you can imagine, on the server side, if you have 100 users and
> all their message source sitting in one big table, it can slow down
> read operations because of all the disk I/O.
>
> Previously, I was using MySQL, and placing all the users' data into
> separate tables gave me a huge performance increase.

--
Jim Nasby                                            jim@nasby.net
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)
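For completeness, those two suggestions could look like this (a
hypothetical sketch: the "unread" flag is invented for illustration, and
since core PostgreSQL only gained a native uuid type in a later release,
bytea is the conservative way to pack the GUID on 8.1):

    -- Partial index: if most queries fetch unread mail, index only
    -- those rows, keeping the index small and hot in cache.
    CREATE INDEX email_headers_unread_idx
        ON email_headers (inbox_id)
        WHERE unread;

    -- Storing the GUID more efficiently: the dashed text form is 36
    -- bytes; packing the hex digits into bytea halves that.
    SELECT decode(replace('a608ac4e-5215-4758-905e-78cfb809aebc',
                          '-', ''), 'hex');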