Thread: unpacking composite values
Hi, I'm using DBD::Pg 1.32 and PostgreSQL 8.0.3. At first I posted to the
DBD::Pg mailing list because I thought the following might be a
Perl-specific issue. Now I'm not so sure.

I have a query that looks like this:

  select *, query_gt_sentence_param(s.expert_id, s.construal_id)
  from rating r
  join c_stat s on (r.construal_expert_id = s.expert_id
                    and r.construal_id = s.construal_id)

And query_gt_sentence_param returns a composite value. In Perl, I get:

  $VAR1 = {
    'c_stat_mean' => '-1',
    'rating_value' => '-1',
    'query_gt_sentence_param' => '(Brian,"admiring Brian","The Director",m,a)',
    'z' => '0'
  };

Note that query_gt_sentence_param is returned as a single string instead of
what I want: separate columns. Is there an easy way to get it to break out
each column as a separate key value pair? Maybe I can flatten the composite
value into regular fields?

The other thing I tried is:

  select *,
    (query_gt_sentence_param(s.expert_id, s.construal_id)).mindreader,
    (query_gt_sentence_param(s.expert_id, s.construal_id)).topic,
    (query_gt_sentence_param(s.expert_id, s.construal_id)).agent,
    (query_gt_sentence_param(s.expert_id, s.construal_id)).sex
  from rating r
  join c_stat s on (r.construal_expert_id = s.expert_id
                    and r.construal_id = s.construal_id)

This works, but EXPLAIN ANALYZE shows that it is calling my function four
times instead of once. The function is declared STABLE, but I guess the
optimizer can't prove that it returns the same result within the same query.

Any suggestions?
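One idea I'm toying with is to compute the composite once in a subquery
(with an OFFSET 0 in the hope of keeping the planner from flattening it)
and then expand the fields in the outer query. Completely untested sketch:

  select sub.*,
         (sub.q).mindreader,
         (sub.q).topic,
         (sub.q).agent,
         (sub.q).sex
  from (select *,
               query_gt_sentence_param(s.expert_id, s.construal_id) as q
        from rating r
        join c_stat s on (r.construal_expert_id = s.expert_id
                          and r.construal_id = s.construal_id)
        offset 0) as sub;  -- offset 0: crude fence so the function runs once per row

Is that considered a reliable idiom, or is there a cleaner way?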
On Fri, Jun 24, 2005 at 10:26:49AM +0530, Joshua N Pritikin wrote:
> Is there an easy way to get it to break out each column as a separate
> key value pair? Maybe I can flatten the composite value into regular
> fields?

I guess the solution is to write another stored procedure.
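Untested sketch of what such a wrapper might look like; the composite type
is made up, and the column names, data types, and which table each column
lives in are guessed from the earlier mail:

  -- Made-up composite type for the flattened row.
  create type rating_flat as (
      rating_value numeric,
      c_stat_mean  numeric,
      z            numeric,
      mindreader   text,
      topic        text,
      agent        text,
      sex          text
  );

  -- SQL wrapper: run the join once, call query_gt_sentence_param once per
  -- row in the inner query, and expand its fields in the outer one.
  create or replace function flat_ratings() returns setof rating_flat as $$
      select x.rating_value, x.c_stat_mean, x.z,
             (x.q).mindreader, (x.q).topic, (x.q).agent, (x.q).sex
      from (select r.rating_value, s.c_stat_mean, s.z,
                   query_gt_sentence_param(s.expert_id, s.construal_id) as q
            from rating r
            join c_stat s on (r.construal_expert_id = s.expert_id
                              and r.construal_id = s.construal_id)
            offset 0) as x
  $$ language sql stable;

The client would then just do "select * from flat_ratings()" and see
ordinary columns.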
Hi,

Is it possible to limit the size or the number of rows of a table, in a
FIFO mode for example?

Regards

Al
alex wrote:
> Hi,
>
> Is it possible to limit the size or the number of rows of a table, in a
> FIFO mode for example?
>
> Regards
>
> Al

Hmmm, do you mean that every time a row is INSERTed you want to delete the
oldest row?

How about a serial column and an AFTER INSERT trigger (rough sketch below)?
The trigger could search for and delete the row with the smallest serial
value. This could get real slow if your table is huge. If the volume is
high enough then you will have to handle wrap around. You could simply
write a script that would subtract the minimum value from all the rows when
the serial reached the maximum.

--
Kind Regards,
Keith
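P.S. Something along these lines, completely untested and with a made-up
table name and row limit:

  -- Example table with a serial column to order the rows.
  CREATE TABLE fifo_log (
      id   serial PRIMARY KEY,
      data text
  );

  -- After every insert, throw away everything older than the newest
  -- 1000 rows (use whatever limit you need).
  CREATE OR REPLACE FUNCTION trim_fifo_log() RETURNS trigger AS $$
  BEGIN
      DELETE FROM fifo_log
      WHERE id IN (SELECT id FROM fifo_log
                   ORDER BY id DESC
                   OFFSET 1000);
      RETURN NULL;  -- return value is ignored for AFTER triggers
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER fifo_log_trim
      AFTER INSERT ON fifo_log
      FOR EACH ROW EXECUTE PROCEDURE trim_fifo_log();

Wrap-around of the serial would still need the renumbering script mentioned
above, or you could use a bigserial instead.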
Keith Worthington wrote:
> alex wrote:
>
>> Hi,
>>
>> Is it possible to limit the size or the number of rows of a table, in
>> a FIFO mode for example?
>>
>> Regards
>>
>> Al
>
> Hmmm, do you mean that every time a row is INSERTed you want to delete
> the oldest row?
>
> How about a serial column and an AFTER INSERT trigger? The trigger
> could search for and delete the row with the smallest serial value.
> This could get real slow if your table is huge. If the volume is high
> enough then you will have to handle wrap around. You could simply
> write a script that would subtract the minimum value from all the rows
> when the serial reached the maximum.

Yes, that may be the solution: look at the index from time to time and
delete the old rows.

Kind regards

Al