Thread: Plans for 8.2?
Hi,

I wonder what features other users would like to see in the next version (8.2)? The features I'd (very much) like to see implemented are:

- "Top offender statistics"
In other DBMSes there are functions that let you turn on a kind of monitoring that can tell you, for a period of time:
1) which queries generated the most I/O
2) which queries took the longest to execute
3) the percentage of total time spent on I/O wait, CPU time, etc.

Statistics like these can be very helpful when it comes to identifying application bottlenecks / areas for improvement.

- Materialized views
What's the status of the matview project?

- "Built-in" failover/clustering
There are lots of projects supplying replication/failover/clustering functionality (Slony-I, pgcluster, pgpool, etc.), but these are "non-official" in that they do not provide out-of-the-box functionality (i.e. they are not bundled with a default Pg install).

- Analytic/window functions
In DBMSes such as Oracle there are rank(), dense_rank(), lag(), lead(), etc. These functions may(?) be possible to implement using existing/new aggregates (a rough emulation is sketched below).

Comments?

/Mikael
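For the last point, a minimal sketch of emulating rank() with a correlated subquery in today's SQL, ties sharing a rank as rank() would give them; the table and column names here are hypothetical, purely for illustration:

    -- Rank employees by salary without window functions.
    SELECT s1.employee, s1.salary,
           (SELECT count(*) + 1
            FROM salaries s2
            WHERE s2.salary > s1.salary) AS rank
    FROM salaries s1
    ORDER BY rank, s1.employee;

This is O(n^2) in the worst case, which is part of why real window functions would be welcome.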
Mikael Carneholm wrote:
> Hi,
>
> I wonder what features other users would like to see in the next version (8.2)? The features I'd (very much) like to see implemented are:
>
> - "Top offender statistics"
> In other DBMSes there are functions that let you turn on a kind of monitoring that can tell you, for a period of time:
> 1) which queries generated the most I/O
> 2) which queries took the longest to execute
> 3) the percentage of total time spent on I/O wait, CPU time, etc.
>
> Statistics like these can be very helpful when it comes to identifying application bottlenecks / areas for improvement.
>
> - Materialized views
> What's the status of the matview project?
>
> - "Built-in" failover/clustering
> There are lots of projects supplying replication/failover/clustering functionality (Slony-I, pgcluster, pgpool, etc.), but these are "non-official" in that they do not provide out-of-the-box functionality (i.e. they are not bundled with a default Pg install).
>
> - Analytic/window functions
> In DBMSes such as Oracle there are rank(), dense_rank(), lag(), lead(), etc. These functions may(?) be possible to implement using existing/new aggregates.
>
> Comments?
>
> /Mikael

You should probably review the archives on a lot of these topics. Many of them have been discussed to death :).

In terms of statistics, we do have statistics and exhaustive logging that can provide you with all of that information. Is there something specific that the information already provided really doesn't give you?

Materialized views we have, in a sense, but much like updateable views you have to code for it.

"Built-in" failover/clustering: this won't happen. The community stance, which is a good one, is that no single replication solution fits everyone's needs, and therefore we rely on outside sources, Slony-I, Mammoth Replicator and pgpool being the most popular.

Analytic/window functions: get coding, man! :) We would love to have them.

Sincerely,

Joshua D. Drake

--
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: PLphp, PLperl - http://www.commandprompt.com/
> In terms of statistics we do have statistics and exhaustive logging that
> can provide you with all of that information. Is there something specific that
> the information already provided really doesn't give you?

Can you give an example query for "list all queries executed since 12.00 AM, order by block_reads desc"? What I'm aiming for is the ability to turn "measuring" on, regression test my application, turn "measuring" off again, and list the most offensive queries executed during the regression test. I know of at least one other DBMS that is capable of this... won't mention which one :)

> Materialized views we have, in a sense, but much like updateable views
> you have to code for it.

I think what I'm looking for is MV support out of the box, i.e. "CREATE MATERIALIZED VIEW foo_mv AS .."

> "Built-in" failover/clustering: this won't happen. The community stance,
> which is a good one, is that no single replication solution fits everyone's
> needs, and therefore we rely on outside sources, Slony-I, Mammoth Replicator
> and pgpool being the most popular.

Too bad - I think that will keep a lot of potential users from evaluating Pg as a serious alternative. Good or bad, decide for yourself :)

> Analytic/window functions: get coding, man! :) We would love to have them.

Sure - as soon as I'm finished with my 4 other hobby projects. Too many ideas, too little time.. *sigh* :/

/Mikael
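A minimal sketch of the "code for it" approach, standing in for the requested CREATE MATERIALIZED VIEW syntax; the table and column names are hypothetical and the refresh is a full rebuild:

    -- Build the "materialized view" as an ordinary table holding a snapshot of the query.
    CREATE TABLE foo_mv AS
        SELECT customer_id, sum(amount) AS total_amount
        FROM orders
        GROUP BY customer_id;

    -- Refresh on demand by rebuilding the snapshot inside one transaction.
    BEGIN;
    DELETE FROM foo_mv;
    INSERT INTO foo_mv
        SELECT customer_id, sum(amount) AS total_amount
        FROM orders
        GROUP BY customer_id;
    COMMIT;

Incremental, trigger-maintained variants are also possible, but the snapshot approach above is the simplest hand-rolled substitute.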
From the ToDo list... http://www.postgresql.org/docs/faqs.TODO.html

* Allow administrators to safely terminate individual sessions either via an SQL function or SIGTERM
* Add SQL99 WITH clause to SELECT
* Add SQL99 WITH RECURSIVE to SELECT
* Create a bitmap of pages that need vacuuming

--
output = reverse("moc.enworbbc" "@" "enworbbc")
http://cbbrowne.com/info/languages.html
Rules of the Evil Overlord #86. "I will make sure that my doomsday device is up to code and properly grounded." <http://www.eviloverlord.com/>
Mikael Carneholm wrote:
>> In terms of statistics we do have statistics and exhaustive logging that
>> can provide you with all of that information. Is there something specific that
>> the information already provided really doesn't give you?
>
> Can you give an example query for "list all queries executed since 12.00 AM,
> order by block_reads desc"? What I'm aiming for is the ability to turn
> "measuring" on, regression test my application, turn "measuring" off again,
> and list the most offensive queries executed during the regression test.
> I know of at least one other DBMS that is capable of this... won't mention
> which one :)

You can use timestamps and one of the duration logging options for this.

>> "Built-in" failover/clustering: this won't happen. The community stance,
>> which is a good one, is that no single replication solution fits everyone's
>> needs, and therefore we rely on outside sources, Slony-I, Mammoth Replicator
>> and pgpool being the most popular.
>
> Too bad - I think that will keep a lot of potential users from evaluating Pg
> as a serious alternative. Good or bad, decide for yourself :)

Although that is one thought, the project is fairly good at supporting the various solutions in terms of where and how they exist. Most people who are going to seriously use PostgreSQL are going to do the very easy research to get the right answer they need.

>> Get coding, man! :) We would love to have them.
>
> Sure - as soon as I'm finished with my 4 other hobby projects. Too many ideas, too little time.. *sigh* :/

Perhaps you should drop the other 4 and concentrate on the important one ;)

Joshua D. Drake

--
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/
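A minimal sketch of that duration-logging approach, assuming superuser access (the setting is superuser-only) and the stock 8.1 parameters; the log then records each statement together with its elapsed time, which can be sorted after the test run:

    -- Log every statement with its duration (value is a threshold in milliseconds;
    -- 0 means log everything).
    SET log_min_duration_statement = 0;

    -- ... run the regression test here ...

    -- Turn duration logging back off.
    SET log_min_duration_statement = -1;

This gives per-query timings, though not per-query I/O counters, which is the gap discussed later in the thread.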
Chris Browne wrote:
> From the ToDo list...
>
> http://www.postgresql.org/docs/faqs.TODO.html
>
> * Allow administrators to safely terminate individual sessions either via an SQL function or SIGTERM

I thought this already works? At least that's what I do when I need to... (SIGTERM)
Tino Wildenhain <tino@wildenhain.de> writes:
>> * Allow administrators to safely terminate individual sessions either via an SQL function or SIGTERM

> I thought this already works? At least that's what I do when I need to... (SIGTERM)

The key word there is "safely". We don't have a lot of trust in SIGTERM'ing individual backends (as opposed to shutting down the whole cluster at once, which is a well-tested code path). See the archives.

regards, tom lane
On Jan 12, 2006, at 1:36 PM, Joshua D. Drake wrote:
>>> "Built-in" failover/clustering: this won't happen. The community stance,
>>> which is a good one, is that no single replication solution fits everyone's
>>> needs, and therefore we rely on outside sources, Slony-I, Mammoth Replicator
>>> and pgpool being the most popular.
>> Too bad - I think that will keep a lot of potential users from
>> evaluating Pg as a serious alternative. Good or bad, decide for
>> yourself :)

Isn't [expensive db name here]'s replication/failover just an expensive add-on? As in, if you don't pay for it you don't get it.

So we're basically in the same boat as them... just an add-on; we just offer more variety.

--
Jeff Trout <jeff@jefftrout.com>
http://www.jefftrout.com/
http://www.stuarthamm.net/
Jeff Trout <threshar@torgo.978.org> writes:
> Isn't [expensive db name here]'s replication/failover just an expensive add-on?
> As in, if you don't pay for it you don't get it.
>
> So we're basically in the same boat as them... just an add-on; we just offer more variety.

Well, [cheap and crappy open-source db name here]'s replication is built in, but we already know we don't want to take them as an example. :)

-Doug
On 1/12/06, Jeff Trout <threshar@torgo.978.org> wrote:
>
> On Jan 12, 2006, at 1:36 PM, Joshua D. Drake wrote:
>
> >>> "Built-in" failover/clustering: this won't happen. The community stance,
> >>> which is a good one, is that no single replication solution fits everyone's
> >>> needs, and therefore we rely on outside sources, Slony-I, Mammoth Replicator
> >>> and pgpool being the most popular.
> >> Too bad - I think that will keep a lot of potential users from
> >> evaluating Pg as a serious alternative. Good or bad, decide for
> >> yourself :)
>
> Isn't [expensive db name here]'s replication/failover just an expensive add-on?
> As in, if you don't pay for it you don't get it.
>
> So we're basically in the same boat as them... just an add-on; we just offer more variety.

Not really. The entire company of [expensive DB name here] is at the end of the phone[0].

Taking Oracle as an example, I am not aware of Oracle etc. having a separate company that sells replication on top of their database, although I could be wrong. The other thing is that Oracle is supported by various platforms etc., and that support will include their replication or clustering offering. Sun has offered to support PostgreSQL just recently, but have they offered to support any of the replication offerings?

I would hardly say we are in the same boat just because we have bolt-on replication.

--
Harry
http://www.hjackson.org
http://www.uklug.co.uk

[0] I am not suggesting that this improves support, although it does improve the appearance of support.
Mikael.Carneholm@WirelessCar.com ("Mikael Carneholm") writes:
>> "Built-in" failover/clustering: this won't happen. The community stance,
>> which is a good one, is that no single replication solution fits everyone's
>> needs, and therefore we rely on outside sources, Slony-I, Mammoth Replicator
>> and pgpool being the most popular.
>
> Too bad - I think that will keep a lot of potential users from
> evaluating Pg as a serious alternative. Good or bad, decide for
> yourself :)

Why on earth should that be?

What serious alternative to PostgreSQL actually includes built-in failover or clustering?

For Oracle, it is a separate add-on product licensed separately. Ditto for DB2. The same is likely the case for Informix and others.

--
(format nil "~S@~S" "cbbrowne" "acm.org") http://cbbrowne.com/info/x.html
"Let's face it -- ASCII text is a far richer medium than most of us deserve." -- Scott McNealy
"Tom Lane" <tgl@sss.pgh.pa.us> wrote > > The key word there is "safely". We don't have a lot of trust in > SIGTERM'ing individual backends (as opposed to shutting down the > whole cluster at once, which is a well-tested code path). See the > archives. > Maybe related question: is the code below in XactLockTableWait() related to SIGQUIT? /* * Transaction was committed/aborted/crashed - we have to update pg_clog * if transaction is still marked as running. */ if (!TransactionIdDidCommit(xid) && !TransactionIdDidAbort(xid)) TransactionIdAbort(xid); I interpret that if a quickdie or crash happens, then other backends may still run for a while, so it is important to mark related transaction abort. Or there is some other more obvious reason for that? Regards, Qingqing
"Qingqing Zhou" <zhouqq@cs.toronto.edu> writes: > Maybe related question: is the code below in XactLockTableWait() related to > SIGQUIT? No. > /* > * Transaction was committed/aborted/crashed - we have to update pg_clog > * if transaction is still marked as running. > */ > if (!TransactionIdDidCommit(xid) && !TransactionIdDidAbort(xid)) > TransactionIdAbort(xid); The comment's "have to" is an overstatement. The transaction would be treated as crashed anyway, it's just that this is a convenient place to make pg_clog a bit cleaner. I'm not real sure why we bother, actually. regards, tom lane
Hello.

Just one request that would make the transition from another great database to PostgreSQL a lot easier:

SET LOCK MODE TO WAIT n

where n is the maximum time in seconds to wait. Please? My Christmas present??

For Christmas 2007 I would like: statistics about how many sequential scans there have been for a given table.

Regards

Henk Sanders
On Fri, Jan 13, 2006 at 09:30:22AM +0100, H.J. Sanders wrote:
> Just one request that would make the transition from another
> great database to PostgreSQL a lot easier:
>
> SET LOCK MODE TO WAIT n
>
> where n is the maximum time in seconds to wait.

Will statement_timeout suffice?

http://www.postgresql.org/docs/8.1/interactive/runtime-config-client.html

> For Christmas 2007 I would like: statistics about how many sequential
> scans there have been for a given table.

Is pg_stat_{all,sys,user}_tables.seq_scan not what you're looking for?

http://www.postgresql.org/docs/8.1/interactive/monitoring-stats.html

--
Michael Fuhr
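A minimal sketch of both suggestions; note that statement_timeout bounds the total run time of a statement, not just its lock wait, so it only approximates SET LOCK MODE TO WAIT, and the threshold chosen here is arbitrary:

    -- Abort any statement in this session, including one stuck waiting on a lock,
    -- after 5 seconds (the setting is in milliseconds).
    SET statement_timeout = 5000;

    -- Per-table sequential scan counts from the statistics collector.
    SELECT relname, seq_scan, seq_tup_read
    FROM pg_stat_user_tables
    ORDER BY seq_scan DESC;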
On Fri, Jan 13, 2006 at 01:49:02AM -0700, Michael Fuhr wrote:
> On Fri, Jan 13, 2006 at 09:30:22AM +0100, H.J. Sanders wrote:
> > Just one request that would make the transition from another
> > great database to PostgreSQL a lot easier:
> >
> > SET LOCK MODE TO WAIT n
> >
> > where n is the maximum time in seconds to wait.
>
> Will statement_timeout suffice?

(I'm not implying that statement_timeout is equivalent, I'm just wondering if you might be able to use it in certain circumstances.)

--
Michael Fuhr
On Thu, Jan 12, 2006 at 07:46:18PM -0500, Tom Lane wrote:
> >     /*
> >      * Transaction was committed/aborted/crashed - we have to update pg_clog
> >      * if transaction is still marked as running.
> >      */
> >     if (!TransactionIdDidCommit(xid) && !TransactionIdDidAbort(xid))
> >         TransactionIdAbort(xid);
>
> The comment's "have to" is an overstatement. The transaction would be
> treated as crashed anyway; it's just that this is a convenient place to
> make pg_clog a bit cleaner. I'm not real sure why we bother, actually.

Because that's what makes PostgreSQL such a reliable product. You follow your intuition and taste and bother doing cleanup even if you cannot immediately tell whether it's *really* needed. Better safe than sorry. Not a bad idea for a database.

Karsten
(who is storing clinical data in PostgreSQL)

--
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346
In article <45b42ce40601121233h481a36a4ieafeba4104dd33b7@mail.gmail.com>, Harry Jackson <harryjackson@gmail.com> wrote:

% I am not aware of Oracle etc. having a separate company that sells
% replication on top of their database, although I could be wrong.

There's more than one third-party replication offering for Oracle.

--
Patrick TJ McPhee
North York Canada
ptjm@interlog.com
Jeff Trout wrote:
>>>> "Built-in" failover/clustering: this won't happen. The community stance,
>>>> which is a good one, is that no single replication solution fits everyone's
>>>> needs, and therefore we rely on outside sources, Slony-I, Mammoth Replicator
>>>> and pgpool being the most popular.
>>> Too bad - I think that will keep a lot of potential users from
>>> evaluating Pg as a serious alternative. Good or bad, decide for
>>> yourself :)
>
> Isn't [expensive db name here]'s replication/failover just an expensive add-on?
> As in, if you don't pay for it you don't get it.
>
> So we're basically in the same boat as them... just an add-on; we just offer more variety.

Not really. The available options for PostgreSQL are simply not as good as what the big databases offer. For some problems the non-transactional master/slave Slony-I is good enough. But to claim it is good enough for all is like when MySQL claimed nobody really needs transactions.

I am a big PostgreSQL fan, and I have several production clusters using DRBD to replicate PostgreSQL databases in an active/failover configuration. But some day I am going to need a cluster that can do active/active, and that day I will be forced to adopt a different database.

I will also point out that none of the replication solutions have the same solid reputation as PostgreSQL. As long as the PostgreSQL team will not endorse a replication solution, you cannot expect people to put the same trust in these solutions as we put in PostgreSQL itself. Oracle does endorse their own replication solution, after all.

Baldur
On Thu, Jan 12, 2006 at 10:40:40PM +0100, Baldur Norddahl wrote:
> I will also point out that none of the replication solutions have the
> same solid reputation as PostgreSQL. As long as the PostgreSQL team will
> not endorse a replication solution, you cannot expect people to put the
> same trust in these solutions as we put in PostgreSQL itself.

So you're saying that unless PostgreSQL Core (which I assume you're referring to with "PostgreSQL team") endorses a pile of code that they neither wrote, audited nor have any experience with, it won't be good enough for you? I'd rather they didn't endorse anything they weren't sure of.

Replication is hard. There are many replication solutions for Postgres, both multi-master and master/slave, and sync/async. I'd rather these products prove themselves than have anyone stamp them Endorsed.

> Oracle does endorse their own replication solution, after all.

I suppose they had a hand in writing it too...

Have a nice day,

--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.
> Not really. The available options for PostgreSQL are simply not as
> good as what the big databases offer. For some problems the
> non-transactional master/slave Slony-I is good enough. But to claim it
> is good enough for all is like when MySQL claimed nobody really needs
> transactions.

Nobody claimed Slony-I is good for all. That is why there are other products out there: Mammoth Replicator (blatant plug; it is transaction-log-based replication), for example.

> I am a big PostgreSQL fan, and I have several production clusters
> using DRBD to replicate PostgreSQL databases in an active/failover
> configuration. But some day I am going to need a cluster that can do
> active/active, and that day I will be forced to adopt a different
> database.

Active/active as in multi-master?

> I will also point out that none of the replication solutions have the
> same solid reputation as PostgreSQL. As long as the PostgreSQL team will
> not endorse a replication solution, you cannot expect people to put
> the same trust in these solutions as we put in PostgreSQL itself.

That's true enough.

> Oracle does endorse their own replication solution, after all.

Well, they wrote it; they have to.

Joshua D. Drake

--
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: PLphp, PLperl - http://www.commandprompt.com/
Karsten Hilbert <Karsten.Hilbert@gmx.net> writes:
> On Thu, Jan 12, 2006 at 07:46:18PM -0500, Tom Lane wrote:
>> The comment's "have to" is an overstatement. The transaction would be
>> treated as crashed anyway; it's just that this is a convenient place to
>> make pg_clog a bit cleaner. I'm not real sure why we bother, actually.

> Because that's what makes PostgreSQL such a reliable
> product. You follow your intuition and taste and bother
> doing cleanup even if you cannot immediately tell whether
> it's *really* needed.

Well, this is not a correctness issue. If it were, XactLockTableWait would be quite the wrong place for it, because there's no guarantee anyone will ever do XactLockTableWait for any particular transaction ID.

In fact, now that I look at it, I'm pretty well convinced this is dead code: we only ever wait for XIDs that are known to have been alive in the recent past, i.e. within the lifespan of the current backend. So the case should never occur ... unless possibly in the microseconds before we receive SIGQUIT from the postmaster because the other guy crashed, and in that situation trying to issue a write on pg_clog is probably not the brightest action to be taking anyway. This could easily represent a net minus for reliability, not a plus, if it increases the risk of pg_clog getting corrupted during a crash sequence.

regards, tom lane
> Active/active as in multi-master?

More like an Oracle grid system, 10g.

--
Aly S.P Dharshi
aly.dharshi@telus.net

"A good speech is like a good dress that's short enough to be interesting and long enough to cover the subject"
Aly Dharshi wrote:
>> Active/active as in multi-master?
>
> More like an Oracle grid system, 10g.

Heh. Well, here is the thing: Oracle has billions of dollars. Plus:

1. We really don't compete against Oracle grid. The people who *need* Oracle grid are going to buy Oracle grid.

2. We don't need to compete against Oracle grid, as nobody else that could be considered competition has it either.

3. We will get there in time, but we have more important things to worry about. I would rather us be a 100% solution for 98% of the people than a 100% solution for 2%.

Sincerely,

Joshua D. Drake

--
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/
On Thu, Jan 12, 2006 at 10:36:39AM -0800, Joshua D. Drake wrote:
> Mikael Carneholm wrote:
> >> In terms of statistics we do have statistics and exhaustive logging that
> >> can provide you with all of that information. Is there something specific that
> >> the information already provided really doesn't give you?
> >
> > Can you give an example query for "list all queries executed since 12.00
> > AM, order by block_reads desc"? What I'm aiming for
> > is the ability to turn "measuring" on, regression test my application,
> > turn "measuring" off again, and list the most offensive queries executed
> > during the regression test. I know of at least one other DBMS that is
> > capable of this... won't mention which one :)
>
> You can use timestamps and one of the duration logging options for this.

But AFAIK that won't provide any information on I/O used, or even blocks read. Not to mention that parsing the logs is a PITA. Plain and simple, it would be damn nice if query execution stats could be easily logged to a table.

--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
On Fri, Jan 13, 2006 at 01:59:19AM -0700, Michael Fuhr wrote:
> On Fri, Jan 13, 2006 at 01:49:02AM -0700, Michael Fuhr wrote:
> > On Fri, Jan 13, 2006 at 09:30:22AM +0100, H.J. Sanders wrote:
> > > Just one request that would make the transition from another
> > > great database to PostgreSQL a lot easier:
> > >
> > > SET LOCK MODE TO WAIT n
> > >
> > > where n is the maximum time in seconds to wait.
> >
> > Will statement_timeout suffice?
>
> (I'm not implying that statement_timeout is equivalent, I'm just
> wondering if you might be able to use it in certain circumstances.)

It strikes me that if we had a way to abort a statement on another backend, you could abort anything that's been waiting more than x seconds for a lock via an external process watching pg_locks. Of course, that would be much more cumbersome than SET LOCK MODE TO WAIT n...

--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
On 1/13/06, Jim C. Nasby <jnasby@pervasive.com> wrote:
> On Thu, Jan 12, 2006 at 10:36:39AM -0800, Joshua D. Drake wrote:
> > Mikael Carneholm wrote:
> > >> In terms of statistics we do have statistics and exhaustive logging that
> > >> can provide you with all of that information. Is there something specific that
> > >> the information already provided really doesn't give you?
> > >
> > > Can you give an example query for "list all queries executed since 12.00
> > > AM, order by block_reads desc"? What I'm aiming for
> > > is the ability to turn "measuring" on, regression test my application,
> > > turn "measuring" off again, and list the most offensive queries executed
> > > during the regression test. I know of at least one other DBMS that is
> > > capable of this... won't mention which one :)
> >
> > You can use timestamps and one of the duration logging options for this.
>
> But AFAIK that won't provide any information on I/O used, or even blocks
> read. Not to mention that parsing the logs is a PITA.

There has been an announcement recently about a project for analyzing slow queries:

http://archives.postgresql.org/pgsql-announce/2006-01/msg00007.php

--
regards,
Jaime Casanova
(DBA: DataBase Aniquilator ;)
On Fri, Jan 13, 2006 at 03:40:48PM -0600, Jim C. Nasby wrote:
> It strikes me that if we had a way to abort a statement on another
> backend, you could abort anything that's been waiting more than x
> seconds for a lock via an external process watching pg_locks. Of course,
> that would be much more cumbersome than SET LOCK MODE TO WAIT n...

There's pg_cancel_backend(), but a mechanism like you describe seems subject to race conditions: by the time you decide to cancel a query and send the signal, the offending query might have completed and you end up cancelling some subsequent query that just started.

--
Michael Fuhr
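A minimal sketch of the external-watcher approach being discussed, using the 8.1 system views and subject to exactly that race condition; the pid passed to the cancel call is a made-up example, taken from whatever the first query returns:

    -- Find backends currently blocked waiting for a lock, with what they are running
    -- (current_query/query_start need stats_command_string enabled).
    SELECT l.pid, a.current_query, a.query_start
    FROM pg_locks l
    JOIN pg_stat_activity a ON a.procpid = l.pid
    WHERE NOT l.granted;

    -- Cancel the current query on one of those backends (pid observed above).
    SELECT pg_cancel_backend(12345);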
On Fri, Jan 13, 2006 at 02:59:47PM -0700, Michael Fuhr wrote:
> On Fri, Jan 13, 2006 at 03:40:48PM -0600, Jim C. Nasby wrote:
> > It strikes me that if we had a way to abort a statement on another
> > backend, you could abort anything that's been waiting more than x
> > seconds for a lock via an external process watching pg_locks. Of course,
> > that would be much more cumbersome than SET LOCK MODE TO WAIT n...
>
> There's pg_cancel_backend(), but a mechanism like you describe seems
> subject to race conditions: by the time you decide to cancel a query
> and send the signal, the offending query might have completed and
> you end up cancelling some subsequent query that just started.

Yeah, to make it usable you'd probably need to do something like expose cid via pg_locks and have a pg_cancel_backend(cid) that would only cancel that command.

--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
On Fri, Jan 13, 2006 at 04:49:02PM -0500, Jaime Casanova wrote:
> There has been an announcement recently about a project for analyzing slow queries:
> http://archives.postgresql.org/pgsql-announce/2006-01/msg00007.php

Sure, but it still means installing an external tool, which requires PHP, which isn't trivial to install in its own right. And AFAIK there's still no way to find out how much I/O each query did, how much CPU was spent, whether any sorts overflowed, etc., etc.

--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461