Thread: Change request - log line prefix
I am part of a team that fills an operational role administering 1000+ servers and hundreds of applications. Of course we need to "read" all of our logs, and must use computers to help us. In filtering PostgreSQL logs there is one thing that makes life difficult for us admins.

Nice things about the PostgreSQL logs:

- user-definable prefix
- each log line after the prefix contains a log line status such as ERROR: FATAL: LOG: NOTICE: WARNING: STATEMENT:
- the configurable compile-time option to set the wrap column for the log lines

Now for the bad things. Even when the wrap column is set to a very large value (32k), STATEMENT lines still wrap according to the line breaks in the original SQL statement.

- Wrapped lines no longer have the prefix - difficult to grep the log for everything pertaining to a particular database or user.
- Wrapped lines no longer have the log line status - difficult to auto-ignore all NOTICE status log lines when they wrap, or to ignore all user STATEMENT lines, because they almost always wrap.

In conclusion, I would like to see a logging change that includes the prefix on EVERY line, and includes the STATUS on every line.

Comments? If everyone :-) is in agreement, can the authors just "get it done"?

Thanks for your time.

Evan Rempel
Systems Administrator
University of Victoria
Evan Rempel <erempel@uvic.ca> writes:
> Even when the wrap column is set to a very large value (32k) STATEMENT lines still wrap according to the line breaks in the original SQL statement.
> Wrapped lines no longer have the prefix - difficult to grep the log for everything pertaining to a particular database or user.
> Wrapped lines no longer have the log line status - difficult to auto-ignore all NOTICE status log lines when they wrap, or ignore all user STATEMENT lines because they almost always wrap.

I think your life would be better if you used CSV log format.

> In conclusion, I would like to see a logging change that included the prefix on EVERY line, and included the STATUS on every line.

This doesn't really sound like an improvement to me. It's going to make the logs bulkier, but they're still not automatically parseable in any meaningful sense. CSV is the way to go if you want machine-readable logs.

			regards, tom lane
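Tom's suggestion works because csvlog quotes each field, so a multi-line statement stays inside one quoted field and every entry carries all its attributes. A minimal Python sketch of consuming such a log; note the column layout below is abbreviated and illustrative only (the real csvlog format has many more columns and is version-specific):

```python
import csv
import io

# Two illustrative csvlog-style entries. The second contains a
# multi-line statement: the newline lives inside a quoted field,
# so the entry is still a single CSV record.
sample = (
    '2012-05-31 10:00:00 PDT,"app","mydb",12345,"host:5432",LOG,'
    '"statement: SELECT 1"\n'
    '2012-05-31 10:00:01 PDT,"app","mydb",12346,"host:5432",ERROR,'
    '"statement: SELECT *\nFROM t"\n'
)

entries = list(csv.reader(io.StringIO(sample)))
for row in entries:
    user, db, severity, message = row[1], row[2], row[5], row[6]
    # Every entry, wrapped or not, can now be filtered by severity,
    # user, or database.
    print(severity, user, db, message.replace("\n", " "))
```

The point is that filtering by severity or user becomes a field comparison rather than a prefix-regex guess.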
On Thu, May 31, 2012 at 2:05 PM, Evan Rempel <erempel@uvic.ca> wrote:
> Even when the wrap column is set to a very large value (32k) STATEMENT lines still wrap according to the line breaks in the original SQL statement.

The problem isn't so much the wrapping, it seems, as that your statements' line breaks are being propagated through. So as a possible alternative solution, perhaps there could be an option to replace newlines with spaces before the line goes to the log?

ChrisA
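No such server option exists as of this thread, but the transformation Chris describes is easy to apply on the client side or in a log pipeline. A sketch in Python (the function name is made up):

```python
import re

def flatten_statement(sql: str) -> str:
    """Collapse newlines (and any other whitespace runs) to single spaces."""
    return re.sub(r"\s+", " ", sql).strip()

query = """SELECT *
FROM incidents
WHERE state = 'open'"""

print(flatten_statement(query))
# SELECT * FROM incidents WHERE state = 'open'
```

This is lossless for most SQL, though it would mangle string literals that deliberately contain newlines - the "lossy is ok" trade-off mentioned below in the thread.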
Can this be done to syslog destination?

Evan Rempel
Systems Administrator
University of Victoria

On 2012-05-30, at 10:37 PM, "Tom Lane" <tgl@sss.pgh.pa.us> wrote:

> Evan Rempel <erempel@uvic.ca> writes:
>> Even when the wrap column is set to a very large value (32k) STATEMENT lines still wrap according to the line breaks in the original SQL statement.
>> Wrapped lines no longer have the prefix - difficult to grep the log for everything pertaining to a particular database or user.
>> Wrapped lines no longer have the log line status - difficult to auto-ignore all NOTICE status log lines when they wrap, or ignore all user STATEMENT lines because they almost always wrap.
>
> I think your life would be better if you used CSV log format.
>
>> In conclusion, I would like to see a logging change that included the prefix on EVERY line, and included the STATUS on every line.
>
> This doesn't really sound like an improvement to me. It's going to make the logs bulkier, but they're still not automatically parseable in any meaningful sense. CSV is the way to go if you want machine-readable logs.
>
> 			regards, tom lane
On Wed, May 30, 2012 at 09:05:23PM -0700, Evan Rempel wrote:
> I am part of a team that fills an operational role administering 1000+ servers and hundreds of applications. Of course we need to "read" all of our logs, and must use computers to help us. In filtering PostgreSQL logs there is one thing that makes life difficult for us admins.

Consider using pg.grep:
http://www.depesz.com/2012/01/23/some-new-tools-for-postgresql-or-around-postgresql/

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
http://depesz.com/
On Thu, May 31, 2012 at 12:19 PM, Chris Angelico <rosuav@gmail.com> wrote:
> On Thu, May 31, 2012 at 2:05 PM, Evan Rempel <erempel@uvic.ca> wrote:
>> Even when the wrap column is set to a very large value (32k) STATEMENT lines still wrap according to the line breaks in the original SQL statement.
>
> The problem isn't so much the wrapping, it seems, as that your statements' line breaks are being propagated through. So as a possible alternative solution, perhaps there could be an option to replace newlines with spaces before the line goes to the log?

I'd certainly like to see this or similar (encode the queries into a single line of ASCII; lossy is OK). I like my logs both readable and greppable.

-- 
Stuart Bishop <stuart@stuartbishop.net>
http://www.stuartbishop.net/
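Going the other way - repairing wrapped entries while reading - is also workable when the prefix is distinctive. A sketch that folds continuation lines back into the entry they belong to; it assumes a log_line_prefix that starts every real entry with a date, a [pid], and user@database followed by a severity tag, so the regex below is tailored to that assumed prefix rather than being a general solution:

```python
import re

# A line that starts with date, [pid], user@database and a severity tag
# is the start of a new entry; anything else is a wrapped continuation.
ENTRY_START = re.compile(
    r"^\d{4}-\d{2}-\d{2} .*?\[\d+\] \S+@\S+ "
    r"(?:LOG|ERROR|FATAL|NOTICE|WARNING|STATEMENT|DETAIL|HINT):"
)

def join_entries(lines):
    """Fold continuation lines back into the preceding log entry."""
    entries = []
    for line in lines:
        if ENTRY_START.match(line) or not entries:
            entries.append(line.rstrip())
        else:
            entries[-1] += " " + line.strip()
    return entries

log = [
    "2012-05-31 10:00:00 [123] app@mydb STATEMENT: SELECT *",
    "        FROM t",
    "2012-05-31 10:00:01 [123] app@mydb LOG: duration: 1.0 ms",
]
for entry in join_entries(log):
    print(entry)
```

After joining, every line carries the prefix and severity, so grep-by-user and ignore-by-severity both work again.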
I have a project where I will have two clients essentially doing the same things at the same time. The idea is that if one has already done the work, then the second one does not need to do it.

I was hoping that adding a task-related unique identifier to a table could be used to coordinate these clients, something like a primary key and using select for update.

The challenge I have is during the initial insert. One of the two clients will cause PostgreSQL to log an error, which I would rather avoid (just seems dirty).

Here is the timeline:

- Both clients A and B become aware of a task.
- Client A or client B issues the "select for update ... if not exist do insert" type command.
- The other client gets blocked on the "select for update".
- The first client finishes the insert/updates to record that it has dealt with the task.
- The second client gets unblocked and reads the record, realizing that the first client dealt with the task already.

It is the "select for update ... if not exist do insert" type command that I am ignorant of how to code.

Anyone care to school me?

Evan.
On Sat, 9 Jun 2012 15:41:34 -0700 Evan Rempel <erempel@uvic.ca> wrote:

> I have a project where I will have two clients essentially doing the same things at the same time. The idea is that if one has already done the work, then the second one does not need to do it.
>
> I was hoping that adding a task-related unique identifier to a table could be used to coordinate these clients, something like a primary key and using select for update.
>
> The challenge I have is during the initial insert. One of the two clients will cause PostgreSQL to log an error, which I would rather avoid (just seems dirty).
>
> Here is the timeline:
>
> Both clients A and B become aware of a task.
> Client A or client B issues the "select for update ... if not exist do insert" type command.
> The other client gets blocked on the "select for update".
> The first client finishes the insert/updates to record that it has dealt with the task.
> The second client gets unblocked and reads the record, realizing that the first client dealt with the task already.
>
> It is the "select for update ... if not exist do insert" type command that I am ignorant of how to code.
>
> Anyone care to school me?

It's amazing to me how often I have this conversation ...

How would you expect SELECT FOR UPDATE to work when you're checking to see if you can insert a row? If the row doesn't exist, there's nothing to lock against, and thus it doesn't help anything. FOR UPDATE is only useful if you're UPDATING a row.

That being given, there are a number of ways to solve your problem. Which one you use depends on a number of factors.

If it's x number of processes all contending for one piece of work, you could just exclusive lock the entire table, and do the check/insert with the table locked. This essentially creates a wait queue.
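Bill's first approach might be sketched like this (the table and key names are invented for illustration; note that EXCLUSIVE mode still permits plain SELECTs, while ACCESS EXCLUSIVE would block even those):

```sql
BEGIN;
-- Only one worker can hold this lock; the others queue up behind it.
LOCK TABLE tasks IN EXCLUSIVE MODE;

-- With the table locked, the check-then-insert cannot race.
INSERT INTO tasks (task_id, claimed_by)
SELECT 'incident-42', 'worker-a'
WHERE NOT EXISTS (SELECT 1 FROM tasks WHERE task_id = 'incident-42');

COMMIT;  -- releases the lock; a blocked worker now sees the row
```

Since both workers take the same table lock before looking, the second one is guaranteed to see the first one's row once it unblocks.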
If the processes need to coordinate around doing several pieces of work, you can put a row in for each piece of work with a boolean field indicating whether a process is currently working on it. Then you can SELECT FOR UPDATE a particular row representing work to be done, and if the boolean isn't already true, set it to true and start working. In my experience, you'll benefit from going a few steps further and storing some information about what's being done (like the PID of the process working on it, and the time it started processing) -- it just makes problems easier to debug later.

There are other approaches as well, but those are the two that come to mind.

Not sure what your experience level is, but I'll point out that these kinds of things only work well if your transaction management is correct. I have seen people struggle to get these kinds of things working because they didn't really understand how transactions and locking interact, or they were using some sort of abstraction layer that does transaction stuff in such an opaque way that they couldn't figure out what was actually happening.

Hope this helps.

-- 
Bill Moran <wmoran@potentialtech.com>
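The second approach, sketched with an invented work table; pg_backend_pid() and now() record the debugging breadcrumbs recommended above:

```sql
-- One row per piece of work, inserted ahead of time:
-- CREATE TABLE work (
--     id          int PRIMARY KEY,
--     in_progress boolean NOT NULL DEFAULT false,
--     worker_pid  int,
--     started_at  timestamptz
-- );

BEGIN;
-- Lock the row for this piece of work; a second worker blocks here.
SELECT in_progress FROM work WHERE id = 42 FOR UPDATE;

-- If in_progress came back false, claim it, recording who and when:
UPDATE work
   SET in_progress = true,
       worker_pid  = pg_backend_pid(),
       started_at  = now()
 WHERE id = 42
   AND in_progress = false;
COMMIT;
```

The second worker unblocks after the COMMIT, re-reads in_progress as true, and skips the work.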
You will find this a good starting point:

http://www.cs.uiuc.edu/class/fa07/cs411/lectures/cs411-f07-tranmgr-3.pdf

There is no "fits all needs" cookbook for this; you will have to learn the theory of database transaction management and locking mechanisms and work out your own solution.

Wish you the best,

Edson.

Em 09/06/2012 19:41, Evan Rempel escreveu:
> I have a project where I will have two clients essentially doing the same things at the same time. The idea is that if one has already done the work, then the second one does not need to do it.
>
> I was hoping that adding a task-related unique identifier to a table could be used to coordinate these clients, something like a primary key and using select for update.
>
> The challenge I have is during the initial insert. One of the two clients will cause PostgreSQL to log an error, which I would rather avoid (just seems dirty).
>
> Here is the timeline:
>
> Both clients A and B become aware of a task.
> Client A or client B issues the "select for update ... if not exist do insert" type command.
> The other client gets blocked on the "select for update".
> The first client finishes the insert/updates to record that it has dealt with the task.
> The second client gets unblocked and reads the record, realizing that the first client dealt with the task already.
>
> It is the "select for update ... if not exist do insert" type command that I am ignorant of how to code.
>
> Anyone care to school me?
>
> Evan.
> -----Original Message-----
>
> Both clients A and B become aware of a task.

Ideally you would have this awareness manifested as an INSERT into some kind of job table. The clients can then issue the "SELECT FOR UPDATE" + "UPDATE" commands to indicate that they are going to be responsible for said task.

You seem to be combining "something needs to be done" with "I am able to do that something". You may not have a choice depending on your situation, but it is something to think about - how can I focus on implementing just the "something needs to be done" part?

If you want to avoid the errors appearing in the logs or the client, you could wrap the INSERT command in a function and trap the duplicate key exception.

It is hard to give suggestions when you are as vague as "becomes aware of a task". Ideally, even if you have multiple clients monitoring for the "aware state", only one client should ever actually act on that awareness for a given task. In effect you want to serialize the monitoring routine at this level, insert the "something needs to be done" record, then serialize (for update) the "I am able to do that something" action.

David J.
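Trapping the duplicate-key error in a function might look like this (names invented; assumes tasks.task_id has a primary key or unique constraint). An exception caught inside PL/pgSQL is not reported back as an ERROR, though the block does incur the cost of a subtransaction:

```sql
-- Hypothetical wrapper: the caller just learns whether it won the race.
CREATE OR REPLACE FUNCTION claim_task(p_task_id text) RETURNS boolean AS $$
BEGIN
    INSERT INTO tasks (task_id) VALUES (p_task_id);
    RETURN true;   -- we inserted it: this client owns the task
EXCEPTION WHEN unique_violation THEN
    RETURN false;  -- someone else inserted it first
END;
$$ LANGUAGE plpgsql;

-- Usage: SELECT claim_task('incident-42');
```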
Thanks for the input. Dave also replied indicating that without more details it is difficult to really help. I was intentionally vague to see if there was some SQL-standard way, like the MySQL "insert ... on duplicate key update ..." syntax, or the proposed MSSQL MERGE command. Since there isn't, I'll give a lot more detail without writing a book.

We are working on a project where a syslog stream arrives and is parsed in real time. Every log message is considered to be an "event". Events can usually be ignored because they are "normal behaviour". Some events indicate a problem and should create an incident. Repetitions of the same event should not create a new incident if the current incident is not resolved.

For redundancy, two independent systems will consume and process these events.

Now for the part where PostgreSQL comes in.

When an event that should create an incident is encountered, only one incident should be created. The incident details are not known until such time as the event occurs, so no pre-population of tables can occur. So only one of the two servers should perform the insert, and then update it with details of successive events, possibly ticketing system identification, and the date/time of the last pager message that went out.

Once the incident is placed into PostgreSQL, everything is easy: "select for update", then determine all that should take place, like paging, updating tickets, and recording the date/time of the last alert sent to administrators.

It is just that first insert that is the challenge. One system does the insert, only to have the other do the "select for update". I would like to have an insert that is locked AND visible to other sessions.

Exclusive lock on the table is an idea, but it serializes ALL new incident creation, and we only NEED to serialize for the same incident identifier.
Since both (all) of the systems will be processing the live log stream, in all likelihood all of the servers will always be working on the same data and thus the same incident, so they will always be locking the same piece of data anyway, and the full table lock may not be any worse.

I was thinking of using an advisory lock, which would also serialize everything, just like a table lock, but again, that may not be a problem since all processes work on the same data at the same time anyway.

I could also use a custom "my_locks" table that just has rows with unique values that I do a "select for update" on to serialize everything. Again, no functional difference from table or advisory locks.

A follow-on question: is there anything inherently atomic about a stored procedure? Does the stored procedure simply run within the transaction context of where it is called from?

begin transaction
- select storedProc1
- select storedProc2
commit

Would the actions of both stored procedures be a single atomic action?

Thanks again for lending me your experience; it can, and is, saving me days.

Evan.

________________________________________
From: Bill Moran [wmoran@potentialtech.com]
Sent: Saturday, June 09, 2012 4:35 PM
To: Evan Rempel
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] is there a select for update insert if not exist type command?

On Sat, 9 Jun 2012 15:41:34 -0700 Evan Rempel <erempel@uvic.ca> wrote:

> I have a project where I will have two clients essentially doing the same things at the same time. The idea is that if one has already done the work, then the second one does not need to do it.
>
> I was hoping that adding a task-related unique identifier to a table could be used to coordinate these clients, something like a primary key and using select for update.
>
> The challenge I have is during the initial insert. One of the two clients will cause PostgreSQL to log an error, which I would rather avoid (just seems dirty).
>
> Here is the timeline:
>
> Both clients A and B become aware of a task.
> Client A or client B issues the "select for update ... if not exist do insert" type command.
> The other client gets blocked on the "select for update".
> The first client finishes the insert/updates to record that it has dealt with the task.
> The second client gets unblocked and reads the record, realizing that the first client dealt with the task already.
>
> It is the "select for update ... if not exist do insert" type command that I am ignorant of how to code.
>
> Anyone care to school me?

It's amazing to me how often I have this conversation ...

How would you expect SELECT FOR UPDATE to work when you're checking to see if you can insert a row? If the row doesn't exist, there's nothing to lock against, and thus it doesn't help anything. FOR UPDATE is only useful if you're UPDATING a row.

That being given, there are a number of ways to solve your problem. Which one you use depends on a number of factors.

If it's x number of processes all contending for one piece of work, you could just exclusive lock the entire table, and do the check/insert with the table locked. This essentially creates a wait queue.

If the processes need to coordinate around doing several pieces of work, you can put a row in for each piece of work with a boolean field indicating whether a process is currently working on it. Then you can SELECT FOR UPDATE a particular row representing work to be done, and if the boolean isn't already true, set it to true and start working. In my experience, you'll benefit from going a few steps further and storing some information about what's being done (like the PID of the process working on it, and the time it started processing) -- it just makes problems easier to debug later.

There are other approaches as well, but those are the two that come to mind.
Not sure what your experience level is, but I'll point out that these kinds of things only work well if your transaction management is correct. I have seen people struggle to get these kinds of things working because they didn't really understand how transactions and locking interact, or they were using some sort of abstraction layer that does transaction stuff in such an opaque way that they couldn't figure out what was actually happening.

Hope this helps.

-- 
Bill Moran <wmoran@potentialtech.com>
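Evan's advisory-lock idea can in fact serialize per incident key rather than globally, by hashing the key into the advisory-lock keyspace. A sketch using pg_advisory_xact_lock (9.1+; hashtext() is an internal but long-stable function, and the table/key names are invented):

```sql
BEGIN;
-- Blocks until this session holds the lock for this incident key;
-- released automatically at COMMIT/ROLLBACK.
SELECT pg_advisory_xact_lock(hashtext('disk-full:host42'));

-- Only one session per key gets past this point at a time, so the
-- check-then-insert cannot race for the same incident:
INSERT INTO incidents (incident_key)
SELECT 'disk-full:host42'
WHERE NOT EXISTS
      (SELECT 1 FROM incidents WHERE incident_key = 'disk-full:host42');
COMMIT;
```

Sessions working on different incident keys take different locks and do not block each other, unlike a table lock.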
Depending on the version of Pg there are two possible solutions to this problem. The first (old solution), which really only works well one row at a time, is a stored procedure that does something like:

    update foo set bar = baz where id = in_id;
    if not found then
        insert into foo (id, bar) values (in_id, baz);
    end if;

The newer way, which can be done in SQL with Pg 9.1, is to use writable common table expressions. See http://vibhorkumar.wordpress.com/2011/10/26/upsertmerge-using-writable-cte-in-postgresql-9-1/ for an example by Vibhor Kumar.

Best Wishes,
Chris Travers
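For reference, the writable-CTE form of the upsert looks roughly like this (names invented). Note it narrows but does not eliminate the race: under concurrency two sessions can both find no row, and the loser still hits a unique violation if the table has a unique constraint:

```sql
-- Writable-CTE upsert (PostgreSQL 9.1+): try the UPDATE first, and
-- INSERT only when it touched no row.
WITH upsert AS (
    UPDATE foo SET bar = 'baz' WHERE id = 1
    RETURNING id
)
INSERT INTO foo (id, bar)
SELECT 1, 'baz'
WHERE NOT EXISTS (SELECT 1 FROM upsert);
```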
One possible strategy that comes to my mind is:

1) Log your syslog stream into the PostgreSQL database (no need to record the whole message; just a simple table with the event key and a "processed" flag field).

2) When a problem event arrives, the first server should "select for update" the row for that syslog event, then update the "processed" field (from false to true, something like that). When the second server tries to "select for update" the same row in the events table, it will block (or fail immediately with NOWAIT), and it can then move ahead in the log looking for other events that need attention.

I cannot say how adversely this strategy would affect your system (or whether there is other contention involved, like page locks), but it seems very logical to me. I have used this to distribute processing across users' desktops for a massive processing system with success (but using MySQL for storing data).

Regards,

Edson

Em 09/06/2012 23:40, Evan Rempel escreveu:
> Thanks for the input. Dave also replied indicating that without more details it is difficult to really help. I was intentionally vague to see if there was some SQL-standard way, like the MySQL "insert ... on duplicate key update ..." syntax, or the proposed MSSQL MERGE command.
>
> Since there isn't, I'll give a lot more detail without writing a book.
>
> We are working on a project where a syslog stream arrives and is parsed in real time. Every log message is considered to be an "event". Events can usually be ignored because they are "normal behaviour". Some events indicate a problem and should create an incident. Repetitions of the same event should not create a new incident if the current incident is not resolved.
>
> For redundancy, two independent systems will consume and process these events.
>
> Now for the part where PostgreSQL comes in.
>
> When an event that should create an incident is encountered, only one incident should be created.
> The incident details are not known until such time as the event occurs, so no pre-population of tables can occur. So only one of the two servers should perform the insert, and then update it with details of successive events, possibly ticketing system identification, and the date/time of the last pager message that went out.
>
> Once the incident is placed into PostgreSQL, everything is easy: "select for update", then determine all that should take place, like paging, updating tickets, and recording the date/time of the last alert sent to administrators.
>
> It is just that first insert that is the challenge. One system does the insert, only to have the other do the "select for update". I would like to have an insert that is locked AND visible to other sessions.
>
> Exclusive lock on the table is an idea, but it serializes ALL new incident creation, and we only NEED to serialize for the same incident identifier. Since both (all) of the systems will be processing the live log stream, in all likelihood all of the servers will always be working on the same data and thus the same incident, so they will always be locking the same piece of data anyway, and the full table lock may not be any worse.
>
> I was thinking of using an advisory lock, which would also serialize everything, just like a table lock, but again, that may not be a problem since all processes work on the same data at the same time anyway.
>
> I could also use a custom "my_locks" table that just has rows with unique values that I do a "select for update" on to serialize everything. Again, no functional difference from table or advisory locks.
>
> A follow-on question: is there anything inherently atomic about a stored procedure? Does the stored procedure simply run within the transaction context of where it is called from?
>
> begin transaction
> - select storedProc1
> - select storedProc2
> commit
>
> Would the actions of both stored procedures be a single atomic action?
>
> Thanks again for lending me your experience; it can, and is, saving me days.
>
> Evan.
>
> ________________________________________
> From: Bill Moran [wmoran@potentialtech.com]
> Sent: Saturday, June 09, 2012 4:35 PM
> To: Evan Rempel
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] is there a select for update insert if not exist type command?
>
> On Sat, 9 Jun 2012 15:41:34 -0700 Evan Rempel <erempel@uvic.ca> wrote:
>
>> I have a project where I will have two clients essentially doing the same things at the same time. The idea is that if one has already done the work, then the second one does not need to do it.
>>
>> I was hoping that adding a task-related unique identifier to a table could be used to coordinate these clients, something like a primary key and using select for update.
>>
>> The challenge I have is during the initial insert. One of the two clients will cause PostgreSQL to log an error, which I would rather avoid (just seems dirty).
>>
>> Here is the timeline:
>>
>> Both clients A and B become aware of a task.
>> Client A or client B issues the "select for update ... if not exist do insert" type command.
>> The other client gets blocked on the "select for update".
>> The first client finishes the insert/updates to record that it has dealt with the task.
>> The second client gets unblocked and reads the record, realizing that the first client dealt with the task already.
>>
>> It is the "select for update ... if not exist do insert" type command that I am ignorant of how to code.
>>
>> Anyone care to school me?
>
> It's amazing to me how often I have this conversation ...
>
> How would you expect SELECT FOR UPDATE to work when you're checking to see if you can insert a row? If the row doesn't exist, there's nothing to lock against, and thus it doesn't help anything. FOR UPDATE is only useful if you're UPDATING a row.
>
> That being given, there are a number of ways to solve your problem.
> Which one you use depends on a number of factors.
>
> If it's x number of processes all contending for one piece of work, you could just exclusive lock the entire table, and do the check/insert with the table locked. This essentially creates a wait queue.
>
> If the processes need to coordinate around doing several pieces of work, you can put a row in for each piece of work with a boolean field indicating whether a process is currently working on it. Then you can SELECT FOR UPDATE a particular row representing work to be done, and if the boolean isn't already true, set it to true and start working. In my experience, you'll benefit from going a few steps further and storing some information about what's being done (like the PID of the process working on it, and the time it started processing) -- it just makes problems easier to debug later.
>
> There are other approaches as well, but those are the two that come to mind.
>
> Not sure what your experience level is, but I'll point out that these kinds of things only work well if your transaction management is correct. I have seen people struggle to get these kinds of things working because they didn't really understand how transactions and locking interact, or they were using some sort of abstraction layer that does transaction stuff in such an opaque way that they couldn't figure out what was actually happening.
>
> Hope this helps.
>
> --
> Bill Moran <wmoran@potentialtech.com>