Thread: Is PostgreSQL suitable for this?
Hi everybody, I'm looking for a database system for a SCADA system. The main problem, I think, is performance: the application is going to poll about 4k variables per second from hardware and has to record the values in the process table. I heard that PostgreSQL provides a bulk loading mechanism called COPY, which takes tab-delimited or CSV input from a file. Where COPY can be used instead of hundreds or thousands of INSERTs, it can cut execution time. I'm less than a novice, so I'll be grateful for any piece of advice. Sorry for my poor English. F.
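[Editor's note: a minimal sketch of the INSERT-vs-COPY difference being asked about, assuming a hypothetical table process_values(ts, tag_id, value) and a samples.csv file written by the polling application; neither name is from the thread.]

    -- one row per statement: thousands of round trips per second
    INSERT INTO process_values (ts, tag_id, value) VALUES (now(), 101, 23.7);

    -- the same rows loaded in one COPY from a CSV file
    COPY process_values (ts, tag_id, value) FROM '/path/to/samples.csv' WITH CSV;

Note that COPY ... FROM 'file' reads the file on the database server; from psql, the client-side \copy variant reads the file on the machine where the client runs.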
> I'm looking for a database system for a SCADA system. The main
> problem, I think, is performance: the application is going to
> poll about 4k variables per second from hardware and has to record
> the values in the process table.
> I heard that PostgreSQL provides a bulk loading mechanism called COPY,
> which takes tab-delimited or CSV input from a file. Where COPY can be
> used instead of hundreds or thousands of INSERTs, it can cut execution
> time.

I think PostgreSQL can do what you want, provided you have hardware that can support 4K+ writes/second. However, if you followed the model that PIE uses, you could really relieve the write load on the database, since they only write a 15-second rolling average of the process values every minute.

A good example of what not to do is how Siemens PCS7 implemented MS SQL Server as its backend historian. They created a new database every week to hold trended data. The problem they faced was that if you requested data spanning multiple weeks from their front end, they would sometimes lose weeks of data in the trend. The flaw in their design was that (for some reason) they would write data to the wrong database segment.

Regards,
Richard Broersma Jr.
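[Editor's note: a rough sketch of the averaging idea Richard describes, assuming hypothetical tables raw_samples(ts, tag_id, value) for the incoming polls and process_history(ts, tag_id, avg_value) for the historian; the table names and schedule are illustrative, not from the thread.]

    -- run once a minute: store one averaged row per tag
    -- (average of the last 15 seconds of raw samples) instead of raw rows
    INSERT INTO process_history (ts, tag_id, avg_value)
    SELECT date_trunc('minute', now()), tag_id, avg(value)
    FROM raw_samples
    WHERE ts >= now() - interval '15 seconds'
    GROUP BY tag_id;

With ~4k tags this turns roughly 240k raw writes per minute into about 4k historian rows per minute, at the cost of keeping only averaged history.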
On 12/27/06 11:59, Richard Broersma Jr wrote:
>> I'm looking for a database system for a SCADA system. The main
>> problem, I think, is performance: the application is going to
>> poll about 4k variables per second from hardware and has to record
>> the values in the process table. I heard that PostgreSQL provides
>> a bulk loading mechanism called COPY, which takes tab-delimited or
>> CSV input from a file. Where COPY can be used instead of hundreds
>> or thousands of INSERTs, it can cut execution time.
>
> I think PostgreSQL can do what you want, provided you have hardware
> that can support 4K+ writes/second. However, if you

As others have mentioned, DP variables are 8 bytes. If metadata is 16 bytes, that's a total of (8+16)*4096 = 96 KB/s. Trivial for even the slowest desktop hard disks.

Let's then take the CPU. A dual-core 2 GHz Opteron has 4 gigacycles per second. That gives a dedicated machine 1 megacycle to handle each variable per second. I certainly think that's achievable...

--
Ron Johnson, Jr.
Jefferson LA USA