Josh wrote:
>You cannot cap CPU usage in *any* way unless you are using a "real time
>operating system", like QNX or Real Time Linux, or some of the radical
>patches for Linux kernel 2.5. And PostgreSQL has not been ported to any of
>those systems AFAIK, so you're on your own ...
How about "man ulimit" whose man page hasn't changed since Linux 2.0?
The csh command "limit cputime 1" will happily limit child processes
to 1 second of CPU time, and any process exceeding this limit will
die with a SIGXCPU.
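For reference, here is the same limit in both shell spellings (the
1-second value is just for demonstration):

    % limit cputime 1    (csh/tcsh builtin)
    $ ulimit -t 1        (sh/bash equivalent; also in seconds)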
When I try this with PostgreSQL, my log file then happily says something
like:
"server process (pid 18945) was terminated by signal 24"
from the SIGXCPU that kills the backend,
and my client happily chokes with:
"The connection to the server was lost. Attempting reset: Failed."
If I were worried about runaway queries (say, I exposed a reporting
system to non-technical users who could easily shoot themselves in the
foot by running absurd queries), would I get myself into big trouble by
running "limit cputime 3600" before restarting the postmaster?
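Concretely, I mean something like this (data directory again a
placeholder). Since resource limits are inherited across fork, every
backend the postmaster spawns would get the same one-hour cap:

    % limit cputime 3600
    % pg_ctl start -D /usr/local/pgsql/data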
Yeah, I know that the postmaster, stats collector, etc. would eventually
get nailed by this limit. But can I assume WAL, etc., will protect against
data corruption?
Ron
PS: No, I don't recommend doing this on data that isn't backed up :-)