I asked a question previously about killing a process that uses too much memory, and I have most of the solution worked out.
However, there is one problem: calculating massive numbers seems to be unaffected by the method I'm using. The code below is meant to impose a 10-second CPU-time limit on the process.
```python
import resource
import os
import signal

def timeRanOut(n, stack):
    raise SystemExit('ran out of time!')

signal.signal(signal.SIGXCPU, timeRanOut)

soft, hard = resource.getrlimit(resource.RLIMIT_CPU)
print(soft, hard)
resource.setrlimit(resource.RLIMIT_CPU, (10, 100))

y = 10**(10**10)
```
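For contrast, the same handler pattern does interrupt a pure-Python loop. This is a minimal sketch of the behavior I expected, with `SIGALRM` and a sub-second timer swapped in for `SIGXCPU` (my substitution, just so it runs quickly):

```python
import signal

class TimedOut(Exception):
    pass

def handler(signum, stack):
    # Raised from the Python-level handler, just like timeRanOut above.
    raise TimedOut

signal.signal(signal.SIGALRM, handler)
signal.setitimer(signal.ITIMER_REAL, 0.2)  # deliver SIGALRM after 0.2 s

count = 0
try:
    while True:  # many small operations, not one big one
        count += 1
except TimedOut:
    print('interrupted after', count, 'iterations')
finally:
    signal.setitimer(signal.ITIMER_REAL, 0)  # cancel any pending timer
```

Here the loop is interrupted within a fraction of a second, so the handler itself clearly works.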
What I expect to see when running this script (on a Unix machine) is this:
```
-1 -1
ran out of time!
```
Instead, I get no output at all. The only way to get any output is to press Ctrl + C. If I press Ctrl + C after the 10 seconds have elapsed, I get this:
```
^C-1 -1
ran out of time!
CPU time limit exceeded
```
If I press Ctrl + C before the 10 seconds are up, I have to do it twice, and the console output looks like this:
```
^C-1 -1
^CTraceback (most recent call last):
  File "procLimitTest.py", line 18, in <module>
    y = 10**(10**10)
KeyboardInterrupt
```
In the course of experimenting and trying to understand this, I also put time.sleep(2) between the print and the calculation of the large number. It seems to have no effect. Changing y = 10**(10**10) to y = 10**10 makes both the print and the sleep work as expected. Adding flush=True to the print call, or calling sys.stdout.flush() after it, also has no effect.
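As a further experiment, pointing the same short timer at a single large exponentiation shows the handler only running once the operation has finished. The exponent 4 * 10**6 is just a size I picked so that the power takes noticeably longer than the 0.1-second timer:

```python
import signal
import time

events = []

def handler(signum, stack):
    # Record when the Python-level handler actually runs.
    events.append(('handled', time.perf_counter() - t0))

signal.signal(signal.SIGALRM, handler)
signal.setitimer(signal.ITIMER_REAL, 0.1)  # deliver SIGALRM after 0.1 s

t0 = time.perf_counter()
y = 10 ** (4 * 10**6)  # one long-running operation
events.append(('finished', time.perf_counter() - t0))

signal.setitimer(signal.ITIMER_REAL, 0)
print(events)
```

On my machine 'handled' is recorded first, but its timestamp is far past 0.1 s and almost identical to 'finished' — which matches what I see with SIGXCPU: the signal is delivered, but my handler doesn't get a chance to run until the exponentiation is done.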
Why can't I limit the CPU time for the calculation of a very large number? And how can I fix, or at least mitigate, this?
Additional Information:
Python version: 3.3.5 (default, Jul 22 2014, 18:16:02) [GCC 4.4.7 20120313 (Red Hat 4.4.7-4)]
Linux information: Linux web455.webfaction.com 2.6.32-431.29.2.el6.x86_64 #1 SMP Tue Sep 9 21:36:05 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux