I need to run applications submitted by users. My code looks like this:
import subprocess

def run_app(app_path):
    # Feed the application's stdin from app.in and redirect its stdout to app.out
    inp = open("app.in", "r")
    otp = open("app.out", "w")
    return subprocess.call(app_path, stdout=otp, stdin=inp)
Since I have no control over what users will submit, I want to limit the size of the application's output. Other concerns, such as attempts to access unauthorized system resources and abuse of CPU cycles, are restricted by AppArmor's rule enforcement. The maximum allowed run time is handled by the parent process (in Python). A rogue application can still try to overload the server by writing a lot of data to its stdout, knowing that stdout is redirected to a file.
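For reference, the time limit in the parent looks roughly like this; it is only a sketch, and the function name and the 10-second limit are placeholders for the sake of the example:

import subprocess

def run_app_with_time_limit(app_path, time_limit=10):
    # Placeholder sketch: run the application and kill it if it exceeds time_limit seconds
    with open("app.in", "r") as inp, open("app.out", "w") as otp:
        proc = subprocess.Popen(app_path, stdin=inp, stdout=otp)
        try:
            return proc.wait(timeout=time_limit)
        except subprocess.TimeoutExpired:
            proc.kill()   # stop the application once the time limit is exceeded
            proc.wait()
            return -1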
I do not want to use AppArmor's RLIMIT or anything else in kernel mode for the stdout/stderr files. It would be great to do this in Python using only the standard library.
I'm currently thinking of subclassing file and, on every write, checking how much data has already been written to the stream, or creating a memory-mapped file with a maximum length. A rough sketch of the first idea is below.
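This is only a sketch: it assumes the parent copies the child's stdout through a pipe itself (rather than handing the file object straight to subprocess), and MAX_OUTPUT and the chunk size are values I made up for the example:

import subprocess

MAX_OUTPUT = 1024 * 1024  # hypothetical 1 MiB cap on the application's output

def run_app_limited(app_path):
    with open("app.in", "r") as inp, open("app.out", "wb") as otp:
        proc = subprocess.Popen(app_path, stdin=inp, stdout=subprocess.PIPE)
        written = 0
        # Copy the child's stdout into app.out ourselves, checking the size on each write
        for chunk in iter(lambda: proc.stdout.read(65536), b""):
            if written + len(chunk) > MAX_OUTPUT:
                proc.kill()   # the application exceeded the output cap
                proc.wait()
                return -1
            otp.write(chunk)
            written += len(chunk)
        return proc.wait()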
But I feel there may be an easier way to limit the size of a file that I'm just not seeing.