Log syntax errors and uncaught exceptions for a Python subprocess and print them to the terminal

Problem

I am trying to write a program that logs uncaught exceptions and syntax errors for a subprocess. Easy, right? Just pipe the stderr to the right place.

However, the subprocess is another Python program - I will call it test.py - which should run as if its output and errors were not being captured at all. That is, launching the logging program should look exactly as if the user had run python test.py as usual.

A further complication is that raw_input actually writes its prompt to stderr if readline is not used. Unfortunately, I cannot just import readline, since I have no control over the files that are launched through my error logger.
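The prompt location can be checked empirically. In Python 3 (where raw_input became input) the prompt even moves: when the streams are pipes rather than a terminal, it is written to stdout instead. A small illustrative sketch (the inline child program is made up for the demonstration):

```python
import subprocess
import sys

# Child that shows a prompt; when its streams are pipes (not a tty),
# the prompt is written to stdout, so a stderr-only logger misses it.
child = "try:\n    input('Enter a number: ')\nexcept EOFError:\n    pass"

proc = subprocess.run(
    [sys.executable, "-c", child],
    input="", capture_output=True, text=True,
)
print("stdout: %r" % proc.stdout)  # contains the prompt when piped
print("stderr: %r" % proc.stderr)
```

On an interactive terminal without readline, the same prompt is emitted by the C-level fallback reader instead, which is exactly the behaviour the question is fighting.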

Notes:

  • I have limited permissions on the machines this code will run on. I cannot install pexpect or edit the *customize.py files (since the program will be launched by many different users). I really feel that there should be a stdlib solution anyway, though ...
  • This only needs to work on Macs.
  • The motivation for this is that I am part of a team that studies the errors that new programmers receive.

What I tried

I tried the following methods without success:

  • just using tee , as in the question How to write stderr to a file when using "tee" with a pipe? (it failed to display raw_input prompts); the Python tee implementations I found in several SO questions had the same problem.
  • overriding sys.excepthook (I could not get it to apply to the subprocess)
  • the approach from this question, which otherwise worked but could not display raw_input prompts correctly
  • the logging module, which seems useful for actually writing the log file, but does not address the prompt issue
  • custom stderr readers
  • endless googling
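For what it's worth, the sys.excepthook route does work when the hook is installed inside the child process itself rather than in the launcher. The sketch below (Python 3, file names made up) shows a child installing its own logging hook before crashing:

```python
import os
import subprocess
import sys
import tempfile

# Child program: installs an excepthook that logs the traceback to the
# file named in argv[1], then triggers an uncaught exception.
child = """\
import sys, traceback

def hook(etype, value, tb):
    with open(sys.argv[1], 'w') as f:
        f.write(''.join(traceback.format_exception(etype, value, tb)))
    sys.__excepthook__(etype, value, tb)  # still show the normal traceback

sys.excepthook = hook
1 / 0
"""

log = tempfile.NamedTemporaryFile(delete=False)
log.close()
proc = subprocess.run([sys.executable, "-c", child, log.name],
                      capture_output=True, text=True)
logged = open(log.name).read()
os.unlink(log.name)
print(logged.splitlines()[-1])  # the exception line was captured
```

The hard part of the question remains untouched by this: getting the hook into an arbitrary user script without editing it, which is what the accepted answer's execfile trick solves.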
2 answers

Based on @nneonneo's tip in the comments on the question, I made this program, which seems to do the job. (Note that the logger's file name must currently contain "pylog" so that its own stack frames are filtered out and errors print correctly for the end user; the log itself is written to a file named "log".)

 #!/usr/bin/python
 '''
 This module logs python errors.
 '''
 import socket, os, sys, traceback

 def sendError(err):
     # log the error (in my actual implementation, this sends the error to a database)
     with open('log', 'w') as f:
         f.write(err)

 def exceptHandler(etype, value, tb):
     """An additional wrapper around our custom exception handler, to
     prevent errors in this program from being seen by end users."""
     try:
         subProgExceptHandler(etype, value, tb)
     except:
         sys.stderr.write('Sorry, but there seems to have been an error in pylog itself. Please run your program using regular python.\n')

 def subProgExceptHandler(etype, value, tb):
     """A custom exception handler that both prints error and traceback
     information in the standard Python format, as well as logs it."""
     import linecache
     errorVerbatim = ''
     # The following code mimics a traceback.print_exception(etype, value, tb) call.
     if tb:
         msg = "Traceback (most recent call last):\n"
         sys.stderr.write(msg)
         errorVerbatim += msg
     # The following code is a modified version of the traceback.print_tb
     # implementation from CPython 2.7.3.
     while tb is not None:
         f = tb.tb_frame
         lineno = tb.tb_lineno
         co = f.f_code
         filename = co.co_filename
         name = co.co_name
         # Filter out frames from pylog itself (eg. execfile).
         if not "pylog" in filename:
             msg = '  File "%s", line %d, in %s\n' % (filename, lineno, name)
             sys.stderr.write(msg)
             errorVerbatim += msg
             linecache.checkcache(filename)
             line = linecache.getline(filename, lineno, f.f_globals)
             if line:
                 msg = '    ' + line.strip() + '\n'
                 sys.stderr.write(msg)
                 errorVerbatim += msg
         tb = tb.tb_next
     lines = traceback.format_exception_only(etype, value)
     for line in lines:
         sys.stderr.write(line)
         errorVerbatim += line
     # Send the error data to our database handler via sendError.
     sendError(errorVerbatim)

 def main():
     """Executes the program specified by the user in its own sandbox, then
     sends the error to our database for logging and analysis."""
     # Get the user (sub)program to run.
     try:
         subProgName = sys.argv[1]
         subProgArgs = sys.argv[3:]
     except:
         print 'USAGE: ./pylog FILENAME.py *ARGS'
         sys.exit()
     # Catch exceptions by overriding the system excepthook.
     sys.excepthook = exceptHandler
     # Sandbox user code execution in its own global namespace to prevent
     # malicious code injection.
     execfile(subProgName, {'__builtins__': __builtins__,
                            '__name__': '__main__',
                            '__file__': subProgName,
                            '__doc__': None,
                            '__package__': None})

 if __name__ == '__main__':
     main()
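Note that execfile is Python 2 only. If the logger ever has to run under Python 3, the sandboxed execution step could be approximated with compile plus exec; the helper below is a sketch under that assumption, not part of the answer's code:

```python
def run_file(path):
    # Python 3 stand-in for execfile(): read, compile and exec the user
    # program in a fresh namespace, mirroring what pylog's main() builds.
    with open(path) as f:
        source = f.read()
    code = compile(source, path, "exec")
    namespace = {
        "__builtins__": __builtins__,
        "__name__": "__main__",
        "__file__": path,
        "__doc__": None,
        "__package__": None,
    }
    exec(code, namespace)
    return namespace
```

Compiling with the real file name keeps tracebacks pointing at the user's file, which the custom exception handler relies on.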

The tee-based answer you link is not suitable for your task as-is, although you can fix the raw_input() prompts with the -u option, which disables buffering:

 import sys
 from subprocess import call

 errf = open('err.txt', 'wb')  # any object with a .write() method
 rc = call([sys.executable, '-u', 'test.py'],
           stderr=errf, bufsize=0, close_fds=True)
 errf.close()
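To see what the redirection buys you, here is a quick Python 3 check; err.txt and the inline failing script are illustrative:

```python
import subprocess
import sys

# A child that crashes; with stderr redirected to a file, nothing of
# the traceback reaches the terminal, but all of it lands in err.txt.
with open("err.txt", "wb") as errf:
    rc = subprocess.call(
        [sys.executable, "-u", "-c", "raise ValueError('boom')"],
        stderr=errf,
    )

captured = open("err.txt").read()
print(rc)        # non-zero exit status from the uncaught exception
print(captured)  # the full traceback text
```

This is also why the approach fails the question's requirement: the user no longer sees the traceback (or the raw_input prompt) on their terminal at all.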

A more suitable solution might be based on pexpect or pty. For example, to meet the requirement that:

running the logging program should look exactly as if the user runs python test.py as usual

 #!/usr/bin/env python
 import pexpect

 with open('log', 'ab') as fout:
     p = pexpect.spawn("python test.py")
     p.logfile = fout
     p.interact()

You do not need to install pexpect ; it is pure Python, so you can put it next to your code.

Here's the tee-based counterpart ( test.py runs without a terminal attached):

 #!/usr/bin/env python
 import sys
 from subprocess import Popen, PIPE, STDOUT
 from threading import Thread

 def tee(infile, *files):
     """Print `infile` to `files` in a separate thread."""
     def fanout(infile, *files):
         flushable = [f for f in files if hasattr(f, 'flush')]
         for c in iter(lambda: infile.read(1), ''):
             for f in files:
                 f.write(c)
             for f in flushable:
                 f.flush()
         infile.close()
     t = Thread(target=fanout, args=(infile,) + files)
     t.daemon = True
     t.start()
     return t

 def call(cmd_args, **kwargs):
     stdout, stderr = [kwargs.pop(s, None) for s in ('stdout', 'stderr')]
     p = Popen(cmd_args,
               stdout=None if stdout is None else PIPE,
               stderr=None if stderr is None else (
                   STDOUT if stderr is STDOUT else PIPE),
               **kwargs)
     threads = []
     if stdout is not None:
         threads.append(tee(p.stdout, stdout, sys.stdout))
     if stderr is not None and stderr is not STDOUT:
         threads.append(tee(p.stderr, stderr, sys.stderr))
     for t in threads:
         t.join()  # wait for IO completion
     return p.wait()

 with open('log', 'ab') as file:
     rc = call([sys.executable, '-u', 'test.py'],
               stdout=file, stderr=STDOUT, bufsize=0, close_fds=True)

You need to merge stdout / stderr because it is unclear where raw_input() and getpass.getpass() print their prompts.
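The merging itself is just stderr=STDOUT. A Python 3 sketch with an inline child program (made up for the demonstration) shows both streams arriving on one pipe:

```python
import subprocess
import sys

# Child that writes one line to each stream.
child = "import sys; print('out'); print('err', file=sys.stderr)"

# stderr=STDOUT folds the child's stderr into its stdout pipe, so a
# single stream carries both, in whatever order the child emits them.
proc = subprocess.run(
    [sys.executable, "-u", "-c", child],
    stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True,
)
print(repr(proc.stdout))
```

The price of merging is that you can no longer tell afterwards which bytes were errors and which were ordinary output; for the question's use case (replaying exactly what the user saw) that is acceptable.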

In this case, threads are also not needed:

 #!/usr/bin/env python
 import sys
 from subprocess import Popen, PIPE, STDOUT

 with open('log', 'ab') as file:
     p = Popen([sys.executable, '-u', 'test.py'],
               stdout=PIPE, stderr=STDOUT, close_fds=True, bufsize=0)
     for c in iter(lambda: p.stdout.read(1), ''):
         for f in [sys.stdout, file]:
             f.write(c)
             f.flush()
     p.stdout.close()
     rc = p.wait()
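Under Python 3 the same loop needs a bytes sentinel, since the pipe is binary. Here is a sketch with test.py replaced by an inline script (names are illustrative):

```python
import sys
from subprocess import Popen, PIPE, STDOUT

child = "import sys; print('hello'); print('oops', file=sys.stderr)"

with open("log", "ab") as logfile:
    p = Popen([sys.executable, "-u", "-c", child],
              stdout=PIPE, stderr=STDOUT, bufsize=0)
    # The pipe yields bytes in Python 3, so the sentinel is b"", not "".
    for c in iter(lambda: p.stdout.read(1), b""):
        sys.stdout.write(c.decode("utf-8", "replace"))
        logfile.write(c)
        logfile.flush()
    p.stdout.close()
    rc = p.wait()
```

Reading one byte at a time keeps prompts visible as soon as they are written, at the cost of many small reads; a larger read size would batch output but delay partial lines.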

Note: the last example and the tee-based solution do not capture the getpass.getpass() prompt, but the pexpect- and pty-based solutions do:

 #!/usr/bin/env python
 import os
 import pty
 import sys

 with open('log', 'ab') as file:
     def read(fd):
         data = os.read(fd, 1024)
         file.write(data)
         file.flush()
         return data
     pty.spawn([sys.executable, "test.py"], read)

I don't know if pty.spawn() works on Macs.
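The stdlib docs only promise that the pty module is tested on Linux, so on other POSIX systems your mileage may vary. If pty.spawn is in doubt, the same idea can be hand-rolled with pty.openpty; the Python 3 sketch below uses input() in place of getpass so the echoed text stays visible (the child script is made up):

```python
import os
import pty
import subprocess
import sys

child = "data = input('Password: '); print('got', data)"

# Give the child a real pty as all three standard streams, so prompts
# written to the terminal (input, getpass) are captured too.
master, slave = pty.openpty()
p = subprocess.Popen([sys.executable, "-c", child],
                     stdin=slave, stdout=slave, stderr=slave,
                     close_fds=True)
os.close(slave)                     # the child keeps its own copy
os.write(master, b"secret\n")       # answer the prompt

captured = b""
while True:
    try:
        chunk = os.read(master, 1024)
    except OSError:                 # EIO: child closed its end of the pty
        break
    if not chunk:
        break
    captured += chunk
p.wait()
os.close(master)
print(captured)
```

Everything the user would have seen on the terminal, including the prompt and the echoed input, arrives through the master side and can be logged from there.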


Source: https://habr.com/ru/post/1435230/

