How to connect live video frames from ffmpeg to PIL?

I need to use ffmpeg / avconv to pipe jpg frames into a Python PIL (Pillow) Image object, using gst as an intermediary*. I've searched everywhere for an answer to this without much luck. I think I'm close, but I'm stuck. I'm using Python 2.7.

My ideal pipeline, launched from Python, looks like this:

  • ffmpeg / avconv (producing H.264 video)
  • piped →
  • gst-streamer (frames split into JPEGs)
  • piped →
  • PIL Image object

I have the first few steps under control as a single command that writes .jpg frames to disk as fast as the hardware allows.

This command looks something like this:

command = [ "ffmpeg", "-f video4linux2", "-r 30", "-video_size 1280x720", "-pixel_format 'uyvy422'", "-i /dev/video0", "-vf fps=30", "-f H264", "-vcodec libx264", "-preset ultrafast", "pipe:1 -", "|", # Pipe to GST "gst-launch-1.0 fdsrc !", "video/x-h264,framerate=30/1,stream-format=byte-stream !", "decodebin ! videorate ! video/x-raw,framerate=30/1 !", "videoconvert !", "jpegenc quality=55 !", "multifilesink location=" + Utils.live_sync_path + "live_%04d.jpg" ] 

This successfully writes frames to disk when launched with Popen or os.system.

But instead of writing frames to disk, I want to capture the output in my subprocess pipe and read each frame, as it is written, from a file-like buffer that PIL can then open.

Something like this:

    import subprocess as sp
    import shlex
    import StringIO

    clean_cmd = shlex.split(" ".join(command))
    pipe = sp.Popen(clean_cmd, stdout=sp.PIPE, bufsize=10**8)

    while pipe:
        raw = pipe.stdout.read()
        buff = StringIO.StringIO()
        buff.write(raw)
        buff.seek(0)

        # Open or do something clever...
        im = Image.open(buff)
        im.show()

        pipe.flush()

This code does not work; I'm not even sure I can use "while pipe" in this way. I am new to working with buffers and pipes like this.
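
Part of the problem may be that shlex.split can't express the shell pipe ("|") between ffmpeg and gst-launch. Here is my untested sketch of the same thing as two chained Popen processes, with the arguments split into proper argv lists; swapping multifilesink for fdsink so the JPEGs land on stdout is my own assumption about how to avoid the disk:

    import subprocess as sp

    # Same ffmpeg arguments as above, split into a real argv list
    ffmpeg_cmd = [
        "ffmpeg", "-f", "video4linux2", "-r", "30",
        "-video_size", "1280x720", "-pixel_format", "uyvy422",
        "-i", "/dev/video0", "-vf", "fps=30",
        "-f", "h264", "-vcodec", "libx264",
        "-preset", "ultrafast", "pipe:1",
    ]
    # Same gst pipeline, but with fdsink (stdout) instead of multifilesink
    gst_cmd = [
        "gst-launch-1.0", "fdsrc", "!",
        "video/x-h264,framerate=30/1,stream-format=byte-stream", "!",
        "decodebin", "!", "videorate", "!",
        "video/x-raw,framerate=30/1", "!",
        "videoconvert", "!", "jpegenc", "quality=55", "!", "fdsink",
    ]

    ffmpeg = sp.Popen(ffmpeg_cmd, stdout=sp.PIPE)
    gst = sp.Popen(gst_cmd, stdin=ffmpeg.stdout, stdout=sp.PIPE)
    ffmpeg.stdout.close()  # let ffmpeg see a broken pipe if gst exits

    # gst.stdout should now be a stream of concatenated JPEGs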

I'm also not sure how I would know that a complete image has been written to the pipe, or when to read the "next" image.
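
From what I've read, one way to find frame boundaries in a stream of concatenated JPEGs is to scan for the JPEG start-of-image (0xFFD8) and end-of-image (0xFFD9) markers. A rough, untested sketch of that idea (iter_jpeg_frames is my own helper name, not a library function):

    import StringIO
    from PIL import Image

    def iter_jpeg_frames(stream, chunk_size=4096):
        """Yield one complete JPEG at a time from a byte stream."""
        data = b""
        while True:
            chunk = stream.read(chunk_size)
            if not chunk:
                return
            data += chunk
            start = data.find(b"\xff\xd8")            # start-of-image marker
            end = data.find(b"\xff\xd9", start + 2)   # end-of-image marker
            if start != -1 and end != -1:
                yield data[start:end + 2]
                data = data[end + 2:]

    # usage with the pipe from the sketch above:
    # for raw in iter_jpeg_frames(gst.stdout):
    #     im = Image.open(StringIO.StringIO(raw))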

Any help in understanding how to read the images from a pipe rather than from disk would be greatly appreciated.

  * Ultimately this pipeline will run on a Raspberry Pi 3, and to improve my frame rate I can't (A) read/write to/from disk or (B) use a frame-by-frame capture method, as opposed to running H.264 video straight from the camera chip.
1 answer

My guess is that the ultimate goal is to handle a high-frame-rate USB camera on Linux, and the following addresses that question.

First, although some USB cameras support H.264, the Linux driver for USB cameras (the UVC driver) does not currently support stream payloads that include H.264; see the UVC Feature table on the driver home page. User-space tools such as ffmpeg use the driver, so they share its limitations regarding which video formats can be used for the USB transfer.

The good news is that if a camera supports H.264, it almost certainly also supports MJPEG, which the UVC driver does support and which compresses well enough to carry 1280x720 at 30 frames per second over USB 2.0. You can list the video formats your camera supports with v4l2-ctl -d 0 --list-formats-ext . For the Microsoft Lifecam Cinema, for example, 1280x720 is supported at only 10 fps for YUV 4:2:2 but at 30 fps for MJPEG.

For reading from the camera, I have had good experience with OpenCV. In one of my projects I have 24 (!) Lifecams connected to a single 6-core i7 Ubuntu machine, which tracks fruit flies in real time at 320x240 and 7.5 frames per second per camera (and also saves an MJPEG AVI per camera as a record of the experiment). Since OpenCV uses the V4L2 API directly, it should be faster than a solution using ffmpeg, gst-streamer, and two pipes.

Bare-bones code (with no error checking) for reading from the camera with OpenCV and creating PIL images looks like this:

    import cv2
    from PIL import Image

    cap = cv2.VideoCapture(0)  # /dev/video0

    while True:
        ret, frame = cap.read()
        if not ret:
            break
        # OpenCV returns BGR arrays; convert to RGB for PIL
        pil_img = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        # ... do something with the PIL image here

    cap.release()

Final note: you will probably need to build OpenCV with v4l support to get MJPEG compression; see this answer.
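
If your build does have MJPEG support, you can then try requesting MJPEG and the desired mode through the capture properties. A sketch (untested; these are the OpenCV 3.x constant names, and whether a given property takes effect depends on your build and camera):

    import cv2

    cap = cv2.VideoCapture(0)
    # Ask the driver for MJPEG at 1280x720, 30 fps
    cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
    cap.set(cv2.CAP_PROP_FPS, 30)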


Source: https://habr.com/ru/post/1013959/

