How to save a frame using QMediaPlayer?

I want to save a frame image from a QMediaPlayer. After reading the documentation, I understood that I should use QVideoProbe. I am using the following code:

    QMediaPlayer *player = new QMediaPlayer();
    QVideoProbe *probe = new QVideoProbe;

    connect(probe, SIGNAL(videoFrameProbed(QVideoFrame)), this, SLOT(processFrame(QVideoFrame)));

    qDebug() << probe->setSource(player); // Returns true, hopefully

    player->setVideoOutput(myVideoSurface);
    player->setMedia(QUrl::fromLocalFile("observation.mp4"));
    player->play(); // Start receiving frames as they get presented to myVideoSurface

Unfortunately, probe->setSource(player) always returns false for me, so my processFrame slot is never called.

What am I doing wrong? Does anyone have a working QVideoProbe example?


You are not doing anything wrong. As @DYangu pointed out, your media object instance does not support monitoring video. I had the same problem (and the same one with QAudioProbe, but that is not relevant here). I found a solution by looking at this answer and this one.

The basic idea is to subclass QAbstractVideoSurface. Once you set your implementation as the player's video output, Qt will call its QAbstractVideoSurface::present(const QVideoFrame &frame) method for every frame, and you can process the frames of your video there.

As said here, you usually only need to override two methods:

  • supportedPixelFormats, so that the producer of the frames can select an appropriate format for the QVideoFrame
  • present, which receives each frame so you can process (or display) it

At the time, though, I was digging through the Qt example code, and was happy to find this piece of code that helped me write a full implementation. So here is the complete code of my video frame grabber.

VideoFrameGrabber.cpp:

    #include "VideoFrameGrabber.h"

    #include <QtWidgets>
    #include <qabstractvideosurface.h>
    #include <qvideosurfaceformat.h>

    VideoFrameGrabber::VideoFrameGrabber(QWidget *widget, QObject *parent)
        : QAbstractVideoSurface(parent)
        , widget(widget)
        , imageFormat(QImage::Format_Invalid)
    {
    }

    QList<QVideoFrame::PixelFormat> VideoFrameGrabber::supportedPixelFormats(QAbstractVideoBuffer::HandleType handleType) const
    {
        Q_UNUSED(handleType);
        return QList<QVideoFrame::PixelFormat>()
            << QVideoFrame::Format_ARGB32
            << QVideoFrame::Format_ARGB32_Premultiplied
            << QVideoFrame::Format_RGB32
            << QVideoFrame::Format_RGB24
            << QVideoFrame::Format_RGB565
            << QVideoFrame::Format_RGB555
            << QVideoFrame::Format_ARGB8565_Premultiplied
            << QVideoFrame::Format_BGRA32
            << QVideoFrame::Format_BGRA32_Premultiplied
            << QVideoFrame::Format_BGR32
            << QVideoFrame::Format_BGR24
            << QVideoFrame::Format_BGR565
            << QVideoFrame::Format_BGR555
            << QVideoFrame::Format_BGRA5658_Premultiplied
            << QVideoFrame::Format_AYUV444
            << QVideoFrame::Format_AYUV444_Premultiplied
            << QVideoFrame::Format_YUV444
            << QVideoFrame::Format_YUV420P
            << QVideoFrame::Format_YV12
            << QVideoFrame::Format_UYVY
            << QVideoFrame::Format_YUYV
            << QVideoFrame::Format_NV12
            << QVideoFrame::Format_NV21
            << QVideoFrame::Format_IMC1
            << QVideoFrame::Format_IMC2
            << QVideoFrame::Format_IMC3
            << QVideoFrame::Format_IMC4
            << QVideoFrame::Format_Y8
            << QVideoFrame::Format_Y16
            << QVideoFrame::Format_Jpeg
            << QVideoFrame::Format_CameraRaw
            << QVideoFrame::Format_AdobeDng;
    }

    bool VideoFrameGrabber::isFormatSupported(const QVideoSurfaceFormat &format) const
    {
        const QImage::Format imageFormat = QVideoFrame::imageFormatFromPixelFormat(format.pixelFormat());
        const QSize size = format.frameSize();

        return imageFormat != QImage::Format_Invalid
                && !size.isEmpty()
                && format.handleType() == QAbstractVideoBuffer::NoHandle;
    }

    bool VideoFrameGrabber::start(const QVideoSurfaceFormat &format)
    {
        const QImage::Format imageFormat = QVideoFrame::imageFormatFromPixelFormat(format.pixelFormat());
        const QSize size = format.frameSize();

        if (imageFormat != QImage::Format_Invalid && !size.isEmpty()) {
            this->imageFormat = imageFormat;
            imageSize = size;
            sourceRect = format.viewport();

            QAbstractVideoSurface::start(format);

            widget->updateGeometry();
            updateVideoRect();

            return true;
        } else {
            return false;
        }
    }

    void VideoFrameGrabber::stop()
    {
        currentFrame = QVideoFrame();
        targetRect = QRect();

        QAbstractVideoSurface::stop();

        widget->update();
    }

    bool VideoFrameGrabber::present(const QVideoFrame &frame)
    {
        if (frame.isValid()) {
            QVideoFrame cloneFrame(frame);
            cloneFrame.map(QAbstractVideoBuffer::ReadOnly);
            const QImage image(cloneFrame.bits(),
                               cloneFrame.width(),
                               cloneFrame.height(),
                               cloneFrame.bytesPerLine(), // pass the stride, or frames with line padding come out skewed
                               QVideoFrame::imageFormatFromPixelFormat(cloneFrame.pixelFormat()));
            emit frameAvailable(image); // this is very important
            cloneFrame.unmap();
        }

        if (surfaceFormat().pixelFormat() != frame.pixelFormat()
                || surfaceFormat().frameSize() != frame.size()) {
            setError(IncorrectFormatError);
            stop();
            return false;
        } else {
            currentFrame = frame;
            widget->repaint(targetRect);
            return true;
        }
    }

    void VideoFrameGrabber::updateVideoRect()
    {
        QSize size = surfaceFormat().sizeHint();
        size.scale(widget->size().boundedTo(size), Qt::KeepAspectRatio);

        targetRect = QRect(QPoint(0, 0), size);
        targetRect.moveCenter(widget->rect().center());
    }

    void VideoFrameGrabber::paint(QPainter *painter)
    {
        if (currentFrame.map(QAbstractVideoBuffer::ReadOnly)) {
            const QTransform oldTransform = painter->transform();

            if (surfaceFormat().scanLineDirection() == QVideoSurfaceFormat::BottomToTop) {
                painter->scale(1, -1);
                painter->translate(0, -widget->height());
            }

            QImage image(currentFrame.bits(),
                         currentFrame.width(),
                         currentFrame.height(),
                         currentFrame.bytesPerLine(),
                         imageFormat);

            painter->drawImage(targetRect, image, sourceRect);
            painter->setTransform(oldTransform);

            currentFrame.unmap();
        }
    }

VideoFrameGrabber.h:

    #ifndef VIDEOFRAMEGRABBER_H
    #define VIDEOFRAMEGRABBER_H

    #include <QtWidgets>

    class VideoFrameGrabber : public QAbstractVideoSurface
    {
        Q_OBJECT

    public:
        VideoFrameGrabber(QWidget *widget, QObject *parent = 0);

        QList<QVideoFrame::PixelFormat> supportedPixelFormats(
                QAbstractVideoBuffer::HandleType handleType = QAbstractVideoBuffer::NoHandle) const;
        bool isFormatSupported(const QVideoSurfaceFormat &format) const;

        bool start(const QVideoSurfaceFormat &format);
        void stop();

        bool present(const QVideoFrame &frame);

        QRect videoRect() const { return targetRect; }
        void updateVideoRect();
        void paint(QPainter *painter);

    private:
        QWidget *widget;
        QImage::Format imageFormat;
        QRect targetRect;
        QSize imageSize;
        QRect sourceRect;
        QVideoFrame currentFrame;

    signals:
        void frameAvailable(QImage frame);
    };

    #endif // VIDEOFRAMEGRABBER_H

Note: in the header you can see that I added a signal taking a QImage as a parameter. This lets you handle each frame anywhere in your code. At the time I needed a QImage, but you could of course pass a QVideoFrame instead if you prefer.


Now we are ready to use this frame grabber:

    QMediaPlayer *player = new QMediaPlayer(this);
    // No more QVideoProbe
    VideoFrameGrabber *grabber = new VideoFrameGrabber(this);
    player->setVideoOutput(grabber);

    connect(grabber, SIGNAL(frameAvailable(QImage)),
            this, SLOT(processFrame(QImage)));

Now you just need to declare a slot named processFrame(QImage image), and you will receive a QImage every time the present method of your VideoFrameGrabber is called.
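Since the original question was about saving a frame, here is a minimal sketch of such a slot that writes every grabbed frame to disk. The class name MyClass, the member counter m_frameCount, and the file name pattern are illustrative assumptions, not part of the code above:

```cpp
// Hypothetical slot implementation: saves each grabbed frame as a numbered PNG.
// MyClass and m_frameCount (an int member starting at 0) are example names.
void MyClass::processFrame(QImage image)
{
    // Zero-padded, sequential file names: frame_00000.png, frame_00001.png, ...
    const QString fileName =
        QString("frame_%1.png").arg(m_frameCount++, 5, 10, QLatin1Char('0'));

    if (!image.save(fileName))
        qWarning() << "Could not save" << fileName;
}
```

If you only need a single snapshot, you can set a boolean flag, save once, and disconnect the signal afterwards instead of writing every frame.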

I hope this helps you!


From the Qt documentation for QVideoProbe:

 bool QVideoProbe::setSource(QMediaObject *mediaObject) 

Starts monitoring the given mediaObject.

If there is no media service associated with mediaObject, or if it is null, this probe will be deactivated and this function will return true.

If the media object instance does not support monitoring video, this function will return false.

Any previously monitored objects will no longer be monitored. Passing in the same object will be ignored, but monitoring will continue.

So it seems that your media object instance does not support monitoring video.
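You can detect this at runtime and fall back to another approach when probing is unavailable. A minimal sketch, reusing the player and probe from the question:

```cpp
// Sketch: check whether the current multimedia backend supports probing
// this player before relying on QVideoProbe.
QMediaPlayer *player = new QMediaPlayer(this);
QVideoProbe *probe = new QVideoProbe(this);

if (!probe->setSource(player)) {
    // The backend cannot probe this media object; fall back to a
    // QAbstractVideoSurface-based grabber (see the other answer).
    qWarning() << "QVideoProbe is not supported by this backend";
}
```

Whether setSource succeeds depends on the platform plugin (e.g. GStreamer, DirectShow, WMF), so the same code can behave differently on different machines.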


Source: https://habr.com/ru/post/989005/

