I am completely new to OpenCV. While searching the web I found material about object detection and border detection, but I could not find the correct way to detect an image embedded in a screenshot.
For example, if I pass in a screenshot that contains a photograph, as shown below, I need to extract that photograph from the original image.

EDIT
Following the response from @Amitay Nachmani, I tried to implement the code below up to step 4.
#import <opencv2/opencv.hpp>
#import <opencv2/imgcodecs/ios.h>   // UIImageToMat / MatToUIImage (on OpenCV 2.4 this lives in <opencv2/highgui/ios.h>)

- (UIImage *)processImage:(UIImage *)sourceImage {
    // Convert the UIImage to a cv::Mat
    // (note: UIImageToMat usually returns an RGBA Mat, so CV_RGBA2GRAY may be the more accurate code here)
    cv::Mat processMat;
    UIImageToMat(sourceImage, processMat);

    // Step 1: convert to grayscale
    cv::Mat grayImage;
    cvtColor(processMat, grayImage, CV_BGR2GRAY);

    // Step 2: detect edges with Canny
    cv::Mat cannyImage;
    cv::Canny(grayImage, cannyImage, 0, 50);

    // Step 3: detect straight lines with the standard Hough transform
    std::vector<cv::Vec2f> lines;
    cv::HoughLines(cannyImage, lines, 1, CV_PI/180, 300);

    // Step 4: draw the detected lines onto the Canny image
    for (size_t i = 0; i < lines.size(); i++) {
        float rho = lines[i][0], theta = lines[i][1];
        if (rho == 0) {
            cv::Point pt1, pt2;
            double a = cos(theta), b = sin(theta);
            double x0 = a * rho, y0 = b * rho;
            pt1.x = cvRound(x0 + 1000 * (-b));
            pt1.y = cvRound(y0 + 1000 * (a));
            pt2.x = cvRound(x0 - 1000 * (-b));
            pt2.y = cvRound(y0 - 1000 * (a));
            cv::line(cannyImage, pt1, pt2, cv::Scalar(255, 0, 0), 2);
        }
    }

    return MatToUIImage(cannyImage);
}
From the above code, I got the following image.

EDIT 2
I revised the code, replacing the condition if (rho == 0) with if (theta == 0). (In the (rho, theta) parameterization returned by cv::HoughLines, theta is the angle of the line's normal, so theta == 0 keeps only the vertical lines.)
The result is the image below:

But what should I do next? I am a little confused about the following steps.
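In case it helps to show where I am stuck, here is a rough sketch of what I imagine the remaining steps could look like: take the outermost vertical lines (theta near 0) as the left and right borders, the outermost horizontal lines (theta near pi/2) as the top and bottom borders, and crop that rectangle from the original Mat. The method name cropPhotographFromImage:, the one-degree angle tolerance and the fallback behaviour are my own guesses, not something taken from the answer.

#include <algorithm>   // std::min_element / std::max_element
#include <cmath>       // std::fabs

- (UIImage *)cropPhotographFromImage:(UIImage *)sourceImage {
    cv::Mat processMat;
    UIImageToMat(sourceImage, processMat);

    cv::Mat grayImage;
    cvtColor(processMat, grayImage, CV_BGR2GRAY);

    cv::Mat cannyImage;
    cv::Canny(grayImage, cannyImage, 0, 50);

    std::vector<cv::Vec2f> lines;
    cv::HoughLines(cannyImage, lines, 1, CV_PI/180, 300);

    // Vertical lines (theta ~ 0) give candidate x positions of the left/right
    // borders, horizontal lines (theta ~ pi/2) give candidate y positions of
    // the top/bottom borders.
    std::vector<float> xs, ys;
    for (size_t i = 0; i < lines.size(); i++) {
        float rho = lines[i][0], theta = lines[i][1];
        if (theta < CV_PI/180)
            xs.push_back(rho);                      // vertical line: x = rho
        else if (std::fabs(theta - CV_PI/2) < CV_PI/180)
            ys.push_back(rho);                      // horizontal line: y = rho
    }
    if (xs.size() < 2 || ys.size() < 2)
        return sourceImage;                         // borders not found, give up

    float left   = *std::min_element(xs.begin(), xs.end());
    float right  = *std::max_element(xs.begin(), xs.end());
    float top    = *std::min_element(ys.begin(), ys.end());
    float bottom = *std::max_element(ys.begin(), ys.end());

    // Crop the rectangle enclosed by the outermost lines, clamped to the image
    cv::Rect roi(cvRound(left), cvRound(top),
                 cvRound(right - left), cvRound(bottom - top));
    roi &= cv::Rect(0, 0, processMat.cols, processMat.rows);
    if (roi.width <= 0 || roi.height <= 0)
        return sourceImage;

    return MatToUIImage(processMat(roi).clone());
}

Is this roughly the right direction, or should the bounding rectangle be computed some other way, for example from the intersections of the detected lines?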