Python: How to cut out an area with a specific color from an image (OpenCV, Numpy)

9 followers

I have been trying to write a Python script that takes an image as input and cuts out a rectangle with a specific background color. The problem for my coding skills is that the rectangle is not in a fixed position in every image (its position is random).

I don't really understand how to work with the numpy functions. I have also read a bit about OpenCV, but I am completely new to it. So far I have only cropped images with the .crop function, but then I have to use fixed values.
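
For reference, this is roughly what I mean by fixed values (the coordinates below are only an example, not from a real image):

from PIL import Image

im = Image.open('image.png')
# Fixed box: (left, upper, right, lower) - only works if the rectangle never moves
box = (100, 220, 352, 329)
cropped = im.crop(box)
cropped.save('cropped.png')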

This is what the input image looks like; now I want to detect the position of the yellow rectangle and then crop the image to its size.

Hoping for some help, thanks in advance.

Edit: @MarkSetchell's approach works well, but I found a problem with another test image. The issue with that image is that there are two small pixels of the same color at the top and bottom of the picture, which leads to errors or a bad crop.
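
A tiny sketch of what seems to go wrong: with a min/max search for the box color, a single stray matching pixel is enough to stretch the detected bounds (the mask below is made up):

import numpy as np

# Made-up 5x5 mask: the real box occupies rows/columns 2-3,
# but one stray matching pixel sits in the top row.
mask = np.zeros((5, 5), dtype=bool)
mask[2:4, 2:4] = True
mask[0, 4] = True                 # stray pixel
ys, xs = np.where(mask)
print(ys[0], ys[-1])              # 0 3 -> the crop now starts at row 0 instead of row 2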

3 comments
AMC
I think this is too broad for Stack Overflow, and quite possibly off-topic. Please see How to Ask and the help center.
I have updated my answer - please have another look.
Much appreciated! Thank you very much.
python
numpy
opencv
python-imaging-library
crop
Keanu
Posted on 2020-03-21
3 Answers
Mark Setchell
Posted on 2020-03-21
Accepted
0 upvotes

I have updated my answer to cope with noisy outlier blobs of pixels that are the same color as the yellow box. It works by first running a 3x3 median filter over the image to remove those spots.

#!/usr/bin/env python3
import numpy as np
from PIL import Image, ImageFilter
# Open image and make into Numpy array
im = Image.open('image.png').convert('RGB')
na = np.array(im)
orig = na.copy()    # Save original
# Median filter to remove outliers
im = im.filter(ImageFilter.MedianFilter(3))
na = np.array(im)   # Re-make the Numpy array from the filtered image so the search below uses it
# Find X,Y coordinates of all yellow pixels
yellowY, yellowX = np.where(np.all(na==[247,213,83],axis=2))
top, bottom = yellowY[0], yellowY[-1]
left, right = yellowX[0], yellowX[-1]
print(top,bottom,left,right)
# Extract Region of Interest from unblurred original
ROI = orig[top:bottom, left:right]
Image.fromarray(ROI).save('result.png')
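
If the yellow in other images is not an exact rgb(247,213,83) match (for example because of JPEG compression), one possible variation - a sketch, not part of the method above - is to allow a small per-channel tolerance when building the mask:

import numpy as np
from PIL import Image, ImageFilter

im = Image.open('image.png').convert('RGB').filter(ImageFilter.MedianFilter(3))
na = np.array(im)

tol = 10                                          # per-channel tolerance, value is a guess
yellow = np.array([247, 213, 83])
mask = np.all(np.abs(na.astype(int) - yellow) <= tol, axis=2)
yellowY, yellowX = np.where(mask)                 # used exactly like the exact-match version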

OK, your yellow is rgb(247,213,83), so we want to find the X,Y coordinates of all yellow pixels.

#!/usr/bin/env python3
from PIL import Image
import numpy as np
# Open image and make into Numpy array
im = Image.open('image.png').convert('RGB')
na = np.array(im)
# Find X,Y coordinates of all yellow pixels
yellowY, yellowX = np.where(np.all(na==[247,213,83],axis=2))
# Find first and last row containing yellow pixels
top, bottom = yellowY[0], yellowY[-1]
# Find first and last column containing yellow pixels
left, right = yellowX[0], yellowX[-1]
# Extract Region of Interest
ROI=na[top:bottom, left:right]
Image.fromarray(ROI).save('result.png')
If you want to do the same thing with ImageMagick in the Terminal, you can get the trim box of the yellow pixels and then crop to it:

# Get trim box of yellow pixels
trim=$(magick image.png -fill black +opaque "rgb(247,213,83)" -format %@ info:)
# Check how it looks
echo $trim
251x109+101+220
# Crop image to trim box and save as "ROI.png"
magick image.png -crop "$trim" ROI.png

If you are still using ImageMagick v6 rather than v7, change magick to convert.

Thanks for your help! Tried it and it works great - at least for the example picture I uploaded. But I tested it with other images and for some reason I get a "ValueError: tile cannot extend outside image" for Image.fromarray(ROI).save('result.png')
@Karlo You shouldn't expect Stack Overflow answers to cover cases you didn't post. Questions should be specific, not general. You could try posting a few more images, but be sure to show your own effort at solving the problem (post your code).
Fair point, I should have thought of that - sorry. I'm currently trying to figure out what the problem is, because @MarkSetchell's code works for most of my test images, but a few don't. I think it's because there are other small pixels of the same color.
Rotem
Posted on 2020-03-21
0 upvotes

What I see is dark gray and light gray areas on the sides and top, a white area, and a yellow rectangle with gray triangles inside the white area.

The first stage I suggest is converting the image from RGB color space to HSV color space.
The S channel in HSV space is the "color saturation" channel.
All colorless pixels (gray/black/white) are zero in the S channel, and the yellow pixels are above zero.
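
As a quick sanity check of that claim (the two pixel values below are just an arbitrary gray and the question's yellow):

import numpy as np
import cv2

# 1x2 BGR image: one gray pixel and one pixel of the yellow rgb(247,213,83)
px = np.array([[[128, 128, 128], [83, 213, 247]]], dtype=np.uint8)
hsv = cv2.cvtColor(px, cv2.COLOR_BGR2HSV)
print(hsv[0, 0, 1], hsv[0, 1, 1])   # S of the gray is 0, S of the yellow is well above 0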

The next stages are:

  • Apply a threshold to the S channel (convert it to a binary image).
    The yellow pixels go to 255, and the others go to zero.
  • Find contours in thresh (find only the outer contour - only the rectangle).
  • Invert the polarity of the pixels inside the rectangle.
    The gray triangles become 255, and the other pixels become zero.
  • Find contours in thresh again - this time find the gray triangles.

Here is the code for the stages above:

import cv2

# Read the input image
img = cv2.imread('image.png')

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Get the saturation plane - all black/white/gray pixels are zero, and colored pixels are above zero.
s = hsv[:, :, 1]

# Apply threshold on s - use automatic threshold algorithm (use THRESH_OTSU).
ret, thresh = cv2.threshold(s, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)

# Find contours in thresh (find only the outer contour - only the rectangle).
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2]  # [-2] indexing takes return value before last (due to OpenCV compatibility issues).

# Mark rectangle with green line
cv2.drawContours(img, contours, -1, (0, 255, 0), 2)

# Assume there is only one contour, get the bounding rectangle of the contour.
x, y, w, h = cv2.boundingRect(contours[0])

# Invert polarity of the pixels inside the rectangle (on thresh image).
thresh[y:y+h, x:x+w] = 255 - thresh[y:y+h, x:x+w]

# Find contours in thresh (find the triangles).
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2]  # [-2] indexing takes return value before last (due to OpenCV compatibility issues).

# Iterate triangle contours
for c in contours:
    if cv2.contourArea(c) > 4:  # Ignore very small contours
        # Mark triangle with blue line
        cv2.drawContours(img, [c], -1, (255, 0, 0), 2)

# Show result (for testing).
cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()

To actually crop the yellow rectangle out of the image, find the contour with the maximum area and take its bounding rectangle:

import cv2
import imutils

# Read the input image
img = cv2.imread('image.png')

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Get the saturation plane - all black/white/gray pixels are zero, and colored pixels are above zero.
s = hsv[:, :, 1]
cv2.imwrite('s.png', s)

# Apply threshold on s - use automatic threshold algorithm (use THRESH_OTSU).
ret, thresh = cv2.threshold(s, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)

# Find contours
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cnts = imutils.grab_contours(cnts)

# Find the contour with the maximum area.
c = max(cnts, key=cv2.contourArea)

# Get bounding rectangle
x, y, w, h = cv2.boundingRect(c)

# Crop the bounding rectangle out of img
out = img[y:y+h, x:x+w, :].copy()
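
If the crop is needed on disk rather than only in memory, it can be written out with cv2.imwrite (the file name here is only an example, continuing from the snippet above):

cv2.imwrite('ROI.png', out)   # save the cropped rectangle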