• Scenario and task: decide whether two adjacent WeChat chat screenshots are the same image (transfer compression and format conversion introduce some pixel distortion and edge jitter, so the images cannot simply be subtracted).
  • Requirements: use digital image processing only (this is a preprocessing/deduplication step, so no deep learning); each pair of images must be judged within 20 ms.
  • Approach:
    • Convert to HSV and isolate the chat bubbles with color thresholds;
    • Find contours and compare the number of enclosing rectangles in the two images; if the counts differ, the screenshots are different;
    • Otherwise extract ORB keypoints and descriptors and compute the slope of the line joining each matched pair; if the mean of the middle 60% of slopes (dropping the smallest and largest 20% at each end) is near 0, the screenshots are the same, otherwise they are different (see the formula after this list);
    • Once the Python prototype works, port it to C++ to meet the speed requirement.
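
A compact statement of the slope test implemented below (my notation, not from the original; the 800 in the denominator is the shared resize width, i.e. the horizontal offset between the two images when they are placed side by side):

$$ s_i = \frac{y_i - y_i'}{x_i' - x_i + 800}, \qquad \left| \frac{1}{m} \sum_{\text{middle } 60\%} s_{(i)} \right| < \varepsilon $$

where $(x_i, y_i)$ and $(x_i', y_i')$ are the coordinates of the i-th matched keypoint pair in the first and second image, $s_{(i)}$ are the sorted slopes, and $m$ is the number left after trimming. Note that the two implementations differ slightly: the Python version tests the absolute value of the signed mean against 0.01, while the C++ port averages absolute slopes and tests against 0.001.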

Python code:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import cv2
import os, glob, time
orb = cv2.ORB_create()
def read_img(name1, name2):
    img1 = cv2.imread(name1, 1)   # load both images as 3-channel BGR
    img2 = cv2.imread(name2, 1)
    # crop both to their common size so the aspect ratios match
    h, w = min(img1.shape[0], img2.shape[0]), min(img1.shape[1], img2.shape[1])
    img1 = img1[:h, :w]
    img2 = img2[:h, :w]
    # normalize width to 800 px, keeping the (now shared) aspect ratio
    img1 = cv2.resize(img1, (800, 800 * img1.shape[0] // img1.shape[1]))
    img2 = cv2.resize(img2, (800, 800 * img2.shape[0] // img2.shape[1]))
    return img1, img2
def color_enhance(img):
    # keep only the WeChat chat bubbles: green (outgoing) and near-white (incoming)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    lower_green = np.array([35, 43, 46])
    upper_green = np.array([77, 255, 255])
    mask_green = cv2.inRange(hsv, lowerb=lower_green, upperb=upper_green)
    lower_white = np.array([0, 0, 245])
    upper_white = np.array([180, 30, 255])
    mask_white = cv2.inRange(hsv, lowerb=lower_white, upperb=upper_white)
    dst_w = cv2.bitwise_and(img, img, mask=mask_white)
    dst_g = cv2.bitwise_and(img, img, mask=mask_green)
    dst = dst_w + dst_g   # the saturation ranges are disjoint, so adding is safe
    return dst
def count_box(img, show=False):
    vis = img.copy()   # draw debug rectangles on a copy so the input stays
                       # untouched for the ORB stage that follows
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # thresh is only used for the debug view; findContours treats any nonzero
    # pixel as foreground, and the background is already black after
    # color_enhance, so the grayscale image can be passed to it directly
    thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 9, 2)
    contours, hierarchy = cv2.findContours(gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    H, W = img.shape[:2]
    count = 0
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        # skip tiny regions and boxes much taller than they are wide
        if cv2.contourArea(contour) < H * W / 500 or h > w * 1.1:
            continue
        count += 1
        cv2.rectangle(vis, (x, y), (x + w, y + h), (0, 0, 255), 2)
    if show:
        cv2.imshow('img', vis)
        cv2.imshow('gray', gray)
        cv2.imshow('thresh', thresh)
        cv2.waitKey(0)
    return count
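
One portability note: the return signature of cv2.findContours changed across OpenCV releases, returning (image, contours, hierarchy) in 3.x but (contours, hierarchy) in 2.x and 4.x, so the two-value unpacking above assumes OpenCV 2 or 4. A version-agnostic sketch:

# the contour list is always the second-to-last element of the returned tuple
contours = cv2.findContours(gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]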
def orb_match(img1, img2):
    # detect keypoints and compute their binary descriptors
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    return kp1, des1, kp2, des2
def draw_keypoints(img, keypoints, color=(0, 255, 255)):
    for kp in keypoints:
        x, y = kp.pt
        cv2.circle(img, (int(x), int(y)), 2, color)
    return img
def match_imgs(des1, des2):
    # ORB descriptors are binary, so Hamming distance is the right metric
    bf = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = bf.knnMatch(des1, des2, k=2)
    good = []
    # Lowe's ratio test: keep a match only if it is clearly better
    # than the second-best candidate
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < 0.8 * pair[1].distance:
            good.append([pair[0]])
    return good
def compute_slope(src, dst):
    # slope of the line joining a matched pair when the two 800-px-wide
    # images are placed side by side (the +800 shifts dst into the
    # right-hand image's coordinates)
    return (src[1] - dst[1]) / (dst[0] - src[0] + 800)
def judge(img1, img2, show=False):
    img3, img4 = color_enhance(img1), color_enhance(img2)
    # first gate: both screenshots must contain the same number of bubbles
    n1 = count_box(img3)
    n2 = count_box(img4)
    if n1 != n2:
        print('n1, n2: ', n1, n2)
        return False
    kp1, des1, kp2, des2 = orb_match(img3, img4)
    good = match_imgs(des1, des2)
    if not good:   # guard against the degenerate case of no good matches
        return False
    src_pts = np.float32([kp1[m[0].queryIdx].pt for m in good]).reshape(-1, 2)
    dst_pts = np.float32([kp2[m[0].trainIdx].pt for m in good]).reshape(-1, 2)
    all_slopes = [compute_slope(src_pts[i], dst_pts[i]) for i in range(len(src_pts))]
    all_slopes.sort()
    # trim the 20% smallest and 20% largest slopes to suppress outlier matches
    len_s = len(all_slopes) // 5
    filtered_slopes = all_slopes[len_s:-len_s]
    slopes = filtered_slopes if filtered_slopes else all_slopes   # too few matches to trim
    if show:
        slopes = pd.Series(slopes)
        # print(slopes.describe())
        fig = plt.figure()
        ax = fig.add_subplot(111)
        ax.hist(slopes, bins=20, color='blue', alpha=0.8)
        plt.show()
        img5 = cv2.drawMatchesKnn(img1, kp1, img2, kp2, good, None, flags=2)
        thresh_merge = np.hstack((img3, img4))
        cv2.imshow("thresh_merge", thresh_merge)
        visual_1 = draw_keypoints(img1, kp1, color=(255, 0, 255))
        visual_2 = draw_keypoints(img2, kp2, color=(255, 0, 255))
        hmerge = np.hstack((visual_1, visual_2))
        cv2.imshow("point", hmerge)
        cv2.imshow("ORB", img5)
        cv2.waitKey(0)
        cv2.destroyAllWindows()
    # lines joining matched points on identical screenshots are horizontal
    # once the side-by-side offset is accounted for, so the trimmed mean
    # slope should sit near 0
    slopes_mean = sum(slopes) / len(slopes)
    print('abs slope mean: ', abs(slopes_mean))
    return abs(slopes_mean) < 0.01
if __name__ == '__main__':
    name1, name2 = './1.png', './2.png'
    img1, img2 = read_img(name1, name2)
    if judge(img1, img2, show=True):
        print('Same screenshots.')
    else:
        print('Different screenshots.')
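
Since the stated budget is 20 ms per pair, it is worth timing the Python prototype before porting. A rough sketch (reusing the placeholder file names from above; it times the full judge() call, debug prints included):

import time

img1, img2 = read_img('./1.png', './2.png')
n_runs = 50
t0 = time.perf_counter()
for _ in range(n_runs):
    judge(img1, img2)
print('average per pair: %.2f ms' % ((time.perf_counter() - t0) * 1000 / n_runs))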

C++ code (the color-enhancement step has been removed):

JudgeDuplicates.h

#ifndef JUDGEDUPLICATES_H
#define JUDGEDUPLICATES_H
#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <iostream>
#include <string>
#include <vector>
#include <opencv2/opencv.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>

class JudgeDuplicates
{
    public:
        JudgeDuplicates();
        void orb_match(cv::Mat, cv::Mat,
                       std::vector<cv::KeyPoint>&,
                       std::vector<cv::KeyPoint>&,
                       std::vector<cv::DMatch>&);
        double compute_slope(cv::Point, cv::Point);
        bool judge(std::string, std::string);
        virtual ~JudgeDuplicates();
    protected:
    private:
};

#endif // JUDGEDUPLICATES_H

JudgeDuplicates.cpp

#include "JudgeDuplicates.h"

JudgeDuplicates::JudgeDuplicates()
{
    //ctor
}

JudgeDuplicates::~JudgeDuplicates()
{
    //dtor
}

void JudgeDuplicates::orb_match(cv::Mat img1, cv::Mat img2,
                                std::vector<cv::KeyPoint>& kp1,
                                std::vector<cv::KeyPoint>& kp2,
                                std::vector<cv::DMatch>& goodmatches){
    int nfeatures = 500;  // maximum number of ORB keypoints to retain
    cv::Ptr<cv::ORB> detector = cv::ORB::create(nfeatures);
    cv::Mat des1, des2;
    detector->detectAndCompute(img1, cv::Mat(), kp1, des1);
    detector->detectAndCompute(img2, cv::Mat(), kp2, des2);
    // ORB descriptors are binary, so match them with Hamming distance
    cv::Ptr<cv::DescriptorMatcher> matcher = cv::DescriptorMatcher::create("BruteForce-Hamming");
    std::vector<std::vector<cv::DMatch> > matches_knn;
    matcher->knnMatch(des1, des2, matches_knn, 2);
    // Lowe's ratio test, mirroring the Python version
    for (size_t i = 0; i < matches_knn.size(); ++i){
        if(matches_knn[i].size() == 2 &&
           matches_knn[i][0].distance < 0.8 * matches_knn[i][1].distance){
            goodmatches.push_back(matches_knn[i][0]);
        }
    }
}

double JudgeDuplicates::compute_slope(cv::Point src, cv::Point dst){
    // slope of the line joining a matched pair when the two 800-px-wide
    // images are placed side by side (+800 shifts dst into the right image)
    return double(src.y - dst.y) / double(dst.x - src.x + 800.0);
}

bool JudgeDuplicates::judge(std::string name1, std::string name2){
    cv::Mat img1 = cv::imread(name1, 1);
    cv::Mat img2 = cv::imread(name2, 1);
    int h1 = img1.rows;
    int w1 = img1.cols;
    // resize both images to the same width-800 size; img1's aspect ratio
    // is used for both so that the two shapes match
    cv::resize(img1, img1, cv::Size(800, int(800 * h1 / w1)));
    cv::resize(img2, img2, cv::Size(800, int(800 * h1 / w1)));
    std::vector<cv::KeyPoint> kp1, kp2;
    std::vector<cv::DMatch> good_matches;
    orb_match(img1, img2, kp1, kp2, good_matches);
    std::cout << good_matches.size() << std::endl;
    std::vector<cv::Point> src_pts, dst_pts;
    for(size_t i = 0; i < good_matches.size(); ++i){
        int x1 = kp1[good_matches[i].queryIdx].pt.x;
        int y1 = kp1[good_matches[i].queryIdx].pt.y;
        int x2 = kp2[good_matches[i].trainIdx].pt.x;
        int y2 = kp2[good_matches[i].trainIdx].pt.y;
        src_pts.push_back(cv::Point(x1, y1));
        dst_pts.push_back(cv::Point(x2, y2));
    }
    std::vector<double> slopes;
    for(size_t i = 0; i < src_pts.size(); ++i){
        slopes.push_back(compute_slope(src_pts[i], dst_pts[i]));
    }
    std::sort(slopes.begin(), slopes.end());
    // average the absolute slopes of the middle 60%, dropping the
    // smallest and largest 20% as outliers
    int line_cnt = 0;
    double mean_slope = 0.0;
    for(size_t i = 0; i < slopes.size(); ++i){
        if(i < slopes.size() * 0.2){
            continue;
        }
        if(i > slopes.size() * 0.8){
            break;
        }
        line_cnt += 1;
        mean_slope += std::fabs(slopes[i]);
        std::cout << slopes[i] << std::endl;
    }
    if(line_cnt != 0){
        mean_slope /= line_cnt;
    }
    else{
        mean_slope = 1000000;  // no usable matches: force a "different" verdict
    }
    std::cout << mean_slope << " line_cnt " << line_cnt << std::endl;
    return mean_slope < 0.001;
}