Subpixel point detection on an image with Python and OpenCV, with ellipse fitting and drawing

2024/9/22 7:35:59

This post shows how to detect subpixel edge points in an image with Python and OpenCV, and how to fit and draw ellipses through those subpixel points.

1. Result images

Fitted ellipse drawn on the original image VS fitted ellipse plus circles of random radius and color at each subpixel point on the original image VS subpixel points drawn on the grayscale image:
(image)
A second result, on a photo of Yingbao (a favorite of mine):
(image)
The same, with the random circle radii enlarged a bit:
(image)

2. Source code

Note that cv2.circle requires the circle center coordinates to be int values.
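The detector returns float subpixel coordinates, so they have to be rounded before drawing. A minimal sketch of that conversion (the sample coordinates below are made up for illustration):

```python
# Hypothetical float subpixel points, as the detector would return them
xs = [12.37, 40.82, 55.5]
ys = [8.91, 20.08, 33.49]

# cv2.circle expects an (int, int) center, so round each coordinate
centers = [(int(round(x)), int(round(y))) for x, y in zip(xs, ys)]
print(centers)  # [(12, 9), (41, 20), (56, 33)]
```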

# Compute the subpixel points of an image and draw them (with matplotlib and cv2),
# then fit ellipses through the subpixel points with 3 different fitting methods
# python zernick_detection.py
import time

import cv2
import imutils
import matplotlib.pyplot as plt
import numpy as np

g_N = 7

# 7x7 Zernike moment convolution templates
M00 = np.array([0, 0.0287, 0.0686, 0.0807, 0.0686, 0.0287, 0,
                0.0287, 0.0815, 0.0816, 0.0816, 0.0816, 0.0815, 0.0287,
                0.0686, 0.0816, 0.0816, 0.0816, 0.0816, 0.0816, 0.0686,
                0.0807, 0.0816, 0.0816, 0.0816, 0.0816, 0.0816, 0.0807,
                0.0686, 0.0816, 0.0816, 0.0816, 0.0816, 0.0816, 0.0686,
                0.0287, 0.0815, 0.0816, 0.0816, 0.0816, 0.0815, 0.0287,
                0, 0.0287, 0.0686, 0.0807, 0.0686, 0.0287, 0]).reshape((7, 7))

M11R = np.array([0, -0.015, -0.019, 0, 0.019, 0.015, 0,
                 -0.0224, -0.0466, -0.0233, 0, 0.0233, 0.0466, 0.0224,
                 -0.0573, -0.0466, -0.0233, 0, 0.0233, 0.0466, 0.0573,
                 -0.069, -0.0466, -0.0233, 0, 0.0233, 0.0466, 0.069,
                 -0.0573, -0.0466, -0.0233, 0, 0.0233, 0.0466, 0.0573,
                 -0.0224, -0.0466, -0.0233, 0, 0.0233, 0.0466, 0.0224,
                 0, -0.015, -0.019, 0, 0.019, 0.015, 0]).reshape((7, 7))

M11I = np.array([0, -0.0224, -0.0573, -0.069, -0.0573, -0.0224, 0,
                 -0.015, -0.0466, -0.0466, -0.0466, -0.0466, -0.0466, -0.015,
                 -0.019, -0.0233, -0.0233, -0.0233, -0.0233, -0.0233, -0.019,
                 0, 0, 0, 0, 0, 0, 0,
                 0.019, 0.0233, 0.0233, 0.0233, 0.0233, 0.0233, 0.019,
                 0.015, 0.0466, 0.0466, 0.0466, 0.0466, 0.0466, 0.015,
                 0, 0.0224, 0.0573, 0.069, 0.0573, 0.0224, 0]).reshape((7, 7))

M20 = np.array([0, 0.0225, 0.0394, 0.0396, 0.0394, 0.0225, 0,
                0.0225, 0.0271, -0.0128, -0.0261, -0.0128, 0.0271, 0.0225,
                0.0394, -0.0128, -0.0528, -0.0661, -0.0528, -0.0128, 0.0394,
                0.0396, -0.0261, -0.0661, -0.0794, -0.0661, -0.0261, 0.0396,
                0.0394, -0.0128, -0.0528, -0.0661, -0.0528, -0.0128, 0.0394,
                0.0225, 0.0271, -0.0128, -0.0261, -0.0128, 0.0271, 0.0225,
                0, 0.0225, 0.0394, 0.0396, 0.0394, 0.0225, 0]).reshape((7, 7))

M31R = np.array([0, -0.0103, -0.0073, 0, 0.0073, 0.0103, 0,
                 -0.0153, -0.0018, 0.0162, 0, -0.0162, 0.0018, 0.0153,
                 -0.0223, 0.0324, 0.0333, 0, -0.0333, -0.0324, 0.0223,
                 -0.0190, 0.0438, 0.0390, 0, -0.0390, -0.0438, 0.0190,
                 -0.0223, 0.0324, 0.0333, 0, -0.0333, -0.0324, 0.0223,
                 -0.0153, -0.0018, 0.0162, 0, -0.0162, 0.0018, 0.0153,
                 0, -0.0103, -0.0073, 0, 0.0073, 0.0103, 0]).reshape((7, 7))

M31I = np.array([0, -0.0153, -0.0223, -0.019, -0.0223, -0.0153, 0,
                 -0.0103, -0.0018, 0.0324, 0.0438, 0.0324, -0.0018, -0.0103,
                 -0.0073, 0.0162, 0.0333, 0.039, 0.0333, 0.0162, -0.0073,
                 0, 0, 0, 0, 0, 0, 0,
                 0.0073, -0.0162, -0.0333, -0.039, -0.0333, -0.0162, 0.0073,
                 0.0103, 0.0018, -0.0324, -0.0438, -0.0324, 0.0018, 0.0103,
                 0, 0.0153, 0.0223, 0.0190, 0.0223, 0.0153, 0]).reshape((7, 7))

M40 = np.array([0, 0.013, 0.0056, -0.0018, 0.0056, 0.013, 0,
                0.0130, -0.0186, -0.0323, -0.0239, -0.0323, -0.0186, 0.0130,
                0.0056, -0.0323, 0.0125, 0.0406, 0.0125, -0.0323, 0.0056,
                -0.0018, -0.0239, 0.0406, 0.0751, 0.0406, -0.0239, -0.0018,
                0.0056, -0.0323, 0.0125, 0.0406, 0.0125, -0.0323, 0.0056,
                0.0130, -0.0186, -0.0323, -0.0239, -0.0323, -0.0186, 0.0130,
                0, 0.013, 0.0056, -0.0018, 0.0056, 0.013, 0]).reshape((7, 7))


def zernike_detection(path):
    img = cv2.imread(path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blur_img = cv2.medianBlur(img, 13)
    c_img = cv2.adaptiveThreshold(blur_img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                  cv2.THRESH_BINARY_INV, 7, 4)

    # Convolve the binarized image with each Zernike template
    ZerImgM00 = cv2.filter2D(c_img, cv2.CV_64F, M00)
    ZerImgM11R = cv2.filter2D(c_img, cv2.CV_64F, M11R)
    ZerImgM11I = cv2.filter2D(c_img, cv2.CV_64F, M11I)
    ZerImgM20 = cv2.filter2D(c_img, cv2.CV_64F, M20)
    ZerImgM31R = cv2.filter2D(c_img, cv2.CV_64F, M31R)
    ZerImgM31I = cv2.filter2D(c_img, cv2.CV_64F, M31I)
    ZerImgM40 = cv2.filter2D(c_img, cv2.CV_64F, M40)

    point_temporary_x = []
    point_temporary_y = []
    scatter_arr = cv2.findNonZero(ZerImgM00).reshape(-1, 2)
    for idx in scatter_arr:
        j, i = idx
        # Edge orientation, and the Zernike moments rotated into that orientation
        theta_temporary = np.arctan2(ZerImgM31I[i][j], ZerImgM31R[i][j])
        rotated_z11 = np.sin(theta_temporary) * ZerImgM11I[i][j] + \
            np.cos(theta_temporary) * ZerImgM11R[i][j]
        rotated_z31 = np.sin(theta_temporary) * ZerImgM31I[i][j] + \
            np.cos(theta_temporary) * ZerImgM31R[i][j]
        # Two estimates of the edge distance l, and the step height k
        l_method1 = np.sqrt((5 * ZerImgM40[i][j] + 3 * ZerImgM20[i][j]) / (8 * ZerImgM20[i][j]))
        l_method2 = np.sqrt((5 * rotated_z31 + rotated_z11) / (6 * rotated_z11))
        l = (l_method1 + l_method2) / 2
        k = 3 * rotated_z11 / (2 * (1 - l_method2 ** 2) ** 1.5)
        # h = (ZerImgM00[i][j] - k * np.pi / 2 + k * np.arcsin(l_method2)
        #      + k * l_method2 * (1 - l_method2 ** 2) ** 0.5) / np.pi
        k_value = 20.0
        l_value = 2 ** 0.5 / g_N
        absl = np.abs(l_method2 - l_method1)
        # Keep only strong edges whose two distance estimates agree
        if k >= k_value and absl <= l_value:
            y = i + g_N * l * np.sin(theta_temporary) / 2
            x = j + g_N * l * np.cos(theta_temporary) / 2
            point_temporary_x.append(x)
            point_temporary_y.append(y)
    return point_temporary_x, point_temporary_y


image = cv2.imread('ml2.jpg')
path = 'ym_600.jpg'
cv2.imwrite(path, imutils.resize(image, height=600))
time1 = time.time()
point_temporary_x, point_temporary_y = zernike_detection(path)
time2 = time.time()
print(time2 - time1)

# gray: the image the detection ran on
gray = cv2.imread(path, 0)
plt.imshow(gray, cmap="gray")
# point: the detected subpixel points
point = np.array([point_temporary_x, point_temporary_y])
# s: size of the displayed points
plt.scatter(point[0, :], point[1, :], s=10, marker="*")
plt.show()


def cv_fit_ellipse(points, flag=0):
    # cv2.fitEllipse can only estimate from an int numpy 2D array
    p = np.array(points).astype(int)
    center0, axes0, angle0 = cv2.fitEllipse(p)
    center1, axes1, angle1 = cv2.fitEllipseAMS(p)
    center2, axes2, angle2 = cv2.fitEllipseDirect(p)
    print(type(center0))
    print("fitEllipse:       " + str(np.array(center0)))
    print("fitEllipseAMS:    " + str(np.array(center1)))
    print("fitEllipseDirect: " + str(np.array(center2)))
    if flag == 0:
        box = (tuple(center0), tuple(axes0), angle0)
    elif flag == 1:
        box = (tuple(center1), tuple(axes1), angle1)
    else:
        box = (tuple(center2), tuple(axes2), angle2)
    return box


points = np.array(list(zip(point_temporary_x, point_temporary_y)))
print(type(points), points[0], points.shape)
box1 = cv_fit_ellipse(points, flag=1)
image = cv2.imread(path)
cv2.ellipse(img=image, box=box1, color=(255, 0, 255), thickness=1)
cv2.imshow("image", image)
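As an aside, if you only need the ellipse center and the OpenCV fitters are unavailable, a plain least-squares conic fit recovers it directly. A minimal NumPy-only sketch on synthetic points (not part of the original script):

```python
import numpy as np

# Synthetic points on an axis-aligned ellipse centered at (3, 2)
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
x = 3 + 4 * np.cos(t)
y = 2 + 1.5 * np.sin(t)

# Least-squares conic fit: A x^2 + B xy + C y^2 + D x + E y = 1
M = np.column_stack([x * x, x * y, y * y, x, y])
A, B, C, D, E = np.linalg.lstsq(M, np.ones_like(x), rcond=None)[0]

# The center zeroes the conic gradient: 2A cx + B cy = -D, B cx + 2C cy = -E
cx, cy = np.linalg.solve(np.array([[2 * A, B], [B, 2 * C]]), np.array([-D, -E]))
print(round(cx, 3), round(cy, 3))  # 3.0 2.0
```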
for i in points:
    # draw a circle of random color and random radius at each subpixel point
    radius = np.random.randint(1, high=10)
    color = np.random.randint(0, high=256, size=(3,)).tolist()
    cv2.circle(image, (int(i[0]), int(i[1])), radius, color, -1)
cv2.imshow("image1", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
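The subpixel correction at the heart of zernike_detection simply moves each candidate pixel by N·l/2 along the edge normal direction θ. A standalone numeric sketch of that step (the θ and l values below are made up for illustration):

```python
import math

N = 7               # size of the 7x7 Zernike templates (g_N in the script)
theta = math.pi / 4  # edge orientation, from arctan2(M31I, M31R)
l = 0.2             # edge distance from the mask center, on the unit circle

# Pixel at row i=10, column j=20, shifted to its subpixel position
x = 20 + N * l * math.cos(theta) / 2
y = 10 + N * l * math.sin(theta) / 2
print(round(x, 3), round(y, 3))  # 20.495 10.495
```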

References

  • Python, OpenCV mouse events for drawing rectangles and circles (random color, random radius)
  • Subpixel point computation for images
  • Ellipse fitting
