Converting Between the World, Camera, and Image Coordinate Systems (Python)
For background on camera intrinsic and extrinsic parameters, see the CSDN post 相機內參外參 (pan_jinquan's blog).
For the camera imaging model and the conversions between the world, camera, image, and pixel coordinate systems, see the CSDN post 計算機視覺:相機成像原理:世界坐標系、相機坐標系、圖像坐標系、像素坐標系之間的轉換 (blog: 生活沒有if-else).
1. World coordinate system -> camera coordinate system
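This step is a rigid-body change of frame: a world point P_w is mapped into camera coordinates as P_c = R * (P_w - T), where R is the rotation matrix and T the translation, which is exactly what convert_wc_to_cc does in the code further down. A minimal NumPy sketch with made-up values for R, T, and the point:

import numpy as np

# Illustrative values only: R is the world->camera rotation, T the translation (mm)
R = np.eye(3)                          # identity rotation, for simplicity
T = np.array([0.0, 0.0, -1000.0])      # translation, in mm
P_w = np.array([100.0, 50.0, 0.0])     # a point in world coordinates, in mm

P_c = R @ (P_w - T)                    # world coordinates -> camera coordinates
print(P_c)                             # -> [ 100.   50. 1000.]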
2. Camera coordinate system -> image coordinate system
At this stage the projected point p is still measured in mm rather than pixels, so one more conversion into the pixel coordinate system is needed.
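Concretely, under the pinhole model a camera-coordinate point (X, Y, Z) projects onto the image plane at x = f * X / Z and y = f * Y / Z, both still in mm. A minimal sketch with an assumed focal length:

# Assumed focal length, in mm, purely for illustration
f = 35.0
X, Y, Z = 100.0, 50.0, 1000.0          # a point in camera coordinates, in mm

# Perspective projection onto the image plane; results are physical lengths (mm), not pixels
x = f * X / Z
y = f * Y / Z
print(x, y)                            # -> 3.5 1.75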
3. Image coordinate system and pixel coordinate system
The pixel coordinate system and the image coordinate system both lie on the imaging plane; they differ only in their origin and unit of measurement. The origin of the image coordinate system is the intersection of the camera's optical axis with the imaging plane, usually the center of the plane, also called the principal point. The image coordinate system is measured in mm, a physical unit, whereas the pixel coordinate system is measured in pixels, which is why we normally describe a pixel by its row and column. The conversion between the two therefore uses dx and dy, which say how many mm one column and one row correspond to, i.e. 1 pixel = dx mm.
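In code, the conversion divides the physical image coordinates by the pixel pitch (dx, dy) and shifts by the principal point (cx, cy); folding this into the projection step gives fx = f/dx and fy = f/dy, which is what the "f" entries of the intrinsic dictionaries below store. A short sketch with assumed values for the pitch and principal point:

# Assumed values, for illustration only
dx, dy = 0.01, 0.01                    # physical size of one pixel, in mm
cx, cy = 512.5, 515.5                  # principal point, in pixels
f = 11.45                              # focal length, in mm

x, y = 3.5, 1.75                       # image-plane coordinates in mm (from the previous step)

# Image coordinates (mm) -> pixel coordinates
u = x / dx + cx
v = y / dy + cy

# Equivalent pixel-unit intrinsics: fx = f/dx, fy = f/dy, so u = fx * X/Z + cx directly
fx, fy = f / dx, f / dy
print(u, v, fx, fy)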
Chaining these four coordinate-system conversions shows how a point is mapped from the world coordinate system all the way to the pixel coordinate system.
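Composing the three steps, here is a self-contained sketch of the full world-to-pixel mapping, independent of the repository helpers and using made-up extrinsics purely for illustration:

import numpy as np

def world_to_pixel(P_w, R, T, fx, fy, cx, cy):
    """Map a world-coordinate point to pixel coordinates (u, v) plus its depth Z in camera space."""
    P_c = R @ (np.asarray(P_w, dtype=float) - T)   # 1. world -> camera (rigid transform)
    X, Y, Z = P_c
    u = fx * X / Z + cx                            # 2.+3. camera -> image plane -> pixels
    v = fy * Y / Z + cy
    return u, v, Z

# Made-up extrinsics; intrinsics roughly like the Human3.6M values used below
R = np.eye(3)
T = np.array([0.0, 0.0, -1000.0])
print(world_to_pixel([100.0, 50.0, 0.0], R, T,
                     fx=1145.05, fy=1143.78, cx=512.54, cy=515.45))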
Python implementation:
The key code implementing the transforms is given below.
The related visualization code has been pushed to GitHub: https://github.com/PanJinquan/python-learning-notes
https://github.com/PanJinquan/python-learning-notes/blob/master/modules/utils_3d/camera_tools.py
# -*- coding: utf-8 -*-
"""
# --------------------------------------------------------
# @Project: Integral-Human-Pose-Regression-for-3D-Human-Pose-Estimation
# @Author : panjq
# @E-mail : pan_jinquan@163.com
# @Date   : 2020-02-04 16:03:01
# @url    : https://www.jianshu.com/p/c5627ad019df
# --------------------------------------------------------
"""
import sys
import os

sys.path.append(os.getcwd())
import cv2
import numpy as np
from modules.utils_3d import vis_3d as vis
from utils import image_processing

# Note: despite the name, these dicts bundle the extrinsics (R, T) and the intrinsics (f, c).
human36m_camera_intrinsic = {
    # R: rotation matrix
    "R": [[-0.91536173, 0.40180837, 0.02574754],
          [0.05154812, 0.18037357, -0.98224649],
          [-0.39931903, -0.89778361, -0.18581953]],
    # T: translation vector
    "T": [1841.10702775, 4955.28462345, 1563.4453959],
    # focal length: f/dx, f/dy
    "f": [1145.04940459, 1143.78109572],
    # principal point: intersection of the optical axis with the image plane
    "c": [512.54150496, 515.45148698]
}

kinect2_camera_intrinsic = {
    # R: rotation matrix
    "R": [[0.999853, -0.00340388, 0.0167495],
          [0.00300206, 0.999708, 0.0239986],
          [-0.0168257, -0.0239459, 0.999571]],
    # T: translation vector
    "T": [15.2562, 70.2212, -10.9926],
    # focal length: f/dx, f/dy
    "f": [367.535, 367.535],
    # principal point: intersection of the optical axis with the image plane
    "c": [260.166, 205.197]
}

camera_intrinsic = human36m_camera_intrinsic
# camera_intrinsic = kinect2_camera_intrinsic


class CameraTools(object):

    @staticmethod
    def convert_wc_to_cc(joint_world):
        """
        World coordinates -> camera coordinates: R * (pt - T)
        joint_cam = np.dot(R, (joint_world - T).T).T
        :return:
        """
        joint_world = np.asarray(joint_world)
        R = np.asarray(camera_intrinsic["R"])
        T = np.asarray(camera_intrinsic["T"])
        joint_num = len(joint_world)
        # [R|t] world coords -> camera coords
        # joint_cam = np.zeros((joint_num, 3))  # joint camera
        # for i in range(joint_num):  # joint i
        #     joint_cam[i] = np.dot(R, joint_world[i] - T)  # R * (pt - T)
        # .T is transpose; T is the translation vector
        joint_cam = np.dot(R, (joint_world - T).T).T  # R * (pt - T)
        return joint_cam

    @staticmethod
    def convert_cc_to_wc(joint_cam):
        """
        Camera coordinates -> world coordinates: inv(R) * pt + T
        joint_world = np.dot(inv(R), joint_cam.T).T + T
        :return:
        """
        joint_cam = np.asarray(joint_cam)
        R = np.asarray(camera_intrinsic["R"])
        T = np.asarray(camera_intrinsic["T"])
        # camera coords -> world coords
        joint_world = np.dot(np.linalg.inv(R), joint_cam.T).T + T
        return joint_world

    @staticmethod
    def __cam2pixel(cam_coord, f, c):
        """
        Camera coordinates -> pixel coordinates: (f / dx) * (X / Z) = f * (X / Z) / dx
        cx,ppx=260.166; cy,ppy=205.197; fx=367.535; fy=367.535
        Mapping a 3D point (X, Y, Z) to 2D pixel coordinates P(u, v):
            u = X * fx / Z + cx
            v = Y * fy / Z + cy
            D(v, u) = Z / Alpha
        =====================================================
        camera_matrix = [[428.30114, 0., 316.41648],
                         [0., 427.00564, 218.34591],
                         [0., 0., 1.]]
        fx = camera_intrinsic[0, 0]
        fy = camera_intrinsic[1, 1]
        cx = camera_intrinsic[0, 2]
        cy = camera_intrinsic[1, 2]
        =====================================================
        :param cam_coord:
        :param f: [fx, fy]
        :param c: [cx, cy]
        :return:
        """
        # Equivalent to (f / dx) * (X / Z) = f * (X / Z) / dx:
        # perspective division, then / dx, then + principal point
        u = cam_coord[..., 0] / cam_coord[..., 2] * f[0] + c[0]
        v = cam_coord[..., 1] / cam_coord[..., 2] * f[1] + c[1]
        d = cam_coord[..., 2]
        return u, v, d

    @staticmethod
    def convert_cc_to_ic(joint_cam):
        """
        Camera coordinates -> pixel coordinates
        :param joint_cam:
        :return:
        """
        # camera coords -> pixel coords, and get relative depth
        # Subtract center depth:
        # use the pelvis joint as the root; its depth is subtracted below to get the relative depth
        root_idx = 0
        center_cam = joint_cam[root_idx]  # (x, y, z) in mm
        joint_num = len(joint_cam)
        f = camera_intrinsic["f"]
        c = camera_intrinsic["c"]
        # joint_img: pixel coordinates; depth is the relative depth in mm
        joint_img = np.zeros((joint_num, 3))
        joint_img[:, 0], joint_img[:, 1], joint_img[:, 2] = CameraTools.__cam2pixel(joint_cam, f, c)  # x, y
        joint_img[:, 2] = joint_img[:, 2] - center_cam[2]  # z
        return joint_img


def demo_for_human36m():
    joint_world = [[-91.679, 154.404, 907.261],
                   [-223.23566, 163.80551, 890.5342],
                   [-188.4703, 14.077106, 475.1688],
                   [-261.84055, 186.55286, 61.438915],
                   [39.877888, 145.00247, 923.98785],
                   [-11.675994, 160.89919, 484.39148],
                   [-51.550297, 220.14624, 35.834396],
                   [-132.34781, 215.73018, 1128.8396],
                   [-97.1674, 202.34435, 1383.1466],
                   [-112.97073, 127.96946, 1477.4457],
                   [-120.03289, 190.96477, 1573.4],
                   [25.895456, 192.35947, 1296.1571],
                   [107.10581, 116.050285, 1040.5062],
                   [129.8381, -48.024918, 850.94806],
                   [-230.36955, 203.17923, 1311.9639],
                   [-315.40536, 164.55284, 1049.1747],
                   [-350.77136, 43.442127, 831.3473],
                   [-102.237045, 197.76935, 1304.0605]]
    joint_world = np.asarray(joint_world)
    # skeleton connections between joints
    kps_lines = ((0, 7), (7, 8), (8, 9), (9, 10), (8, 11), (11, 12), (12, 13), (8, 14), (14, 15),
                 (15, 16), (0, 1), (1, 2), (2, 3), (0, 4), (4, 5), (5, 6))
    # show in world coordinates
    vis.vis_3d(joint_world, kps_lines, coordinate="WC", title="WC", set_lim=True, isshow=True)
    kp_vis = CameraTools()
    # show in camera coordinates
    joint_cam = kp_vis.convert_wc_to_cc(joint_world)
    vis.vis_3d(joint_cam, kps_lines, coordinate="CC", title="CC", set_lim=True, isshow=True)
    joint_img = kp_vis.convert_cc_to_ic(joint_cam)
    joint_world1 = kp_vis.convert_cc_to_wc(joint_cam)
    vis.vis_3d(joint_world1, kps_lines, coordinate="WC", title="WC", set_lim=True, isshow=True)
    # show in pixel coordinates: draw the projected 2D keypoints on the image
    kpt_2d = joint_img[:, 0:2]
    image_path = "./data/s_01_act_02_subact_01_ca_02_000001.jpg"
    image = image_processing.read_image(image_path)
    image = image_processing.draw_key_point_in_image(image, key_points=[kpt_2d], pointline=kps_lines)
    image_processing.cv_show_image("image_dict", image)


if __name__ == "__main__":
    demo_for_human36m()

Result: the demo renders the skeleton in the world and camera coordinate systems and draws the projected 2D keypoints on the Human3.6M image.
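A quick sanity check that can be run with the class above (assuming CameraTools and numpy are in scope): convert_wc_to_cc and convert_cc_to_wc are exact inverses, so a world -> camera -> world round trip should reproduce the input:

# Round-trip check: world -> camera -> world should reproduce the input points
pts_world = np.asarray([[-91.679, 154.404, 907.261],
                        [39.877888, 145.00247, 923.98785]])
pts_cam = CameraTools.convert_wc_to_cc(pts_world)
pts_back = CameraTools.convert_cc_to_wc(pts_cam)
print(np.allclose(pts_world, pts_back))   # expected: True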
Summary
Chaining the conversions above (world -> camera -> image -> pixel) takes a 3D point all the way from world coordinates to a 2D pixel location, and the demo code walks through each step using the Human3.6M camera parameters.