In recent years, the demand for reconstructing the 3D shape of objects and scenes from captured 2D images has been growing. Most common 3D reconstruction approaches simulate binocular vision: two cameras capture a left and right image pair, from which a depth map is calculated. This work instead adopts a single-camera, multi-angle shooting architecture that captures multiple images from different viewpoints; after image feature point extraction, feature point matching, and structure from motion (SfM), the object's 3D contour is presented as a point cloud. Based on a Raspberry Pi single-board computer, experimental equipment was built to photograph objects and compute their 3D point cloud data. The experiments yield three results: (1) the Raspberry Pi controls a motor and captures images with the Picamera; (2) SIFT feature points are extracted and matched in a Python environment, and adjusting the matching threshold yields more matches; (3) the triangulation principle underlying the structure-from-motion computation is used to display the matched points in 3D coordinates.
Keywords: Raspberry Pi, camera calibration, python, 3D reconstruction, SfM
In recent years, the need to reconstruct the 3D shape of objects and scenes from captured 2D images has been increasing. In general, most common 3D image reconstructions simulate binocular vision: two cameras capture the desired left and right image pair, from which a depth map is calculated. In this dissertation, a single-camera, multi-angle shooting architecture is used to capture multiple images from different angles; after image feature point extraction, feature point matching, and structure from motion (SfM), a point cloud presents the 3D contour of the object. Based on the Raspberry Pi, this dissertation builds experimental equipment to photograph objects and calculate their 3D point cloud data. The experimental results are threefold. First, the Raspberry Pi is used to control a stepping motor and capture images with the Picamera. Second, feature points between 2D images are extracted and matched with the SIFT algorithm in a Python environment, and more matches are obtained by adjusting the matching threshold. Third, the triangulation principle underlying the structure-from-motion computation is used to display the matched points in 3D coordinates.
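The threshold-based SIFT matching described above can be sketched as follows. This is a minimal illustration of Lowe's ratio test, not the dissertation's actual code: `ratio_match` is a hypothetical helper that compares each descriptor against its two nearest neighbours, and raising the ratio admits more (but noisier) matches, mirroring the threshold adjustment the abstract mentions.

```python
import numpy as np

def ratio_match(desc1, desc2, ratio=0.75):
    """Match feature descriptors using Lowe's ratio test.

    For each descriptor in desc1, find its two nearest neighbours in
    desc2 (Euclidean distance) and keep the match only when the closest
    neighbour is sufficiently better than the second closest. Raising
    `ratio` relaxes the test and yields more matches.
    """
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, int(j1)))
    return matches
```

In practice the descriptors would come from an OpenCV SIFT detector; the same ratio test then filters the raw nearest-neighbour matches before triangulation.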
Keywords: Raspberry Pi, camera calibration, python, 3D reconstruction, structure from motion (SfM)
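The triangulation step that places matched points into 3D coordinates can be illustrated with the standard linear (DLT) method. This is a sketch under assumed inputs, not the dissertation's implementation: `triangulate` is a hypothetical helper that takes the two cameras' 3x4 projection matrices and one matched pixel pair, and solves the homogeneous system via SVD.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one scene point from two views.

    P1, P2 are 3x4 projection matrices; x1, x2 are the matched pixel
    coordinates (u, v) of the same point in each image. Builds the
    homogeneous system A X = 0, solves it by SVD, and returns the
    inhomogeneous 3D point.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]            # right singular vector of the smallest singular value
    return X[:3] / X[3]   # de-homogenize
```

Repeating this for every matched pair produces the point cloud; in an SfM pipeline the projection matrices themselves are estimated from the matches and the calibrated camera intrinsics.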