
Author: Yang, Tzu-Yi (楊子逸)
Thesis title: Using Augmented Reality in Positioning Calibration of Micro-Assembly System for Improving Assembly Success Rate (使用擴增實境於微組裝系統之定位校準以提升組裝成功率)
Advisor: Chang, Ren-Jung (張仁宗)
Degree: Master
Department: College of Engineering, Department of Mechanical Engineering
Year of publication: 2021
Academic year of graduation: 109 (2020-2021)
Language: Chinese
Number of pages: 123
Keywords (Chinese): 擴增虛擬, 攝影機校正, 微組裝系統, 影像伺服, 三維重建
Keywords (English): augmented virtuality, camera calibration, micro-assembly system, image servo, 3D reconstruction
Chinese abstract (translated): This study applies augmented virtuality to a micro-assembly system: real-world information is added to the virtual environment so that it reconstructs the actual state of the machine. To improve the assembly success rate, a three-dimensional chessboard target is first used to accurately estimate the cameras' intrinsic and extrinsic parameters. To further improve positioning accuracy, the three-dimensional pose of the hole in the workpiece is estimated first: augmented reality is used to bring the virtual image into registration with the real image, which yields the hole's three-dimensional coordinates. The hole's coordinates are then estimated again with a dual-camera method, and the discrepancy between the two estimates is used to correct the cameras' extrinsic parameters. Finally, the calibrated camera parameters are used to estimate the three-dimensional coordinates of the end point of the mating part, completing the three-dimensional pose estimation of both the hole and the part's end point.
The three-dimensional coordinates of the micro-objects are transmitted over the network to the virtual environment so that the positions of the virtual objects are registered to their real-world positions. In the virtual environment, the user can locally magnify the hole and the end point of the mating part, observe the assembly from different viewpoints, and fine-tune the part's position in real time so that it inserts fully into the hole; a ruler in the virtual environment shows how deeply the part has been inserted, allowing the user to confirm whether the assembly succeeded. In the experiments, the mating part (a copper rod) is 190 μm in diameter, the hole in the workpiece (a glass tube) is 240 μm in diameter, and the assembly clearance ratio is 0.208.
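
To make the calibration and dual-camera steps above concrete, here is a minimal sketch in Python with OpenCV, under stated assumptions: the chessboard corner coordinates, camera poses, intrinsics, and image size are synthetic placeholders rather than values from the thesis, and the augmented-reality alignment and extrinsic-correction stages are not reproduced.

    import numpy as np
    import cv2

    # --- Synthetic stand-in data (placeholders, not values from the thesis) ---
    # Corner coordinates of a three-dimensional (non-planar) chessboard target, in mm,
    # and their projections into two cameras, generated with an assumed pinhole model.
    rng = np.random.default_rng(0)
    obj_pts = rng.uniform(-5.0, 5.0, (60, 3)).astype(np.float32)
    image_size = (1280, 960)
    K_true = np.array([[4000.0, 0.0, 640.0],
                       [0.0, 4000.0, 480.0],
                       [0.0, 0.0, 1.0]])

    def project(K, rvec, tvec, pts):
        # Ideal pinhole projection (no distortion) used only to synthesize image points.
        img, _ = cv2.projectPoints(pts, rvec, tvec, K, None)
        return img.astype(np.float32)

    rvec1, tvec1 = np.array([0.05, 0.0, 0.0]), np.array([0.0, 0.0, 200.0])
    rvec2, tvec2 = np.array([0.0, 0.4, 0.0]), np.array([-10.0, 0.0, 210.0])
    img_pts1 = project(K_true, rvec1, tvec1, obj_pts)
    img_pts2 = project(K_true, rvec2, tvec2, obj_pts)

    # --- Calibration with the 3D target (intrinsic and extrinsic parameters per camera) ---
    # OpenCV requires an initial intrinsic guess when the calibration target is non-planar.
    K_init = np.array([[3500.0, 0.0, image_size[0] / 2],
                       [0.0, 3500.0, image_size[1] / 2],
                       [0.0, 0.0, 1.0]])
    flags = cv2.CALIB_USE_INTRINSIC_GUESS | cv2.CALIB_ZERO_TANGENT_DIST | cv2.CALIB_FIX_K3
    _, K1, d1, r1, t1 = cv2.calibrateCamera([obj_pts], [img_pts1], image_size,
                                            K_init.copy(), None, flags=flags)
    _, K2, d2, r2, t2 = cv2.calibrateCamera([obj_pts], [img_pts2], image_size,
                                            K_init.copy(), None, flags=flags)

    # --- Dual-camera estimation: triangulate a feature point seen by both cameras ---
    def proj_matrix(K, rvec, tvec):
        # Build the 3x4 projection matrix P = K [R | t] from a rotation vector and translation.
        R, _ = cv2.Rodrigues(rvec)
        return K @ np.hstack([R, np.asarray(tvec).reshape(3, 1)])

    P1 = proj_matrix(K1, r1[0], t1[0])
    P2 = proj_matrix(K2, r2[0], t2[0])
    u1 = img_pts1[0].reshape(2, 1).astype(np.float64)   # pixel location in camera 1
    u2 = img_pts2[0].reshape(2, 1).astype(np.float64)   # pixel location in camera 2
    X_h = cv2.triangulatePoints(P1, P2, u1, u2)          # homogeneous 4x1 result
    print("triangulated:", (X_h[:3] / X_h[3]).ravel(), "expected:", obj_pts[0])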

English abstract: This research applies augmented virtuality to a micro-assembly system: real information is added to the virtual environment to reconstruct a virtual environment that matches the actual condition of the machine. To improve the assembly success rate, a three-dimensional chessboard is first used to calibrate the cameras. To obtain the three-dimensional coordinates of the micro-objects, augmented reality is first used to estimate the three-dimensional coordinates of the hole in the glass tube; the three-dimensional coordinates of the end point of the copper rod are then estimated with a dual-camera method. The estimated coordinates are transmitted to the virtual environment over the network so that the positions of the virtual objects match their real-world positions. In the virtual environment, the user can locally enlarge the hole in the glass tube and the end point of the copper rod and observe the assembly from different perspectives, and a ruler in the virtual environment indicates how deeply the rod has been inserted into the tube. The virtual environment thus helps the user fine-tune the position of the micro-object and confirm whether the micro-assembly operation is successful. The diameter of the copper rod in this study is 190 μm, the diameter of the hole in the glass tube is 240 μm, and the clearance ratio is 0.208.
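
The abstracts also describe sending the estimated coordinates to the virtual environment over TCP/IP (Section 5-2-5 of the table of contents). The sketch below is a hypothetical sender rather than the protocol used in the thesis: the host, port, and JSON message layout are assumptions for illustration. It also spells out the clearance-ratio arithmetic, since (240 - 190) / 240 ≈ 0.208 matches the figure quoted in both abstracts.

    import json
    import socket

    # Hypothetical endpoint of the virtual-environment host (placeholder values;
    # a listener must already be running there for the send to succeed).
    HOST, PORT = "127.0.0.1", 5005

    def send_pose(hole_xyz, rod_tip_xyz):
        """Send one update of the estimated 3D coordinates (micrometers)
        to the virtual environment as a newline-terminated JSON message."""
        message = json.dumps({
            "hole": hole_xyz,        # [x, y, z] of the glass-tube hole center
            "rod_tip": rod_tip_xyz,  # [x, y, z] of the copper-rod end point
        }) + "\n"
        with socket.create_connection((HOST, PORT), timeout=2.0) as conn:
            conn.sendall(message.encode("utf-8"))

    # Clearance ratio: (hole diameter - rod diameter) / hole diameter.
    hole_d, rod_d = 240.0, 190.0                  # micrometers
    print(round((hole_d - rod_d) / hole_d, 3))    # 0.208

    # Example update with placeholder coordinates (micrometers).
    send_pose([12.5, -3.1, 48.0], [11.9, -2.8, 55.2])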

Table of contents:
  Abstract (Chinese) I
  Extended Abstract II
  Acknowledgments VI
  Table of Contents VII
  List of Figures XII
  List of Tables XIX
  Nomenclature XXI
  Chapter 1  Introduction 1
    1-1 Preface 1
    1-2 Research Motivation 2
    1-3 Literature Review 2
      1-3-1 Virtual Reality, Augmented Reality, and Augmented Virtuality in Micro-Assembly Systems 3
      1-3-2 Camera Calibration 8
      1-3-3 Pose Estimation Using a Three-Dimensional Calibration Device 11
    1-4 Research Objectives and Methods 14
    1-5 Thesis Organization 16
  Chapter 2  Camera Model and Calibration 17
    2-1 Camera Model 17
      2-1-1 Pinhole Imaging Model 18
      2-1-2 Camera Intrinsic Parameters 19
      2-1-3 Camera Extrinsic Parameters 22
    2-2 Image Distortion 23
      2-2-1 Radial Distortion 23
      2-2-2 Tangential Distortion 24
    2-3 Calibration Method 26
      2-3-1 Computing the Initial Intrinsic Parameters 26
      2-3-2 Initial Camera Pose Estimation 28
      2-3-3 Maximum Likelihood Estimation 30
    2-4 Chapter Summary 32
  Chapter 3  Autofocus and Feature Extraction 33
    3-1 Autofocus 33
      3-1-1 Autofocus Methods 33
      3-1-2 Search Strategies 36
      3-1-3 Focus Evaluation Functions 39
      3-1-4 Autofocus Procedure 40
    3-2 Feature Extraction 42
      3-2-1 Canny Edge Detection 42
      3-2-2 Color Recognition 45
      3-2-3 Feature Extraction Tests 51
    3-3 Chessboard Corner Extraction 53
    3-4 Chapter Summary 55
  Chapter 4  Camera-Based Three-Dimensional Pose Estimation 56
    4-1 Three-Dimensional Coordinate Estimation of the Mating Part 56
      4-1-1 Epipolar Geometry 57
      4-1-2 Epipolar Constraint 58
      4-1-3 Fundamental Matrix 60
      4-1-4 Dual-Camera Three-Dimensional Coordinate Estimation 61
      4-1-5 Dual-Camera Three-Dimensional Pose Estimation Procedure 63
    4-2 Three-Dimensional Coordinate Estimation of the Assembly Hole 65
      4-2-1 Locating the Assembly Hole 65
      4-2-2 Obtaining Three-Dimensional Coordinates by Overlaying Virtual and Real Images 68
    4-3 Chapter Summary 70
  Chapter 5  Implementation of the Micro-Assembly System 71
    5-1 System Overview 71
      5-1-1 The Real Micro-Assembly System 72
      5-1-2 The Virtual Micro-Assembly System 76
      5-1-3 User Interface 80
    5-2 Experimental Verification of Micro-Assembly 82
      5-2-1 Estimation of Camera Intrinsic and Distortion Parameters 83
      5-2-2 Chessboard Corner Extraction 88
      5-2-3 Estimation of Camera Extrinsic Parameters 90
      5-2-4 Three-Dimensional Pose Estimation 92
      5-2-5 TCP/IP Network Transmission 101
      5-2-6 Augmented-Virtuality Remote Control of the Micro-Assembly System 106
    5-3 Discussion of Experimental Results 111
    5-4 Chapter Summary 112
  Chapter 6  Conclusions and Future Work 113
    6-1 Conclusions 113
    6-2 Future Work 115
  References 116
  Appendix A  Implementation of the SMA Actuator 120


Availability: On campus: open access from 2026-07-22. Off campus: open access from 2026-07-22. The electronic thesis has not yet been authorized for public release; for the printed copy, please consult the library catalog.