
Author: 劉俊甫 (Liu, Jun-Fu)
Title: 三維模型基礎之擴增實境與微組裝之應用 (3D-Model Based Augmented Reality and Micro-Assembly Applications)
Advisor: 張仁宗 (Chang, Ren-Jung)
Degree: Master
Department: College of Engineering - Department of Mechanical Engineering
Year of Publication: 2017
Graduation Academic Year: 105
Language: Chinese
Number of Pages: 111
Chinese Keywords: 虛擬攝影機校正、視覺伺服、擴增實境、微組裝系統 (virtual camera calibration, visual servoing, augmented reality, micro-assembly system)
English Keywords: virtual camera calibration, visual servo, augmented reality, micro-assembly system
Usage: views: 99, downloads: 5
  • This thesis performs calibration and virtual visual servoing through a 3D-model-based virtual camera model, and applies them to augmented reality to assist manual micro-assembly operations. First, virtual camera calibration is carried out with linear and nonlinear models, adjusting the camera pose so that the virtual image projected from the CAD model coincides with the real image. Next, a virtual visual servo model is built from the relationship between an object's velocity and the velocity of its projected image points, together with the camera parameters estimated in calibration, so that the virtual object tracks the object in the real image; if the real object rotates while moving, the proposed algorithm adjusts the pose of the virtual object accordingly. The image feature points required for camera calibration and visual servoing are obtained by image processing and curve fitting. During assembly, the hole of the real assembled part is hidden inside the part and invisible; by face culling, the virtual hole is revealed and overlaid on the real image, so the operator can clearly observe the position of the hole, increasing the success rate of assembly.
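The linear stage of the camera calibration described in the abstract can be illustrated with a direct linear transformation (DLT)-style estimate of the projection matrix from 3D-2D correspondences. This is a minimal sketch, not the thesis's actual implementation; the function names and the synthetic camera used below are assumptions.

```python
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """Estimate a 3x4 projection matrix P from >= 6 non-degenerate
    3D-2D correspondences: stack two linear equations per point and
    take the null-space vector of the system via SVD."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    P = Vt[-1].reshape(3, 4)          # right singular vector of smallest singular value
    return P / P[2, 3]                # fix the arbitrary scale (valid when P[2,3] != 0)

def project(P, X):
    """Project a 3D point with P and dehomogenize to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]
```

In a full calibration the linear estimate would then seed a nonlinear refinement of the camera pose, as the abstract's linear/nonlinear split suggests.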

    The objective of this thesis is to implement augmented reality in manual micro-assembly operations through 3D model-based virtual camera calibration and visual servoing, where "3D" means that the proposed algorithms take the 3D coordinates of objects in the virtual environment as control points. First, virtual camera calibration is applied with a linear model, a non-linear model, and a stepwise method to adjust the camera pose, so that the appearance of the virtual objects on screen matches the scene captured by the real camera. Next, the relationship between the object velocity and the velocity of its reprojected point on the image plane is used to build the virtual visual servo model, from which a control law is derived that lets the virtual object dynamically track an assembled part. If the assembled part carried by the linear stage rotates during transport, the pose of the virtual object can be adjusted by the proposed control law. Moreover, the feature points used in calibration and visual servoing are obtained by image processing and curve fitting. During the assembly operation, the invisible hole inside the assembled part can be revealed by overlaying the virtual object on the real image and applying face culling, which helps the operator locate the hole and increases the success rate of the assembly task.
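The velocity relationship mentioned above is classically captured by an interaction matrix and the control law v = -λ L⁺ e (the image-based visual servoing form surveyed by Chaumette and Hutchinson [7]). The sketch below shows the standard point-feature version only; it is a generic illustration, not the thesis's eye-to-hand model, and all function and parameter names are assumptions.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix L of a normalized image point (x, y) at depth Z.
    L maps the camera twist (vx, vy, vz, wx, wy, wz) to the point's
    image-plane velocity (xdot, ydot)."""
    return np.array([
        [-1.0/Z,    0.0, x/Z,        x*y, -(1.0 + x*x),  y],
        [   0.0, -1.0/Z, y/Z, 1.0 + y*y,         -x*y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Classical IBVS control law v = -lam * L^+ (s - s*): stack the
    per-point interaction matrices and solve with the pseudo-inverse."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ e
```

With the gain λ > 0 the commanded twist drives the feature error toward zero; four points give a well-conditioned 8x6 stacked matrix away from degenerate configurations.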

    Table of Contents

    Chinese Abstract I
    Extended Abstract II
    Table of Contents VI
    List of Tables X
    List of Figures XI
    Nomenclature XXI
    Chapter 1  Introduction 1
      1-1 Preface 1
      1-2 Motivation 1
      1-3 Literature Review 2
        1-3-1 Real Camera Calibration 2
        1-3-2 Virtual Reality Calibration 2
        1-3-3 Visual Servoing 5
      1-4 Research Objectives and Methods 7
      1-5 Thesis Organization 8
    Chapter 2  Construction of the Virtual Environment 9
      2-1 Introduction to Virtual Models 9
        2-1-1 Construction of 3D Models 9
        2-1-2 Types of Surface Rendering 10
      2-2 Coordinate Transformations for Virtual Model Imaging 11
        2-2-1 Coordinate Transformations in the Virtual Environment 11
        2-2-2 Model Matrix Transformation 12
        2-2-3 View Matrix Transformation 15
        2-2-4 Projection Matrix Transformation 17
        2-2-5 Viewport Matrix Transformation 21
      2-3 Comparison of Virtual and Real Camera Models 22
      2-4 Virtual Micro-Assembly System 25
      2-5 Chapter Summary 27
    Chapter 3  Camera Calibration for Virtual Reality 28
      3-1 Feature Point Extraction 28
        3-1-1 Image Feature Recognition 28
        3-1-2 Extraction of 3D Coordinate Points 30
      3-2 Virtual Camera Calibration 31
        3-2-1 Linear Estimation 32
        3-2-2 Non-linear Estimation 41
        3-2-3 Stepwise Feature-Point Estimation 46
      3-3 Calibration Procedure 48
      3-4 Chapter Summary 51
    Chapter 4  Real and Virtual Visual Servoing 52
      4-1 Dynamic Image Feature Extraction 52
        4-1-1 Template Matching 52
        4-1-2 TM-RSEF Algorithm 55
      4-2 Visual Servoing 57
        4-2-1 Eye-in-Hand Mathematical Model and Derivation of the Control Law 58
        4-2-2 Forms of the Interaction Matrix 61
        4-2-3 Model for the Eye-to-Hand Micro-Assembly System 63
      4-3 Simulation Verification 68
      4-4 Chapter Summary 80
    Chapter 5  Application of Augmented Reality to the Micro-Assembly System 81
      5-1 System Description 81
        5-1-1 Hardware Configuration 81
        5-1-2 Software Integration 84
      5-2 Applications of Augmented Reality 87
        5-2-1 Face Culling of Virtual Images 88
        5-2-2 Augmented Reality Images 89
      5-3 Micro-Assembly Experimental Verification 90
        5-3-1 Virtual Camera Calibration 93
        5-3-2 Virtual Visual Servoing 101
        5-3-3 Augmented-Reality-Assisted Assembly 104
        5-3-4 Discussion of Experimental Results 106
      5-4 Chapter Summary 106
    Chapter 6  Conclusions and Future Work 107
      6-1 Conclusions 107
      6-2 Future Work 108
    References 109
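Sections 2-2-2 through 2-2-5 of the table of contents cover the model, view, projection, and viewport matrices. The chain they describe is the standard OpenGL transformation pipeline; the NumPy sketch below illustrates it under assumed parameter names, and is not taken from the thesis itself.

```python
import numpy as np

def translate(tx, ty, tz):
    """4x4 translation matrix (a simple model/view matrix)."""
    M = np.eye(4)
    M[:3, 3] = (tx, ty, tz)
    return M

def perspective(fovy_deg, aspect, near, far):
    """OpenGL-style perspective projection matrix (as gluPerspective builds)."""
    f = 1.0 / np.tan(np.radians(fovy_deg) / 2.0)
    M = np.zeros((4, 4))
    M[0, 0] = f / aspect
    M[1, 1] = f
    M[2, 2] = (far + near) / (near - far)
    M[2, 3] = 2.0 * far * near / (near - far)
    M[3, 2] = -1.0
    return M

def to_window(p_model, model, view, proj, width, height):
    """Model -> world -> eye -> clip -> NDC -> window coordinates."""
    clip = proj @ view @ model @ np.append(p_model, 1.0)
    ndc = clip[:3] / clip[3]                       # perspective divide
    # viewport transform: NDC [-1, 1]^2 to pixels (OpenGL origin: bottom-left)
    return np.array([(ndc[0] + 1.0) * width / 2.0,
                     (ndc[1] + 1.0) * height / 2.0])
```

Matching this virtual pipeline against the real camera model is what Section 2-3's comparison, and ultimately the virtual camera calibration of Chapter 3, rest on.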

    [1] Z.Y. Zhang, “Flexible Camera Calibration by Viewing a Plane from Unknown Orientations,” IEEE Int. Conf. on Computer Vision, 1999, pp.666-673.
    [2] R. J. Chang, J. C. Jau, “Error Measurement and Compensation in Developing Virtual-Reality-Assisted Vision-Based Microassembly Machine,” The 18th International Conference On Mechatronics Technology, 2014.
    [3] W. S. Kim, “Computer Vision Assisted Virtual Reality Calibration,” IEEE Transactions on Robotics and Automation, 1999, pp. 450-464.
    [4] L. Wang, S. You and U. Neumann, “Single View Camera Calibration for Augmented Virtual Environments,” 2007 IEEE Virtual Reality Conference, Charlotte, NC, 2007, pp. 255-258.
    [5] K. B. Lim, W. L. Kee and D. Wang, “Virtual camera calibration and stereo correspondence of single-lens bi-prism stereovision system using geometrical approach,” Signal Processing: Image Communication, vol. 28, Issue 9, October 2013, pp. 1059-1071.
    [6] A. Krupa, “Autonomous 3-D positioning of surgical instruments in robotized laparoscopic surgery using visual servoing,” IEEE Transactions on Robotics and Automation, vol. 19, no. 5, Oct. 2003, pp. 842-853.
    [7] F. Chaumette and S. Hutchinson, “Visual servo control. I. Basic approaches,” IEEE Robotics & Automation Magazine, vol. 13, no. 4, Dec. 2006, pp. 82-90.
    [8] A. I. Comport, E. Marchand, M. Pressigout and F. Chaumette, "Real-time markerless tracking for augmented reality: the virtual visual servoing framework," IEEE Transactions on Visualization and Computer Graphics, July-Aug. 2006, vol. 12, no. 4, pp. 615-628.
    [9] J. Y. Hervé, C. Duchesne and V. Pradines, “Dynamic Registration for Augmented Reality in Telerobotics Applications,” IEEE International Conference on Systems, Man, and Cybernetics, 2000, vol. 2, pp. 1348-1353.
    [10] “OpenGL Transformation”, http://www.songho.ca/opengl/gl_transform.html, 2017/05/06
    [11] “OpenGL viewport transformation matrix”, http://www.thecodecrate.com/opengl-es/opengl-viewport-matrix/, 2017/05/06
    [12] R. J. Chang, C. Y. Lin, and P. S. Lin, “Visual-Based Automation of Peg-in-Hole Microassembly Process,” ASME Journal of Manufacturing Science and Engineering, vol. 133, no. 4, 2011.
    [13] Abdel-Aziz, Y. I. & Karara, H. M. “Direct linear transformation into object space coordinates in close-range photogrammetry,” Proc. Symposium on Close-Range Photogrammetry, Urbana, Illinois, 1971, pp. 1-18.
    [14] J. Heikkila and O. Silven, "A four-step camera calibration procedure with implicit image correction," Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, 1997, pp. 1106-1112.
    [15] H. Zhuang and W. C. Wu, “Camera Calibration with a Near-Parallel (Ill-Conditioned) Calibration Board Configuration,” IEEE Transactions on Robotics and Automation, vol. 12, no. 6, Dec. 1996.
    [16] R. J. Chang, J. C. Jau, “Error Measurement and Compensation in Developing Virtual-Reality-Assisted Microassembly System,” Int. J. Auto. Tech., Vol.9, No.6, 2015, pp. 619-628.
    [17] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed., Prentice Hall, New York, U.S.A., 2002, pp. 693-704.
    [18] Pattern Matching Techniques, http://zone.ni.com/reference/en-XX/help/372916P-01/nivisionconcepts/pattern_matching_techniques/, 2017/06/02.
    [19] A. I. Comport, E. Marchand, M. Pressigout and F. Chaumette, “Real-time markerless tracking for augmented reality: the virtual visual servoing framework,” IEEE Transactions on Visualization and Computer Graphics, July-Aug. 2006, vol. 12, no. 4, pp. 615-628.
    [20] F. Chaumette, “Potential problems of stability and convergence in image-based and position-based visual servoing,” The Confluence of Vision and Control, LNCIS Series, No.237, Springer-Verlag, 1998, pp. 66-78.
    [21] R. Azuma, “A Survey of Augmented Reality,” Presence: Teleoperators and Virtual Environments, Aug. 1997, pp. 355-385.
    [22] R. J. Chang, J. C. Jau, “Augmented Reality in Peg-in-Hole Microassembly Operations,” Int. J. Auto. Tech., Vol.10, No.3, 2016, pp. 438-446.
    [23] 趙家成, “Implementation of Augmented Reality in Micro-Object Assembly,” Master's thesis, Department of Mechanical Engineering, National Cheng Kung University, 2014.

    Full-text availability: on campus: immediately public; off campus: public from 2020-08-11.