| Author: | 劉俊甫 Liu, Jun-Fu |
|---|---|
| Thesis title: | 三維模型基礎之擴增實境與微組裝之應用 3D-Model-Based Augmented Reality and Micro-Assembly Applications |
| Advisor: | 張仁宗 Chang, Ren-Jung |
| Degree: | Master |
| Department: | College of Engineering - Department of Mechanical Engineering |
| Year of publication: | 2017 |
| Graduation academic year: | 105 |
| Language: | Chinese |
| Pages: | 111 |
| Keywords (Chinese): | 虛擬攝影機校正、視覺伺服、擴增實境、微組裝系統 |
| Keywords (English): | virtual camera calibration, visual servo, augmented reality, micro-assembly system |
This thesis performs calibration and virtual visual servoing through a 3D-model-based virtual camera model, and applies them to augmented reality to assist manual micro-assembly operations. First, virtual camera calibration is carried out with linear and nonlinear models, adjusting the camera pose so that the virtual image projected from the CAD model matches the real image. Next, a virtual visual-servo model is built from the relationship between an object's velocity and the velocity of its projected image points, together with the camera parameters estimated by calibration, so that the virtual object tracks the object in the real image; if the real object rotates while moving, the proposed algorithm adjusts the pose of the virtual object accordingly. The image feature points required for camera calibration and visual servoing are obtained through image processing and curve fitting. During assembly, the hole of the real assembled part is hidden inside the part and invisible; by face culling, the virtual hole is revealed and overlaid on the real image, so the operator can clearly observe the position of the hole, increasing the success rate of assembly.
The objective of this thesis is to implement augmented reality in manual micro-assembly operations through 3D-model-based virtual camera calibration and visual servoing, where "3D" means the proposed algorithms take object 3D coordinates in the virtual environment as control points. First, virtual camera calibration is applied with the linear model, the nonlinear model, and a stepwise method to adjust the camera pose, so that the appearance of virtual objects on the screen matches the scene captured by the real camera. Next, the relationship between an object's velocity and the velocity of its reprojected point on the image plane is used to build the virtual visual-servo model, from which a control law is derived that lets the virtual object track an assembled part dynamically. If the assembled part carried by the linear stage rotates during transportation, the pose of the virtual object can be adjusted by the proposed control law. Moreover, the feature points used in calibration and visual servoing are obtained by image processing and curve fitting. During assembly operations, the invisible hole inside the assembled part can be revealed by overlaying a virtual object on the real image and applying face culling, which assists the operator in locating the hole and increases the success rate of the assembly task.
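The linear calibration step can be illustrated with the classic Direct Linear Transformation, which recovers a 3x4 projection matrix from 3D-2D point correspondences. This is a minimal numpy sketch under generic assumptions (no lens distortion, at least six non-coplanar control points); the function names are illustrative, and the thesis's own formulation may differ in normalization and parameterization:

```python
import numpy as np

def dlt_projection_matrix(points_3d, points_2d):
    """Estimate the 3x4 projection matrix by Direct Linear Transformation:
    each 3D-2D correspondence contributes two rows to a homogeneous system
    A p = 0, solved via SVD (needs at least 6 non-coplanar points)."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)  # null-space vector, defined up to scale

def project(P, point_3d):
    """Project a 3D point with P and dehomogenize to image coordinates."""
    u, v, w = P @ np.append(point_3d, 1.0)
    return u / w, v / w
```

In a workflow like the thesis's, the recovered matrix would then seed a nonlinear refinement that minimizes the reprojection error between virtual and real image points.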
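The virtual visual-servo control law can be sketched with the standard point-feature interaction matrix, computing a velocity screw v = -λ L⁺ (s - s*) that drives the reprojected features toward their targets. This is a generic sketch assuming normalized image coordinates and known feature depths, not the thesis's exact implementation; the names and the gain value are illustrative:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of one normalized image point (x, y) at depth Z,
    relating the camera velocity screw to the point's image velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def vvs_step(s, s_star, depths, lam=0.5):
    """One virtual visual-servo iteration: return the velocity screw
    (vx, vy, vz, wx, wy, wz) that moves the reprojected features s
    toward the real-image features s_star."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(s, depths)])
    error = (np.asarray(s, float) - np.asarray(s_star, float)).ravel()
    # Classic proportional control law: v = -lam * pinv(L) @ (s - s*)
    return -lam * np.linalg.pinv(L) @ error
```

In the thesis's setting, s would be the feature points reprojected from the CAD model and s_star the points extracted from the real image; applying the resulting screw each frame updates the virtual camera (or object) pose so the virtual overlay tracks the assembled part.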