| Field | Value |
|---|---|
| Graduate student: | 簡威任 Chien, Wei-Jen |
| Thesis title: | 擴增實境於具手眼配置微組裝系統之實現 (Implementation of Augmented Reality in Eye-In-Hand Micro-Assembly Systems) |
| Advisor: | 張仁宗 Chang, Ren-Jung |
| Degree: | Master |
| Department: | College of Engineering, Department of Mechanical Engineering |
| Year of publication: | 2018 |
| Academic year of graduation: | 106 |
| Language: | Chinese |
| Number of pages: | 110 |
| Chinese keywords (translated): | eye-to-hand configuration, eye-in-hand configuration, camera calibration, augmented reality, visual servo, micro-assembly system |
| English keywords: | eye-in-hand & eye-to-hand, virtual camera calibration, augmented reality, visual servo, micro-assembly system |
This thesis mounts cameras in both an eye-to-hand and an eye-in-hand configuration to assist, at different stages, the task of installing the target part with a micro-gripper. For the augmented reality and visual servoing components, calibration and tracking are carried out through a virtual camera viewing the 3D model. First, the eye-to-hand camera roughly locates the object and the assembly part and brings them close together. During assembly, the eye-in-hand camera arrangement makes the hole of the assembly part easy to observe, so virtual camera calibration with a nonlinear model is performed first to overlay the virtual image onto the real image. Visual servoing then uses the relationship between the velocity of the object and camera and the velocity of their projections in image coordinates, allowing the constructed virtual model to track the motion of the real object in the camera view. With the aid of augmented reality, the world coordinates of the assembly hole can be known accurately at each moment, increasing the chance of a successful micro-assembly.
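As a worked illustration of the velocity relationship mentioned above, the classical image-based visual servoing formulation from the literature (shown here for context, not reproduced from the thesis) relates the image-plane velocity of a normalized point feature s = (x, y) at depth Z to the camera velocity v_c through an interaction matrix L_s, and a proportional control law drives the feature error e = s − s* to zero:

```latex
\dot{\mathbf{s}} = \mathbf{L}_s \mathbf{v}_c,
\qquad
\mathbf{L}_s =
\begin{bmatrix}
-\frac{1}{Z} & 0 & \frac{x}{Z} & xy & -(1+x^{2}) & y \\
0 & -\frac{1}{Z} & \frac{y}{Z} & 1+y^{2} & -xy & -x
\end{bmatrix},
\qquad
\mathbf{v}_c = -\lambda\,\widehat{\mathbf{L}_s^{+}}\,\mathbf{e},
\quad \mathbf{e} = \mathbf{s} - \mathbf{s}^{*}
```

Here \(\widehat{\mathbf{L}_s^{+}}\) is an estimate of the pseudo-inverse of the interaction matrix and \(\lambda > 0\) is a gain; in this AR setting an analogous relation can be used to update the virtual object or camera pose so that the rendered model keeps tracking the real object.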
We operate the micro-assembly system in different states with either an eye-to-hand or an eye-in-hand camera. In the assembly stage, the task is completed with augmented reality (AR): a 3D virtual model is overlaid on the object in the image through virtual camera calibration and a visual servo method. First, the pin is gripped under the eye-to-hand camera and the distance from the pin to the object is determined. Once this distance is small, the system switches from the eye-to-hand camera to the eye-in-hand camera. Next, with the eye-in-hand camera, virtual camera calibration is applied by estimating the intrinsic parameters with a linear model and a nonlinear model. When the real camera parameters equal the virtual camera parameters, the images of the object captured by the real camera and by the virtual camera coincide. Finally, in the dynamic assembly stage, the 3D model tracks the real object in the image by the visual servo method: the relationship between the object velocity and the velocity of its reprojected points on the image plane is used to build the servo model, and the resulting control law adjusts the virtual object or camera pose to follow the real motion. When the pin is assembled with the object, the object hole can be observed easily with the eye-in-hand camera, and a clear edge of the hole is always available by overlapping the virtual object on the real image. This series of methods assists the operator and increases the success rate of the assembly task.
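To make the calibration check concrete, the following is a minimal sketch (not the thesis code; the function names, the two-coefficient radial distortion model, and the NumPy-based interface are assumptions) of how a virtual camera with estimated intrinsics can reproject the 3D model and be compared against features detected in the real eye-in-hand image:

```python
import numpy as np

def project_points(K, dist, R, t, X_world):
    """Project 3D model points into the image with a pinhole model plus
    two radial distortion coefficients (k1, k2), mirroring the nonlinear
    virtual-camera model described in the abstract."""
    X_cam = (R @ X_world.T + t.reshape(3, 1)).T            # world -> camera frame
    x = X_cam[:, 0] / X_cam[:, 2]                          # normalized image coords
    y = X_cam[:, 1] / X_cam[:, 2]
    r2 = x**2 + y**2
    k1, k2 = dist
    radial = 1.0 + k1 * r2 + k2 * r2**2                    # nonlinear distortion factor
    u = K[0, 0] * x * radial + K[0, 2]                     # pixel coordinates
    v = K[1, 1] * y * radial + K[1, 2]
    return np.stack([u, v], axis=1)

def reprojection_error(K, dist, R, t, X_world, uv_observed):
    """RMS distance between the virtual camera's projection of the model and
    the features detected in the real eye-in-hand image."""
    uv_virtual = project_points(K, dist, R, t, X_world)
    return np.sqrt(np.mean(np.sum((uv_virtual - uv_observed) ** 2, axis=1)))
```

Under these assumptions, a calibration routine would adjust `K`, `dist`, `R`, and `t` until this error approaches zero, at which point the rendered edge of the hole overlaps its real image and the hole's world position can be read from the virtual model.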