
Author: Tai, Chia-Ying (戴嘉潁)
Thesis Title: Study on Trajectory Planning for Visual Servoing Structures Based on Image Feature Point Selection (基於不同特徵點選取方式之視覺伺服架構軌跡規劃研究)
Advisor: Cheng, Ming-Yang (鄭銘揚)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2021
Academic Year of Graduation: 109 (2020-2021)
Language: Chinese
Number of Pages: 69
Chinese Keywords: visual servoing, trajectory planning, feature point selection, optimization algorithm
English Keywords: Visual Servoing, Trajectory planning, Feature Point Selection, Optimization function
Hits: 122 / Downloads: 4
In recent years, the automation industry has matured and expectations for automation have kept rising. The sudden outbreak of the COVID-19 pandemic has further strained the labor supply, increasing the demand for full automation. In industry, cameras have therefore gradually replaced the human eye as the visual aid in vision-based control applications. In visual servoing applications that combine computer vision with robot manipulators, how to plan an appropriate command trajectory in the image feature space, so that the robot manipulator can perform accurate tracking or positioning tasks in the image space, is an important issue that merits further in-depth study. This thesis therefore implements a feature point selection method proposed in recent years, different from the conventional one, to accomplish trajectory planning. The aim is to overcome drawbacks of the traditional image-based visual servoing structure, such as: its stability can be guaranteed only in a region near the desired pose, and in an eye-in-hand camera configuration the selected feature points may leave the camera's field of view. In addition, this thesis uses an optimization algorithm to obtain optimized velocity commands. By decoupling the rotational and translational velocities in the velocity screw of the manipulator's end-effector, the problem of excessive computation time is alleviated, and the image-based visual servoing trajectory planning task is finally accomplished.

In recent years, as the automation industry has matured, expectations for automation have risen accordingly. Moreover, the COVID-19 pandemic has reduced the available manpower, further increasing the demand for full automation. Cameras have therefore gradually replaced human vision as the visual aid in vision-based control applications in industry. In visual servoing applications that combine computer vision and robot manipulators, a worthwhile avenue of research is how to plan an appropriate trajectory of image feature points in the image space so that the robot manipulator can track or position accurately. This thesis therefore implements an alternative feature point selection method, developed in recent years, to overcome drawbacks of traditional Image-Based Visual Servoing (IBVS), such as the fact that the stability of IBVS can be ensured only in a region near the desired pose and that, in an eye-in-hand camera configuration, the selected feature points may leave the field of view (FOV). In addition, an optimization algorithm is used to compute an optimized velocity profile. By decoupling the translational velocity and the rotational velocity in the velocity screw of the robot manipulator's end-effector, the computation time can be considerably decreased. As a result, trajectory planning for the IBVS structure is achieved.
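For context on the velocity commands and the velocity-screw decoupling mentioned in the abstract, the classical IBVS formulation drives the image feature error to zero through the interaction matrix; the equations below give this standard form and a generic decoupled variant in which the rotational motion is handled separately. This is background material only, not necessarily the exact formulation adopted in the thesis.

\[
  \mathbf{e} = \mathbf{s} - \mathbf{s}^{*}, \qquad
  \dot{\mathbf{s}} = \mathbf{L}_{s}\,\mathbf{v}_{c}, \qquad
  \mathbf{v}_{c} = -\lambda\,\widehat{\mathbf{L}}_{s}^{+}\,\mathbf{e}
\]

With the velocity screw partitioned into translational and rotational parts, \(\mathbf{v}_{c} = [\boldsymbol{\upsilon}_{c}^{\top}\;\boldsymbol{\omega}_{c}^{\top}]^{\top}\) and \(\mathbf{L}_{s} = [\mathbf{L}_{\upsilon}\;\mathbf{L}_{\omega}]\), the translational command can be computed with the rotational command planned separately:

\[
  \boldsymbol{\upsilon}_{c} = -\mathbf{L}_{\upsilon}^{+}\left(\lambda\,\mathbf{e} + \mathbf{L}_{\omega}\,\boldsymbol{\omega}_{c}\right)
\]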

Table of Contents:
Chinese Abstract
Extended Abstract
Acknowledgments
Table of Contents
List of Tables
List of Figures
Chapter 1: Introduction
  1.1 Research Motivation and Objectives
  1.2 Literature Review
  1.3 Thesis Organization
Chapter 2: Camera Model and Camera Calibration
  2.1 Introduction to the Intel RealSense Camera
  2.2 Camera Calibration
    2.2.1 Camera Intrinsic Parameters
    2.2.2 Camera Extrinsic Parameters
  2.3 Hand-Eye Calibration
Chapter 3: Kinematic Model of the Robot Manipulator
  3.1 Forward Kinematics
  3.2 Jacobian Matrix
Chapter 4: Visual Servoing Structure and the Trajectory Planning Task
  4.1 Visual Servoing Structures
    4.1.1 Image-Based Visual Servoing
    4.1.2 Interaction Matrix
  4.2 Trajectory Planning
    4.2.1 Image Features
    4.2.2 Translation and Rotation Planning
    4.2.3 Velocity Command Planning
  4.3 Robot Manipulator Constraints
    4.3.1 Workspace Constraints
    4.3.2 Joint-Space Constraints
  4.4 Finding the Optimal Solution
    4.4.1 Particle Swarm Optimization
Chapter 5: Computer Simulations and Result Analysis
  5.1 Simulation Environment Setup and Scenarios
    5.1.1 Simulation Environment Setup
    5.1.2 Experimental Scenarios
  5.2 Simulation Methods and Results
    5.2.1 Simulation Methods
    5.2.2 Simulation Results
    5.2.3 Result Analysis
Chapter 6: Conclusions and Suggestions
  6.1 Conclusions
  6.2 Future Work and Suggestions
References
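The outline above indicates that the search for optimized velocity commands (Section 4.4) relies on particle swarm optimization (Section 4.4.1). As a rough illustration of that kind of search, the following is a minimal, generic PSO sketch in Python; the cost function, bounds, and parameter values are hypothetical placeholders for exposition and are not taken from the thesis.

import numpy as np

def pso_minimize(cost, lower, upper, n_particles=30, n_iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization over a box-constrained search space.

    cost  : callable mapping a candidate vector to a scalar cost
    lower : array of lower bounds (e.g. minimum velocity command per axis)
    upper : array of upper bounds
    """
    rng = np.random.default_rng(seed)
    dim = len(lower)
    # Initialize particle positions uniformly inside the bounds, with zero velocity.
    x = rng.uniform(lower, upper, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_cost = np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    gbest_cost = pbest_cost.min()

    for _ in range(n_iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Standard velocity update: inertia + cognitive + social terms.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lower, upper)
        costs = np.array([cost(p) for p in x])
        improved = costs < pbest_cost
        pbest[improved] = x[improved]
        pbest_cost[improved] = costs[improved]
        if costs.min() < gbest_cost:
            gbest_cost = costs.min()
            gbest = x[np.argmin(costs)].copy()
    return gbest, gbest_cost

if __name__ == "__main__":
    # Illustrative cost only: penalize deviation of a 6-DOF velocity command
    # from a hypothetical desired value while keeping its magnitude small.
    desired = np.array([0.05, -0.02, 0.10, 0.0, 0.0, 0.1])
    cost = lambda u: np.sum((u - desired) ** 2) + 0.1 * np.sum(u ** 2)
    lb = -0.5 * np.ones(6)
    ub = 0.5 * np.ones(6)
    best_u, best_cost = pso_minimize(cost, lb, ub)
    print("optimized velocity command:", best_u, "cost:", best_cost)

In a trajectory-planning setting, the cost would instead encode the image-space error together with the workspace and joint-space constraints described in Sections 4.2-4.3 of the outline.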

Full-text availability: on campus from 2022-10-19; off campus from 2022-10-19.