
Graduate Student: Chen, Guan-Ru (陳冠如)
Thesis Title: Linear Quadratic Optimal Learning Control for Nonlinear System (非線性系統的最佳學習控制)
Advisor: Tsai, Sheng-Hong Jason (蔡聖鴻)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Electrical Engineering
Year of Publication: 2005
Academic Year of Graduation: 93 (ROC calendar, 2004-2005)
Language: English
Number of Pages: 78
Chinese Keywords: 學習控制、非線性系統
Foreign Keywords: learning control, nonlinear system
Views: 58; Downloads: 1
  •   A linear quadratic optimal learning control method is proposed for a known nonlinear system to achieve optimal control in finite time. Although the system is affected by unknown but repetitive disturbances, the learning still attains the optimum. For linear systems, the method is restricted to eigenvalues lying within a certain range, which can be remedied by pole placement. In addition, optimal linearization is applied to the nonlinear system: before each iteration, the optimal linear model corresponding to the current operating point is generated, pole placement is used to relocate the eigenvalues, and this is combined with linear quadratic optimal learning control to design a controller that tracks the prescribed trajectory for the nonlinear system. Examples and simulation results in this thesis illustrate the mechanism and demonstrate the improvement.

      A linear quadratic optimal learning control (LQL) solution to the problem of finding a finite-time optimal control history for a nonlinear system is proposed in this thesis. Without detailed knowledge of the system, which is influenced by unknown but repetitive disturbances, the learning mechanism still achieves the optimum. For linear systems, however, the method is restricted by the eigenspectrum, that is, the absolute values of the eigenvalues of the system; to extend the mechanism to a wider class of linear systems, the technique of pole placement is applied. For nonlinear systems, an optimal linearization technique is used instead: before each iteration, the optimal linear model at the current operating point is produced, and a controller for the nonlinear system is then designed, via linear quadratic learning control modified by pole placement, to track the desired trajectories. Illustrative examples and simulation results are presented to explain the mechanism and to show the resulting improvement.
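The repetition principle described in the abstract can be illustrated with a minimal sketch: a trial is run against an unknown but identically repeating disturbance, and the input history is updated between trials so that a quadratic tracking cost decreases. This is not the thesis's LQL algorithm (which uses conjugate basis vectors and pole placement); the plant, learning gain, and disturbance below are assumptions chosen only for illustration, and the between-trial correction is a simple gradient (adjoint) update on the squared error.

```python
import numpy as np

# Illustrative scalar plant x[t+1] = a*x[t] + u[t] + d[t], y[t] = x[t].
# All numbers here are assumptions for the sketch, not taken from the thesis.
a = 0.8
T = 50                                    # samples per trial
t = np.arange(T)
y_des = np.sin(2 * np.pi * t / T)         # desired trajectory
d = 0.2 * np.cos(2 * np.pi * t / T)       # unknown disturbance, same every trial

def run_trial(u):
    """Simulate one repetition; the disturbance d repeats identically."""
    x, y = 0.0, np.zeros(T)
    for k in range(T):
        y[k] = x
        x = a * x + u[k] + d[k]
    return y

u = np.zeros(T)
L = 0.05                                  # learning gain, small enough for monotone decay
errors = []
for trial in range(30):
    e = y_des - run_trial(u)
    errors.append(np.linalg.norm(e))
    # Gradient (adjoint) update on the quadratic cost ||e||^2:
    # v[k] accumulates future errors filtered backward through the model,
    # so each u[k] is corrected by the error it can still influence.
    v = np.zeros(T)
    for k in range(T - 2, -1, -1):
        v[k] = e[k + 1] + a * v[k + 1]
    u += L * v

print(f"tracking error norm: trial 1 = {errors[0]:.3f}, trial 30 = {errors[-1]:.3f}")
```

Because the disturbance repeats exactly, the error it causes is learned away along with the tracking error; the error norm decreases monotonically from trial to trial for a sufficiently small learning gain.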

    List of Contents

    Acknowledgments
    Chinese Abstract ............................................................ I
    Abstract ................................................................... II
    List of Contents .......................................................... III
    List of Figures ............................................................. V
    List of Tables ........................................................... VIII
    Chapter 1  Introduction ................................................... 1-1
        1.1  Introduction ..................................................... 1-1
        1.2  Organization and contributions of the thesis ..................... 1-3
    Chapter 2  Linear quadratic optimal learning control for multivariable
               systems ........................................................ 2-1
        2.1  The problem that we solve ........................................ 2-2
        2.2  Theoretical derivation ........................................... 2-3
        2.3  Conjugate basis vectors .......................................... 2-5
        2.4  Iterative update of the optimal coefficients ..................... 2-8
        2.5  Implementation ................................................... 2-9
        2.6  Illustrative examples ........................................... 2-10
    Chapter 3  Remarkable observations of the LQL mechanism ................... 3-1
        3.1  Some observations on the proposed approach for various system
             matrices and outputs ............................................. 3-1
        3.2  Illustrative examples ............................................ 3-3
        3.3  Pole placement .................................................. 3-17
        3.4  Illustrative examples after pole placement ...................... 3-18
    Chapter 4  Linear quadratic optimal learning control for a nonlinear
               system ......................................................... 4-1
        4.1  Optimal linearization of the nonlinear system .................... 4-1
        4.2  Implementation of the LQL mechanism for the nonlinear system ..... 4-6
        4.3  Illustrative example for the given nonlinear system .............. 4-6
    Chapter 5  Conclusions .................................................... 5-1
    References
    Appendix A
    Acknowledgments (in Chinese)


    Available to the public on campus: 2007-07-25
    Available to the public off campus: 2008-07-25