| Graduate Student: | 陳彥冰 Chen, Yen-Ping |
|---|---|
| Thesis Title: | 遞迴類神經網路之最小體現於動態系統鑑別 Minimal Realization of Recurrent Neural Networks for Dynamic System Identification |
| Advisor: | 王振興 Wang, Jeen-Shing |
| Degree: | Master |
| Department: | Department of Electrical Engineering, College of Electrical Engineering and Computer Science |
| Year of Publication: | 2004 |
| Graduating Academic Year: | 92 (ROC calendar) |
| Language: | English |
| Pages: | 74 |
| Keywords: | recurrent neural network, minimal realization, system identification, Markov parameter |
This thesis presents a novel recurrent neural network coupled with a structure learning algorithm for the identification of dynamic systems with minimal state-space representations. The novelty of the proposed recurrent neural network is that its structure is designed to realize dynamic systems as state-space equations; that is, the structure itself can be mapped onto a set of state-space equations representing the dynamic system. A structure learning algorithm, consisting of a minimal realization technique and a recursive parameter learning method, has been developed to identify the minimal representation of the proposed recurrent network from input-output measurements. By minimal representation, we mean that the resultant network structure is parsimonious: it uses the smallest number of neurons that still satisfies the system performance requirements. The main features of the proposed structure learning algorithm are: 1) the minimal order of the state-space representation is identified by a minimal realization technique based on Markov parameter estimation, and 2) the parameters of the minimal state-space representation are optimized by a recursive learning algorithm based on ordered derivatives. Because the proposed network uses minimal resources, its efficiency is maximized. Computer simulations on dynamic system identification have successfully validated the following: 1) the order of the recurrent network representation is minimal, and 2) the proposed network closely captures the dynamic behavior of the unknown system with satisfactory performance.
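To make the first mechanism concrete, the sketch below shows how a minimal state-space order can be identified from Markov parameters in the classical Ho-Kalman/Kung (ERA) style that underlies this kind of minimal realization: the Markov parameters h[k] = C A^k B are stacked into a block Hankel matrix, the numerical rank of that matrix gives the minimal order, and an SVD factorization together with the shifted Hankel matrix recovers (A, B, C). This is a generic Python/numpy illustration, not the thesis's exact algorithm; the rank tolerance `tol` and the Hankel depth are illustrative choices, and the feedthrough term D is omitted.

```python
import numpy as np

def minimal_realization(markov, n_out=1, n_in=1, tol=1e-8):
    """SVD-based minimal realization from Markov parameters
    h[k] = C A^k B (k = 0, 1, ...), in the Ho-Kalman/Kung (ERA) style.

    Returns (A, B, C, n), with n the identified minimal order."""
    L = len(markov) // 2
    # Block Hankel matrix and its one-step-shifted counterpart.
    H  = np.block([[markov[i + j]     for j in range(L)] for i in range(L)])
    Hs = np.block([[markov[i + j + 1] for j in range(L)] for i in range(L)])
    U, s, Vt = np.linalg.svd(H)
    # The numerical rank of H is the minimal state dimension.
    n = int(np.sum(s > tol * s[0]))
    sq  = np.diag(np.sqrt(s[:n]))
    isq = np.diag(1.0 / np.sqrt(s[:n]))
    A = isq @ U[:, :n].T @ Hs @ Vt[:n, :].T @ isq  # shifted-Hankel formula
    B = (sq @ Vt[:n, :])[:, :n_in]                 # first block column
    C = (U[:, :n] @ sq)[:n_out, :]                 # first block row
    return A, B, C, n

# Demo: recover a 2nd-order system from its impulse response.
A0 = np.array([[0.8, 0.3], [0.0, 0.5]])
B0 = np.array([[1.0], [1.0]])
C0 = np.array([[1.0, 0.0]])
h = [C0 @ np.linalg.matrix_power(A0, k) @ B0 for k in range(12)]
A, B, C, n = minimal_realization(h)
print(n)  # -> 2: the minimal order, matching the abstract's first claim
print(np.allclose([C @ np.linalg.matrix_power(A, k) @ B for k in range(12)], h))  # -> True
```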
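The second mechanism, recursive parameter learning via ordered derivatives, can be sketched on a linear state-space model: the sensitivity of the state with respect to each parameter obeys the same forward recursion as the state itself, so the gradient of the instantaneous output error is available online at every time step (the idea behind Werbos's ordered derivatives and Williams-Zipser real-time recurrent learning). Again a hedged illustration: the linear single-input single-output model, the plain gradient step, and the learning rate below are assumptions made for the sketch, not the thesis's update rule.

```python
import numpy as np

def recursive_learning(u, y_ref, n=2, lr=0.01, seed=0):
    """Online gradient learning for x[k+1] = A x[k] + B u[k], y[k] = C x[k],
    using ordered derivatives: the sensitivities dx/dA and dx/dB are
    propagated forward in time alongside the state (RTRL-style sketch)."""
    rng = np.random.default_rng(seed)
    A = 0.1 * rng.standard_normal((n, n))
    B = 0.1 * rng.standard_normal(n)
    C = 0.1 * rng.standard_normal(n)
    x = np.zeros(n)
    SA = np.zeros((n, n, n))  # SA[:, i, j] = dx/dA[i, j]
    SB = np.zeros((n, n))     # SB[:, i]    = dx/dB[i]
    for k in range(len(u)):
        e = C @ x - y_ref[k]                    # instantaneous output error
        gA = e * np.einsum('p,pij->ij', C, SA)  # dE/dA via the chain rule
        gB = e * (C @ SB)                       # dE/dB
        gC = e * x                              # dE/dC
        # Ordered-derivative recursions, one step forward in time:
        # dx[k+1]/dA[i,j] = A dx[k]/dA[i,j] + e_i x[k][j]
        SA = np.einsum('pq,qij->pij', A, SA)
        SA[np.arange(n), np.arange(n), :] += x
        # dx[k+1]/dB[i] = A dx[k]/dB[i] + e_i u[k]
        SB = A @ SB
        SB[np.arange(n), np.arange(n)] += u[k]
        x = A @ x + B * u[k]                    # state update
        A -= lr * gA; B -= lr * gB; C -= lr * gC  # plain gradient step
    return A, B, C

# Demo: learn a first-order lag y[k] = 0.9 y[k-1] + 0.5 u[k-1].
rng = np.random.default_rng(1)
u = rng.standard_normal(5000)
y = np.zeros(5000)
for k in range(1, 5000):
    y[k] = 0.9 * y[k - 1] + 0.5 * u[k - 1]
A, B, C = recursive_learning(u, y)
print(np.sort(np.abs(np.linalg.eigvals(A))))  # dominant mode near 0.9 if training succeeded
```

Note that propagating the full sensitivity tensor SA costs O(n^4) operations per step, which is one reason keeping the order n minimal, as the first mechanism does, also keeps the recursive learning economical.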