| Author: | 高旻琪 Kao, Min-Chi |
|---|---|
| Thesis Title: | 人類搭檔系統之手語辨識與視覺感測功能之設計與實現 Design and Implementation of Human Partner System with Sign Language Recognition and Visual Sensing Functions |
| Advisor: | 李祖聖 Li, Tzuu-Hseng S. |
| Degree: | Doctor (Ph.D.) |
| Department: | College of Electrical Engineering and Computer Science - Department of Electrical Engineering |
| Publication Year: | 2015 |
| Graduation Academic Year: | 103 (ROC calendar, i.e., 2014-2015) |
| Language: | English |
| Pages: | 101 |
| Keywords (Chinese): | 人類搭檔系統、蜂群演算法、基於熵之K-means演算法、隱藏式馬可夫模型、台灣手語、手語辨識功能、視覺感測功能 |
| Keywords (English): | Human Partner System (HPS), Artificial Bee Colony (ABC) Algorithm, Entropy-Based K-means Algorithm, Hidden Markov Models (HMM), Taiwan Sign Language (TSL), Sign Language Recognition Function, Visual Sensing Function |
| Access Count: | Views: 167, Downloads: 9 |
Abstract
This dissertation constructs a Human Partner System (HPS) comprising four subsystems: a hearing system, a speaking system, a sign language recognition system, and a vision system. The hearing system uses the proposed voice detection and segmentation method to extract the meaningful part of a speech signal. The speaking system allows the HPS to read webpages, read new email, and converse with a person. In the visual sensing system, beyond the fundamental face detection and face recognition functions, a partial face detection method is proposed in which the coefficient of variation (CV) is introduced to appraise the importance of each facial feature. Furthermore, to allow the HPS to search a scene for a target object and identify what that object is, the dual-peak CAMSHIFT (DPCAMSHIFT) algorithm and an anthropomorphic learning process are proposed. To serve hearing-impaired people, a recognition system for home-service-related Taiwan Sign Language (TSL) words is also designed and implemented so that the HPS can assist them in their daily lives. The sign language recognition system uses hidden Markov models (HMMs) to classify the sequential gesture data. However, two issues must be solved when applying HMMs: determining the number of states, and determining the model structure. To determine the number of states, this dissertation proposes the Entropy-Based K-means algorithm, which plots an entropy diagram that provides a visual way to judge the number of clusters in real datasets. Finally, the dissertation uses the Artificial Bee Colony (ABC) algorithm to establish the structure of the HMM.
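The coefficient of variation mentioned for partial face detection is the standard ratio CV = σ/μ. The short sketch below shows how it could score facial features measured across a set of training faces; the feature layout and the reading of a low CV (a stable feature) as a more important one are illustrative assumptions, not the dissertation's exact scheme.

```python
import numpy as np

def cv_importance(feature_samples):
    """Rank features by coefficient of variation, CV = std / mean.

    feature_samples: dict mapping a feature name to a 1-D array of
    measurements of that feature across training faces (hypothetical
    layout).  Here a LOW CV (the feature is stable across faces) is
    read as HIGH importance -- an assumption made for illustration.
    """
    cv = {name: np.std(x) / (np.mean(x) + 1e-12)
          for name, x in feature_samples.items()}
    return sorted(cv.items(), key=lambda kv: kv[1])  # most stable first

# Hypothetical usage with made-up geometric features:
# ranking = cv_importance({"eye_distance": d_eyes, "nose_width": w_nose})
```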
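The HMM-based recognizer described in the abstract can be sketched as one generative model per TSL word, with a new gesture sequence labeled by the word whose model gives the highest log-likelihood. The sketch below uses the third-party hmmlearn package as a stand-in for the dissertation's own HMM implementation; the feature dimensionality and state count are assumptions.

```python
import numpy as np
from hmmlearn import hmm  # stand-in library, not the dissertation's code

def train_word_models(train_data, n_states=5):
    """Fit one Gaussian HMM per sign word.

    train_data: dict word -> list of (T_i, D) observation sequences
    (hypothetical layout: T_i frames of D-dimensional hand features).
    """
    models = {}
    for word, seqs in train_data.items():
        X = np.vstack(seqs)               # concatenated frames
        lengths = [len(s) for s in seqs]  # per-sequence lengths
        m = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=100)
        m.fit(X, lengths)
        models[word] = m
    return models

def classify(models, seq):
    """Label a sequence with the word whose HMM scores it highest."""
    return max(models, key=lambda w: models[w].score(seq))
```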
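The Entropy-Based K-means contribution is described as plotting an entropy diagram so that the cluster count can be judged visually. The dissertation's exact entropy definition is not reproduced on this page, so the sketch below substitutes a soft-membership Shannon entropy computed over K-means centers; treat that measure as an assumption, while the overall run-over-k-and-inspect procedure follows the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans

def entropy_diagram(X, k_max=10):
    """Return (k, entropy) pairs for k = 2..k_max.

    For each k, cluster X with K-means, turn each point's distances to
    the k centers into soft memberships with a softmax, and average the
    Shannon entropy of those memberships.  A knee in the resulting
    curve is then judged visually, as with the dissertation's diagram.
    """
    pairs = []
    for k in range(2, k_max + 1):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        d = np.linalg.norm(X[:, None, :] - km.cluster_centers_[None], axis=2)
        p = np.exp(-(d - d.min(axis=1, keepdims=True)))  # stable softmax
        p /= p.sum(axis=1, keepdims=True)
        pairs.append((k, float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())))
    return pairs

# e.g., on a benchmark set from http://cs.joensuu.fi/sipu/datasets/:
# for k, h in entropy_diagram(X): print(k, h)
```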
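Finally, the ABC algorithm used to establish the HMM structure follows Karaboga's employed-bee / onlooker-bee / scout-bee scheme. Below is a minimal, generic sketch of that scheme: the objective f would stand in for, say, the negative validation log-likelihood of an HMM decoded from a candidate vector, but that encoding and all parameter values are assumptions rather than the dissertation's settings.

```python
import numpy as np

def abc_minimize(f, dim, lo, hi, n_food=20, limit=50, n_iter=200, seed=0):
    """Generic Artificial Bee Colony minimizer (Karaboga-style sketch)."""
    rng = np.random.default_rng(seed)
    food = rng.uniform(lo, hi, size=(n_food, dim))  # candidate solutions
    cost = np.array([f(x) for x in food])
    trials = np.zeros(n_food, dtype=int)            # stagnation counters

    def explore(i):
        """Perturb one dimension of source i relative to a random peer."""
        k = rng.choice([j for j in range(n_food) if j != i])
        d = rng.integers(dim)
        cand = food[i].copy()
        cand[d] = np.clip(cand[d] + rng.uniform(-1, 1) *
                          (food[i, d] - food[k, d]), lo, hi)
        c = f(cand)
        if c < cost[i]:                             # greedy selection
            food[i], cost[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(n_iter):
        for i in range(n_food):                     # employed-bee phase
            explore(i)
        fit = np.where(cost >= 0, 1.0 / (1.0 + cost), 1.0 + np.abs(cost))
        for i in rng.choice(n_food, size=n_food, p=fit / fit.sum()):
            explore(i)                              # onlooker-bee phase
        for i in np.flatnonzero(trials > limit):    # scout-bee phase
            food[i] = rng.uniform(lo, hi, size=dim)
            cost[i], trials[i] = f(food[i]), 0
    best = int(np.argmin(cost))
    return food[best], cost[best]
```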