Graduate Student: 蘇弘毓 (Su, Hung-Yu)
Thesis Title: 使用統計式方法及小量語料庫於中文轉譯台灣手語之研究
A Study on Chinese to Taiwanese Sign Language Translation Using Statistical Approach with Small Corpus
Advisor: 吳宗憲 (Wu, Chung-Hsien)
Degree: Doctorate
Department: College of Electrical Engineering and Computer Science, Department of Computer Science and Information Engineering
Year of Publication: 2009
Academic Year of Graduation: 97 (2008-2009)
Language: English
Pages: 95
Chinese Keywords: grammar rules, syntactic structure, thematic roles, machine translation, Taiwanese Sign Language
English Keywords: syntactic structure, grammar rules, thematic roles, machine translation, Taiwanese Sign Language
Views: 98; Downloads: 5
  • Sign language is a visual language expressed through hand gestures and body movements, and it is a primary channel of communication for the deaf and hearing-impaired, just as spoken language is for the hearing. Over recent decades, sign language has come to be recognized as a fully structured natural language. Although sign language is the native language of the deaf and their most natural and direct means of communication, they are still frequently forced to use spoken language to communicate with hearing people in daily life. To help them interact smoothly in education, the workplace, and society, sign language machine translation can offer them an intuitive way to access information and can also serve as an assistive tool for sign language learning in special education. Unlike machine translation between spoken languages, the most critical problem in applying machine translation to sign language is the shortage of parallel corpora; how to present the sign language output is a further challenge.
    To address the shortage of parallel data, this study starts from the most representative statistical machine translation models, the IBM models, and introduces well-developed Chinese linguistic knowledge to relieve the data sparseness caused by using only words as translation units. First, building on the IBM models, a statistical translation model based on phrase fragments is proposed. Next, the notion of phrase fragments is systematized using phrase structure trees, yielding a transfer-based machine translation model driven by statistical models. Finally, drawing on these experiences, an innovative architecture that combines a knowledge base with statistical translation is proposed: a translation memory, an example base extracted from a small parallel corpus, is augmented with statistically generated syntactic and semantic transfer rules to achieve more accurate translation. For parallel corpus development, a complete procedure is proposed for collecting a limited amount of text and manually annotating the corresponding sign sequences. For sign language presentation, a concatenative synthesis system based on videos of human signers is proposed, together with a transition-balanced sign video database.
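As an illustration of how a transfer-based step can reorder a Chinese parse into Taiwanese Sign Language gloss order, here is a minimal sketch. The parse tree, the single transfer rule, the lexicon, and the glosses are all invented for illustration and are not the thesis's actual rules or data.

```python
# Toy transfer-based reordering: a Chinese phrase structure tree is
# rewritten into TSL gloss order by CFG transfer rules (invented example).

# A tiny parse tree: (label, children); leaves are lists of word strings.
tree = ("S", [("NP", ["我"]),
              ("VP", [("V", ["看"]), ("NP", ["書"])])])

# Transfer rules: (parent label, child labels) -> target child order.
# E.g. a hypothetical rule: a Chinese V-NP verb phrase becomes NP-V in TSL.
transfer = {("VP", ("V", "NP")): (1, 0)}

# Word-level lexicon mapping Chinese words to TSL glosses (invented).
lexicon = {"我": "ME", "看": "SEE", "書": "BOOK"}

def to_gloss(node):
    label, children = node
    if all(isinstance(c, str) for c in children):      # leaf node: words
        return [lexicon[w] for w in children]
    order = transfer.get((label, tuple(c[0] for c in children)),
                         range(len(children)))         # default: keep order
    return [g for i in order for g in to_gloss(children[i])]

print(to_gloss(tree))  # ['ME', 'BOOK', 'SEE']
```

The default branch keeps source order when no rule matches, so unmatched constituents fall back to word-by-word translation, mirroring the back-off idea in transfer-based systems.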
    Experiments verify that all three proposed translation architectures effectively improve the reliability and robustness of translation. The evaluation comprises objective and subjective parts: objectively, the Bilingual Evaluation Understudy (BLEU) metric and the word error rate provide standard measures of translation quality; subjectively, mean opinion scores (MOS) and comprehension tests gauge user satisfaction with the system. In addition, several corpus construction methods are proposed in the course of this research, and analysis shows that they meet their intended goals. Finally, for sign language presentation, the proposed video-based concatenative synthesis successfully preserves the meaning of the translated output.
    In terms of applications, the proposed Chinese-to-sign-language translation technology can be integrated into multimedia services to automatically render information as sign language video, assisting the deaf in daily life and in communicating with others. The approach of using well-developed linguistic resources to support a minor language applies not only to Taiwanese Sign Language but also to machine translation research for other minor or low-resource languages. How to exploit non-manual features of sign language effectively remains a major goal for future improvement; as Taiwanese Sign Language linguistics matures, these unique expressive devices should become systematically usable.

    Sign language is a visual/gestural language that serves as the primary means of communication for deaf individuals, just as spoken languages do among the hearing. Even though Sign Language (SL) is their first language (L1), the Deaf are usually forced to use spoken languages in daily life to communicate with hearing people. To support their participation in education, employment, and society at large, sign language machine translation (SLMT) technologies could give the Deaf an intuitive way of communicating and an assistive tool for sign language learning in special education. Unlike ordinary statistical machine translation, the most crucial problem in SLMT is the lack of a sufficiently large bilingual corpus with unified annotations; in addition, rendering the sign language output is quite a challenge.
    To deal with these problems, this study started with the classical statistical machine translation models, the IBM models, and then introduced several kinds of linguistic information to relieve the data sparseness that arises when a statistical approach is trained on a small corpus. First, a statistical alignment model based on phrase fragments is proposed to reduce the complexity of word-level alignment. Next, phrase structure is adopted to systematize the linear phrase fragments defined over sentences. Finally, an innovative mechanism combining knowledge-based and statistical approaches is presented: a translation memory, an example base retrieved from the parallel corpus, is used to generate transfer rules statistically. Developing the bilingual corpus is itself an important issue; because annotation is labor-intensive, this dissertation presents a complete procedure for collecting and annotating a size-limited parallel corpus. For sign language display, a concatenative synthesis approach built on a specially designed transition-balanced video corpus is proposed.
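As a minimal sketch of the statistical starting point, the following toy EM loop estimates word-to-gloss translation probabilities in the style of IBM Model 1. The two Chinese/TSL-gloss sentence pairs are invented toy data, not drawn from the thesis corpus.

```python
# Toy IBM Model 1-style EM training for Chinese-to-TSL-gloss alignment.
from collections import defaultdict

corpus = [  # invented (Chinese words, TSL glosses) pairs
    (["我", "愛", "你"], ["ME", "LOVE", "YOU"]),
    (["我", "看", "書"], ["ME", "BOOK", "SEE"]),
]

# Uniform initialisation of t(gloss | chinese word).
t = defaultdict(lambda: 1.0)

for _ in range(10):                          # EM iterations
    count = defaultdict(float)               # expected counts c(g, c)
    total = defaultdict(float)               # marginals c(c)
    for zh, gloss in corpus:
        for g in gloss:
            z = sum(t[(g, c)] for c in zh)   # normaliser over alignments
            for c in zh:
                p = t[(g, c)] / z            # posterior alignment prob.
                count[(g, c)] += p
                total[c] += p
    for (g, c), v in count.items():          # M-step: re-estimate t
        t[(g, c)] = v / total[c]

best = max(["ME", "LOVE", "YOU"], key=lambda g: t[(g, "我")])
print(best)  # "我" co-occurs with ME in both pairs, so ME wins
```

Because "我" co-occurs with ME in both sentence pairs but with every other gloss only once, the expected counts concentrate on the ME alignment after a few iterations, which is exactly the co-occurrence signal a small parallel corpus provides.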
    The designed corpora were analyzed to verify their reliability, and the analyses show that they satisfy the goals they were designed for. Several experiments were conducted to evaluate the translation performance of the proposed machine translation architectures and their comprehensibility for the Deaf. The BLEU metric and word error rate were used for objective evaluation, and mean opinion scores and a reading comprehension test for subjective evaluation. The experiments demonstrate that the proposed approaches outperform a previous MT system as well as traditional SMT. According to the evaluations, the architectures that exploit linguistic information make translation quality more robust, especially when translating longer sentences. In the reading comprehension tests, deaf students found the sign sequences and synthesized videos generated by the proposed method satisfactory.
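The two objective metrics can be sketched as follows: `wer` is the standard Levenshtein-based word (sign) error rate, while `bleu2` is a deliberately simplified sentence-level BLEU using only unigram and bigram precision (the real metric uses up to 4-grams and corpus-level counts). The gloss sequences are invented examples.

```python
# Simplified objective metrics: sign error rate and a 2-gram BLEU sketch.
import math
from collections import Counter

def wer(ref, hyp):
    """Levenshtein distance over sign glosses, normalised by |ref|."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[-1][-1] / len(ref)

def bleu2(ref, hyp):
    """Geometric mean of 1/2-gram precision with a brevity penalty."""
    precs = []
    for n in (1, 2):
        h = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        r = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        match = sum(min(c, r[g]) for g, c in h.items())  # clipped matches
        precs.append(match / max(sum(h.values()), 1))
    bp = min(1.0, math.exp(1 - len(ref) / len(hyp)))     # brevity penalty
    return bp * math.exp(sum(math.log(p) for p in precs) / 2) if all(precs) else 0.0

ref = ["ME", "BOOK", "SEE"]
hyp = ["ME", "SEE", "BOOK"]
print(wer(ref, hyp))  # 2 edits over 3 reference signs
```

Note that the swapped pair costs two substitutions under WER, while BLEU's bigram precision drops to zero for the same hypothesis; the thesis reports both metrics precisely because they penalize ordering errors differently.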
    In terms of applications, the proposed approach can be integrated into electronic public services to provide the Deaf with an intuitive interface for receiving information. The use of linguistic information presented in this study will not only assist the analysis of TSL but also provide a way to develop machine translation for other minor languages. In the future, more TSL characteristics can be considered and exploited as TSL linguistics develops.

    Front matter: Certificates of Approval (Chinese and English), Chinese Abstract, Abstract, Acknowledgements, Contents, List of Figures, List of Tables
    Chapter 1. Introduction
      1.1 Motivation
        1.1.1 Purpose and Specific Aim
        1.1.2 Significance
      1.2 Background and Literature Review
        1.2.1 Machine Translation
        1.2.2 Sign Language Synthesis
      1.3 Organization
    Chapter 2. Corpus Development
      2.1 Linguistic Resources
        2.1.1 HowNet
        2.1.2 Sinica Treebank
      2.2 Bilingual Corpus
        2.2.1 Sign Language Annotation
        2.2.2 TSL Dictionary
        2.2.3 Chinese/TSL Parallel Text Corpus
      2.3 Sign Video Corpus
        2.3.1 Transition-Balanced Corpus Development
        2.3.2 Annotation for Video Clips
    Chapter 3. Fragment-Based Alignment Translation
      3.1 Translation Model
      3.2 Alignment Probability Estimation
      3.3 TSL Language Model Probability Considering Inter-Sign Epenthesis
      3.4 Evaluation and Experiments
        3.4.1 Bilingual Corpus Analysis
      3.5 Sign Video Synthesis
        3.5.1 Evaluation of the Sign Video Concatenation
        3.5.2 Case Study
    Chapter 4. Structural Machine Translation with Statistical Transferring
      4.1 Probability Estimation of CFG Rules
        4.1.1 Probability Estimation of CFG Rule
      4.2 Translation Model
        4.2.1 Phrase Structure Tree of a Chinese Sentence
        4.2.2 CFG Rule Transfer Probability
        4.2.3 TSL PCFG Training
      4.3 Evaluation and Experiments
        4.3.1 Corpus Analysis
        4.3.2 Evaluation of Translation Quality
        4.3.3 Objective Evaluation
        4.3.4 Subjective Evaluation
    Chapter 5. Structural Statistical Translation via Translation Memory
      5.1 Translation Memory Extraction
      5.2 Structural Translation Model via Translation Memory
        5.2.1 Translation Model
        5.2.2 Thematic Role Modeling
        5.2.3 Word-based Translation and Agreement Determination
      5.3 Evaluation and Experiments
        5.3.1 Analyses of the Bilingual Corpus and the Extracted Translation Memory
        5.3.2 Word Error Rate
        5.3.3 BLEU Metric
        5.3.4 Mean Opinion Scores
        5.3.5 Case Studies
    Chapter 6. Conclusions
    Back matter: Appendix, References, Biography, Publications


    Full text available on campus: 2014-07-02; off campus: 2019-07-02