| Author | 劉義凡 Liu, Yi-Fan |
|---|---|
| Thesis title | 發展基於影像辨識技術的數位墨水應用於互動繪畫藝術 (The Development of Image-Recognition-based Digital Ink Apply to Interactive Painting Art) |
| Advisor | 沈揚庭 Shen, Yang-Ting |
| Degree | Master |
| Department | College of Planning and Design, Master Program on Techno Art |
| Year of publication | 2021 |
| Academic year | 109 (2020-2021) |
| Language | Chinese |
| Pages | 131 |
| Keywords (Chinese) | 人機互動、人工智慧、資料視覺化、手影、科技藝術 |
| Keywords (English) | Artificial Intelligence, Human-Computer Interaction, Information Visualization, Hand Shadows, Art & Technology |
The system developed in this thesis is composed of three elements: artificial intelligence, human-computer interaction, and data visualization. Artificial-intelligence techniques, combined with the practice of hand-shadow art, realize the human-computer interaction: participants interact with the computer through their hand shadows, the computer recognizes the shadow data, and the recognized data are rendered on a projected canvas through data visualization, completing the techno-art creation.

This painting system defines six hand poses; each pose draws a corresponding symbol at the corresponding position, so that participants, with the help of artificial intelligence, can use their own hand shadows to create new collage paintings. The six hand-shadow prototypes are designed from the silhouettes of animal shadows, embedding the shadows in a narrative context. The exhibition layout adds the poetic atmosphere of moon-gazing: the visualized image is presented as a circle to evoke the feeling of the "moon". The whole system design attempts to create the artistic atmosphere of animals strolling on the moon and leaving footprints, so that an otherwise emotionless, inanimate technology takes on the narrative quality of a fable, enriches the space for imagination, and lends the system a sense of vitality.

The system generates a unique appearance from each participant's interaction process, and through it the thesis explores three points: (1) the intersection of technology and art, (2) interactive art that integrates natural hand gestures with artificial intelligence, and (3) digital-ink creation based on image recognition.
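As a rough illustration only, the pipeline the abstract describes (recognize a hand pose, then stamp the matching symbol at the hand's position on the projected canvas) can be sketched as below. The landmark layout follows the 21-point MediaPipe Hands format cited in the references, but the pose heuristic, the six symbol names, and the canvas size are all hypothetical stand-ins, not the thesis's actual classifier or symbol set.

```python
# Illustrative sketch of a pose -> symbol "digital ink" step.
# Input: 21 normalized (x, y) hand landmarks (MediaPipe Hands layout).
from typing import List, Tuple

Landmarks = List[Tuple[float, float]]  # 21 normalized (x, y) points

# One symbol per pose; the thesis uses six animal-shadow prototypes,
# but these particular names are invented for the example.
POSE_SYMBOLS = ["bird", "rabbit", "dog", "deer", "wolf", "butterfly"]

def _dist(p: Tuple[float, float], q: Tuple[float, float]) -> float:
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def count_extended_fingers(lm: Landmarks) -> int:
    """Crude heuristic: a finger counts as extended when its tip lies
    farther from the wrist (landmark 0) than its preceding joint."""
    tips, joints = (4, 8, 12, 16, 20), (3, 6, 10, 14, 18)
    wrist = lm[0]
    return sum(_dist(lm[t], wrist) > _dist(lm[j], wrist)
               for t, j in zip(tips, joints))

def stamp_symbol(lm: Landmarks, canvas_w: int = 1920,
                 canvas_h: int = 1080) -> dict:
    """Map the 0..5 extended-finger count to one of the six symbols and
    anchor it at the hand centroid, scaled to canvas pixels."""
    symbol = POSE_SYMBOLS[count_extended_fingers(lm)]
    cx = sum(x for x, _ in lm) / len(lm)
    cy = sum(y for _, y in lm) / len(lm)
    return {"symbol": symbol, "x": int(cx * canvas_w), "y": int(cy * canvas_h)}
```

In the actual installation the landmarks would come from a real-time hand tracker (e.g. MediaPipe Hands, Zhang et al. 2020), and the stamped symbols would accumulate on the circular "moon" canvas over the course of the interaction.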
English References
Almoznino, A., & Pinas, Y. (2002). The art of hand shadows. Courier Corporation.
Alatbani, A. (1995). Artistic Features of Selected Works of Contemporary Art Related to Modern Technology and their Role in Enriching the Artistic Taste. Unpublished MA Thesis. Faculty of Art Education, Helwan University, Cairo.
Beesley, P., Gorbet, R., & Armstrong, R. (2010). Hylozoic ground. Urban Environment Design.
Bughin, J., Hazan, E., Ramaswamy, S., Chui, M., Allas, T., Dahlstrom, P., ... & Trench, M. (2017). Artificial intelligence: The next digital frontier? McKinsey Global Institute.
Cho, Y. T., Kuo, Y. L., Yeh, Y. T., & Lee, Y. C. (2019, October). MovIPrint: Move, Explore and Fabricate. In Proceedings of the 27th ACM International Conference on Multimedia (pp. 1151-1152).
Fast, J. (1970). Body language. Simon and Schuster.
Harper, R., Rodden, T., Rogers, Y., & Sellen, A. (2008). Being human: Human-computer interaction in the year 2020. Microsoft Research.
Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., ... & Adam, H. (2017). MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.
Hsu, S. W., & Li, T. Y. (2005). Planning character motions for shadow play animations. Proc. CASA, 5, 184-190.
Harris, M. (1998). Tim Noble and Sue Webster.
Huang, J., Rathod, V., Sun, C., Zhu, M., Korattikara, A., Fathi, A., ... & Murphy, K. (2017). Speed/accuracy trade-offs for modern convolutional object detectors. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7310-7311).
Iverson, J. M., & Goldin-Meadow, S. (1998). Why people gesture when they speak. Nature, 396(6708), 228.
Ishii, H. (2008). The tangible user interface and its evolution. Communications of the ACM, 51(6), 32-36.
Ishii, H., & Ullmer, B. (1997, March). Tangible bits: towards seamless interfaces between people, bits and atoms. In Proceedings of the ACM SIGCHI Conference on Human factors in computing systems (pp. 234-241).
Jansen, B. J. (1998). The graphical user interface. ACM SIGCHI Bulletin, 30(2), 22-26.
Khan, M., & Khan, S. S. (2011). Data and information visualization methods, and interactive mechanisms: A survey. International Journal of Computer Applications, 34(1), 1-14.
McNeill, D. (2008). Gesture and thought. University of Chicago Press.
Othman, N. A., & Aydin, I. (2018, October). A new deep learning application based on Movidius NCS for embedded object detection and recognition. In 2018 2nd International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT) (pp. 1-5). IEEE.
Ponti, M. A., Ribeiro, L. S. F., Nazare, T. S., Bui, T., & Collomosse, J. (2017, October). Everything you wanted to know about deep learning for computer vision but were afraid to ask. In 2017 30th SIBGRAPI conference on graphics, patterns and images tutorials (SIBGRAPI-T) (pp. 17-41). IEEE.
Rogers, Y., Sharp, H., & Preece, J. (2011). Interaction design: beyond human-computer interaction. John Wiley & Sons.
Ryokai, K., Marti, S., & Ishii, H. (2004, April). I/O brush: drawing with everyday objects as ink. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 303-310).
Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., & Chen, L. C. (2018). MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4510-4520).
Sanjay, N. S., & Ahmadinia, A. (2019, December). MobileNet-Tiny: A deep neural network-based real-time object detection for Raspberry Pi. In 2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA) (pp. 647-652). IEEE.
Sandler, W., & Lillo-Martin, D. (2006). Sign language and linguistic universals. Cambridge University Press.
Shen, Y. T., & Do, E. Y. L. (2010, January). Making digital leaf collages with blow painting! In Proceedings of the fourth international conference on Tangible, embedded, and embodied interaction (pp. 265-268).
Shaer, O., & Hornecker, E. (2010). Tangible user interfaces: past, present, and future directions. Now Publishers Inc.
Silverman, L. K. (2002). Upside-down brilliance: The visual-spatial learner. Denver, CO: DeLeon Publishing.
Viégas, F. B., & Wattenberg, M. (2007, July). Artistic data visualization: Beyond visual analytics. In International Conference on Online Communities and Social Computing (pp. 182-191). Springer, Berlin, Heidelberg.
Wang, R. J., Li, X., & Ling, C. X. (2018). Pelee: A real-time object detection system on mobile devices. arXiv preprint arXiv:1804.06882.
Wellner, P. (1993). Computer-augmented environments: Back to the real world. Communications of the ACM, 36(7), 24-26.
Weiser, M. (1993). Hot topics-ubiquitous computing. Computer, 26(10), 71-72.
Zhang, F., Bazarevsky, V., Vakunov, A., Tkachenka, A., Sung, G., Chang, C. L., & Grundmann, M. (2020). MediaPipe Hands: On-device Real-time Hand Tracking. arXiv preprint arXiv:2006.10214.
Zhang, X., Zhou, X., Lin, M., & Sun, J. (2018). ShuffleNet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 6848-6856).
Zoph, B., Vasudevan, V., Shlens, J., & Le, Q. V. (2018). Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 8697-8710).
Chinese References
陳嘉懿. (2006). 由人機互動介面理論探討智慧空間設計 [Exploring smart space design through human-computer interaction interface theory].
陳建雄 (譯). (2013). 《互動設計:跨越人—電腦互動》 [Chinese translation of Interaction design: Beyond human-computer interaction]. 台北: 全華科技.
姚錫凡, 練肇通, 楊嶺, 張毅, & 金鴻. (2014). 智慧製造:面向未來互聯網的人機物協同製造新模式 [Smart manufacturing: A new human-machine-object collaborative manufacturing mode for the future Internet].
盧煥錡. (2015). 智慧文創:整合互動設計與創客精神應用於互動體驗展場設計 [Smart cultural creativity: Integrating interaction design and the maker spirit in interactive experience exhibition design].
吳念瑾. (2017). BIM應用於智慧建築互動調適之研究 [A study on applying BIM to interactive adaptation in smart buildings].
廖士豪. (2020). 整合AI電腦視覺與BIM電子圍籬發展智慧維運平台 [Integrating AI computer vision and BIM electronic fences to develop a smart operation and maintenance platform].
Online Resources
https://www.getit01.com/p2018062437545217/
https://tengshanyuan.com/2016/07/12/hci-an-interface-pov.html
https://tangible.media.mit.edu/person/hiroshi-ishii/
http://dacc.org.tw/files/common_unit/46261ba8-0676-47d3-b45a-dfa5cfb384bf/doc/%E6%89%8B%E8%AA%9E%E8%97%9D%E8%A1%93%E7%9A%84%E8%A6%96%E8%A6%BA%E7%AD%96%E7%95%A5.%202014.8.31.pdf
https://experiments.withgoogle.com/shadow-art
https://www.wikiwand.com/zh-hk/%E6%97%A5%E6%99%B7
https://zh.wikipedia.org/wiki/%E7%9A%AE%E5%BD%B1%E6%88%B2
http://www.chineseshadow.com/
https://kmsp.khcc.gov.tw/home02.aspx?ID=$1101&IDK=2&EXEC=L
http://www.timnobleandsuewebster.com/dirty_white_trash_1998.html
https://pruned.blogspot.com/2007/02/multi-touch-topography.html
http://blakew88.blogspot.com/2018/03/blog-post.html
https://medium.com/zajnocrew/using-metaphors-in-design-cef7c2fa9c64
Shewan, D. (2016, October 5). "Data is Beautiful: 7 Data Visualization Tools for Digital Marketers". Retrieved from https://www.business2community.com/online-marketing/data-beautiful-7-data-visualization-tools-digital-marketers-01668224
https://google.github.io/mediapipe/solutions/hands
https://refikanadol.com/works/wind-of-boston-data-paintings/
https://news.artnet.com/art-world/jerry-saltz-ai-art-1227932
http://www.ecloudproject.com/process.html