| Graduate Student: | 馬天彥 (Ma, Tien-Yan) |
|---|---|
| Thesis Title: | 使用行動運算與光學感測科技增進視覺輔助上的使用者體驗 (Using Mobile Computing and Optical Sensing Technologies to Enhance User Experience in Vision Assistance) |
| Advisor: | 侯廷偉 (Hou, Ting-Wei) |
| Degree: | Doctoral |
| Department: | Department of Engineering Science, College of Engineering |
| Year of Publication: | 2013 |
| Academic Year: | 101 (2012–2013) |
| Language: | English |
| Pages: | 59 |
| Keywords: | vision assistance, context awareness, pervasive computing, user experience, handheld device |
Combining mobile computing with context awareness opens up more possibilities for human-computer interaction, and this research focuses on using mobile devices together with optical sensors to improve the user experience in vision assistance. The users who need vision assistance are divided into two groups: people with normal vision and people with visual impairments. For the former, whose vision is sufficient to operate a handheld device, we aim to provide a more comfortable viewing experience. For the latter, who have difficulties with orientation and mobility, we propose an assistive technology device that makes getting around more convenient. On the visual-comfort side, we propose an automatic display brightness control for low-illumination environments and a method for stabilizing display content in vibrating environments. The front-facing camera of the handheld device senses the brightness and displacement of the user's head, and estimation algorithms compute an optimized display brightness and a displacement compensation for the display content, reducing the visual strain of viewing the screen. To assist the orientation and mobility of visually impaired users, we integrate a smartphone with a Kinect motion controller into an obstacle detection device: the depth-imaging capability of the Kinect determines the direction and distance of obstacles on the user's walking path, while the lightweight smartphone serves as the main control unit, so the whole device can be worn comfortably around the waist. Finally, we propose objective ways to measure the performance of these systems; the quantified experimental results allow us to evaluate the effectiveness and feasibility of the proposed solutions and to identify directions for future improvement.
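As a rough illustration of the visual-comfort loop described above, the sketch below pairs face detection on the front-camera frame with a simple exponential smoother: the smoothed face-region luminance drives the backlight level, and the deviation of the raw face position from its smoothed estimate drives the content offset. This is a minimal sketch, not the thesis's implementation: it uses OpenCV's stock Haar face detector, substitutes exponential smoothing for the Kalman-style estimation the work relies on, and all constants and names (`TARGET_LUMA`, `SMOOTHING`, `ComfortEstimator`) are illustrative.

```python
import cv2
import numpy as np

class ComfortEstimator:
    """Per-frame brightness and content-offset estimation (illustrative)."""
    TARGET_LUMA = 0.5  # normalized face luminance treated as "bright enough"
    SMOOTHING = 0.2    # exponential smoothing factor (stand-in for Kalman filtering)

    def __init__(self):
        self.cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        self.center = None  # smoothed face-center estimate (pixels)
        self.luma = None    # smoothed face-region luminance in [0, 1]

    def process(self, frame):
        """Map one BGR front-camera frame to (backlight, content_offset), or None."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = self.cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        if len(faces) == 0:
            return None  # no face found: keep the previous settings
        x, y, w, h = faces[0]
        center = np.array([x + w / 2.0, y + h / 2.0])
        luma = gray[y:y + h, x:x + w].mean() / 255.0
        if self.center is None:
            self.center, self.luma = center, luma
        else:
            a = self.SMOOTHING
            self.center = (1 - a) * self.center + a * center
            self.luma = (1 - a) * self.luma + a * luma
        # Brightness: dim the backlight in dark surroundings to cut glare,
        # clamped to a usable range.
        backlight = float(np.clip(self.luma / self.TARGET_LUMA, 0.1, 1.0))
        # Stabilization: treat the raw position's deviation from the smoothed
        # estimate as vibration, and shift the display content against it.
        offset = -(center - self.center)
        return backlight, offset
```

In practice the offset would be applied by shifting the rendered content a few pixels against the measured vibration, and the backlight value handed to the platform's brightness API.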
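The obstacle-detection side can be pictured in a similarly compact form. The sketch below assumes the Kinect depth frame is already available as a NumPy array of millimetre distances (libfreenect can supply such frames) and splits the field of view into three sectors to report the direction and distance of the nearest obstacle; the sector layout and range thresholds are assumptions for illustration, not the thesis's parameters.

```python
import numpy as np

ALERT_RANGE_MM = 2000  # warn about anything closer than 2 m (illustrative)
MIN_VALID_MM = 400     # the Kinect cannot measure closer than roughly 0.4 m
SECTORS = ("left", "center", "right")

def nearest_obstacle(depth_mm):
    """Return (sector, distance_mm) of the closest in-range obstacle, or None."""
    h, w = depth_mm.shape
    nearest = None
    for i, name in enumerate(SECTORS):
        # Split the depth image into three vertical sectors of equal width.
        sector = depth_mm[:, i * w // 3:(i + 1) * w // 3]
        in_range = (sector > MIN_VALID_MM) & (sector < ALERT_RANGE_MM)
        if in_range.any():
            d = int(sector[in_range].min())
            if nearest is None or d < nearest[1]:
                nearest = (name, d)
    return nearest

# Example: a synthetic frame with an obstacle 1.2 m away on the right.
frame = np.full((480, 640), 4000, dtype=np.uint16)
frame[200:300, 500:600] = 1200
print(nearest_obstacle(frame))  # -> ('right', 1200)
```

The smartphone, as the control unit, would run a loop like this over incoming depth frames and convert each (sector, distance) result into feedback for the user.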