| Author: | 李元碩 Li, Yuan-Shuo |
|---|---|
| Thesis title: | 基於CNN之樹種分類於無人機採集之高分辨率多光譜影像數據 (Tree Species Identification Based on CNN by Employing High Resolution Multispectral Image of UAV Observation) |
| Advisor: | 黃悅民 Huang, Yueh-Min |
| Degree: | Master |
| Department: | College of Engineering, Department of Engineering Science |
| Year of publication: | 2021 |
| Academic year of graduation: | 109 |
| Language: | Chinese |
| Pages: | 71 |
| Keywords (Chinese): | 多光譜影像、卷積神經網路、影像融合、作物分類、無人飛行載具 |
| Keywords (English): | Multispectral Image, Convolutional Neural Network, Image Fusion, Crop Classification, Unmanned Aerial Vehicles (UAV) |
To address the problems of mixed and unevenly distributed crops and the lack of structured management on Taiwan's hillside farmland, this study proposes a method in which an unmanned aerial vehicle (UAV) carrying a multispectral camera and an optical camera flies scanning passes over the experimental slope area, and an orthophoto of the study region is produced from the captured images. The orthophoto gives a rough overview of the distribution of crops, buildings, roads, and so on within the study area, but it cannot by itself provide more detailed information such as the distribution of tree species and the area each species occupies. This study therefore classifies the tree species, roads, and buildings in the study area, identifying them through the predictions of a convolutional neural network to obtain an accurate tree species recognition system. Based on the recognition results, the orthophoto of the study area can be color-coded to precisely display the distribution of the different crops and other targets, and, using the orthophoto's ground sample distance (GSD), the area occupied by each crop class and its precise coordinates can be computed.
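The per-class area computation described above follows directly from the GSD: each pixel of the orthophoto covers GSD × GSD metres on the ground, so a class's area is its pixel count times GSD squared. A minimal sketch (the function and class names are illustrative, not from the thesis):

```python
def class_areas(class_pixel_counts, gsd_m):
    """Return the ground area (m^2) covered by each class.

    class_pixel_counts: dict mapping class name -> number of pixels
        labelled with that class in the classified orthophoto.
    gsd_m: ground sample distance of the orthophoto, in metres/pixel.
    """
    pixel_area = gsd_m ** 2  # ground area (m^2) covered by one pixel
    return {name: n * pixel_area for name, n in class_pixel_counts.items()}

# Example: a 5 cm/pixel orthophoto in which 40,000 pixels were
# classified as trees and 12,500 as road.
areas = class_areas({"tree": 40000, "road": 12500}, gsd_m=0.05)
# areas["tree"] is about 100 m^2, areas["road"] about 31.25 m^2
```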
Besides ordinary optical images, the dataset used in this study also includes multispectral images. A multispectral image records the specific wavelengths actually reflected when light strikes an object, and different kinds of objects reflect and absorb particular spectral bands differently; plants, for example, reflect a large amount of near-infrared light and therefore stand out clearly in near-infrared imagery. This study exploits these spectral-band characteristics of multispectral images to improve the accuracy of the recognition model. Although multispectral images carry more detailed band information, their inherent limitations give them a lower spatial resolution than ordinary optical images. This study therefore applies four image fusion methods, namely the Brovey transform, HSV, principal components, and Gram-Schmidt fusion, to raise the spatial resolution of the visible-band multispectral images. Training on each fused dataset separately shows that fusing the multispectral images with the optical images by the principal components method works best. Finally, the fused visible bands are combined for training with the multispectral near-infrared band, the red-edge band, and the normalized difference vegetation index (NDVI); the results show that the visible-band fused image paired with either the near-infrared or the red-edge band gives the best prediction accuracy.
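The normalized difference vegetation index mentioned above has the standard definition NDVI = (NIR − Red) / (NIR + Red), which exploits exactly the property described: vegetation reflects strongly in the near-infrared and absorbs red light. A minimal per-pixel sketch with NumPy (the band arrays are illustrative reflectance values):

```python
import numpy as np

def ndvi(nir, red, eps=1e-10):
    """Normalized Difference Vegetation Index per pixel.

    nir, red: arrays of near-infrared and red reflectance.
    eps guards against division by zero on very dark pixels.
    """
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)

# Vegetation (high NIR, low red) scores well above 0, approaching +1,
# while bare soil or pavement scores near 0.
veg = ndvi([0.6], [0.2])   # ≈ 0.5
```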
To address the mixed and unevenly distributed crops and the lack of structured management on Taiwan's mountain slopes, this study uses an unmanned aerial vehicle (UAV) equipped with multispectral and optical cameras to scan and photograph the slope from the air and to produce an orthophoto of the area. The research focuses on classifying the tree species, roads, and buildings in the study area, using the predictions of a convolutional neural network to achieve accurate identification and classification of tree species. From the recognition results, a color-coded orthophoto of the study area can be produced that more precisely displays the distribution of the various crops and other targets.
However, in addition to ordinary optical images, the dataset used in this research also contains multispectral images. A multispectral image records the specific wavelengths reflected when light strikes an object, and different types of objects reflect and absorb particular spectral bands differently, which can be exploited to improve the accuracy of the identification model. Although multispectral images carry more detailed band information, their spatial resolution is lower than that of ordinary optical images. This study therefore applies four image fusion methods, the Brovey transform, HSV, principal components, and Gram-Schmidt, to improve the spatial resolution. Finally, images of different bands are combined for training, and the results are integrated and compared.
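Of the four fusion methods named above, the Brovey transform is the simplest to state: each low-resolution multispectral band is scaled by the ratio of the high-resolution intensity image to the sum of the multispectral bands, injecting spatial detail while preserving the band ratios. A minimal sketch, assuming the multispectral bands have already been upsampled to the optical image's pixel grid (function and variable names are illustrative):

```python
import numpy as np

def brovey_fusion(ms_bands, pan, eps=1e-10):
    """Brovey-transform pan-sharpening.

    ms_bands: array of shape (n_bands, H, W), multispectral bands
        resampled to the high-resolution grid.
    pan: array of shape (H, W), the high-resolution intensity image.
    Each fused band is ms_i * pan / sum(ms): spatial detail comes
    from `pan`, while the ratios between bands are preserved.
    """
    ms = np.asarray(ms_bands, dtype=np.float64)
    total = ms.sum(axis=0) + eps  # per-pixel sum over bands
    return ms * pan / total

# Toy 1x1-pixel example: three bands with ratio 1:2:1 and a brighter
# high-resolution intensity value.
ms = np.array([[[0.1]], [[0.2]], [[0.1]]])
pan = np.array([[0.8]])
fused = brovey_fusion(ms, pan)   # band ratio 1:2:1 is preserved
```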