
Graduate Student: Feng, Tzu-Ming (馮子明)
Thesis Title: Construction of RGB-based Vegetation Indices using Artificial Neural Network (使用類神經網路建構可見光波段植生指標)
Advisor: Lin, Chao-Hung (林昭宏)
Degree: Master
Department: College of Engineering - Department of Geomatics
Year of Publication: 2020
Graduation Academic Year: 108
Language: Chinese
Number of Pages: 94
Chinese Keywords: 植生指標 (Vegetation Index), 類神經網路 (Artificial Neural Network), 無人飛行載具 (Unmanned Aerial Vehicle)
Foreign Keywords: Unmanned Aerial Vehicle, Vegetation Index, Artificial Neural Network
    In recent years, the development of unmanned aerial vehicles (UAVs) has matured considerably. Their small size and high mobility allow field-crop images to be acquired quickly and over large areas, effectively replacing visual inspection by humans. Through image analysis and quantification, crop growth information can be determined precisely, providing a basis for field operations such as fertilization, pest and disease control, and water management, thereby reducing labor requirements and raising production efficiency. The cameras built into UAVs usually capture only the visible (RGB) bands; multispectral or hyperspectral cameras can be mounted on a UAV to acquire imagery over a wider spectral range, but under a limited budget their cost is prohibitive, so using the UAV's built-in RGB camera is the more practical option. This study uses visible-band images captured by a UAV to estimate the growth status and growth information of crops in the field.

    This study treats the Normalized Difference Vegetation Index (NDVI) as the standard indicator of crop growth status and predicts NDVI values from visible-band values. The work has two main parts: visible-band vegetation indices and the construction of an artificial neural network model. When the spectral range of the imagery is limited, visible-band vegetation indices can also serve as a basis for judging crop growth, but each index has its own strengths and weaknesses. This study therefore analyzes and compares the linear relationship between various visible-band vegetation indices and NDVI under different land-cover conditions, to examine whether such an index can replace NDVI as the standard indicator of crop growth. For the model-construction part, the visible bands are taken as the model input, and the neural network predicts the NDVI value; different network architectures are also compared in terms of model performance. Experimental results show that visible-band vegetation indices correlate strongly with NDVI only over specific land-cover types and cannot be used everywhere, whereas the neural network model can predict NDVI across different land-cover types and image spatial resolutions.
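    The record above does not reproduce the index formulas, but the comparison it describes, a visible-band index versus NDVI, can be sketched directly. In the hedged example below, the choice of VARI as the RGB index, the toy reflectance values, and all variable names are illustrative assumptions, not taken from the thesis; both indices are computed per pixel and their linear relationship is measured with a Pearson correlation:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-9)  # epsilon avoids division by zero

def vari(red, green, blue):
    """Visible Atmospherically Resistant Index, an RGB-only vegetation index."""
    red, green, blue = (b.astype(float) for b in (red, green, blue))
    return (green - red) / (green + red - blue + 1e-9)

# Toy reflectance values for four pixels, from dense vegetation to bare soil.
red   = np.array([0.05, 0.08, 0.20, 0.30])
green = np.array([0.12, 0.10, 0.22, 0.28])
blue  = np.array([0.04, 0.05, 0.18, 0.25])
nir   = np.array([0.60, 0.45, 0.30, 0.32])

# Pearson correlation between the RGB-only index and NDVI over the sample.
r = np.corrcoef(vari(red, green, blue), ndvi(nir, red))[0, 1]
print(round(float(r), 3))
```

    Over real imagery the same correlation would be computed separately per land-cover class, which is how the study evaluates whether an RGB index can stand in for NDVI.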

    Recently, unmanned aerial vehicle (UAV) technology has matured rapidly. The advantages of UAVs, light weight and high mobility, facilitate efficient farm monitoring and can greatly reduce the manpower required for crop monitoring. Precise farming information can be obtained from the analysis of crop images, which assists crop management tasks such as fertilization, pest control, and watering, reducing manpower requirements and increasing productivity. The main purpose of this work is to collect crop images with a UAV and to calculate a vegetation index (VI) from the UAV images. To achieve this objective, the study adopts the Normalized Difference Vegetation Index (NDVI) as the standard VI and uses a UAV as the camera platform. However, the UAV camera does not provide near-infrared (NIR) band information, which makes NDVI unavailable. As an alternative, a CNN model is proposed to estimate NDVI from RGB images.
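    As a minimal sketch of the kind of RGB-to-NDVI regressor described above (the single convolutional layer, the layer sizes, and the random weights are illustrative assumptions, not the thesis architecture, which compares residual blocks, SPP, and other variants), a patch of visible-band values can be mapped to one NDVI estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, kernels):
    """Valid 3x3 convolution: x is (H, W, C_in), kernels is (3, 3, C_in, C_out)."""
    kh, kw, _, c_out = kernels.shape
    H, W, _ = x.shape
    out = np.empty((H - kh + 1, W - kw + 1, c_out))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + kh, j:j + kw, :]           # (3, 3, C_in) window
            out[i, j] = np.tensordot(patch, kernels, axes=3)
    return out

def predict_ndvi(rgb_patch, w1, w2, b2):
    """One conv layer + ReLU, global average pooling, then a linear head
    squashed by tanh so the prediction lies in NDVI's [-1, 1] range."""
    feat = np.maximum(conv2d(rgb_patch, w1), 0.0)      # ReLU feature maps
    pooled = feat.mean(axis=(0, 1))                    # global average pool
    return np.tanh(pooled @ w2 + b2)                   # scalar in (-1, 1)

# Illustrative random weights; in the thesis these would be learned from
# co-registered RGB and NDVI imagery.
w1 = rng.normal(0, 0.1, size=(3, 3, 3, 8))
w2 = rng.normal(0, 0.1, size=8)
b2 = 0.0

patch = rng.random((16, 16, 3))                        # a 16x16 RGB patch
pred = predict_ndvi(patch, w1, w2, b2)
print(pred)
```

    The tanh output keeps the estimate inside NDVI's valid range; training such a model requires paired RGB and NIR imagery of the same scene, which is exactly the data the UAV experiments provide.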

    Abstract
    Extended English Abstract
    Acknowledgements
    Table of Contents
    List of Tables
    List of Figures
    Chapter 1: Introduction
        1.1 Preface
        1.2 Vegetation Indices
        1.3 Aerial Platforms and Spectral Bands of Imagery
        1.4 Artificial Neural Networks and Convolutional Neural Networks
        1.5 Motivation, Objectives, and Contributions
        1.6 Thesis Organization
    Chapter 2: Literature Review
        2.1 Vegetation Indices
        2.2 Artificial Neural Network (ANN)
        2.3 Applications of Deep Learning to Vegetation Indices
    Chapter 3: Methodology
        3.1 System Architecture
        3.2 Study Area
        3.3 Imagery Used
        3.4 Image Preprocessing
        3.5 Model Architecture
            3.5.1 Input Image Size
            3.5.2 Number of Convolutional Layers
            3.5.3 Residual Network
            3.5.4 Spatial Pyramid Pooling (SPP)
            3.5.5 Fully Convolutional Network (FCN)
            3.5.6 Concatenation Layer
            3.5.7 Weight Initialization
        3.6 Summary and Comparison
    Chapter 4: Experimental Results and Analysis
        4.1 Training and Test Sets
        4.2 Determining the Model Architecture
            4.2.1 Number of Convolutional Layers
            4.2.2 Residual Network
            4.2.3 Input Image Size
            4.2.4 Spatial Pyramid Pooling (SPP)
            4.2.5 Fully Convolutional Network
            4.2.6 Weight Initialization
            4.2.7 Concatenation Layer
            4.2.8 Model Selection
        4.3 Analysis of Image Spatial Resolution
        4.4 Land Cover and Visible-Band Vegetation Indices
            4.4.1 Visible-Band Vegetation Indices
            4.4.2 Buildings
            4.4.3 Roads
            4.4.4 Grassland
            4.4.5 Crops
            4.4.6 Trees
            4.4.7 Water
    Chapter 5: Conclusions
    References

    Agapiou, A., Hadjimitsis, D., & Alexakis, D. (2012). Evaluation of broadband and narrowband vegetation indices for the identification of archaeological crop marks. Remote Sensing, 4, 3892-3919.

    Chakraborty, B., Shaw, B., Aich, J., Bhattacharya, U., & Parui, S. K. (2018). Does deeper network lead to better accuracy: A case study on handwritten Devanagari characters. 2018 13th IAPR International Workshop on Document Analysis Systems (DAS), 411-416.

    Chen, W., Hong, H., Li, S., Shahabi, H., Wang, Y., Wang, X., & Ahmad, B. B. (2019). Flood susceptibility modelling using novel hybrid approach of reduced-error pruning trees with bagging and random subspace ensembles. Journal of Hydrology, 575, 864 - 873.

    Duan, T., Chapman, S., Guo, Y., & Zheng, B. (2017). Dynamic monitoring of NDVI in wheat agronomy and breeding trials using an unmanned aerial vehicle. Field Crops Research, 210, 71-80.

    Gupta, R. K. (1993). Comparative study of AVHRR ratio vegetation index and normalized difference vegetation index in district level agricultural monitoring. International Journal of Remote Sensing, 14(1), 53-73.

    Gitelson, A., Stark, R., Grits, U., Rundquist, D., Kaufman, Y., & Derry, D. (2002). Vegetation and soil lines in visible spectral space: A concept and technique for remote estimation of vegetation fraction. International Journal of Remote Sensing, 23, 2537-2562.

    Glorot, X., & Bengio, Y. (2010). Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 9, 249-256.

    Hunt, E. R., Doraiswamy, P. C., McMurtrey, J. E., Daughtry, C. S., Perry, E. M., & Akhmedov, B. (2013). A visible band index for remote sensing leaf chlorophyll content at the canopy scale. International Journal of Applied Earth Observation and Geoinformation, 21, 103-112.

    He, K., Zhang, X., Ren, S., & Sun, J. (2014). Spatial pyramid pooling in deep convolutional networks for visual recognition. Lecture Notes in Computer Science, 346–361.

    He, K., Zhang, X., Ren, S., & Sun, J. (2015). Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. 2015 IEEE International Conference on Computer Vision (ICCV), 1026-1034.

    He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770-778.

    Huang, G., Liu, Z., Maaten, L. V. D., & Weinberger, K. Q. (2017). Densely connected convolutional networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2261-2269.

    Jordan, C. F. (1969). Derivation of leaf area index from quality of light on the forest floor. Ecology, 50, 663-666.

    Jiang, T., Liu, X., & Wu, L. (2018). Method for mapping rice fields in complex landscape areas based on pre-trained convolutional neural network from HJ-1 A/B data. ISPRS International Journal of Geo-Information, 7(11), 418.

    Jay, S., Baret, F., Dutartre, D., Malatesta, G., Héno, S., Comar, A., … Maupas, F. (2019). Exploiting the centimeter resolution of UAV multispectral imagery to improve remote-sensing estimates of canopy structure and biochemistry in sugar beet crops. Remote Sensing of Environment, 231, 110898.

    Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 25, 1097-1105.

    Kerkech, M., Hafiane, A., & Canals, R. (2018). Deep learning approach with colorimetric spaces and vegetation indices for vine diseases detection in UAV images. Computers and Electronics in Agriculture, 155, 237-243.

    Khan, Z., Rahimi-Eichi, V., & Haefele, S. (2018). Estimation of vegetation indices for high-throughput phenotyping of wheat using aerial imaging. Plant Methods, 14, 20.

    Lecun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324.

    Lyon, J., Yuan, D., Lunetta, R., & Elvidge, C. (1998). A change detection experiment using vegetation indices. Photogrammetric Engineering and Remote Sensing, 64, 143–150.

    Long, J., Shelhamer, E., & Darrell, T. (2015). Fully convolutional networks for semantic segmentation. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 3431-3440.

    Limmer, M., & Lensch, H. (2016). Infrared colorization using deep convolutional neural networks. 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA), 61-68.

    Mao, W., Wang, Y., & Wang, Y. (2003). Real-time detection of between-row weeds using machine vision. 2003 ASAE Annual Meeting, 031004.

    Rouse, J. W., Haas, R., Schell, J., & Deering, D. (1973). Monitoring the vernal advancement and retrogradation (green wave effect) of natural vegetation. RSC 1978-4, Remote Sensing Center, Texas A&M Univ., College Station.

    Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. International Conference on Learning Representations.

    Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., … Rabinovich, A. (2015). Going deeper with convolutions. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1-9.

    Scarpa, G., Gargiulo, M., Mazza, A., & Gaetano, R. (2018). A CNN-based fusion method for feature extraction from Sentinel data. Remote Sensing, 10(2).

    Tucker, C. J. (1979). Red and photographic infrared linear combinations for monitoring vegetation. Remote Sensing of Environment, 8(2), 127 - 150.

    Yue, J., Yang, G., Li, C., Li, Z., Wang, Y., Feng, H., & Xu, B. (2017). Estimation of winter wheat above-ground biomass using unmanned aerial vehicle-based snapshot hyperspectral sensor and crop height improved models. Remote Sensing, 9(7), 708.

    Zhou, X., Zheng, H., Xu, X., He, J., Ge, X., Yao, X., … Tian, Y. (2017). Predicting grain yield in rice using multi-temporal vegetation indices from UAV-based multispectral and digital imagery. ISPRS Journal of Photogrammetry and Remote Sensing, 130, 246-255.

    Zhao, S., Liu, X., Ding, C., Liu, S., Wu, C., & Wu, L. (2020). Mapping rice paddies in complex landscapes with convolutional neural networks and phenological metrics. GIScience & Remote Sensing, 57(1), 37-48.

    林世峻, 莊智瑋, 何世華, & 林昭遠 (2008). 植生指標對影像分類準確度影響之研究 [A study on the effect of vegetation indices on image classification accuracy]. 水土保持學報 (Journal of Soil and Water Conservation), 40(2), 181-193.

    黃筱梅 (2001). SPOT 衛星影像於裸露地變遷之偵測研究—以和社地區為例 [Detection of bare-land change using SPOT satellite imagery: A case study of the Hoshe area] (Master's thesis). 國立臺灣大學森林學研究所 (Graduate Institute of Forestry, National Taiwan University).

    黃麗娟, 莊智瑋, 何世華, & 林昭遠 (2008). 衛星影像植生指標優選之研究 [A study on the optimal selection of vegetation indices for satellite imagery]. 水土保持學報 (Journal of Soil and Water Conservation), 40(1), 39-50.

    Full text available on campus from 2025-08-22; off campus from 2025-08-22.