
Graduate Student: Lo, Hong-Ping (勞宏斌)
Thesis Title: Farmland Status Recognition Using Deep Learning (深度學習用於農地調查)
Advisor: Wang, Chi-Kuei (王驥魁)
Degree: Master
Department: Department of Geomatics, College of Engineering
Year of Publication: 2019
Academic Year of Graduation: 107
Language: English
Number of Pages: 45
Keywords (Chinese): deep learning, image recognition, crop recognition
Keywords (English): Deep learning, Image Classification, Crop Recognition
    Chinese Abstract (translated): Farmland survey data are mainly used for yield estimation. The data include the location, the size, and the field status of each farmland. Because farmland goes through different cultivation periods, a continuously updated farmland survey dataset is key to good yield estimation. Field survey is the method commonly used today: professional surveyors must visit each field to identify its status. Although this method is highly accurate, it also requires substantial manpower.
    The purpose of this study is to evaluate the feasibility of using deep learning for farmland surveys, reducing the need for expert manpower and thereby increasing the production rate of farmland survey data. This study used the Inception-v3 deep convolutional neural network to classify farmland images.
    The training data comprise 87,023 labeled images covering 25 farmland statuses: 23 crop types (such as rice, wheat, and peanut) and 2 common cultivation states (fallow and inundated). For each status, 100 images were randomly selected as validation data. The final average validation accuracy reached 82.52%. The three highest-accuracy classes were inundated, banana, and pineapple, at 97%, 96%, and 95%, respectively; the three lowest were cole, soybean, and pennisetum, at 64%, 66%, and 66%, respectively.
    In addition, experiments were conducted at Lunjiao in Changhua and Chailinjiao in Chiayi. The Lunjiao site covers 210 farmlands and achieved 81.90% accuracy; the Chailinjiao site covers 506 farmlands and achieved 88.34% accuracy. These results confirm the possibility of using deep learning to reduce the need for expert manpower and to increase the production rate of farmland survey data.

    Farmland status data are the main component of yield estimation. The data include the location, the size, and the status of each farmland. However, crops continuously go through cultivation cycles, so frequent updates of the farmland status data are desired in order to obtain a good crop yield estimate. Currently, field survey is the commonly adopted method, carried out by experts who identify the crop type in the field. However, this method is labor-intensive.
    The purpose of this study is to evaluate the possibility of using deep learning to reduce the expert-knowledge requirement of the field crew and scale up the production of farmland status data. This study adopted the Inception-v3 model, a convolution-based neural network, to recognize the farmland status from images taken at a height of 1.5 to 1.7 m above the ground that cover the whole field.
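Before an image reaches the network, it is typically rescaled to the model's expected input range. As a minimal sketch in Python, assuming the standard Inception-v3 convention of scaling 8-bit pixel values to [-1, 1] (the function names are hypothetical, not taken from the thesis):

```python
def preprocess_pixel(value, max_value=255):
    """Scale one 8-bit pixel value into the [-1, 1] range Inception-v3 expects."""
    return 2.0 * value / max_value - 1.0

def preprocess_image(image):
    """Scale every pixel of a row-major grayscale image (nested lists)."""
    return [[preprocess_pixel(px) for px in row] for row in image]

# A tiny 2x2 "image": black, mid-gray, white, and dark-gray pixels.
sample = [[0, 128], [255, 64]]
scaled = preprocess_image(sample)
```

A real pipeline would also resize each photograph to the network's 299x299 input resolution before scaling.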
    The training data were collected across Taiwan with several handheld cameras and consist of 87,023 labeled images spanning 25 farmland statuses: 23 common crop types (rice, wheat, peanut, etc.) and 2 cultivation states (fallow and inundated). An average classification accuracy of 82.52% was achieved on a validation set of 100 images for each of the 25 statuses. Inundated, banana, and pineapple achieved the highest accuracies of 97%, 96%, and 95%, respectively; cole, soybean, and pennisetum gave the lowest accuracies of 64%, 66%, and 66%, respectively.
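Because every class has the same number of validation images (100), the reported average accuracy can be read as a macro average of the per-class accuracies. A minimal sketch of that computation, using the three best classes' figures as example counts (the exact per-class correct counts are assumptions for illustration):

```python
def per_class_accuracy(correct, total=100):
    """Fraction of a class's validation images that were classified correctly."""
    return correct / total

def macro_average(correct_counts, total=100):
    """Mean of per-class accuracies; with equal class sizes this also
    equals the overall validation accuracy."""
    return sum(per_class_accuracy(c, total) for c in correct_counts) / len(correct_counts)

# Hypothetical correct-prediction counts for 3 of the 25 classes
# (inundated, banana, pineapple at 97%, 96%, 95%).
counts = [97, 96, 95]
avg = macro_average(counts)  # ~0.96
```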
    Furthermore, two additional datasets, each covering an area of 2 km², were collected at Lunjiao (Changhua County) and Chailinjiao (Chiayi County). The Lunjiao dataset contains 210 farmlands and reached an accuracy of 81.90%; the Chailinjiao dataset contains 506 farmlands and reached an accuracy of 88.34%. The experimental results demonstrate that the proposed method has the potential to reduce the expert-knowledge requirements of the field crew and scale up the generation of farmland status data.
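When several photographs cover one farmland, the per-image predictions must be combined into a single farmland-level label before computing the accuracies above. The thesis does not state its aggregation rule here; one plausible choice is a majority vote, sketched below as an assumption:

```python
from collections import Counter

def farmland_label(image_predictions):
    """Majority vote over the predicted labels of all images of one farmland."""
    return Counter(image_predictions).most_common(1)[0][0]

def dataset_accuracy(farmland_predictions, surveyed_labels):
    """Fraction of farmlands whose voted label matches the surveyed label."""
    correct = sum(farmland_label(preds) == truth
                  for preds, truth in zip(farmland_predictions, surveyed_labels))
    return correct / len(surveyed_labels)

# Hypothetical per-image predictions for two farmlands.
preds = [["rice", "rice", "fallow"], ["banana", "banana"]]
truth = ["rice", "banana"]
acc = dataset_accuracy(preds, truth)  # 1.0
```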

    Table of Contents:
    Chinese Abstract  i
    Abstract  iii
    Acknowledgement  v
    Table of Content  vi
    List of Figures  viii
    List of Tables  x
    Chapter 1: Introduction  1
    Chapter 2: Data & Methodology  5
      2.1 Network Architecture  5
      2.2 Transfer Learning  12
      2.3 Training Data and Augmentation  13
      2.4 Lunjiao and Chailinjiao Experiment  19
    Chapter 3: Result and Discussion  20
      3.1 Network Training  20
      3.2 Comparison of Data Augmentation Methods  21
      3.3 Validation and Comparison of Different Input Image Sizes  23
      3.4 Lunjiao and Chailinjiao Experiment Result  34
    Chapter 4: Conclusion and Outlook  39
      4.1 Conclusion  39
      4.2 Outlook  40
    References  42


    Full-text access: on campus, public from 2022-07-15; off campus, not public.
    The electronic thesis has not been authorized for public release; for the printed copy, please check the library catalog.