| Student: | Lin, Zih-Sheng (林子勝) |
|---|---|
| Thesis Title: | Optimizing Indoor Pollutant Sensor Deployment through Artificial Intelligence |
| Advisor: | Lan, Kun-Chan (藍崑展) |
| Degree: | Master |
| Department: | College of Electrical Engineering and Computer Science - Department of Computer Science and Information Engineering |
| Year of Publication: | 2025 |
| Academic Year of Graduation: | 113 |
| Language: | English |
| Number of Pages: | 106 |
| Chinese Keywords: | 室內污染監測, 智慧感測器配置, 影像修補, 深度學習 |
| English Keywords: | Indoor Pollutant Monitoring, AI-based Sensor Placement, Image Inpainting, Deep Learning |
Accurate and efficient monitoring of indoor air pollutants is critical to safeguarding workplace health and safety. In practice, however, cost, maintenance, and spatial constraints often make large-scale sensor deployment infeasible. This study proposes a novel sensor-efficient framework that applies image-inpainting techniques to the sensor-data reconstruction task using VideoMAE, a self-supervised Transformer architecture. Unlike traditional methods or inpainting techniques that focus only on visual quality, the proposed model concentrates on recovering physically meaningful pollutant concentrations and is evaluated with relative-error metrics, ensuring that the results can be applied directly to real-world exposure risk analysis. Experiments on simulated computational fluid dynamics (CFD) data and real sensor measurements show that pollutant concentrations can be reconstructed accurately using only 25% of the sensors, with more than 99% of predicted values having an error below 10%. Even when airflow conditions change at test time, the model remains highly robust, with more than 93% of predicted values maintaining the same accuracy. These results demonstrate that inpainting-based reconstruction holds strong potential for indoor pollution monitoring with a limited number of sensors, laying the foundation for low-cost, scalable, and intelligent environmental monitoring.
Accurate and efficient monitoring of indoor air pollutants is essential for ensuring occupational health and safety. However, deploying dense sensor networks is often infeasible due to cost, maintenance, and spatial constraints. In this study, we propose a novel sensor-efficient framework using a video inpainting approach based on a self-supervised Transformer model, VideoMAE, to reconstruct missing sensor data. Unlike traditional methods or prior inpainting approaches that focus on visual quality, our model is tailored to recover physically meaningful pollutant concentrations and is evaluated using numerical accuracy metrics such as relative error. Experiments on both simulated Computational Fluid Dynamics (CFD) datasets and real-world sensor measurements demonstrate that our method can reconstruct pollutant concentrations with over 99% of predicted sensor values falling within 10% relative error while using only 25% of the full sensor grid. Moreover, even under changing airflow conditions during testing, the model maintains strong robustness, with over 93% of predicted values staying within the same accuracy threshold. These results confirm the potential of inpainting-based reconstruction for sensor-efficient pollutant monitoring and pave the way for scalable environmental assessment using minimal hardware deployment.
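The abstract evaluates reconstructions by the fraction of predicted sensor values whose relative error is below 10%. The following is a minimal sketch (not the thesis code) of that metric: given ground-truth pollutant concentrations, model reconstructions, and a mask marking the grid points withheld from the model (roughly 75% when only 25% of sensors are used), it reports the share of reconstructed points within the threshold. The array names, shapes, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def within_relative_error(y_true: np.ndarray,
                          y_pred: np.ndarray,
                          masked: np.ndarray,
                          threshold: float = 0.10,
                          eps: float = 1e-8) -> float:
    """Fraction of masked (reconstructed) points with
    |y_pred - y_true| / |y_true| below `threshold`."""
    rel_err = np.abs(y_pred - y_true) / (np.abs(y_true) + eps)
    return float(np.mean(rel_err[masked] < threshold))

# Illustrative usage on a synthetic 16-frame, 32x32 concentration field
# where ~25% of grid points are observed and the rest are reconstructed.
rng = np.random.default_rng(0)
truth = rng.uniform(0.1, 1.0, size=(16, 32, 32))          # ground-truth field
pred = truth * rng.normal(1.0, 0.03, size=truth.shape)    # stand-in reconstruction
masked = rng.random(truth.shape) > 0.25                    # ~75% withheld points
print(f"within 10% relative error: {within_relative_error(truth, pred, masked):.1%}")
```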