| Field | Value |
|---|---|
| Graduate Student | Yang, Jia-Hao (楊家豪) |
| Thesis Title | A Variable-Grain Consistency Maintenance Scheme for Shared Data on Emergency and Rescue Applications (一個應用於災害救援的可調變區塊大小之共享資料一致性維護機制) |
| Advisor | Shieh, Ce-Kuen (謝錫堃) |
| Co-Advisor | Chang, Jyh-Biau (張志標) |
| Degree | Master |
| Department | Institute of Computer & Communication Engineering (電機資訊學院 - 電腦與通信工程研究所) |
| Publication Year | 2010 |
| Graduation Academic Year | 98 |
| Language | English |
| Pages | 47 |
| Chinese Keywords | emergency and rescue (急難救援), optimistic replication (樂觀複製), pessimistic replication (悲觀複製) |
| Keywords | emergency and rescue, hybrid replication, optimistic replication, pessimistic replication |
In an emergency and rescue environment, the most important factor is the sharing of rescue information, which can determine the success or failure of the entire operation. Such sharing is difficult to achieve, however, because communication infrastructure may be unavailable at the disaster sites.
In these environments the network within a single site is generally relatively stable, while the network between sites is comparatively unreliable, and network partitions may occur between sites. Emergency and rescue applications can adopt the replication techniques commonly used in data grids to improve the efficiency of rescue-information sharing, but maintaining consistency among the multiple replicas of the rescue data then becomes the problem to solve.
In this thesis we propose Seagull, a system that transparently maintains the consistency of replicated rescue data in emergency and rescue environments. Seagull adopts optimistic replication between sites to provide a high degree of availability, and pessimistic replication within a site to provide stricter consistency. Seagull also adjusts the consistency granularity dynamically to improve performance, since a dynamic granularity yields a higher degree of parallelism under false sharing. Finally, because Seagull's data-consistency management is transparent, users can run their existing programs on Seagull without modification.
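The hybrid replication described above could be sketched roughly as follows. This is our own minimal illustration, not the thesis's implementation: all class and method names are hypothetical. Writers within one site serialize through a pessimistic lock, while state propagated between sites is merged optimistically using version vectors.

```python
# Hypothetical sketch of a Seagull-like hybrid replication scheme.
# Intra-site writes take a pessimistic lock (strong consistency);
# inter-site propagation is optimistic, reconciled via version vectors.
import threading

class Replica:
    def __init__(self, site):
        self.site = site
        self.data = {}
        self.vv = {}                   # version vector: site -> counter
        self.lock = threading.Lock()   # pessimistic intra-site lock

    def local_write(self, key, value):
        # Pessimistic path: serialize writers within the same site.
        with self.lock:
            self.data[key] = value
            self.vv[self.site] = self.vv.get(self.site, 0) + 1

    def merge(self, other):
        # Optimistic path: accept a remote replica's state when its
        # version vector dominates ours; a real system would also
        # detect and resolve concurrent (conflicting) versions.
        if all(other.vv.get(s, 0) >= c for s, c in self.vv.items()):
            self.data = dict(other.data)
        self.vv = {s: max(self.vv.get(s, 0), other.vv.get(s, 0))
                   for s in set(self.vv) | set(other.vv)}
```

For example, a write applied at site A is merged into site B's replica during inter-site reconciliation without B's writers ever blocking on A's lock.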
In the scenario of emergency and rescue operations, information sharing is the most important factor affecting the success or failure of the entire operation. However, efficient information sharing is difficult to achieve in such a scenario because no communication infrastructure may exist at the disaster sites.
Generally speaking, the network is relatively reliable in the intra-site environment and relatively unreliable in the inter-site environment. Moreover, network partitioning may occur between two sites. The replication techniques used in data grids can therefore be applied to emergency and rescue applications to improve the efficiency of information sharing; however, a data consistency problem then arises among the replicas.
In this context, we propose a middleware called Seagull that transparently manages data consistency for emergency and rescue applications. Seagull adopts optimistic replication in the inter-site environment to provide high availability, and pessimistic replication in the intra-site environment to provide strong consistency. Moreover, it adopts an adaptive consistency-granularity strategy that improves the performance of consistency management, because this strategy yields higher parallelism when false sharing occurs. Lastly, Seagull adopts a transparent data-consistency management scheme, so users do not need to modify their source code to run it on Seagull.
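The adaptive consistency-granularity idea can be illustrated with a small sketch. This is our own hypothetical illustration, not the thesis's algorithm: a consistency unit starts as one large block, and when two writers contend on disjoint ranges of the same block (false sharing), the manager splits the block so each range can be synchronized independently and in parallel.

```python
# Hypothetical illustration of adaptive consistency granularity.
# A unit is a half-open byte range (start, end); false sharing
# triggers a midpoint split so disjoint writers stop contending.
class GranularityManager:
    def __init__(self, size, min_size=64):
        self.units = [(0, size)]   # consistency units as (start, end)
        self.min_size = min_size   # smallest unit we are willing to make

    def unit_for(self, offset):
        for start, end in self.units:
            if start <= offset < end:
                return (start, end)
        raise ValueError("offset out of range")

    def report_conflict(self, off_a, off_b):
        # Called when two writers contend on the same unit. If their
        # offsets fall in different halves, the accesses are disjoint
        # (false sharing) and the unit is split at its midpoint.
        unit = self.unit_for(off_a)
        if unit != self.unit_for(off_b):
            return   # already separate units: writers proceed in parallel
        start, end = unit
        mid = (start + end) // 2
        if end - start <= self.min_size or (off_a < mid) == (off_b < mid):
            return   # unit too small to split, or a true write conflict
        self.units.remove(unit)
        self.units += [(start, mid), (mid, end)]
```

After a split, each writer synchronizes only its own smaller unit, which is the source of the higher parallelism claimed for the adaptive strategy; a true conflict (both offsets in the same half) is left to the replication layer to serialize.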