
Graduate Student: 陳柏誠 (Chen, Po-Cheng)
Thesis Title: A Multi-layer Reconfiguration Framework on the Grid-enabled DSM System: Teamster-G
(Chinese title: 可用於格網上的分散式共用記憶體系統之多層次重組架構)
Advisors: 謝錫堃 (Shieh, Ce-Kuen); 梁廷宇 (Liang, Tyng-Yeu)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Institute of Computer & Communication Engineering
Year of Publication: 2006
Academic Year of Graduation: 94 (ROC calendar)
Language: English
Pages: 60
Chinese Keywords: 分散式共用記憶體系統 (DSM system); 重組 (reconfiguration); 非專屬性 (non-dedication); 動態性 (dynamicity); 格網 (Grid)
English Keywords: non-dedication, reconfiguration, DSM, dynamicity, Grid
Views: 79 | Downloads: 1
  • Abstract (translated from the Chinese): Many previous studies have pointed out that a large amount of computing resources is available for exploitation in Grid environments. However, because the computing resources in a Grid are shared and belong to different resource providers, they are both dynamic and non-dedicated. These two characteristics adversely affect both the jobs submitted by Grid users and the jobs run by the resource providers themselves: mutual competition for computing resources degrades the execution performance of both sides and lowers the overall throughput of the resources. To address this problem, this thesis proposes a multi-layer reconfiguration framework for Grid environments, aimed at exploiting shared Grid resources efficiently. The framework provides three kinds of system reconfiguration, namely virtual processor reconfiguration, node reconfiguration, and cluster reconfiguration, each suited to a different workload condition of the Grid resources. We implemented this multi-layer reconfiguration concept on Teamster-G, a transparent Grid-enabled distributed shared memory system. Our experimental results show that the framework not only helps Teamster-G fully utilize the CPU cycles available in the Grid environment but also minimizes interference with Grid resource providers. More importantly, our work effectively improves the overall throughput of the Grid resources.

    Many studies have shown that a large amount of computational power can be exploited in the Grid environment. However, two characteristics of the Grid, its dynamicity and its non-dedication, raise new problems that must be solved: they cause performance slowdown for both the guest applications of Grid systems and the host applications of Grid resource providers. To solve these problems, this thesis presents a multi-layer reconfiguration framework for the Grid. The framework provides three incremental capabilities, i.e., virtual processor reconfiguration, node reconfiguration, and cluster reconfiguration, to adapt to the diverse workload conditions of Grid resources. The proposed framework is implemented on Teamster-G, a transparent Grid-enabled distributed shared memory (DSM) system. According to our experiments, the multi-layer reconfiguration framework allows Teamster-G not only to fully utilize the abundant CPU cycles available in the Grid environment but also to minimize resource contention with Grid resource providers. More importantly, our work also effectively improves the throughput of the Grid resources as a whole.
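    The escalation among the three reconfiguration layers can be illustrated with a small decision routine. The function name, inputs, and threshold values below are illustrative assumptions, not part of Teamster-G's actual implementation; the sketch only conveys the idea of choosing the lightest-weight reconfiguration that relieves contention with the resource owner.

```python
# Hypothetical sketch of a multi-layer reconfiguration policy.
# All thresholds and names are illustrative assumptions, not Teamster-G's API.

def choose_reconfiguration(host_load: float, idle_nodes: int) -> str:
    """Pick the lightest-weight reconfiguration layer.

    host_load  -- fraction of CPU consumed by the resource owner's own jobs
    idle_nodes -- number of idle nodes remaining in the current cluster
    """
    if host_load < 0.25:
        # Owner is mostly idle: keep harvesting cycles, no change needed.
        return "none"
    if host_load < 0.60:
        # Moderate owner load: shrink the guest's virtual processors on
        # this node (virtual processor reconfiguration).
        return "virtual-processor"
    if idle_nodes > 0:
        # Heavy owner load but spare capacity nearby: migrate the working
        # threads to an idle node (node reconfiguration).
        return "node"
    # The whole cluster is busy: move the job to another cluster
    # (cluster reconfiguration).
    return "cluster"
```

    In this sketch each layer is tried in order of increasing cost, mirroring the "incremental capabilities" described in the abstract.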

    Contents
    Tables
    Illustrations
    Chapter 1 Introduction
        1.1 The Purposes of Grid Computing
        1.2 The Importance of Grid-enabled DSM Systems
        1.3 Issues in Grid-enabled DSM Systems
        1.4 The Multi-layer Reconfiguration Framework
        1.5 The Outline of This Thesis
    Chapter 2 Background
        2.1 Distributed Shared Memory Systems
        2.2 Our Experimental Environment: Teamster-G
            2.2.1 The Overview of Teamster-G
            2.2.2 The Resource Allocation and Program Execution of Teamster-G
            2.2.3 The Two-layer Thread Architecture of Teamster-G
    Chapter 3 Multi-layer Reconfiguration Framework
        3.1 The Multi-layer Reconfiguration Model
        3.2 The System Architecture
        3.3 The Operation of the Multi-layer Reconfiguration Framework
    Chapter 4 Implementation
        4.1 Monitoring Module
        4.2 Working Thread Migration Manager
            4.2.1 Working Thread Migration
            4.2.2 The Synchronization of Working Threads
        4.3 Virtual Processor Reconfiguration Manager
        4.4 Node Reconfiguration Manager and Coordinator
            4.4.1 Adding a Normal Node
            4.4.2 Deleting a Normal Node
            4.4.3 The Memory Consistency Maintenance
        4.5 Cluster Reconfiguration Manager and Coordinator
    Chapter 5 Experiments and Results
        5.1 The Performance of Intra-cluster Reconfiguration
            5.1.1 The Experimental Configuration and Evaluation Methods
            5.1.2 The Analysis of Experimental Results
        5.2 The Performance of Inter-cluster Reconfiguration
            5.2.1 The Experimental Configuration and Evaluation Methods
            5.2.2 The Analysis of Experimental Results
    Chapter 6 Related Works
        6.1 The SRS Migration Framework
        6.2 The Cactus Migration Framework
        6.3 The GridWay Project
        6.4 The NDDE Project
    Chapter 7 Conclusions and Future Works
    Bibliography

    1. Eduardo Huedo, R.S.M. & Llorente, I.M., “The GridWay Framework for Adaptive Scheduling and Execution on Grids”, SCALABLE COMPUTING: PRACTICE AND EXPERIENCE Volume 6, No. 3, pp. 1 – 8, September 2005.

    2. Kaoutar El Maghraoui, B.S. & Varela, C., “An Architecture for Reconfigurable Iterative MPI Applications in Dynamic Environments”, Proc. of the Sixth International Conference on Parallel Processing and Applied Mathematics (PPAM 2005), LNCS 3911, Poznan, Poland, pp. 258 – 271, September 2005.

    3. Reynaldo C. Novaes, P.R. & Cirne, W., “Non-Dedicated Distributed Environment: A Solution for Safe and Continuous Exploitation of Idle Cycles”, SCALABLE COMPUTING: PRACTICE AND EXPERIENCE Volume 6, No. 3, pp. 107 – 115, September 2005.

    4. Tyng-Yeu Liang, Chun-Yi Wu & Ce-Kuen Shieh, “Teamster-G: A Grid-enabled Software DSM System”, In Proc. Fifth International Workshop on Distributed Shared Memory (DSM 2005), Cardiff, UK, May 2005. Held in conjunction with CCGrid 2005.

    5. Vadhiyar, S.S. & Dongarra, J.J., “Self Adaptivity in Grid Computing”, CONCURRENCY AND COMPUTATION: PRACTICE AND EXPERIENCE Vol. 17, Issue 2-4, pp. 235 – 257, 2005.

    6. Chao-Tung Yang, C.L. & Li, K., “A Resource Broker for Computing Nodes Selection in Grid Computing Environments”, Lecture Notes in Computer Science, Springer-Verlag Heidelberg Volume 3251, pp. 931-934, September 2004.

    7. Eduardo Huedo, R.S.M. & Llorente, I.M., “A framework for adaptive execution in grids”, Software: Practice and Experience Volume 34, Issue 7, pp. 631 – 651, June 2004.

    8. I. Foster & Kesselman, C., “The Grid 2: Blueprint for a New Computing Infrastructure”, Morgan Kaufmann: San Francisco, CA, 2004.

    9. Johan Tordsson (Umeå University, Sweden), “A Grid Resource Broker Supporting Advance Reservations and Benchmark-Based Resource Selection”, PARA'04 Workshop on State-of-the-Art in Scientific Computing, June 2004.

    10. D.B. Weatherly & D.K. Lowenthal, “Dyn-MPI: supporting MPI on non-dedicated clusters (extended version)”, Technical Report 03-003, University of Georgia, Jan 2003.

    11. J. B. Chang, Tyng-Yeu Liang & Ce-Kuen Shieh, “A Transparent Distributed Shared Memory for Clustered Symmetric Multiprocessors”, accepted for publication in the special issue of The Journal of Supercomputing, September 6, 2003.

    12. Nicholas T. Karonis, B.R.T. & Foster, I.T., “MPICH-G2: A Grid-enabled implementation of the Message Passing Interface”, Journal of Parallel and Distributed Computing, pp. 551-563, 2003.

    13. Y.-S. Kee, J.K. & Ha, S., “ParADE: An OpenMP Programming Environment for SMP Cluster Systems”, In Proceedings of Supercomputing (SC2003), November 2003.

    14. David Abramson, R.B. & Giddy, J., “A Computational Economy for Grid Computing and its Implementation in the Nimrod-G Resource Broker”, Future Generation Computer Systems (FGCS) Journal Volume 18, pp.1061-1074, 2002.

    15. Lawlor, M.B. & Kale, L., “Adaptive MPI”, Technical Report 02-05, University of Illinois, 2002.

    16. Czajkowski, K. & Kesselman, C., “Grid information services for distributed resource sharing”, In Proceedings of the 10th IEEE Symposium on High-Performance Distributed Computing (HPDC). IEEE Computer Society Press. 2001.

    17. Francine Berman, K.K. & Johnsson, L., “THE GrADS PROJECT: SOFTWARE SUPPORT FOR HIGH-LEVEL GRID APPLICATION DEVELOPMENT”, International Journal of High-Performance Computing Applications Vol. 15, No. 4, pp. 327 – 344, 2001.

    18. Frey, J., Tannenbaum, T., Livny, M., Foster, I. & Tuecke, S., “Condor-G: A Computation Management Agent for Multi-Institutional Grids”, The Proceedings of 10th IEEE International Symposium on High Performance Distributed Computing, pp. 55-63, 2001.

    19. Gabrielle Allen, D.A. & Shalf, J., “The Cactus Worm: Experiments with Dynamic Resource Selection and Allocation in a Grid Environment”, International Journal of High-Performance Computing Applications Vol. 15, No.4, pp. 345 – 358, 2001.

    20. I. Foster, C. Kesselman & S. Tuecke, “The Anatomy of the Grid: Enabling Scalable Virtual Organizations”, International J. Supercomputer Applications 15(3), 2001.

    21. J. B. Chang, T.L., “Teamster: A Transparent Distributed Shared memory for Clustered Symmetric Multiprocessors”, The 2001 Workshop on Distributed Shared Memory on Clusters at IEEE CCGrid2001, pp. 508-513, 16-18 May 2001.

    22. M. Bhandarkar, L.K. & Hoeflinger, J., “Adaptive load balancing for MPI programs”, in: International Conference on Computational Science, San Francisco, CA, pp. 108 – 117, May 2001.

    23. Buyya, R., Abramson, D. & Giddy, J., “Nimrod/G: An architecture for a resource management and scheduling system in a global computational Grid”, The 4th International Conference on High Performance Computing in Asia-Pacific Region (HPC Asia 2000), Beijing, China, 2000, IEEE Computer Society Press: Los Alamitos, CA.

    24. Chervenak, A. & Tuecke, S., “The Data Grid: Towards an architecture for the distributed management and analysis of large scientific data sets”, Journal of Network and Computer Applications, pp. 187-200, 2000.

    25. H. Casanova, G.O. & Wolski, R., “The AppLeS Parameter Sweep Template: User-Level Middleware for the Grid”, Proceedings of Supercomputing, 2000.

    26. Thitikamol, K. & Keleher, P., “Thread migration and load balancing in non-dedicated environments”, In The Proceedings of the 14th International Parallel and Distributed Processing Symposium, pp. 583-588, 2000.

    27. Von Laszewski, G. & Tuecke, S., “CoG Kits: A Bridge between Commodity Distributed Computing and High-Performance Grids”, ACM 2000 Java Grande Conference, 2000.

    28. I. Foster, C.K. & Roy, A., “A Distributed Resource Management Architecture that Supports Advance Reservations and Co-Allocation”, International Workshop on Quality of Service, 1999.

    29. R. Wolski, N.S. & Hayes, J., “The network weather service: A distributed resource performance forecasting service for metacomputing”, Journal of Future Generation Computing Systems, pp. 757-768, October 1999.

    30. Weiwu Hu, W.S. & Tang, Z., “JIAJIA: An SVM System Based on A New Cache Coherence Protocol”, Proceedings of the High Performance Computing and Networking (HPCN'99), pp.463-472, 1999.

    31. Y. Charlie Hu, H.L. & Zwaenepoel, W., “OpenMP for Networks of SMPs”, Proceedings of the 13th International Parallel Processing Symposium, pp. 302-310, 1999.

    32. I. Foster & C. Kesselman, “The Globus Project: A Status Report”, Proc. IPPS/SPDP '98 Heterogeneous Computing Workshop, pp. 4-18, 1998.

    33. I. Foster & Tuecke, S., “Software infrastructure for the I-WAY metacomputing experiment”, Concurrency: Practice and Experience, pp. 567-581, 1998.

    34. Casanova, H. & Dongarra, J., “NetSolve: A network-enabled server for solving computational science problems”, International Journal of High Performance Computing Applications, pp. 212-223, 1997.

    35. C. Amza, A.C. & Zwaenepoel, W., “TreadMarks: Shared Memory Computing on Networks of Workstations”, IEEE Computer, pp.18-28, 1996.

    36. DeFanti, T. & Kuhfuss, T., “Overview of the I-WAY: Wide-area visual supercomputing”, International Journal of High Performance Computing Applications, pp. 123-130, 1996.

    37. J. Garcia, E.A., “Dynamic data distribution with control flow analysis”, in: Supercomputing '96, November 1996.

    38. Message Passing Interface Forum (MPIF), “MPI-2: extensions to the message-passing interface”, Technical Report, University of Tennessee, Knoxville, 1996.

    39. A. Barak, O.L. & Yarom, Y., “The NOW Mosix and its Preemptive Process Migration Scheme”, Bulletin of the IEEE Technical Committee on Operating Systems and Application Environments, pp. 5-11, 1995.

    40. S. Zhou, X.Z. & Delisle, P., “Utopia: a Load Sharing Facility for Large, Heterogeneous Distributed Computer Systems”, Software Practice and Experience, pp. 1305-1336, 1993.

    41. Carter, J.B., Bennett, J.K. & Zwaenepoel, W., “Implementation and Performance of Munin”, In Proceedings of 13th ACM Symposium on Operating System Principles, pp. 152-164, 1991.

    42. Bennett, J.K., Carter, J.B. & Zwaenepoel, W., “Munin: Distributed shared memory based on type-specific memory coherence”, In Proceedings of the 2nd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, pp. 168-176, March 1990.

    43. Chase, J.S. & Littlefield, R.J., “The Amber system: Parallel programming on a network of multiprocessors”, In Proceedings of the 12th ACM Symposium on Operating Systems Principles, pp. 147-158, December 1989.

    44. Li, K. & Hudak, P., “Memory coherence in shared virtual memory systems”, ACM Transactions on Computer Systems, pp. 321-359, November 1989.

    45. Li, K., “IVY: A shared virtual memory system for parallel computing”, In Proceedings of the 1988 International Conference on Parallel Processing (ICPP'88), pp. 94-101, 1988.

    46. Douglis, F. & Ousterhout, J., “Process migration in the Sprite operating system”, In Proceedings of the 7th International Conference on Distributed Computing Systems, pp. 18-25, September 1987.

    Full text: on campus: available immediately; off campus: publicly available from 2006-08-14.