Title: Adaptive granularity: transparent integration of fine- and coarse-grain communication
Abstract: The granularity of shared data is one of the key factors affecting the performance of distributed shared memory (DSM) machines. Given that programs exhibit quite different sharing patterns, providing only one or two fixed granularities cannot result in efficient use of resources. On the other hand, supporting arbitrary granularity sizes significantly increases not only hardware complexity but software overhead as well. Furthermore, the efficient use of arbitrary granularities puts the burden on users to provide information about program behavior to compilers and/or runtime systems. Such requirements tend to restrict the programmability of the shared memory model. In this paper we present a new communication scheme, called adaptive granularity (AG). Adaptive granularity makes it possible to transparently integrate bulk transfer into the shared memory model by supporting variable-size granularity and memory replication. It consists of two protocols: one for small data and another for large data. For small data, the standard hardware DSM protocol is used and the granularity is fixed to the size of a cache line. For large array data, the protocol for bulk data is used instead and the granularity varies depending on the sharing behavior of the application at runtime. Simulation results show that AG improves performance by up to 43% over hardware implementations of DSM (e.g., DASH, Alewife). Compared with an equivalent architecture that supports fine-grain memory replication at the fixed granularity of a cache line (e.g., Typhoon), AG reduces execution time by up to 35%.
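The abstract's two-protocol dispatch (fixed cache-line granularity for small data, runtime-adaptive granularity for large array data) can be illustrated with a minimal sketch. All names, sizes, and the run-length heuristic below are illustrative assumptions, not details from the paper:

```python
# Hypothetical sketch of AG-style protocol dispatch (not the paper's
# actual implementation): small requests fall back to the standard
# cache-line DSM protocol, while large array transfers use a bulk
# protocol whose grain adapts to observed runtime sharing behavior.

CACHE_LINE = 64   # bytes; assumed fixed granularity of the hardware DSM protocol
MAX_BULK = 4096   # assumed upper bound on an adaptive bulk-transfer grain

def choose_granularity(request_bytes, observed_run_bytes):
    """Pick a transfer grain for a remote-memory request.

    request_bytes      -- size of the faulting access's data object
    observed_run_bytes -- contiguous bytes this node accessed recently
                          (a stand-in for runtime sharing behavior)
    """
    if request_bytes <= CACHE_LINE:
        # Small data: standard hardware DSM protocol,
        # one cache line per coherence transaction.
        return CACHE_LINE
    # Large array data: grow the grain toward the observed run length,
    # rounded down to whole cache lines and capped at the bulk limit.
    grain = min(max(observed_run_bytes, CACHE_LINE), MAX_BULK)
    return (grain // CACHE_LINE) * CACHE_LINE

print(choose_granularity(8, 0))              # small object -> 64
print(choose_granularity(100_000, 1_000))    # large array, short runs -> 960
print(choose_granularity(100_000, 100_000))  # long runs -> capped at 4096
```

The point of the sketch is that granularity is selected per request at runtime, transparently to the programmer, rather than fixed system-wide or supplied as a user annotation.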
Publication Year: 2002
Publication Date: 2002-12-23
Language: en
Type: article
Indexed In: ['crossref']
Access and Citation
Cited By Count: 8