Title: Making Massively Parallel Computations Available for End Users
Abstract: We first define what we mean by massively parallel computation: using our finite element scientific software CimLib, we determine the core-count threshold for a massively parallel computation to be 32 cores for a two-dimensional simulation and 128 for a three-dimensional one. Beyond this threshold, the number of neighbouring domains of a given domain stays almost constant. We also point out that massively parallel computation uses and generates a huge amount of data that must be exploited in situ. We then describe the parallelisation and optimisation work carried out to adapt CimLib to massively parallel computers. We briefly present the parallelisation of the finite element method, which leads to the resolution of large linear systems with parallel preconditioners using the PETSc library. An original parallelisation strategy for mesh adaptation is also presented: each subdomain is remeshed independently under the constraint of blocked interfaces, and the mesh is then repartitioned so as to move the interfaces. These two steps are iterated until a good mesh is obtained everywhere. We also describe the optimisation made to obtain good parallel performance by introducing a permute-cut-and-paste procedure: poor-quality submeshes are extracted, remeshed, and pasted back into the complete mesh. Finally, a parallel performance analysis is carried out on a massively parallel computer using several hundreds or thousands of cores. The results demonstrate the very good scalability of CimLib, including mesh adaptation and linear solver resolution. The use of the parallel visualisation software ParaView allows data to be processed directly on site, so that the end user only needs to download pictures or movies.
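The iterative mesh-adaptation strategy summarised in the abstract (remesh each subdomain independently with interfaces blocked, then repartition to move the interfaces, and repeat) can be illustrated with a minimal toy sketch. This is not CimLib's actual API; all function names are hypothetical, and the "mesh" is reduced to a one-dimensional list of element qualities with partition boundaries playing the role of blocked interfaces.

```python
# Hedged sketch of the abstract's iterative parallel remeshing loop,
# on a toy 1-D "mesh" of element qualities in [0, 1]. All names and
# the mesh representation are illustrative assumptions, not CimLib's.

def remesh_blocked_interfaces(quality, partition_starts):
    """Remesh each subdomain independently: interior elements are
    improved (quality set to 1.0), while elements touching a blocked
    inter-subdomain interface are left untouched."""
    blocked = set()
    for s in partition_starts[1:]:          # skip the global domain start
        blocked.update({s - 1, s})          # elements on both sides of an interface
    return [q if i in blocked else 1.0 for i, q in enumerate(quality)]

def repartition(partition_starts, n, shift=1):
    """Move the interfaces by shifting interior partition boundaries,
    so that previously blocked elements become interior next pass."""
    return [0] + [min(s + shift, n - 1) for s in partition_starts[1:]]

def adapt(quality, partition_starts, target=0.9, max_iter=10):
    """Iterate remeshing and repartitioning until every element
    reaches the target quality (a 'good mesh everywhere')."""
    for _ in range(max_iter):
        quality = remesh_blocked_interfaces(quality, partition_starts)
        if min(quality) >= target:
            break
        partition_starts = repartition(partition_starts, len(quality))
    return quality

# Example: eight poor-quality elements split into two subdomains.
final = adapt([0.5] * 8, [0, 4])
```

In this toy run, the elements adjacent to the interface at index 4 stay poor on the first pass, the repartition step slides the interface, and the next passes remesh the formerly blocked elements, mirroring the two-step iteration described in the abstract.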
Publication Year: 2011
Publication Date: 2011-03-16
Language: en
Type: article
Indexed In: ['crossref']
Access and Citation
Cited By Count: 1