University of Southampton Institutional Repository

Potential of chaotic iterative solvers for CFD


Hawkes, J., Vaz, G., Turnock, S.R., Cox, S.J. and Phillips, A.B. (2014) Potential of chaotic iterative solvers for CFD. NuTTS ’14: 17th Numerical Towing Tank Symposium, Marstrand, Sweden. 28 - 30 Sep 2014. 6 pp.

Record type: Conference or Workshop Item (Paper)

Abstract

Computational Fluid Dynamics (CFD) has enjoyed the speed-up available from supercomputer technology advancements for many years. In the coming decade, however, the architecture of supercomputers will change, and CFD codes must adapt to remain current.

Based on predictions of next-generation supercomputer architectures, it is expected that the first computer capable of 10¹⁸ floating-point operations per second (1 ExaFLOPS) will arrive in around 2020. Its architecture will be governed by electrical power limitations, whereas previously the main limitation was pure hardware speed. This has two significant repercussions. Firstly, due to physical power limitations of modern chips, core clock rates will decrease in favour of increasing concurrency. This trend can already be seen in the growth of accelerated “many-core” systems, which use graphics processing units (GPUs) or co-processors. Secondly, reliance on inter-node networks, typically using copper-wire or optical interconnects, must be reduced due to their proportionally large power consumption. This places more focus on shared-memory communication, with distributed-memory communication (predominantly MPI, the “Message Passing Interface”) becoming less important.

The current most powerful computer, Tianhe-2, capable of 33 PFLOPS, consists of 3,120,000 cores. The first exascale machine, which will be 30 times more powerful, is likely to be 300 times more parallel – a massive acceleration in parallelization compared to the last 50 years. This concurrency will come primarily from intra-node parallelization. Whereas Tianhe-2 features an already-large O(100) cores per node, an exascale machine must consist of O(1k-10k) cores per node.
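As a rough check of these figures (the arithmetic is not spelled out in the abstract):

\[
30 \times 33\,\text{PFLOPS} \approx 1\,\text{EFLOPS}, \qquad 300 \times 3.12\times10^{6}\ \text{cores} \approx 10^{9}\ \text{cores},
\]

so each core of such a machine would deliver only about one tenth of the floating-point throughput of a Tianhe-2 core, consistent with the expected drop in clock rates.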

CFD has benefited from weak scalability (the ability to retain performance with a constant elements-per-core ratio) for many years; its strong scalability (the ability to reduce the elements-per-core ratio) has been poor and mostly irrelevant. With the shift to massive parallelism in the next few years, the strong scalability of CFD codes must be investigated and improved.
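For reference, with T_1 the runtime on one core and T_N the runtime on N cores, the standard efficiency measures (general definitions, not taken from the paper) are:

\[
E_{\text{strong}}(N) = \frac{T_1}{N\,T_N} \ \text{(fixed total problem size)}, \qquad E_{\text{weak}}(N) = \frac{T_1}{T_N} \ \text{(problem size grown with } N\text{)}.
\]

Strong scaling thus corresponds to reducing the elements-per-core ratio as cores are added, while weak scaling keeps that ratio constant.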

In this paper, a brief summary of earlier results is given, which identified the linear equation-system solver as one of the least scalable parts of the code. Based on these results, a chaotic iterative solver is proposed: a totally asynchronous, non-stationary linear solver designed for high scalability. This paper focuses on the suitability of such a solver by investigating the linear equation systems produced by typical CFD problems. If the results are optimistic, future work will be carried out to implement and test chaotic iterative solvers.
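To illustrate the underlying idea only (a minimal sequential sketch of chaotic relaxation, not the solver investigated in the paper): each unknown is updated in an arbitrary order using whatever values of the other unknowns happen to be available, with no fixed sweep pattern or synchronisation between updates.

# Minimal sketch of chaotic (asynchronous-style) relaxation for A x = b.
# Illustration only: updates are applied in a random order, each using the
# latest available values of the other unknowns, with no fixed sweep pattern.
# Assumes A is strictly diagonally dominant so the iteration converges.
import numpy as np

def chaotic_relaxation(A, b, sweeps=200, seed=0):
    rng = np.random.default_rng(seed)
    n = len(b)
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in rng.permutation(n):              # arbitrary update order
            x[i] = (b[i] - A[i, :] @ x + A[i, i] * x[i]) / A[i, i]
    return x

# Example: a small diagonally dominant system
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
x = chaotic_relaxation(A, b)
print(x, np.linalg.norm(A @ x - b))

In a truly chaotic solver each core would perform such updates on its own rows continuously and independently, reading neighbouring values whenever they arrive, which removes the synchronisation points that limit strong scalability.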

Text: james_hawkes_chaotic_solvers_Nutts2014.pdf - Author's Original (3MB)

More information

e-pub ahead of print date: September 2014
Venue - Dates: NuTTS ’14: 17th Numerical Towing Tank Symposium, Marstrand, Sweden, 2014-09-28 - 2014-09-30
Organisations: National Oceanography Centre, Fluid Structure Interactions Group

Identifiers

Local EPrints ID: 368987
URI: http://eprints.soton.ac.uk/id/eprint/368987
PURE UUID: da17aaa9-8505-423c-ae85-bda90a89f477
ORCID for S.R. Turnock: orcid.org/0000-0001-6288-0400
ORCID for A.B. Phillips: orcid.org/0000-0003-3234-8506

Catalogue record

Date deposited: 18 Sep 2014 13:29
Last modified: 15 Mar 2024 03:21

Contributors

Author: J. Hawkes
Author: G. Vaz
Author: S.R. Turnock
Author: S.J. Cox
Author: A.B. Phillips
