University of Southampton Institutional Repository

Extrapolation performance of convolutional neural network-based combustion models for large-eddy simulation: influence of Reynolds number, filter kernel and filter size


Arumapperuma, Geveen, Sorace, Nicola, Jansen, Matthew, Bladek, Oliver, Nista, Ludovico, Sakhare, Shreyans, Berger, Lukas, Pitsch, Heinz, Grenga, Temistocle and Attili, Antonio (2025) Extrapolation performance of convolutional neural network-based combustion models for large-eddy simulation: influence of Reynolds number, filter kernel and filter size. Flow, Turbulence and Combustion, [112254]. (doi:10.1007/s10494-025-00643-w).

Record type: Article

Abstract

The extrapolation performance of Convolutional Neural Network (CNN)-based models for Large-Eddy Simulations (LES) has been investigated in the context of turbulent premixed combustion. The study utilises a series of Direct Numerical Simulation (DNS) datasets of turbulent premixed methane/air and hydrogen/air jet flames to train the CNN models. The methane/air flames, which are characterised by increasing Reynolds numbers, are used to model the subgrid-scale flame wrinkling. The hydrogen/air flame, exhibiting complex thermodiffusive instability, is employed to test the ability of the CNN-based combustion models to predict the filtered progress variable source term. This study focuses on the influence of varying training Reynolds numbers, filter sizes, and filter kernels to evaluate the performance of the CNN models under out-of-sample conditions, i.e., conditions not seen during training. The objectives of this study are threefold: (i) analyse the performance of CNN models at Reynolds numbers different from the one used for training; (ii) analyse the performance of CNN models at filter sizes different from those used for training; (iii) assess the influence of using different filter kernels (i.e., Gaussian and box filter kernels) between training and testing, to emulate a posteriori applications. The results demonstrate that the CNN models show good extrapolation performance when the training Reynolds number is sufficiently high. Conversely, when CNN models are trained on low-Reynolds-number flame data, their performance degrades as they are applied to flames with progressively higher Reynolds numbers. When these CNN models are tested on datasets with filter sizes not included in the training process, they exhibit sufficient interpolation capabilities, while their extrapolation performance is less precise but still satisfactory overall. This indicates that CNN models can be effectively trained using data filtered with a limited range of filter sizes and then successfully applied across a broader spectrum of filter sizes. Furthermore, when CNNs trained on box-filtered data are applied to Gaussian-filtered data, or vice versa, the models perform well for smaller filter sizes. However, as the filter size increases, the accuracy of the predictions diminishes. Interestingly, increasing the quantity of training data does not significantly enhance model performance. Yet, when training data are distributed with greater weighting towards larger filter sizes, the model's overall performance improves. This suggests that the strategic selection and weighting of training data can lead to more robust generalisation across different filter conditions.
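
As an illustration of the a priori filtering step described above, the following minimal Python sketch applies box (top-hat) and Gaussian kernels of several widths to a scalar field using scipy.ndimage. The synthetic field, the filter widths, and the sigma-to-width convention (sigma = Delta/sqrt(12), matching the second moment of a box filter of width Delta) are illustrative assumptions, not the actual DNS data or settings of the study.

import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

# Synthetic stand-in for a DNS progress-variable field; the study uses
# methane/air and hydrogen/air jet-flame DNS data instead.
rng = np.random.default_rng(0)
c = rng.random((64, 64, 64))

# Filter widths in grid points (illustrative values only).
filter_sizes = [4, 8, 16]

filtered = {}
for delta in filter_sizes:
    # Box (top-hat) filter of width delta.
    c_box = uniform_filter(c, size=delta, mode="wrap")
    # Gaussian filter with its second moment matched to a box filter of the
    # same width: sigma = delta / sqrt(12) (a common convention, assumed here).
    c_gauss = gaussian_filter(c, sigma=delta / np.sqrt(12.0), mode="wrap")
    filtered[delta] = (c_box, c_gauss)

# A cross-kernel test of the kind discussed above would train a CNN on fields
# filtered with one kernel (e.g. box) and evaluate it on the other (Gaussian).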

Text: s10494-025-00643-w - Version of Record (5MB)
Available under License Creative Commons Attribution.

More information

Accepted/In Press date: 12 February 2025
Published date: 24 March 2025
Keywords: Convolutional neural networks, Deep learning, Direct numerical simulations, Large-Eddy simulation closure, Turbulent combustion

Identifiers

Local EPrints ID: 500946
URI: http://eprints.soton.ac.uk/id/eprint/500946
ISSN: 1386-6184
PURE UUID: 713f866b-4b61-456f-90df-79086457fab0
ORCID for Temistocle Grenga: orcid.org/0000-0002-9465-9505

Catalogue record

Date deposited: 19 May 2025 17:02
Last modified: 22 Aug 2025 02:38

Contributors

Author: Geveen Arumapperuma
Author: Nicola Sorace
Author: Matthew Jansen
Author: Oliver Bladek
Author: Ludovico Nista
Author: Shreyans Sakhare
Author: Lukas Berger
Author: Heinz Pitsch
Author: Temistocle Grenga
Author: Antonio Attili
