
7 - Data-Driven Modeling for Coarse Graining

from Part I - Paradigms and Tools

Published online by Cambridge University Press: 31 January 2025

Fernando F. Grinstein, Los Alamos National Laboratory
Filipe S. Pereira, Los Alamos National Laboratory
Massimo Germano, Duke University, North Carolina

Summary

This chapter gives an overview of data-driven methods applied to turbulence closure modeling for coarse graining. It provides a non-exhaustive introduction to the data-driven approaches that have been used for closure modeling, including a discussion of model consistency, the ultimate indicator of a successful model, and other key concepts. Two specific methods are then presented in more detail: a neural network, representative of nontransparent black-box approaches, and a specific type of evolutionary algorithm, representative of transparent approaches that yield explicit mathematical expressions. The importance of satisfying physical constraints is emphasized, and methods for choosing the most relevant input features are suggested. Several recent applications of data-driven methods to subgrid closure modeling are discussed, for both nonreactive and reactive flow configurations. The chapter concludes with current trends and an assessment of what can realistically be expected of data-driven methods for coarse graining.
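
As a purely illustrative sketch of the black-box category mentioned above, the snippet below trains a small fully connected network to regress a subgrid quantity from filtered-flow input features. The choice of PyTorch, the number of features, the network size, and the stand-in data are all assumptions made here for illustration and are not taken from the chapter.

import torch
import torch.nn as nn

# Hypothetical setup: the input features (e.g. invariants of the filtered
# strain-rate and rotation-rate tensors) and the target (an exact subgrid
# term extracted from a filtered DNS database) are placeholders, not the
# chapter's actual choices.
n_features = 5

model = nn.Sequential(
    nn.Linear(n_features, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),  # scalar subgrid quantity to be regressed
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in data; in practice x and y would come from filtered DNS fields.
x = torch.randn(1024, n_features)
y = torch.randn(1024, 1)

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

A transparent counterpart, such as the evolutionary algorithm discussed in the chapter, would instead evolve an explicit algebraic expression in the same input features, which is what makes that class of approaches interpretable.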

Type: Chapter
Information: Coarse Graining Turbulence: Modeling and Data-Driven Approaches and their Applications, pp. 202–221
Publisher: Cambridge University Press
Print publication year: 2025

