1. Szames E, Ammar K, Tomatis D, Martinez J. Few-Group Cross Sections Library by Active Learning with Spline Kernels. EPJ Web of Conferences 2021. DOI: 10.1051/epjconf/202124706012
Abstract
This work deals with the representation of homogenized few-group cross-section libraries by machine learning. A Reproducing Kernel Hilbert Space (RKHS) is used with different pool-based active learning strategies to obtain an optimal support. Specifically, a spline kernel is used, and the results are compared to the multi-linear interpolation used in industry, discussing the reduction in library size and the overall performance. A standard PWR fuel assembly provides the use case (OECD-NEA Burn-up Credit Criticality Benchmark [1]).
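
The loop sketched below is a minimal illustration of the idea, not the authors' implementation: a greedy pool-based strategy adds, at each step, the candidate point where the current RKHS interpolant is least constrained (largest posterior variance), using a simple first-order spline-type kernel on one normalized state parameter. The toy_xs evaluator is a hypothetical stand-in for an actual lattice calculation, and the paper's exact kernel and acquisition criterion may differ.

    # Illustrative sketch only: greedy pool-based active learning with an
    # RKHS interpolant on one normalized state parameter (e.g. burnup).
    import numpy as np

    def spline_kernel(x, y):
        # Simple first-order spline-type kernel on [0, 1]; the paper's exact
        # spline kernel may differ.
        return 1.0 + np.minimum(x[:, None], y[None, :])

    def toy_xs(burnup):
        # Hypothetical placeholder for a homogenized cross section; in
        # practice this value would come from a lattice-code calculation.
        return 0.05 + 0.01 * np.exp(-3.0 * burnup) + 0.002 * burnup

    pool = np.linspace(0.0, 1.0, 201)   # candidate state points (normalized)
    support = [0.0, 1.0]                # start from the domain ends
    ridge = 1e-8                        # small jitter for numerical stability

    for _ in range(10):                 # grow the support by 10 points
        X = np.array(support)
        Kinv = np.linalg.inv(spline_kernel(X, X) + ridge * np.eye(len(X)))
        k_pool = spline_kernel(pool, X)
        # Posterior variance (power function): large where the support is poor.
        var = spline_kernel(pool, pool).diagonal() - np.einsum(
            "ij,jk,ik->i", k_pool, Kinv, k_pool)
        support.append(pool[np.argmax(var)])   # query the least-covered point

    # Final RKHS interpolant on the selected support.
    X = np.array(support)
    alpha = np.linalg.solve(spline_kernel(X, X) + ridge * np.eye(len(X)), toy_xs(X))
    predict = lambda x: spline_kernel(np.atleast_1d(x), X) @ alpha

For the same evaluation budget, the multi-linear baseline discussed in the abstract would instead fix the support on a Cartesian grid, which is the comparison the paper makes.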
2. Szames E, Ammar K, Tomatis D, Martinez J. Few-Group Cross Sections Modeling by Artificial Neural Networks. EPJ Web of Conferences 2021. DOI: 10.1051/epjconf/202124706029
Abstract
This work deals with the modeling of homogenized few-group cross sections by Artificial Neural Networks (ANN). A comprehensive sensitivity study on data normalization, network architectures, and training hyper-parameters, specifically for deep and shallow feed-forward ANNs, is presented. The optimal models, in terms of reduction in library size and training time, are compared to multi-linear interpolation on a Cartesian grid. The use case is provided by the OECD-NEA Burn-up Credit Criticality Benchmark [1]. The PyTorch [2] machine learning framework is used.
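
As a minimal illustration of the setting (not the paper's tuned model), the PyTorch snippet below trains a shallow feed-forward network mapping a few state parameters to a vector of homogenized few-group cross sections; the data are random placeholders, and the normalization, architecture, and hyper-parameters are arbitrary choices rather than the optima reported in the study.

    # Illustrative sketch only: shallow feed-forward regression of few-group
    # cross sections from state parameters, with placeholder data.
    import torch
    from torch import nn

    n_params, n_xs = 4, 8              # state parameters in, cross sections out
    x = torch.rand(2048, n_params)     # placeholder training inputs
    y = torch.rand(2048, n_xs)         # placeholder cross-section targets

    # Min-max normalization of the inputs (one possible data normalization).
    x = (x - x.min(0).values) / (x.max(0).values - x.min(0).values + 1e-12)

    model = nn.Sequential(             # one hidden layer: a "shallow" network
        nn.Linear(n_params, 64),
        nn.Tanh(),
        nn.Linear(64, n_xs),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for epoch in range(500):           # plain full-batch training loop
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

A deep variant would stack more hidden layers; the trade-off examined in the paper is between such model size and training time and the size of the multi-linear interpolation library it replaces.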