1. Phan J, Sarmad M, Ruspini L, Kiss G, Lindseth F. Generating 3D images of material microstructures from a single 2D image: a denoising diffusion approach. Sci Rep 2024; 14:6498. PMID: 38499588; PMCID: PMC10948834; DOI: 10.1038/s41598-024-56910-9.
Abstract
Three-dimensional (3D) images provide a comprehensive view of material microstructures, enabling numerical simulations unachievable with two-dimensional (2D) imaging alone. However, obtaining these 3D images can be costly and constrained by resolution limitations. We introduce a novel method capable of generating large-scale 3D images of material microstructures, such as metal or rock, from a single 2D image. Our approach circumvents the need for 3D image data while offering a cost-effective, high-resolution alternative to existing imaging techniques. Our method combines a denoising diffusion probabilistic model with a generative adversarial network framework. To compensate for the lack of 3D training data, we implement chain sampling, a technique that utilizes the 3D intermediate outputs obtained by reversing the diffusion process. During the training phase, these intermediate outputs are guided by a 2D discriminator. This technique enables our method to gradually generate 3D images that accurately capture the geometric properties and statistical characteristics of the original 2D input. This study features a comparative analysis of the 3D images generated by our method, SliceGAN (the current state-of-the-art method), and actual 3D micro-CT images, spanning a diverse set of rock and metal types. The results show an improvement of up to three times in the Fréchet inception distance score, a typical metric for evaluating the performance of image generative models, and enhanced accuracy in derived properties compared with SliceGAN. The potential of our method to produce high-resolution and statistically representative 3D images paves the way for new applications in material characterization and analysis.
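The entry above reports results in terms of the Fréchet inception distance (FID). As a reader's aid only, here is a minimal NumPy/SciPy sketch of the Fréchet distance between Gaussian fits of two feature sets; in a real FID computation the features come from a pretrained Inception network, which is omitted here, and the toy feature arrays below are purely illustrative.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_a, feats_b):
    """Fréchet distance between Gaussian fits of two feature sets.

    feats_*: (n_samples, n_features) arrays, e.g. Inception activations.
    d^2 = ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 (C_a C_b)^{1/2})
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):  # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 8))          # stand-in "feature" samples
print(frechet_distance(x, x) < 1e-4)   # identical sets: distance ~ 0 -> True
```

Identical feature sets give a distance near zero, while shifting one set apart drives the distance up through the mean term, which is the behaviour the benchmark above relies on.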
Affiliation(s)
- Johan Phan
- Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway.
- Petricore Norway, Trondheim, Norway.
- Muhammad Sarmad
- Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway.
- Gabriel Kiss
- Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway.
- Frank Lindseth
- Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway.
2. Yin R, Teng Q, Wu X, Zhang F, Xiong S. Three-dimensional reconstruction of granular porous media based on deep generative models. Phys Rev E 2023; 108:055303. PMID: 38115524; DOI: 10.1103/physreve.108.055303.
Abstract
Reconstruction of microstructure in granular porous media, which can be viewed as granular assemblies, is crucial for studying their characteristics and physical properties in fields concerned with the behavior of such media, including petroleum geology and computational materials science. Although many existing studies have investigated grain reconstruction, most treat grains as simplified individuals for discrete reconstruction, which cannot replicate the complex geometrical shapes of grains or the natural interactions between them. In this work, a hybrid generative model based on a deep-learning algorithm is proposed for high-quality three-dimensional (3D) microstructure reconstruction of granular porous media from a single two-dimensional (2D) slice image. The method extracts 2D prior information from the given image and generates the grain set as a whole. Both a self-attention module and an effective pattern loss are introduced to enhance the reconstruction ability of the model. Samples with grains of varied geometrical shapes are used to validate our method, and experimental results demonstrate that the proposed approach can accurately reproduce the complex morphology and spatial distribution of grains without any artificiality. Furthermore, once model training is complete, rapid end-to-end generation of diverse 3D realizations from a single 2D image can be achieved.
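The abstract above mentions a self-attention module added to the generative model. The paper's actual module is not reproduced here; the following is only a generic single-head scaled dot-product self-attention sketch in NumPy, with hypothetical projection matrices `w_q`, `w_k`, `w_v`, to illustrate the mechanism: every spatial position re-weights information from every other position.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(feats, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention over flattened
    spatial positions. feats: (positions, channels)."""
    q, k, v = feats @ w_q, feats @ w_k, feats @ w_v
    weights = softmax(q @ k.T / np.sqrt(k.shape[-1]))  # (positions, positions)
    return weights @ v, weights

rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 4))  # e.g. a flattened 4x4 feature patch, 4 channels
w_q, w_k, w_v = (rng.normal(size=(4, 4)) for _ in range(3))
out, weights = self_attention(feats, w_q, w_k, w_v)
```

Each row of `weights` is a probability distribution over all positions, which is what lets such a module capture long-range spatial structure that small convolutional kernels miss.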
Affiliation(s)
- Rongyan Yin
- College of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China
- Qizhi Teng
- College of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China
- Xiaohong Wu
- College of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China
- Fan Zhang
- College of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China
- Shuhua Xiong
- College of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China
3. Lu S, Jayaraman A. Pair-Variational Autoencoders for Linking and Cross-Reconstruction of Characterization Data from Complementary Structural Characterization Techniques. JACS Au 2023; 3:2510-2521. PMID: 37772182; PMCID: PMC10523369; DOI: 10.1021/jacsau.3c00275.
Abstract
In materials research, structural characterization often requires multiple complementary techniques to obtain a holistic morphological view of a synthesized material. Depending on the availability and accessibility of the different characterization techniques (e.g., scattering, microscopy, spectroscopy), each research facility or academic research lab may have access to high-throughput capability in one technique but face limitations (sample preparation, resolution, access time) with other techniques. Furthermore, one type of structural characterization data may be easier to interpret than another (e.g., microscopy images are easier to interpret than small-angle scattering profiles). Thus, it is useful to have machine learning models that can be trained on paired structural characterization data from multiple techniques (easy and difficult to interpret, fast and slow in data collection or sample preparation) so that the model can generate one set of characterization data from the other. In this paper we demonstrate one such machine learning workflow, Pair-Variational Autoencoders (PairVAE), that works with data from small-angle X-ray scattering (SAXS), which presents information about bulk morphology, and images from scanning electron microscopy (SEM), which presents two-dimensional local structural information on the sample. Using paired SAXS and SEM data of newly observed block copolymer assembled morphologies [open-access data from Doerk G. S. et al., Sci. Adv. 2023, 9(2), eadd3687], we train our PairVAE. After successful training, we demonstrate that the PairVAE can generate SEM images of the block copolymer morphology when given that sample's corresponding 2D SAXS pattern as input, and vice versa. This method can be extended to other soft-material morphologies and serves as a valuable tool for easy interpretation of 2D SAXS patterns, as well as an engine for generating ensembles of similar microscopy images to create a database for downstream calculations of structure-property relationships.
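The PairVAE described above couples two encoder/decoder branches through paired data. The sketch below is an illustration only, not the authors' implementation: it uses hypothetical *linear* encoders/decoders, omits the variational (KL) terms entirely, and just shows the three loss families such a paired model typically balances: self-reconstruction per modality, cross-reconstruction between modalities, and alignment of the two latent codes.

```python
import numpy as np

def pairvae_losses(x_a, x_b, enc_a, dec_a, enc_b, dec_b):
    """Deterministic sketch of paired-autoencoder loss terms.

    x_a, x_b: paired data from two modalities (rows are samples).
    enc_*/dec_*: hypothetical linear encoder/decoder matrices.
    Returns (self-reconstruction, cross-reconstruction, latent alignment).
    """
    z_a, z_b = x_a @ enc_a, x_b @ enc_b            # encode each modality
    self_rec = ((z_a @ dec_a - x_a) ** 2).mean() + ((z_b @ dec_b - x_b) ** 2).mean()
    cross_rec = ((z_a @ dec_b - x_b) ** 2).mean() + ((z_b @ dec_a - x_a) ** 2).mean()
    align = ((z_a - z_b) ** 2).mean()              # tie the two latent codes together
    return self_rec, cross_rec, align

# Sanity check: with identity maps and identical paired data, every term vanishes.
eye = np.eye(4)
x = np.random.default_rng(0).normal(size=(32, 4))
losses = pairvae_losses(x, x, eye, eye, eye, eye)
```

It is the cross-reconstruction term that gives the trained model its headline capability: generating one modality's data (e.g., an SEM image) from the other's latent code (e.g., from a SAXS pattern).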
Affiliation(s)
- Shizhao Lu
- Department of Chemical and Biomolecular Engineering, University of Delaware, Newark, Delaware 19716, United States
- Arthi Jayaraman
- Department of Chemical and Biomolecular Engineering, University of Delaware, Newark, Delaware 19716, United States
- Department of Materials Science and Engineering, University of Delaware, Newark, Delaware 19716, United States
4. Rigby SP. The Anatomy of Amorphous, Heterogeneous Catalyst Pellets. Materials (Basel) 2023; 16:3205. PMID: 37110038; PMCID: PMC10142278; DOI: 10.3390/ma16083205.
Abstract
This review focuses on disordered, or amorphous, porous heterogeneous catalysts, especially those in the form of pellets and monoliths. It considers the structural characterisation and representation of the void space of these porous media, and discusses the latest developments in the determination of key void-space descriptors such as porosity, pore size, and tortuosity. In particular, it examines the contributions that various imaging modalities can make to both direct and indirect characterisation, along with their limitations. The second part of the review considers the various types of representation of the void space of porous catalysts. These fall into three main types, depending on the level of idealisation of the representation and the final purpose of the model. Because limitations on resolution and field of view constrain direct imaging methods, hybrid approaches, combining imaging with indirect porosimetry methods that can bridge the many length scales of structural heterogeneity and provide more statistically representative parameters, deliver the best basis for constructing models of mass transport in highly heterogeneous media.
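The review above centres on void-space descriptors such as porosity. Two of the simplest descriptors of this kind can be computed directly from a segmented binary image; the sketch below (NumPy, with the assumption that `1` marks pore voxels) estimates porosity and a directional two-point probability S2(r), whose value at r = 0 equals the porosity and which decays toward porosity squared for uncorrelated media.

```python
import numpy as np

def porosity(img):
    """Void fraction of a binary image (1 = pore, 0 = solid)."""
    return img.mean()

def two_point_probability(img, max_lag):
    """S2(r) along the horizontal axis: probability that two points a
    distance r apart both lie in the pore phase."""
    s2 = []
    for r in range(max_lag + 1):
        a, b = img[:, : img.shape[1] - r], img[:, r:]  # all horizontal pairs at lag r
        s2.append((a * b).mean())
    return np.array(s2)

rng = np.random.default_rng(1)
sample = (rng.random((64, 64)) < 0.3).astype(int)  # uncorrelated medium, ~30% porosity
phi = porosity(sample)
s2 = two_point_probability(sample, 10)
```

For this uncorrelated toy sample, S2(0) equals the porosity exactly and S2(r) for r ≥ 1 sits near porosity squared; in a real microstructure the decay between those two limits carries the correlation-length information that descriptor-based models exploit.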
Affiliation(s)
- Sean P. Rigby
- Department of Chemical and Environmental Engineering, Faculty of Engineering, University Park Campus, University of Nottingham, Nottingham NG7 2RD, UK
- Geo-Energy Research Centre, University Park Campus, University of Nottingham, Nottingham NG7 2RD, UK
5. Zhang F, He X, Teng Q, Wu X, Cui J, Dong X. PM-ARNN: 2D-to-3D reconstruction paradigm for microstructure of porous media via adversarial recurrent neural network. Knowl Based Syst 2023. DOI: 10.1016/j.knosys.2023.110333.
6. Kamrava S, Mirzaee H. End-to-end three-dimensional designing of complex disordered materials from limited data using machine learning. Phys Rev E 2022; 106:055301. PMID: 36559380; DOI: 10.1103/physreve.106.055301.
Abstract
Precise 3D representation of complex materials, here lithium-ion batteries, is a critical step toward designing optimized energy storage systems. Several such samples are required for a more accurate evaluation of uncertainty and variability, which in turn can be costly and time-consuming to obtain. Using 3D models is crucial when evaluating the transport and heat capacity of batteries. Further, such models represent the microstructures more precisely, so that connectivity and heterogeneity can be detected. However, 3D images are hard to access, and the available images are often collected in two dimensions (2D). Such 2D images, on the other hand, are more accessible and often have higher resolution. In this paper, a deep learning method is applied to take advantage of 2D images and build 3D models of heterogeneous materials, through which more accurate characterization and physical evaluations can be achieved. While trained using only 2D images, the proposed framework can generate 3D images. The proposed method is applied to a few realistic 3D images of lithium-ion battery electrodes. The results indicate that the implemented method can reproduce important structural properties while the flow and heat properties remain within an acceptable range.
7. Zhang F, Teng Q, He X, Wu X, Dong X. Improved recurrent generative model for reconstructing large-size porous media from two-dimensional images. Phys Rev E 2022; 106:025310. PMID: 36109946; DOI: 10.1103/physreve.106.025310.
Abstract
Modeling the three-dimensional (3D) structure from a given 2D image is of great importance for analyzing and studying the physical properties of porous media. As an intractable inverse problem, it has motivated many methods over the past decades. Among them, deep learning (DL)-based methods show great advantages in terms of accuracy, diversity, and efficiency. Usually, 3D reconstruction from a 2D slice with a larger field of view is more conducive to accurately simulating and analyzing the physical properties of porous media. However, due to limited reconstruction ability, the reconstruction size of the most widely used generative adversarial network-based models is constrained to 64^{3} or 128^{3}. Recently, a 3D porous media recurrent neural network-based method (namely, 3D-PMRNN) was proposed to improve the reconstruction ability, expanding the reconstruction size to 256^{3}. Nevertheless, to train these models, existing DL-based methods need to down-sample the original computed tomography (CT) image first so that the convolutional kernel can capture the morphological features of the training images; thus, detailed information in the original CT image is lost. Moreover, 3D reconstruction from an optical thin section is not feasible because of the large size of the slice. In this paper, we propose an improved recurrent generative model to further enhance the reconstruction ability (512^{3}). Benefiting from the RNN-based architecture, the proposed model requires as few as one 3D training sample and generates the 3D structure layer by layer. There are three further improvements: First, a hybrid receptive field is adopted for the convolutional kernels. Second, an attention-based module is merged into the proposed model. Finally, a section loss is proposed to enhance continuity along the Z direction. Three experiments are carried out to verify the effectiveness of the proposed model. Experimental results indicate the good reconstruction ability of the proposed model in terms of accuracy, diversity, and generalization, and the effectiveness of the section loss is demonstrated through visual inspection and statistical comparison.
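The abstract above introduces a section loss that enforces continuity along the Z direction when a volume is generated layer by layer. The paper's exact formulation is not given here; as a purely illustrative stand-in, the penalty below averages absolute differences between adjacent z-slices, which is small for smoothly varying volumes and large for incoherent stacks of slices.

```python
import numpy as np

def section_loss(volume):
    """Illustrative continuity penalty for a (z, y, x) volume: mean absolute
    difference between adjacent z-slices."""
    return float(np.abs(np.diff(volume, axis=0)).mean())

# A smooth gradient along z versus an incoherent stack of random slices.
smooth = np.tile(np.linspace(0.0, 1.0, 16)[:, None, None], (1, 8, 8))
noisy = np.random.default_rng(2).random((16, 8, 8))
```

A generator trained layer by layer can drift between slices; adding a term like this to the objective directly penalizes such drift, which is the role the section loss plays in the model above.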
Affiliation(s)
- Fan Zhang
- College of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China
- School of Electrical Engineering and Electronic Information, Xihua University, Chengdu 610039, China
- Qizhi Teng
- College of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China
- Xiaohai He
- College of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China
- Xiaohong Wu
- College of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China
- Xiucheng Dong
- School of Electrical Engineering and Electronic Information, Xihua University, Chengdu 610039, China