1
Wohlmann J. Expanding the field of view - a simple approach for interactive visualisation of electron microscopy data. J Cell Sci 2024; 137:jcs262198. PMID: 39324375; PMCID: PMC11529876; DOI: 10.1242/jcs.262198.
Abstract
The unparalleled resolving power of electron microscopy is both a blessing and a curse. At 30,000× magnification, 1 µm corresponds to 3 cm in the image and the field of view is only a few micrometres or less, resulting in an inevitable reduction in the spatial data available in an image. Consequently, the gain in resolution comes at the cost of losing the contextual 'reference space', which is crucial for understanding the embedded structures of interest. This problem is particularly pronounced in immunoelectron microscopy, where the detection of a gold particle is crucial for the localisation of specific molecules. The common solution of presenting high-magnification and overview images side by side often insufficiently represents the cellular environment. To address these limitations, we propose here an interactive visualisation strategy inspired by digital maps and GPS modules, which enables seamless transitions between different magnifications by dynamically linking virtual low-magnification overview images with the primary high-resolution data. By enabling dynamic browsing, it offers the potential for a deeper understanding of cellular landscapes, leading to a more comprehensive analysis of the primary ultrastructural data.
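A minimal sketch of the coordinate link behind such a "digital map"-style viewer, assuming each acquired image is stored with its stage origin and pixel size (the Tile structure, names, and numbers below are illustrative, not from the paper):

```python
# Minimal sketch (assumed, not the paper's implementation): mapping a point in a
# high-magnification EM tile back into a low-magnification overview image.
from dataclasses import dataclass

@dataclass
class Tile:
    """One acquired image: top-left corner in stage coordinates (nm) and pixel size (nm/px)."""
    origin_x_nm: float
    origin_y_nm: float
    px_size_nm: float

def to_stage(tile: Tile, px: float, py: float) -> tuple:
    """Pixel coordinates in a tile -> absolute stage coordinates (nm)."""
    return (tile.origin_x_nm + px * tile.px_size_nm,
            tile.origin_y_nm + py * tile.px_size_nm)

def to_pixels(tile: Tile, x_nm: float, y_nm: float) -> tuple:
    """Absolute stage coordinates (nm) -> pixel coordinates in a tile."""
    return ((x_nm - tile.origin_x_nm) / tile.px_size_nm,
            (y_nm - tile.origin_y_nm) / tile.px_size_nm)

# A gold particle seen at pixel (812, 430) of a high-magnification tile (~0.37 nm/px)
# is located in the low-magnification overview (~22 nm/px) for context.
high_mag = Tile(origin_x_nm=120_000, origin_y_nm=95_000, px_size_nm=0.37)
overview = Tile(origin_x_nm=0, origin_y_nm=0, px_size_nm=22.0)
print(to_pixels(overview, *to_stage(high_mag, 812, 430)))
```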
Affiliation(s)
- Jens Wohlmann
- Department of Biosciences, University of Oslo, Blindernveien 31, PO Box 1041, 0316 Oslo, Norway
2
Grass DM, Malek G, Taïeb HM, Ittah E, Richard H, Reznikov N, Laverty S. Characterization and quantification of in-vitro equine bone resorption in 3D using μCT and deep learning-aided feature segmentation. Bone 2024; 185:117131. PMID: 38777311; DOI: 10.1016/j.bone.2024.117131.
Abstract
High cyclic strains induce the formation of microcracks in bone, triggering targeted bone remodeling, which entails osteoclastic resorption. Racehorse bone is an ideal model for studying the effects of high-intensity loading, as it is subject to focal formation of microcracks and subsequent bone resorption. The volume of resorption in vitro is considered a direct indicator of osteoclast activity, but indirect 2D measurements are used more often. Our objective was to develop an accurate, high-throughput method to quantify equine osteoclast resorption volume in 3D μCT images. Here, equine osteoclasts were cultured on equine bone slices and imaged with μCT pre- and post-culture. Individual resorption events were then isolated and analyzed in 3D. Modal volume, maximum depth, and aspect ratio of resorption events were calculated. A convolutional neural network (a U-Net-like CNN) was subsequently trained to identify resorption events on post-culture μCT images alone, without the need for pre-culture imaging, using archival bone slices with known resorption areas and paired CTX-I biomarker levels in culture media. 3D resorption volume measurements correlated strongly with both the CTX-I levels (p < 0.001) and the area measurements (p < 0.001). Our 3D analysis shows that the shapes of resorption events form a continuous spectrum, rather than the previously reported pit and trench categories. With more extensive resorption, shapes of increasing complexity appear, although simpler resorption cavity morphologies (small, rounded) remain the most common, in accord with the left-hand limit paradigm. Finally, we show that 2D measurements of in vitro osteoclastic resorption are a robust and reliable proxy for 3D resorption volume.
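One way the pre-/post-culture comparison could be set up is sketched below, assuming binarized, co-registered μCT volumes; the differencing, thresholding, and connected-component step is an illustration, not the authors' pipeline, and the voxel size and minimum-size filter are arbitrary:

```python
# Minimal sketch (assumptions, not the published pipeline): isolating individual
# resorption events from registered pre- and post-culture micro-CT bone masks by
# differencing and 3D connected-component labelling.
import numpy as np
from scipy import ndimage

def resorption_events(pre: np.ndarray, post: np.ndarray,
                      voxel_size_um: float, min_voxels: int = 50) -> np.ndarray:
    """Return per-event volumes (um^3). Assumes co-registered binary bone masks."""
    lost = (pre > 0) & (post == 0)              # bone present before, gone after
    labels, n = ndimage.label(lost)             # 3D connected components
    sizes = ndimage.sum(np.ones_like(labels), labels, index=np.arange(1, n + 1))
    sizes = sizes[sizes >= min_voxels]          # drop noise-sized components
    return sizes * voxel_size_um ** 3

# toy example: a synthetic 'pit' removed from a bone block
pre = np.ones((40, 40, 40), dtype=np.uint8)
post = pre.copy()
post[10:16, 10:16, 0:4] = 0
print(resorption_events(pre, post, voxel_size_um=5.0))
```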
Affiliation(s)
- Debora M Grass
- Comparative Orthopaedic Research Laboratory, Department of Clinical Sciences, Faculty of Veterinary Medicine, University of Montreal, 3200 Sicotte, St-Hyacinthe, QC J2S 2M2, Canada
- Gwladys Malek
- Comparative Orthopaedic Research Laboratory, Department of Clinical Sciences, Faculty of Veterinary Medicine, University of Montreal, 3200 Sicotte, St-Hyacinthe, QC J2S 2M2, Canada
- Hubert M Taïeb
- Department of Bioengineering, Faculty of Engineering, McGill University, 3480 University Street, Montreal, Quebec H3A 0E9, Canada
- Eran Ittah
- Department of Bioengineering, Faculty of Engineering, McGill University, 3480 University Street, Montreal, Quebec H3A 0E9, Canada
- Hélène Richard
- Comparative Orthopaedic Research Laboratory, Department of Clinical Sciences, Faculty of Veterinary Medicine, University of Montreal, 3200 Sicotte, St-Hyacinthe, QC J2S 2M2, Canada
- Natalie Reznikov
- Department of Bioengineering, Faculty of Engineering, McGill University, 3480 University Street, Montreal, Quebec H3A 0E9, Canada
- Sheila Laverty
- Comparative Orthopaedic Research Laboratory, Department of Clinical Sciences, Faculty of Veterinary Medicine, University of Montreal, 3200 Sicotte, St-Hyacinthe, QC J2S 2M2, Canada
3
Nguyen N, Bohak C, Engel D, Mindek P, Strnad O, Wonka P, Li S, Ropinski T, Viola I. Finding Nano-Ötzi: Cryo-Electron Tomography Visualization Guided by Learned Segmentation. IEEE Transactions on Visualization and Computer Graphics 2023; 29:4198-4214. PMID: 35749328; DOI: 10.1109/tvcg.2022.3186146.
Abstract
Cryo-electron tomography (cryo-ET) is a new 3D imaging technique with unprecedented potential for resolving submicron structural details. Existing volume visualization methods, however, are not able to reveal details of interest due to the low signal-to-noise ratio. In order to design more powerful transfer functions, we propose leveraging soft segmentation as an explicit component of visualization for noisy volumes. Our technical realization is based on semi-supervised learning, where we combine the advantages of two segmentation algorithms. First, the weak segmentation algorithm provides good results for propagating sparse user-provided labels to other voxels in the same volume and is used to generate dense pseudo-labels. Second, the powerful deep-learning-based segmentation algorithm learns from these pseudo-labels to generalize the segmentation to other unseen volumes, a task at which the weak segmentation algorithm fails completely. The proposed volume visualization uses deep-learning-based segmentation as a component for segmentation-aware transfer function design. Appropriate ramp parameters can be suggested automatically through frequency distribution analysis. Furthermore, our visualization uses gradient-free ambient occlusion shading to further suppress the visual presence of noise and to give structural detail the desired prominence. The cryo-ET data studied in our technical experiments are based on the highest-quality tilt series of intact SARS-CoV-2 virions. Our technique demonstrates high impact for the target sciences, enabling visual data analysis of very noisy volumes that cannot be visualized with existing techniques.
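The weak-to-dense pseudo-label idea can be illustrated with an off-the-shelf weak segmenter; the random-walker propagation below is a stand-in sketch, not the paper's algorithm, and the synthetic volume and seed placement are assumptions:

```python
# Minimal sketch (an assumption about the general idea, not the paper's methods):
# a weak segmenter propagates a few sparse user labels into dense pseudo-labels,
# which could then supervise a deep model that generalizes to unseen volumes.
import numpy as np
from skimage.segmentation import random_walker

rng = np.random.default_rng(0)
volume = rng.normal(0.0, 1.0, (32, 32, 32)).astype(np.float32)
volume[8:24, 8:24, 8:24] += 2.0                 # a noisy "structure" inside noise

# sparse user scribbles: 0 = unlabeled, 1 = background, 2 = foreground
seeds = np.zeros(volume.shape, dtype=np.uint8)
seeds[1, 1, 1] = 1
seeds[16, 16, 16] = 2

pseudo_labels = random_walker(volume, seeds, beta=130)
print((pseudo_labels == 2).mean())              # fraction labelled foreground
# `pseudo_labels` would next serve as dense training targets for a 3D CNN.
```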
4
Jadhav S, Torkaman M, Tannenbaum A, Nadeem S, Kaufman AE. Volume Exploration Using Multidimensional Bhattacharyya Flow. IEEE Transactions on Visualization and Computer Graphics 2023; 29:1651-1663. PMID: 34780328; PMCID: PMC9594946; DOI: 10.1109/tvcg.2021.3127918.
Abstract
We present a novel approach for volume exploration that is versatile yet effective in isolating semantic structures in both noisy and clean data. Specifically, we describe a hierarchical active contours approach based on Bhattacharyya gradient flow, which is easier to control, robust to noise, and can incorporate various types of statistical information to drive an edge-agnostic exploration process. To facilitate a time-bound, user-driven volume exploration process that is applicable to a wide variety of data sources, we present an efficient multi-GPU implementation that (1) is approximately 400 times faster than a single-threaded CPU implementation, (2) allows hierarchical exploration of 2D and 3D images, (3) supports customization through multidimensional attribute spaces, and (4) is applicable to a variety of data sources and semantic structures. The exploration system follows a two-step process. It first applies active contours to isolate semantically meaningful subsets of the volume. It then applies transfer functions to the isolated regions locally to produce clear and clutter-free visualizations. We show the effectiveness of our approach in isolating and visualizing structures of interest without needing any specialized segmentation methods on a variety of data sources, including 3D optical microscopy, multi-channel optical volumes, abdominal and chest CT, micro-CT, MRI, simulation, and synthetic data. We also gathered feedback from a medical trainee regarding the usefulness of our approach and discuss potential applications in clinical workflows.
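The statistic driving the flow can be sketched directly: the Bhattacharyya coefficient between the intensity distributions inside and outside a candidate region (a small, assumed illustration; the paper's hierarchical, multidimensional formulation is richer):

```python
# Minimal sketch (assumed illustration): the Bhattacharyya coefficient between
# the intensity histograms inside and outside a region; a lower value indicates
# a better-separated region, which is what the gradient flow seeks.
import numpy as np

def bhattacharyya(volume: np.ndarray, mask: np.ndarray, bins: int = 64) -> float:
    """B = sum_i sqrt(p_in(i) * p_out(i)); small B = well-separated region."""
    lo, hi = float(volume.min()), float(volume.max())
    p_in, _ = np.histogram(volume[mask], bins=bins, range=(lo, hi))
    p_out, _ = np.histogram(volume[~mask], bins=bins, range=(lo, hi))
    p_in = p_in / p_in.sum()
    p_out = p_out / p_out.sum()
    return float(np.sum(np.sqrt(p_in * p_out)))

rng = np.random.default_rng(1)
vol = rng.normal(0, 1, (32, 32, 32))
vol[10:22, 10:22, 10:22] += 3.0                 # bright structure in noise
good = np.zeros(vol.shape, bool)
good[10:22, 10:22, 10:22] = True
bad = np.roll(good, 8, axis=0)                  # misplaced contour
print(bhattacharyya(vol, good), bhattacharyya(vol, bad))   # good < bad
```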
5
Shen J, Li H, Xu J, Biswas A, Shen HW. IDLat: An Importance-Driven Latent Generation Method for Scientific Data. IEEE Transactions on Visualization and Computer Graphics 2023; 29:679-689. PMID: 36166537; DOI: 10.1109/tvcg.2022.3209419.
Abstract
Deep learning based latent representations have been widely used for numerous scientific visualization applications, such as isosurface similarity analysis, volume rendering, flow field synthesis, and data reduction, just to name a few. However, existing latent representations are mostly generated from raw data in an unsupervised manner, which makes it difficult to incorporate domain interest to control the size of the latent representations and the quality of the reconstructed data. In this paper, we present a novel importance-driven latent representation to facilitate domain-interest-guided scientific data visualization and analysis. We utilize spatial importance maps to represent various scientific interests and take them as the input to a feature transformation network to guide latent generation. We further reduce the latent size with a lossless entropy encoding algorithm trained together with the autoencoder, improving storage and memory efficiency. We qualitatively and quantitatively evaluate the effectiveness and efficiency of latent representations generated by our method with data from multiple scientific visualization applications.
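A toy version of the idea, assuming importance is simply concatenated as a second input channel to a small 3D autoencoder (the architecture, layer sizes, and loss weighting below are assumptions, not the paper's network):

```python
# Minimal sketch (assumed architecture): a tiny 3D autoencoder that receives the
# data volume and a spatial importance map as two input channels, so regions
# marked important can be reconstructed more faithfully.
import torch
import torch.nn as nn

class ImportanceGuidedAE(nn.Module):
    def __init__(self, latent_channels: int = 8):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, latent_channels, 3, stride=2, padding=1))
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(latent_channels, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1))

    def forward(self, volume, importance):
        z = self.enc(torch.cat([volume, importance], dim=1))
        return self.dec(z), z

vol = torch.rand(1, 1, 32, 32, 32)
imp = torch.zeros_like(vol)
imp[..., 8:24, 8:24, 8:24] = 1.0                # region of scientific interest
recon, latent = ImportanceGuidedAE()(vol, imp)
# importance-weighted reconstruction loss: errors in important regions count more
loss = ((recon - vol) ** 2 * (1.0 + 4.0 * imp)).mean()
print(recon.shape, latent.shape, float(loss))
```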
6
Han J, Wang C. SSR-TVD: Spatial Super-Resolution for Time-Varying Data Analysis and Visualization. IEEE Transactions on Visualization and Computer Graphics 2022; 28:2445-2456. PMID: 33074824; DOI: 10.1109/tvcg.2020.3032123.
Abstract
We present SSR-TVD, a novel deep learning framework that produces coherent spatial super-resolution (SSR) of time-varying data (TVD) using adversarial learning. In scientific visualization, SSR-TVD is the first work that applies the generative adversarial network (GAN) to generate high-resolution volumes for three-dimensional time-varying data sets. The design of SSR-TVD includes a generator and two discriminators (spatial and temporal discriminators). The generator takes a low-resolution volume as input and outputs a synthesized high-resolution volume. To capture spatial and temporal coherence in the volume sequence, the two discriminators take the synthesized high-resolution volume(s) as input and produce a score indicating the realness of the volume(s). Our method can work in the in situ visualization setting by downscaling volumetric data from selected time steps as the simulation runs and upscaling downsampled volumes to their original resolution during postprocessing. To demonstrate the effectiveness of SSR-TVD, we show quantitative and qualitative results with several time-varying data sets of different characteristics and compare our method against volume upscaling using bicubic interpolation and a solution solely based on CNN.
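The interpolation baseline the method is compared against can be reproduced in a few lines; the test field, downscaling factor, and PSNR metric below are assumptions for illustration:

```python
# Minimal sketch (assumed): the interpolation baseline -- downsample a volume as
# if exporting it in situ, upscale it back with spline interpolation during
# postprocessing, and measure reconstruction quality (PSNR).
import numpy as np
from scipy.ndimage import zoom

def psnr(a: np.ndarray, b: np.ndarray) -> float:
    mse = float(np.mean((a - b) ** 2))
    return 10 * np.log10(1.0 / mse) if mse > 0 else float('inf')

x, y, z = np.meshgrid(*[np.linspace(0, 4 * np.pi, 64)] * 3, indexing='ij')
volume = 0.5 + 0.5 * np.sin(x) * np.cos(y) * np.sin(z)     # smooth test field in [0, 1]

low = zoom(volume, 0.25, order=3)            # simulated in situ downscaling
restored = zoom(low, 4.0, order=3)           # post hoc upscaling (the baseline)
print(restored.shape, psnr(volume, restored[:64, :64, :64]))
```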
7
Visualization of WiFi Signals Using Programmable Transfer Functions. Information 2022. DOI: 10.3390/info13050224.
Abstract
In this paper, we show how volume rendering with a Programmable Transfer Function can be used for the effective and comprehensible visualization of WiFi signals. A traditional transfer function uses a low-dimensional lookup table to map the volumetric scalar field to color and opacity. Here, we present the concept of a Programmable Transfer Function. We then show how generalizing traditional lookup-based transfer functions to Programmable Transfer Functions enables us to leverage view-dependent and real-time attributes of a volumetric field to depict the data variations of WiFi surfaces with low- and high-frequency components. Our Programmable Transfer Functions facilitate interactive knowledge discovery and produce meaningful visualizations.
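A rough sketch of the distinction, contrasting a lookup-table transfer function with a "programmable" one evaluated per sample from extra attributes (the specific attributes and color mapping are assumptions, not the paper's functions):

```python
# Minimal sketch (assumed illustration of the concept, not the paper's code):
# a transfer function written as a program over per-sample attributes instead
# of a 1D lookup table, so opacity can depend on view distance and gradient.
import numpy as np

def lookup_tf(scalar: np.ndarray, table: np.ndarray) -> np.ndarray:
    """Traditional TF: index a precomputed RGBA table by scalar value."""
    idx = np.clip((scalar * (len(table) - 1)).astype(int), 0, len(table) - 1)
    return table[idx]

def programmable_tf(scalar, grad_mag, view_dist):
    """Programmable TF: arbitrary per-sample code. Attenuate opacity with distance
    and emphasize high-gradient regions (e.g. sharp signal falloff)."""
    rgba = np.empty(scalar.shape + (4,))
    rgba[..., 0] = scalar                        # red encodes signal strength
    rgba[..., 1] = 1.0 - scalar
    rgba[..., 2] = 0.2
    rgba[..., 3] = scalar * (0.3 + 0.7 * grad_mag) / (1.0 + 0.1 * view_dist)
    return rgba

s = np.random.default_rng(2).random((8, 8, 8))
g = np.clip(np.linalg.norm(np.stack(np.gradient(s)), axis=0), 0, 1)
d = np.full_like(s, 5.0)                         # distance from camera, arbitrary units
print(programmable_tf(s, g, d).shape)
```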
8
Dai H, Tao Y, He X, Lin H. IsoExplorer: an isosurface-driven framework for 3D shape analysis of biomedical volume data. J Vis (Tokyo) 2021; 24:1253-1266. PMID: 34429686; PMCID: PMC8376112; DOI: 10.1007/s12650-021-00770-2.
Abstract
The high-resolution scanning devices developed in recent decades provide biomedical volume datasets that support the study of molecular structure and drug design. Isosurface analysis is an important tool in these studies, and the key is to construct suitable description vectors to support subsequent tasks, such as classification and retrieval. Traditional methods based on handcrafted features are insufficient for dealing with complex structures, while deep learning-based approaches have high memory and computation costs when dealing directly with volume data. To address these problems, we propose IsoExplorer, an isosurface-driven framework for 3D shape analysis of biomedical volume data. We first extract isosurfaces from volume data and split them into individual 3D shapes according to their connectivity. Then, we utilize octree-based convolution to design a variational autoencoder model that learns the latent representations of the shape. Finally, these latent representations are used for low-dimensional isosurface representation and shape retrieval. We demonstrate the effectiveness and usefulness of IsoExplorer via isosurface similarity analysis, shape retrieval of real-world data, and comparison with existing methods.
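The extract-and-split preprocessing step can be sketched with standard tools, assuming components are separated in the binary field before surface extraction (an illustration, not the authors' implementation):

```python
# Minimal sketch (assumed preprocessing): extract an isosurface from a volume
# and split it into separate shapes by labelling connected components above the
# isovalue, then running marching cubes per component.
import numpy as np
from scipy import ndimage
from skimage import measure

vol = np.zeros((48, 48, 48), dtype=np.float32)
vol[8:20, 8:20, 8:20] = 1.0                      # two disjoint blobs
vol[30:42, 30:42, 30:42] = 1.0
vol = ndimage.gaussian_filter(vol, sigma=2)

isovalue = 0.3
labels, n = ndimage.label(vol > isovalue)
shapes = []
for i in range(1, n + 1):
    component = np.where(labels == i, vol, 0.0)
    verts, faces, _, _ = measure.marching_cubes(component, level=isovalue)
    shapes.append((verts, faces))                # one mesh per connected shape

print(n, [len(v) for v, _ in shapes])
# each shape could then be voxelized into an octree and fed to the VAE encoder
```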
Affiliation(s)
- Haoran Dai
- State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China
- Yubo Tao
- State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China
- Xiangyang He
- State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China
- Hai Lin
- State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China
9
Engel D, Ropinski T. Deep Volumetric Ambient Occlusion. IEEE Transactions on Visualization and Computer Graphics 2021; 27:1268-1278. PMID: 33048686; DOI: 10.1109/tvcg.2020.3030344.
Abstract
We present a novel deep learning based technique for volumetric ambient occlusion in the context of direct volume rendering. Our proposed Deep Volumetric Ambient Occlusion (DVAO) approach can predict per-voxel ambient occlusion in volumetric data sets, while considering global information provided through the transfer function. The proposed neural network only needs to be executed upon change of this global information, and thus supports real-time volume interaction. Accordingly, we demonstrate DVAO's ability to predict volumetric ambient occlusion, such that it can be applied interactively within direct volume rendering. To achieve the best possible results, we propose and analyze a variety of transfer function representations and injection strategies for deep neural networks. Based on the obtained results we also give recommendations applicable in similar volume learning scenarios. Lastly, we show that DVAO generalizes to a variety of modalities, despite being trained on computed tomography data only.
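For intuition, a brute-force local estimate of the quantity such a network predicts is sketched below; the transfer function, neighborhood size, and occlusion definition are assumptions, not DVAO itself:

```python
# Minimal sketch (assumed baseline, not DVAO): a brute-force local ambient-
# occlusion estimate -- the kind of per-voxel quantity the network learns to
# predict once the transfer function assigns opacities.
import numpy as np
from scipy.ndimage import uniform_filter

def opacity_from_tf(volume: np.ndarray, tf_opacity: np.ndarray) -> np.ndarray:
    """Apply a 1D opacity transfer function (lookup table) to a normalized volume."""
    idx = np.clip((volume * (len(tf_opacity) - 1)).astype(int), 0, len(tf_opacity) - 1)
    return tf_opacity[idx]

def local_ambient_occlusion(opacity: np.ndarray, radius: int = 4) -> np.ndarray:
    """Occlusion ~ mean opacity in a local neighborhood (1 = fully occluded)."""
    return uniform_filter(opacity, size=2 * radius + 1)

vol = np.random.default_rng(4).random((32, 32, 32)).astype(np.float32)
tf = np.linspace(0.0, 1.0, 256) ** 2             # a simple ramp transfer function
ao = local_ambient_occlusion(opacity_from_tf(vol, tf))
print(ao.shape, float(ao.min()), float(ao.max()))
# changing `tf` changes the AO field, which is why DVAO re-runs its network
# only when the transfer function (the global information) changes
```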
10
Jakob J, Gross M, Günther T. A Fluid Flow Data Set for Machine Learning and its Application to Neural Flow Map Interpolation. IEEE Transactions on Visualization and Computer Graphics 2021; 27:1279-1289. PMID: 33026993; DOI: 10.1109/tvcg.2020.3028947.
Abstract
In recent years, deep learning has opened countless research opportunities across many different disciplines. At present, visualization is mainly applied to explore and explain neural networks. Its counterpart, the application of deep learning to visualization problems, requires us to share data more openly in order to enable more scientists to engage in data-driven research. In this paper, we construct a large fluid flow data set and apply it to a deep learning problem in scientific visualization. Parameterized by the Reynolds number, the data set contains a wide spectrum of laminar and turbulent fluid flow regimes. The full data set was simulated on a high-performance compute cluster and contains 8000 time-dependent 2D vector fields, accumulating to more than 16 TB in size. Using our public fluid data set, we trained deep convolutional neural networks in order to set a benchmark for an improved post hoc Lagrangian fluid flow analysis. In in situ settings, flow maps are exported and interpolated in order to assess the transport characteristics of time-dependent fluids. Using deep learning, we improve the accuracy of flow map interpolations, allowing a more precise flow analysis at a reduced memory I/O footprint.
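A flow map and the naive interpolation baseline that deep learning is meant to improve can be sketched on a steady analytic field (the field, seeding, and step size below are assumptions for illustration):

```python
# Minimal sketch (assumed, simplified to a steady field): computing a flow map
# by advecting a grid of particles through a 2D vector field -- the object that
# is exported in situ and later interpolated (here with the naive baseline).
import numpy as np

def velocity(p: np.ndarray) -> np.ndarray:
    """A steady 2D swirling field; columns of p are (x, y)."""
    x, y = p[:, 0], p[:, 1]
    return np.stack([-y, x], axis=1)

def flow_map(points: np.ndarray, t_end: float, dt: float = 0.01) -> np.ndarray:
    """End positions of particles seeded at `points` after time t_end (RK4)."""
    p = points.copy()
    for _ in range(int(t_end / dt)):
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * dt * k1)
        k3 = velocity(p + 0.5 * dt * k2)
        k4 = velocity(p + dt * k3)
        p = p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return p

seeds = np.stack(np.meshgrid(np.linspace(-1, 1, 8), np.linspace(-1, 1, 8)), -1).reshape(-1, 2)
phi_1, phi_2 = flow_map(seeds, 0.5), flow_map(seeds, 1.0)
phi_mid_lin = 0.5 * (phi_1 + phi_2)              # naive baseline for t = 0.75
print(np.abs(phi_mid_lin - flow_map(seeds, 0.75)).max())   # interpolation error
```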
11
Deep learning for 3D imaging and image analysis in biomineralization research. J Struct Biol 2020; 212:107598. PMID: 32783967; DOI: 10.1016/j.jsb.2020.107598.
Abstract
Biomineralization research examines structure-function relations in all types of exo- and endoskeletons and other hard tissues of living organisms, and it relies heavily on 3D imaging. Segmentation of 3D renderings of biomineralized structures has long been a bottleneck because of human limitations such as available time, attention span, eye-hand coordination, cognitive biases, and attainable precision, amongst others. Recently, some of these routine limitations have become surmountable thanks to the development of deep-learning algorithms for biological imagery in general, and for 3D image segmentation in particular. Many components of deep learning often appear too abstract to a life scientist. Despite this, the basic principles underlying deep learning have many easy-to-grasp commonalities with human learning and universal logic. This primer presents these basic principles in what we feel is an intuitive manner, without relying on prerequisite knowledge of informatics and computer science, and with the aim of improving the reader's general literacy in artificial intelligence and deep learning. Here, biomineralization case studies are presented to illustrate the application of deep learning for solving segmentation and analysis problems of 3D images that are riddled with various artifacts and/or are plainly difficult to interpret. The presented portfolio of case studies includes three examples of imaging using micro-computed tomography (µCT) and three examples using focused ion beam scanning electron microscopy (FIB-SEM), all on mineralized tissues. We believe this primer will expand the circle of users of deep learning amongst biomineralization researchers and other life scientists involved with 3D imaging, and will encourage them to incorporate this powerful tool into their professional skill sets and to explore it further.
12
Han J, Wang C. TSR-TVD: Temporal Super-Resolution for Time-Varying Data Analysis and Visualization. IEEE Transactions on Visualization and Computer Graphics 2020; 26:205-215. PMID: 31425081; DOI: 10.1109/tvcg.2019.2934255.
Abstract
We present TSR-TVD, a novel deep learning framework that generates temporal super-resolution (TSR) of time-varying data (TVD) using adversarial learning. TSR-TVD is the first work that applies a recurrent generative network (RGN), a combination of the recurrent neural network (RNN) and generative adversarial network (GAN), to generate temporally high-resolution volume sequences from low-resolution ones. The design of TSR-TVD includes a generator and a discriminator. The generator takes a pair of volumes as input and outputs the synthesized intermediate volume sequence through forward and backward predictions. The discriminator takes the synthesized intermediate volumes as input and produces a score indicating the realness of the volumes. Our method also handles multivariate data, where the network trained on one variable is applied to generate TSR for another variable. To demonstrate the effectiveness of TSR-TVD, we show quantitative and qualitative results with several time-varying multivariate data sets and compare our method against standard linear interpolation and solutions based solely on an RNN or CNN.
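The linear-interpolation baseline mentioned above can be written directly; the synthetic field and number of intermediate steps below are assumptions for illustration:

```python
# Minimal sketch (assumed): the linear-interpolation baseline TSR-TVD is
# compared against -- synthesizing intermediate volumes between two saved
# time steps of a time-varying scalar field.
import numpy as np

def linear_tsr(vol_start: np.ndarray, vol_end: np.ndarray, n_intermediate: int):
    """Yield n_intermediate volumes evenly spaced between the two endpoints."""
    for k in range(1, n_intermediate + 1):
        t = k / (n_intermediate + 1)
        yield (1.0 - t) * vol_start + t * vol_end

x = np.linspace(0, 2 * np.pi, 32)
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
v0, v1 = np.sin(X + Y), np.sin(X + Y + 1.0)       # the field at two saved time steps
intermediates = list(linear_tsr(v0, v1, n_intermediate=3))
print(len(intermediates), intermediates[0].shape)
# TSR-TVD replaces this with forward/backward predictions from a recurrent
# generator plus an adversarial discriminator scoring realness
```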