1. Chen X, Wang Y, Bao H, Lu K, Jo J, Fu CW, Fekete JD. Visualization-Driven Illumination for Density Plots. IEEE Transactions on Visualization and Computer Graphics 2025; 31:1631-1644. PMID: 39527427. DOI: 10.1109/tvcg.2024.3495695.
Abstract
We present a novel visualization-driven illumination model for density plots, a new technique to enhance density plots by effectively revealing the detailed structures in high- and medium-density regions and outliers in low-density regions, while avoiding artifacts in the density field's colors. When visualizing large and dense discrete point samples, scatterplots and dot density maps often suffer from overplotting, and density plots are commonly employed to provide aggregated views while revealing underlying structures. Yet, in such density plots, existing illumination models may produce color distortion and hide details in low-density regions, making it challenging to look up density values, compare them, and find outliers. The key novelty in this work includes (i) a visualization-driven illumination model that inherently supports density-plot-specific analysis tasks and (ii) a new image composition technique to reduce the interference between the image shading and the color-encoded density values. To demonstrate the effectiveness of our technique, we conducted a quantitative study, an empirical evaluation of our technique in a controlled study, and two case studies, exploring twelve datasets with up to two million data point samples.
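As a rough illustration of the problem setting only (not the authors' illumination model), the sketch below builds a kernel-density plot from synthetic points and naively multiplies its colormap by a height-field shading term; the data, grid, and light direction are arbitrary choices for this example.

```python
# Illustrative sketch only (not the paper's model): build a 2D kernel-density
# estimate from synthetic points and naively modulate its colormap by a
# height-field shading term. The data, grid, and light direction are arbitrary.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
pts = rng.normal(size=(2, 5_000)) * [[1.0], [0.4]]       # synthetic point sample
xs = np.linspace(-4, 4, 128)
ys = np.linspace(-2, 2, 128)
X, Y = np.meshgrid(xs, ys)
density = gaussian_kde(pts)(np.vstack([X.ravel(), Y.ravel()])).reshape(X.shape)

# Height-field shading: treat density as elevation, light from the upper left.
gy, gx = np.gradient(density)
normal = np.dstack([-gx, -gy, np.ones_like(density)])
normal /= np.linalg.norm(normal, axis=2, keepdims=True)
light = np.array([-1.0, 1.0, 1.0]) / np.sqrt(3.0)
shade = np.clip(normal @ light, 0.0, 1.0)

# Naive composition: multiplying the colormap by the shading term distorts the
# color-encoded density values, which is exactly the artifact the paper targets.
rgb = plt.cm.viridis(density / density.max())[..., :3] * shade[..., None]
plt.imshow(rgb, origin="lower", extent=(-4, 4, -2, 2))
plt.show()
```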
2. Guo P, Luo D, Wu Y, He S, Deng J, Yao H, Sun W, Zhang J. Coverage Planning for UVC Irradiation: Robot Surface Disinfection Based on Swarm Intelligence Algorithm. Sensors (Basel, Switzerland) 2024; 24:3418. PMID: 38894209. PMCID: PMC11174843. DOI: 10.3390/s24113418.
Abstract
Ultraviolet (UV) radiation is widely used as a disinfection strategy to eliminate various pathogens. A surface-disinfection task achieves complete coverage of object surfaces by planning both the motion trajectory of an autonomous mobile robot and its UVC irradiation strategy. This adds a layer of complexity to path planning, as every point on the object's surface must receive a specified dose of irradiation. Moreover, the considerable dosage required for virus inactivation often leads to substantial energy consumption and dose redundancy, which hinders the deployment of such robots in large-scale environments and makes the energy consumption of the light sources a primary concern in disinfection planning. To address the inefficiencies caused by dose redundancy, this study proposes a dose coverage planning framework that uses multi-objective particle swarm optimization (MOPSO) to solve the multi-objective optimization model for UVC dose coverage. Unlike conventional path planning methods, the approach prioritizes the intrinsic characteristics of dose accumulation and integrates a UVC light efficiency factor to mitigate dose redundancy, reducing energy expenditure and improving the efficiency of robotic disinfection. Empirical trials with autonomous disinfecting robots in real-world settings corroborate the model's efficacy in inactivating viruses.
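For orientation only, the following sketch shows how cumulative UVC dose along a planned path can be modeled with an idealized inverse-square point source; the waypoints, dwell times, lamp power, and dose threshold are invented values, and the paper's MOPSO planner is not reproduced here.

```python
# Minimal sketch of UVC dose accumulation, assuming an idealized point source
# with inverse-square falloff and per-waypoint dwell times. All numbers are
# made-up illustration values; a planner such as the paper's MOPSO framework
# would optimize waypoints and dwell times against a model like this.
import numpy as np

surface_pts = np.array([[0.0, 0.0, 1.0], [1.0, 0.5, 1.2], [2.0, 0.0, 0.8]])  # m
waypoints   = np.array([[0.5, 0.0, 1.0], [1.5, 0.2, 1.0]])                   # m
dwell_s     = np.array([30.0, 45.0])        # seconds spent at each waypoint
power_w     = 2.0                           # radiant UVC power of the lamp
required_dose = 10.0                        # J/m^2 needed for inactivation

# Irradiance at distance r from a point source: E = P / (4*pi*r^2); dose = E*t.
dose = np.zeros(len(surface_pts))
for wp, t in zip(waypoints, dwell_s):
    r2 = np.sum((surface_pts - wp) ** 2, axis=1)
    dose += power_w / (4.0 * np.pi * r2) * t

covered = dose >= required_dose
print(dose, covered)  # a planner adjusts waypoints/dwell until all points are covered
```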
Affiliation(s)
- Peiyao Guo: Research Center for Optoelectronic Materials and Devices, Guangxi Key Laboratory for the Relativistic Astrophysics, School of Physical Science & Technology, Guangxi University, Nanning 530004, China
- Dekun Luo: Research Center for Optoelectronic Materials and Devices, Guangxi Key Laboratory for the Relativistic Astrophysics, School of Physical Science & Technology, Guangxi University, Nanning 530004, China
- Yizhen Wu: Research Center for Optoelectronic Materials and Devices, Guangxi Key Laboratory for the Relativistic Astrophysics, School of Physical Science & Technology, Guangxi University, Nanning 530004, China
- Sheng He: Research Center for Optoelectronic Materials and Devices, Guangxi Key Laboratory for the Relativistic Astrophysics, School of Physical Science & Technology, Guangxi University, Nanning 530004, China
- Jianyu Deng: Research Center for Optoelectronic Materials and Devices, Guangxi Key Laboratory for the Relativistic Astrophysics, School of Physical Science & Technology, Guangxi University, Nanning 530004, China
- Huilu Yao: School of Electrical Engineering, Guangxi University, Nanning 530004, China
- Wenhong Sun: Research Center for Optoelectronic Materials and Devices, Guangxi Key Laboratory for the Relativistic Astrophysics, School of Physical Science & Technology, Guangxi University, Nanning 530004, China; MOE Key Laboratory of New Processing Technology for Nonferrous Metals and Materials, Guangxi Key Laboratory of Processing for Non-Ferrous Metals and Featured Materials, Nanning 530004, China; Third Generation Semiconductor Industry Research Institute, Guangxi University, Nanning 530004, China
- Jicai Zhang: College of Mathematics and Physics, Beijing University of Chemical Technology, Beijing 100029, China
3. Qiu RQ, Tsai ML, Chen YW, Singh SP, Lo CY. Integrated Automatic Optical Inspection and Image Processing Procedure for Smart Sensing in Production Lines. Sensors (Basel, Switzerland) 2024; 24:1619. PMID: 38475159. DOI: 10.3390/s24051619.
Abstract
An integrated automatic optical inspection (iAOI) system and procedure are proposed for a printed circuit board (PCB) production line in which pattern distortions and performance deviations appear with process variations. The iAOI system was demonstrated in a module comprising a camera and lens, showing that it can be supported by commercially available hardware. The iAOI procedure was realized as a serial workflow of image registration, threshold setting, image gradient computation, marker alignment, and geometric transformation; in addition, five operations with numerous functions were prepared for image processing. A graphical user interface (GUI) that displays the sequential image operation results together with the analyzed characteristics was built for ease of use. To demonstrate effectiveness, self-complementary Archimedean spiral antenna (SCASA) samples, fabricated with a standard PCB process and with intentionally distorted patterns, were inspected. The results indicate that, compared with existing methods, the proposed iAOI system and procedure provide unified, standardized, and efficient operations that yield objective and unambiguous judgments of pattern quality. Furthermore, once an appropriate artificial-intelligence model is available, the electromagnetic characteristics of SCASAs can be projected directly through the GUI.
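A generic sketch of the kind of serial workflow listed above (registration, thresholding, gradients, marker alignment, geometric transformation), written with standard OpenCV calls; the file names, ORB-based alignment, and thresholds are placeholder assumptions rather than the authors' iAOI implementation.

```python
# Illustrative sketch of a registration/threshold/gradient/alignment/warp
# pipeline using generic OpenCV calls. File names and parameters are
# placeholders; this is not the authors' iAOI procedure.
import cv2
import numpy as np

ref  = cv2.imread("reference_pattern.png", cv2.IMREAD_GRAYSCALE)   # hypothetical files
test = cv2.imread("captured_sample.png", cv2.IMREAD_GRAYSCALE)

# Threshold setting: separate the copper pattern from the substrate.
_, ref_bin  = cv2.threshold(ref,  0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
_, test_bin = cv2.threshold(test, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Image gradient step of the workflow (not used further in this short sketch).
edges = cv2.Sobel(test_bin, cv2.CV_32F, 1, 1, ksize=3)

# Marker alignment + geometric transformation: estimate a homography from
# matched ORB keypoints and warp the test image into the reference frame.
orb = cv2.ORB_create()
k1, d1 = orb.detectAndCompute(ref_bin, None)
k2, d2 = orb.detectAndCompute(test_bin, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
aligned = cv2.warpPerspective(test_bin, H, ref_bin.shape[::-1])

# Pattern-quality judgment: pixel-wise deviation from the reference layout.
deviation = cv2.absdiff(aligned, ref_bin)
print("distorted-area ratio:", np.count_nonzero(deviation) / deviation.size)
```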
Affiliation(s)
- Rong-Qing Qiu: Institute of NanoEngineering and MicroSystems, National Tsing Hua University, Hsinchu 300044, Taiwan
- Mu-Lin Tsai: Department of Power Mechanical Engineering, National Tsing Hua University, Hsinchu 300044, Taiwan
- Yu-Wen Chen: Institute of NanoEngineering and MicroSystems, National Tsing Hua University, Hsinchu 300044, Taiwan
- Shivendra Pratap Singh: Institute of NanoEngineering and MicroSystems, National Tsing Hua University, Hsinchu 300044, Taiwan
- Cheng-Yao Lo: Institute of NanoEngineering and MicroSystems, National Tsing Hua University, Hsinchu 300044, Taiwan; Department of Power Mechanical Engineering, National Tsing Hua University, Hsinchu 300044, Taiwan
4. Blum A, Gillet R, Rauch A, Urbaneja A, Biouichi H, Dodin G, Germain E, Lombard C, Jaquet P, Louis M, Simon L, Gondim Teixeira P. 3D reconstructions, 4D imaging and postprocessing with CT in musculoskeletal disorders: Past, present and future. Diagn Interv Imaging 2020; 101:693-705. PMID: 33036947. DOI: 10.1016/j.diii.2020.09.008.
Abstract
Three-dimensional (3D) imaging and postprocessing are common tasks used daily in many disciplines. The purpose of this article is to review the new postprocessing tools available. Although 3D imaging can be applied to all anatomical regions and used with all imaging techniques, its most varied and relevant applications are found with computed tomography (CT) data in musculoskeletal imaging. These new applications include global illumination rendering (GIR), unfolded rib reformations, subtracted CT angiography for bone analysis, dynamic studies, temporal subtraction and image fusion. In all of these tasks, registration and segmentation are two basic processes that affect the quality of the results. GIR simulates the complete interaction of photons with the scanned object, providing photorealistic volume rendering. Reformations to unfold the rib cage allow more accurate and faster diagnosis of rib lesions. Dynamic CT can be applied to cinematic joint evaluations as well as to perfusion and angiographic studies. Finally, more traditional techniques, such as minimum intensity projection, might find new applications for bone evaluation with the advent of ultra-high-resolution CT scanners. These tools can be used synergistically to provide morphologic, topographic and functional information and increase the versatility of CT.
Affiliation(s)
- A Blum: Guilloz Imaging Department, CHRU of Nancy, 54000 Nancy, France; Unité INSERM U1254 Imagerie Adaptative Diagnostique et Interventionnelle (IADI), CHRU of Nancy, 54511 Vandœuvre-lès-Nancy, France
- R Gillet: Guilloz Imaging Department, CHRU of Nancy, 54000 Nancy, France
- A Rauch: Guilloz Imaging Department, CHRU of Nancy, 54000 Nancy, France
- A Urbaneja: Guilloz Imaging Department, CHRU of Nancy, 54000 Nancy, France
- H Biouichi: Guilloz Imaging Department, CHRU of Nancy, 54000 Nancy, France
- G Dodin: Guilloz Imaging Department, CHRU of Nancy, 54000 Nancy, France
- E Germain: Guilloz Imaging Department, CHRU of Nancy, 54000 Nancy, France
- C Lombard: Guilloz Imaging Department, CHRU of Nancy, 54000 Nancy, France
- P Jaquet: Guilloz Imaging Department, CHRU of Nancy, 54000 Nancy, France
- M Louis: Guilloz Imaging Department, CHRU of Nancy, 54000 Nancy, France
- L Simon: Guilloz Imaging Department, CHRU of Nancy, 54000 Nancy, France
- P Gondim Teixeira: Guilloz Imaging Department, CHRU of Nancy, 54000 Nancy, France; Unité INSERM U1254 Imagerie Adaptative Diagnostique et Interventionnelle (IADI), CHRU of Nancy, 54511 Vandœuvre-lès-Nancy, France
5. Petracek P, Kratky V, Saska M. Dronument: System for Reliable Deployment of Micro Aerial Vehicles in Dark Areas of Large Historical Monuments. IEEE Robot Autom Lett 2020. DOI: 10.1109/lra.2020.2969935.
6. Saska M, Hert D, Baca T, Kratky V, Nascimento T. Formation control of unmanned micro aerial vehicles for straitened environments. Auton Robots 2020. DOI: 10.1007/s10514-020-09913-0.
7. Chu Y, Li X, Yang X, Ai D, Huang Y, Song H, Jiang Y, Wang Y, Chen X, Yang J. Perception enhancement using importance-driven hybrid rendering for augmented reality based endoscopic surgical navigation. Biomedical Optics Express 2018; 9:5205-5226. PMID: 30460123. PMCID: PMC6238941. DOI: 10.1364/boe.9.005205.
Abstract
Misleading depth perception may greatly affect the correct identification of complex structures in image-guided surgery. In this study, we propose a novel importance-driven hybrid rendering method to enhance perception for navigated endoscopic surgery. First, volume structures are enhanced using gradient-based shading to reduce the color information in low-priority regions and improve the distinction between complicated structures. Second, an importance-sorting method based on order-independent transparency rendering is introduced to intensify the perception of multiple surfaces. Third, volume data are adaptively truncated and emphasized with respect to the viewing orientation and the illustration of critical information, extending the viewing range. Various experimental results show that, by combining volume and surface rendering, our method can effectively improve the depth distinction of multiple objects in both simulated and clinical scenes. Our importance-driven surface rendering method demonstrates improved average performance with statistical significance, as rated by 15 participants (five clinicians and ten non-clinicians) on a five-point Likert scale. Further, the average frame rate of hybrid rendering with thin-layer sectioning reaches 42 fps. Because the hybrid rendering process is fully automatic, it can be used in real-time surgical navigation to improve rendering efficiency and information validity.
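As a minimal illustration of the first step (gradient-based shading), the snippet below derives normals from a density volume's gradient and forms a Lambertian weight; the volume, light direction, and emphasis term are arbitrary stand-ins, not the paper's importance-driven pipeline.

```python
# Rough sketch of gradient-based volume shading as used generically in volume
# rendering: normals come from the density gradient and feed a Lambertian term,
# so flat (low-gradient) regions contribute less color detail. Purely
# illustrative; the volume and light are placeholders.
import numpy as np

vol = np.random.rand(64, 64, 64).astype(np.float32)   # placeholder density volume
gz, gy, gx = np.gradient(vol)
grad = np.stack([gx, gy, gz], axis=-1)
norm = np.linalg.norm(grad, axis=-1, keepdims=True)
normals = np.divide(grad, norm, out=np.zeros_like(grad), where=norm > 1e-6)

light_dir = np.array([0.0, 0.0, 1.0])                 # arbitrary head-on light
lambert = np.clip(np.abs(normals @ light_dir), 0.0, 1.0)

# De-emphasize homogeneous regions by scaling with normalized gradient magnitude.
emphasis = lambert * (norm[..., 0] / norm.max())
print(emphasis.shape)  # per-voxel shading weight, to be applied during compositing
```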
Affiliation(s)
- Yakui Chu: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Xu Li: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Xilin Yang: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Danni Ai: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Yong Huang: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Hong Song: School of Software, Beijing Institute of Technology, Beijing 100081, China
- Yurong Jiang: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Yongtian Wang: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Xiaohong Chen (co-corresponding author): Department of Otolaryngology, Head and Neck Surgery, Beijing Tongren Hospital, Beijing 100730, China
- Jian Yang (co-corresponding author): Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
8. Stoppel S, Erga MP, Bruckner S. Firefly: Virtual Illumination Drones for Interactive Visualization. IEEE Transactions on Visualization and Computer Graphics 2018; 25:1204-1213. PMID: 30130205. DOI: 10.1109/tvcg.2018.2864656.
Abstract
Light specification in three-dimensional scenes is a complex problem, and several approaches have been presented that aim to automate this process. However, there are many scenarios where a static light setup is insufficient, as the scene content and camera position may change. Simultaneous manual control over the camera and light position imposes a high cognitive load on the user. To address this challenge, we introduce a novel approach for automatic scene illumination with Fireflies. Fireflies are intelligent virtual light drones that illuminate the scene by traveling on a closed path. The Firefly path automatically adapts to changes in the scene based on an outcome-oriented energy function. To achieve interactive performance, we employ a parallel rendering pipeline for the light path evaluations. We provide a catalog of energy functions for various application scenarios and discuss the applicability of our method on several examples.
9. Gulay SP, Bista S, Varshney A, Kirmizialtin S, Sanbonmatsu KY, Dinman JD. Tracking fluctuation hotspots on the yeast ribosome through the elongation cycle. Nucleic Acids Res 2017; 45:4958-4971. PMID: 28334755. PMCID: PMC5416885. DOI: 10.1093/nar/gkx112.
Abstract
Chemical modification was used to quantitatively determine the flexibility of nearly the entire rRNA component of the yeast ribosome through 8 discrete stages of translational elongation, revealing novel observations at both gross and fine scales. These include (i) the bulk transfer of energy through the intersubunit bridges from the large to the small subunit after peptidyltransfer, (ii) differences in the interaction of the sarcin-ricin loop with the two elongation factors and (iii) networked information exchange pathways that may functionally facilitate intra- and intersubunit coordination, including the 5.8S rRNA. These analyses reveal hotspots of fluctuations that set the stage for large-scale conformational changes essential for translocation and enable the first molecular dynamics simulation of an 80S complex. Comprehensive datasets of rRNA base flexibilities provide a unique resource to the structural biology community that can be computationally mined to complement ongoing research toward the goal of understanding the dynamic ribosome.
Affiliation(s)
- Suna P Gulay: Department of Cell Biology and Molecular Genetics, University of Maryland, College Park, MD 20742, USA
- Sujal Bista: Department of Computer Science, University of Maryland, College Park, MD 20742, USA
- Amitabh Varshney: Department of Computer Science, University of Maryland, College Park, MD 20742, USA
- Serdal Kirmizialtin: Chemistry Program, New York University Abu Dhabi, Abu Dhabi, UAE; The New Mexico Consortium, Los Alamos, NM 87544, USA
- Karissa Y Sanbonmatsu: The New Mexico Consortium, Los Alamos, NM 87544, USA; Theoretical Biology and Biophysics, Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
- Jonathan D Dinman: Department of Cell Biology and Molecular Genetics, University of Maryland, College Park, MD 20742, USA
10. Ament M, Zirr T, Dachsbacher C. Extinction-Optimized Volume Illumination. IEEE Transactions on Visualization and Computer Graphics 2017; 23:1767-1781. PMID: 27214903. DOI: 10.1109/tvcg.2016.2569080.
Abstract
We present a novel method to optimize the attenuation of light for the single-scattering model in direct volume rendering. A common problem of single scattering is the high dynamic range between lit and shadowed regions due to the exponential attenuation of light along a ray. Moreover, light is often attenuated too strongly between a sample point and the camera, hampering the visibility of important features. Our algorithm employs an importance function to selectively illuminate important structures and make them visible from the camera. With the importance function, more light can be transmitted to the features of interest, while contextual structures cast shadows that provide visual cues for the perception of depth. At the same time, more scattered light is transmitted from the sample point to the camera to improve the primary visibility of important features. We formulate a minimization problem that automatically determines the extinction along a view or shadow ray to obtain a good balance between sufficient transmittance and attenuation. In contrast to previous approaches, we do not require a computationally expensive global optimization, but instead provide a closed-form solution for each sampled extinction value along a view or shadow ray, and thus achieve interactive performance.
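To make the role of extinction concrete, here is a toy Beer-Lambert shadow-ray computation in which the extinction of occluding samples is rescaled by a made-up importance weighting; it only illustrates why lowering extinction transmits more light to a feature of interest, not the paper's closed-form per-sample solution.

```python
# Toy single-scattering shadow computation along one shadow ray. The extinction
# values, importance weights, and rescaling rule are invented for illustration;
# the paper derives the adjusted extinction from a minimization problem instead.
import numpy as np

sigma_t    = np.array([0.5, 2.0, 2.0, 0.3, 0.1])   # extinction at shadow-ray samples
importance = np.array([0.1, 0.9, 0.9, 0.2, 0.1])   # importance of occluding samples
step = 0.2                                          # ray-marching step length

def transmittance(extinction):
    # T = exp(-sum(sigma_t * ds)): Beer-Lambert with discretized optical depth.
    return np.exp(-np.sum(extinction * step))

print("original shadowing:", 1.0 - transmittance(sigma_t))

# Reduce extinction where occluders are unimportant so more light reaches the
# shaded sample, while important occluders still cast shadows.
adjusted = sigma_t * (0.3 + 0.7 * importance)
print("importance-adjusted shadowing:", 1.0 - transmittance(adjusted))
```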
11. Zhou J, Wang X, Cui H, Gong P, Miao X, Miao Y, Xiao C, Chen F, Feng D. Topology-aware illumination design for volume rendering. BMC Bioinformatics 2016; 17:309. PMID: 27538893. PMCID: PMC4991004. DOI: 10.1186/s12859-016-1177-4.
Abstract
Background: Direct volume rendering is a flexible and effective approach to inspect large volumetric data such as medical and biological images. In conventional volume rendering, it is often time consuming to set up a meaningful illumination environment. Moreover, conventional illumination approaches usually assign the same values of the illumination model's variables to different structures manually and thus neglect the important illumination variations due to structural differences.
Results: We introduce a novel illumination design paradigm for volume rendering that uses topology to automate illumination parameter definitions meaningfully. The topological features are extracted from the contour tree of the input volumetric data. The automation of illumination design addresses four aspects: attenuation, distance, saliency, and contrast perception. To better distinguish structures and maximize differences in perceived illuminance between structures, a two-phase topology-aware illuminance perception contrast model is proposed based on the psychological concept of just-noticeable difference.
Conclusions: The proposed approach allows meaningful and efficient automatic generation of illumination in volume rendering. Our results show that the approach is more effective in depth and shape depiction and provides higher perceptual differences between structures.
Affiliation(s)
- Jianlong Zhou: Xi'an Jiaotong University City College, 8715 Shangji Road, Xi'an, Shaanxi 710018, People's Republic of China; DATA61, CSIRO, 13 Garden Street, Eveleigh, NSW 2015, Australia
- Xiuying Wang: The University of Sydney, 1 Cleveland Street, Darlington, NSW 2008, Australia
- Hui Cui: The University of Sydney, 1 Cleveland Street, Darlington, NSW 2008, Australia
- Peng Gong: The University of Sydney, 1 Cleveland Street, Darlington, NSW 2008, Australia
- Xianglin Miao: Xi'an Jiaotong University City College, 8715 Shangji Road, Xi'an, Shaanxi 710018, People's Republic of China
- Yalin Miao: Xi'an University of Technology, 5 Jinhua Nan Road, Xi'an, Shaanxi 710048, People's Republic of China
- Chun Xiao: Xiangtan University, Xiangtan, Hunan 411105, People's Republic of China
- Fang Chen: DATA61, CSIRO, 13 Garden Street, Eveleigh, NSW 2015, Australia
- Dagan Feng: The University of Sydney, 1 Cleveland Street, Darlington, NSW 2008, Australia
12. Ament M, Sadlo F, Dachsbacher C, Weiskopf D. Low-Pass Filtered Volumetric Shadows. IEEE Transactions on Visualization and Computer Graphics 2014; 20:2437-2446. PMID: 26356957. DOI: 10.1109/tvcg.2014.2346333.
Abstract
We present a novel and efficient method to compute volumetric soft shadows for interactive direct volume visualization to improve the perception of spatial depth. By direct control of the softness of volumetric shadows, disturbing visual patterns due to hard shadows can be avoided and users can adapt the illumination to their personal and application-specific requirements. We compute the shadowing of a point in the data set by employing spatial filtering of the optical depth over a finite area patch pointing toward each light source. Conceptually, the area patch spans a volumetric region that is sampled with shadow rays; afterward, the resulting optical depth values are convolved with a low-pass filter on the patch. In the numerical computation, however, to avoid expensive shadow ray marching, we show how to align and set up summed area tables for both directional and point light sources. Once computed, the summed area tables enable efficient evaluation of soft shadows for each point in constant time without shadow ray marching and the softness of the shadows can be controlled interactively. We integrated our method in a GPU-based volume renderer with ray casting from the camera, which offers interactive control of the transfer function, light source positions, and viewpoint, for both static and time-dependent data sets. Our results demonstrate the benefit of soft shadows for visualization to achieve user-controlled illumination with many-point lighting setups for improved perception combined with high rendering speed.
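A small sketch of the summed-area-table idea mentioned above: once the table over optical depth is built, the mean optical depth over any area patch is available in constant time; the data, patch, and the exponential mapping to transmittance are illustrative assumptions, not the paper's full light-source setup.

```python
# Sketch of a 2D summed-area table used to average optical depth over an area
# patch in O(1), the core trick for avoiding per-sample shadow-ray marching.
# The optical-depth grid and patch coordinates are placeholders.
import numpy as np

optical_depth = np.random.rand(128, 128).astype(np.float64)  # tau over a patch grid

# Summed-area table with an extra zero row/column for exclusive prefix sums.
sat = np.zeros((129, 129))
sat[1:, 1:] = optical_depth.cumsum(axis=0).cumsum(axis=1)

def box_mean(y0, x0, y1, x1):
    """Mean optical depth over rows y0..y1-1 and cols x0..x1-1 in constant time."""
    total = sat[y1, x1] - sat[y0, x1] - sat[y1, x0] + sat[y0, x0]
    return total / ((y1 - y0) * (x1 - x0))

# Soft shadow: transmittance from the low-pass filtered (box-averaged) optical depth.
tau_filtered = box_mean(32, 32, 96, 96)
print("soft transmittance:", np.exp(-tau_filtered))
```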
13. Bista S, Zhuo J, Gullapalli RP, Varshney A. Visualization of Brain Microstructure Through Spherical Harmonics Illumination of High Fidelity Spatio-Angular Fields. IEEE Transactions on Visualization and Computer Graphics 2014; 20:2516-2525. PMID: 26356965. DOI: 10.1109/tvcg.2014.2346411.
Abstract
Diffusion kurtosis imaging (DKI) is gaining rapid adoption in the medical imaging community due to its ability to measure the non-Gaussian property of water diffusion in biological tissues. Compared to traditional diffusion tensor imaging (DTI), DKI can provide additional details about the underlying microstructural characteristics of neural tissues. It has shown promising results in studies on changes in gray matter and mild traumatic brain injury, where DTI is often found to be inadequate. The DKI dataset, which comprises high-fidelity spatio-angular fields, is difficult to visualize. Glyph-based visualization techniques are commonly used for DTI datasets; however, due to the rapid changes in orientation, lighting, and occlusion, visually analyzing the much higher-fidelity DKI data is a challenge. In this paper, we provide a systematic way to manage, analyze, and visualize high-fidelity spatio-angular fields from DKI datasets, using spherical harmonics lighting functions to facilitate insights into the brain microstructure.
Affiliation(s)
- Jiachen Zhuo: University of Maryland School of Medicine at Baltimore
14. Zheng L, Chaudhari AJ, Badawi RD, Ma KL. Using global illumination in volume visualization of rheumatoid arthritis CT data. IEEE Computer Graphics and Applications 2014; 34:16-23. PMID: 25388232. PMCID: PMC4240269. DOI: 10.1109/mcg.2014.120.
Abstract
Proper lighting in rendering is essential for visualizing 3D objects, but most visualization software tools still employ simple lighting models. The advent of hardware-accelerated advanced lighting suggests that volume visualization can be truly usable for clinical work. Researchers studied how volume rendering incorporating global illumination impacted perception of bone surface features captured by x-ray computed-tomography scanners for clinical monitoring of rheumatoid arthritis patients. The results, evaluated by clinical researchers familiar with the disease and medical-image interpretation, indicate that interactive visualization with global illumination helped the researchers derive more accurate interpretations of the image data. With clinical needs and the recent advancement of volume visualization technology, this study is timely and points the way for further research.
Affiliation(s)
- Lin Zheng: PhD student in computer science at the University of California, Davis
- Ramsey D. Badawi: Associate professor of radiology and biomedical engineering at the University of California, Davis
- Kwan-Liu Ma: Professor of computer science at the University of California, Davis; member of the CG&A editorial board