1. Wang Z, Yang F, Zhang W, Xiong K, Yang S. Towards in vivo photoacoustic human imaging: shining a new light on clinical diagnostics. Fundamental Research 2023. DOI: 10.1016/j.fmre.2023.01.008
2. Schellenberg M, Dreher KK, Holzwarth N, Isensee F, Reinke A, Schreck N, Seitel A, Tizabi MD, Maier-Hein L, Gröhl J. Semantic segmentation of multispectral photoacoustic images using deep learning. Photoacoustics 2022; 26:100341. PMID: 35371919; PMCID: PMC8968659; DOI: 10.1016/j.pacs.2022.100341
Abstract
Photoacoustic (PA) imaging has the potential to revolutionize functional medical imaging in healthcare due to the valuable information on tissue physiology contained in multispectral photoacoustic measurements. Clinical translation of the technology requires conversion of the high-dimensional acquired data into clinically relevant and interpretable information. In this work, we present a deep learning-based approach to semantic segmentation of multispectral photoacoustic images to facilitate image interpretability. Manually annotated photoacoustic and ultrasound imaging data are used as reference and enable the training of a deep learning-based segmentation algorithm in a supervised manner. Based on a validation study with experimentally acquired data from 16 healthy human volunteers, we show that automatic tissue segmentation can be used to create powerful analyses and visualizations of multispectral photoacoustic images. Due to the intuitive representation of high-dimensional information, such a preprocessing algorithm could be a valuable means to facilitate the clinical translation of photoacoustic imaging.
Affiliation(s)
- Melanie Schellenberg
  - Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
  - Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
  - HIDSS4Health - Helmholtz Information and Data Science School for Health, Heidelberg, Germany
  - Corresponding author at: Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany.
- Kris K. Dreher
  - Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
  - Faculty of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
- Niklas Holzwarth
  - Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Fabian Isensee
  - HI Applied Computer Vision Lab, Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Annika Reinke
  - Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
  - Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
  - HI Applied Computer Vision Lab, Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Nicholas Schreck
  - Division of Biostatistics, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Alexander Seitel
  - Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Minu D. Tizabi
  - Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Lena Maier-Hein
  - Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
  - Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
  - HIDSS4Health - Helmholtz Information and Data Science School for Health, Heidelberg, Germany
  - HI Applied Computer Vision Lab, Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
  - Medical Faculty, Heidelberg University, Heidelberg, Germany
  - Corresponding author at: Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany.
- Janek Gröhl
  - Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
3. Tian L, Hunt B, Bell MAL, Yi J, Smith JT, Ochoa M, Intes X, Durr NJ. Deep Learning in Biomedical Optics. Lasers Surg Med 2021; 53:748-775. PMID: 34015146; PMCID: PMC8273152; DOI: 10.1002/lsm.23414
Abstract
This article reviews deep learning applications in biomedical optics with a particular emphasis on image formation. The review is organized by imaging domains within biomedical optics and includes microscopy, fluorescence lifetime imaging, in vivo microscopy, widefield endoscopy, optical coherence tomography, photoacoustic imaging, diffuse tomography, and functional optical brain imaging. For each of these domains, we summarize how deep learning has been applied and highlight methods by which deep learning can enable new capabilities for optics in medicine. Challenges and opportunities to improve translation and adoption of deep learning in biomedical optics are also summarized.
Affiliation(s)
- L. Tian
  - Department of Electrical and Computer Engineering, Boston University, Boston, MA, USA
- B. Hunt
  - Thayer School of Engineering, Dartmouth College, Hanover, NH, USA
- M. A. L. Bell
  - Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
  - Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
  - Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- J. Yi
  - Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
  - Department of Ophthalmology, Johns Hopkins University, Baltimore, MD, USA
- J. T. Smith
  - Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- M. Ochoa
  - Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- X. Intes
  - Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- N. J. Durr
  - Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
  - Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
4. Attia ABE, Moothanchery M, Li X, Yew YW, Thng STG, Dinish U, Olivo M. Microvascular imaging and monitoring of hemodynamic changes in the skin during arterial-venous occlusion using multispectral raster-scanning optoacoustic mesoscopy. Photoacoustics 2021; 22:100268. PMID: 34026491; PMCID: PMC8122174; DOI: 10.1016/j.pacs.2021.100268
Abstract
The ability to monitor oxygen delivery in the microvasculature plays a vital role in assessing the viability of skin tissue and its probability of recovery. With currently available clinical imaging tools, it is difficult to observe hemodynamic regulation in the peripheral vessels non-invasively. Here we propose a novel multispectral raster-scanning optoacoustic mesoscopy (RSOM) system for noninvasive clinical monitoring of hemodynamic changes in the skin microvasculature: oxy-hemoglobin (HbO2), deoxy-hemoglobin (Hb), total hemoglobin (HbT), and oxygen saturation (rsO2). We present high-resolution images of hemoglobin distribution in the skin microvasculature from six healthy volunteers during venous and arterial occlusion, simulating systemic vascular diseases. During venous occlusion, the Hb and HbO2 optoacoustic signals increased with time, followed by a drop after cuff deflation. During arterial occlusion, Hb increased and HbO2 decreased, followed by a drop in Hb and a jump in HbO2 after cuff deflation. rsO2 decreased during both venous and arterial occlusion and increased after occlusion release. Based on this proof-of-concept study, we propose multispectral RSOM as a novel tool for measuring high-resolution hemodynamic changes in the microvasculature, for investigating the peripheral effects of systemic vascular diseases, and for monitoring inflammatory skin diseases and their therapeutic interventions.
Affiliation(s)
- Amalina Binte Ebrahim Attia
  - Laboratory of Bio Optical Imaging, Singapore Bioimaging Consortium, Agency for Science, Technology and Research (A*STAR), Singapore
- Mohesh Moothanchery
  - Laboratory of Bio Optical Imaging, Singapore Bioimaging Consortium, Agency for Science, Technology and Research (A*STAR), Singapore
- Xiuting Li
  - Laboratory of Bio Optical Imaging, Singapore Bioimaging Consortium, Agency for Science, Technology and Research (A*STAR), Singapore
- U.S. Dinish
  - Laboratory of Bio Optical Imaging, Singapore Bioimaging Consortium, Agency for Science, Technology and Research (A*STAR), Singapore
  - Corresponding author.
- Malini Olivo
  - Laboratory of Bio Optical Imaging, Singapore Bioimaging Consortium, Agency for Science, Technology and Research (A*STAR), Singapore
  - Corresponding author.
5. Gröhl J, Schellenberg M, Dreher K, Maier-Hein L. Deep learning for biomedical photoacoustic imaging: A review. Photoacoustics 2021; 22:100241. PMID: 33717977; PMCID: PMC7932894; DOI: 10.1016/j.pacs.2021.100241
Abstract
Photoacoustic imaging (PAI) is a promising emerging imaging modality that enables spatially resolved imaging of optical tissue properties up to several centimeters deep in tissue, creating the potential for numerous exciting clinical applications. However, extraction of relevant tissue parameters from the raw data requires solving inverse image reconstruction problems, which have proven extremely difficult. The application of deep learning methods has recently exploded in popularity, leading to impressive successes in the context of medical imaging and finding first use in the field of PAI. Deep learning methods possess unique advantages that can facilitate the clinical translation of PAI, such as extremely fast computation times and the fact that they can be adapted to any given problem. In this review, we examine the current state of the art of deep learning in PAI and identify potential directions of research that will help to reach the goal of clinical applicability.
Affiliation(s)
- Janek Gröhl
  - German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
  - Heidelberg University, Medical Faculty, Heidelberg, Germany
- Melanie Schellenberg
  - German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Kris Dreher
  - German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
  - Heidelberg University, Faculty of Physics and Astronomy, Heidelberg, Germany
- Lena Maier-Hein
  - German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
  - Heidelberg University, Medical Faculty, Heidelberg, Germany
  - Heidelberg University, Faculty of Mathematics and Computer Science, Heidelberg, Germany
6. Regensburger AP, Brown E, Krönke G, Waldner MJ, Knieling F. Optoacoustic Imaging in Inflammation. Biomedicines 2021; 9:483. PMID: 33924983; PMCID: PMC8145174; DOI: 10.3390/biomedicines9050483
Abstract
Optoacoustic or photoacoustic imaging (OAI/PAI) is a technology which enables non-invasive visualization of laser-illuminated tissue by the detection of acoustic signals. The combination of "light in" and "sound out" offers unprecedented scalability with a high penetration depth and resolution. The wide range of biomedical applications makes this technology a versatile tool for preclinical and clinical research. Particularly when imaging inflammation, the technology offers advantages over current clinical methods to diagnose, stage, and monitor physiological and pathophysiological processes. This review discusses the clinical perspective of using OAI in the context of imaging inflammation as well as in current and emerging translational applications.
Affiliation(s)
- Adrian P. Regensburger
  - Department of Pediatrics and Adolescent Medicine, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Loschgestr. 15, D-91054 Erlangen, Germany
- Emma Brown
  - Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge CB3 0HE, UK
  - Cancer Research UK Cambridge Institute, University of Cambridge, Li Ka Shing Centre, Robinson Way, Cambridge CB2 0RE, UK
- Gerhard Krönke
  - Department of Medicine 3, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Ulmenweg 18, D-91054 Erlangen, Germany
- Maximilian J. Waldner
  - Department of Medicine 1, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Ulmenweg 18, D-91054 Erlangen, Germany
- Ferdinand Knieling
  - Department of Pediatrics and Adolescent Medicine, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Loschgestr. 15, D-91054 Erlangen, Germany
7. Deng H, Qiao H, Dai Q, Ma C. Deep learning in photoacoustic imaging: a review. Journal of Biomedical Optics 2021; 26(4):040901. PMID: 33837678; PMCID: PMC8033250; DOI: 10.1117/1.JBO.26.4.040901
Abstract
SIGNIFICANCE: Photoacoustic (PA) imaging can provide structural, functional, and molecular information for preclinical and clinical studies. For PA imaging (PAI), non-ideal signal detection deteriorates image quality, and quantitative PAI (QPAI) remains challenging due to the unknown light fluence spectra in deep tissue. In recent years, deep learning (DL) has shown outstanding performance when implemented in PAI, with applications in image reconstruction, quantification, and understanding.
AIM: We provide (i) a comprehensive overview of the DL techniques that have been applied in PAI, (ii) references for designing DL models for various PAI tasks, and (iii) a summary of future challenges and opportunities.
APPROACH: Papers published before November 2020 on applying DL in PAI were reviewed and categorized into three types: image understanding, reconstruction of the initial pressure distribution, and QPAI.
RESULTS: When applied in PAI, DL can effectively process images, improve reconstruction quality, fuse information, and assist quantitative analysis.
CONCLUSION: DL has become a powerful tool in PAI. With the development of DL theory and technology, it will continue to boost performance and facilitate the clinical translation of PAI.
Affiliation(s)
- Handi Deng
  - Tsinghua University, Department of Electronic Engineering, Haidian, Beijing, China
- Hui Qiao
  - Tsinghua University, Department of Automation, Haidian, Beijing, China
  - Tsinghua University, Institute for Brain and Cognitive Science, Beijing, China
  - Tsinghua University, Beijing Laboratory of Brain and Cognitive Intelligence, Beijing, China
  - Tsinghua University, Beijing Key Laboratory of Multi-Dimension and Multi-Scale Computational Photography, Beijing, China
- Qionghai Dai
  - Tsinghua University, Department of Automation, Haidian, Beijing, China
  - Tsinghua University, Institute for Brain and Cognitive Science, Beijing, China
  - Tsinghua University, Beijing Laboratory of Brain and Cognitive Intelligence, Beijing, China
  - Tsinghua University, Beijing Key Laboratory of Multi-Dimension and Multi-Scale Computational Photography, Beijing, China
- Cheng Ma
  - Tsinghua University, Department of Electronic Engineering, Haidian, Beijing, China
  - Beijing Innovation Center for Future Chip, Beijing, China
8. Yang JM, Ghim CM. Photoacoustic Tomography Opening New Paradigms in Biomedical Imaging. Adv Exp Med Biol 2021; 1310:239-341. PMID: 33834440; DOI: 10.1007/978-981-33-6064-8_11
Abstract
After the emergence of ultrasound, X-ray CT, PET, and MRI, photoacoustic tomography (PAT) is now in a phase of exponential growth and is expected to mature into another mainstream clinical imaging modality. By combining the high-contrast benefit of optical imaging with the high-resolution deep imaging capability of ultrasound, PAT can provide unprecedented anatomical image contrast at clinically relevant depths and gives access to a variety of functional and molecular imaging information that is not possible with conventional imaging modalities. With these strengths, PAT has achieved numerous breakthroughs in various biomedical applications and has provided new technical platforms that may resolve unmet clinical needs. In this chapter, we provide an overview of the development of PAT technology for several major biomedical applications and offer an approximate projection of the future of PAT.
Affiliation(s)
- Joon-Mo Yang
  - Center for Photoacoustic Medical Instruments, Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea
- Cheol-Min Ghim
  - Department of Physics, School of Natural Science, Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea
9. Li F, Shi JX, Yan L, Wang YG, Zhang XD, Jiang MS, Wu ZZ, Zhou KQ. Lesion-aware convolutional neural network for chest radiograph classification. Clin Radiol 2020; 76:155.e1-155.e14. PMID: 33077154; DOI: 10.1016/j.crad.2020.08.027
Abstract
AIM: To investigate the performance of a deep-learning approach termed lesion-aware convolutional neural network (LACNN) in identifying 14 different thoracic diseases on chest radiographs (CXRs).
MATERIALS AND METHODS: In total, 10,738 CXRs of 3,526 patients were collected retrospectively. Of these, 1,937 CXRs of 598 patients were selected for training and optimising the lesion-detection network (LDN) of LACNN. The remaining 8,801 CXRs from 2,928 patients were used to train and test the classification network of LACNN. The discriminative performance of the deep-learning approach was compared with that of radiologists. In addition, its generalisation was validated on the independent public dataset ChestX-ray14. The decision-making process of the model was visualised by occlusion testing, and the effect of integrating CXRs with non-image data on model performance was also investigated. In a systematic evaluation, F1 score, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were calculated.
RESULTS: The model achieved statistically significantly higher AUCs than the radiologists for atelectasis, mass, and nodule, with values of 0.831 (95% confidence interval [CI]: 0.807-0.855), 0.959 (95% CI: 0.944-0.974), and 0.928 (95% CI: 0.906-0.950), respectively. For the other 11 pathologies, there were no statistically significant differences. The average time to classify each CXR in the testing dataset was substantially longer for the radiologists (~35 seconds) than for the LACNN (~0.197 seconds). On the ChestX-ray14 dataset, the model also showed competitive performance compared with other state-of-the-art deep-learning approaches, and performance improved slightly when non-image data were introduced.
CONCLUSION: The proposed LACNN achieved radiologist-level performance in identifying thoracic diseases on CXRs and could potentially expand patient access to CXR diagnostics.
Affiliation(s)
- F Li
  - School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
- J-X Shi
  - School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
- L Yan
  - School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Y-G Wang
  - School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
- X-D Zhang
  - School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
- M-S Jiang
  - School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Z-Z Wu
  - Department of Precision Mechanical Engineering, Shanghai University, Shanghai, China
- K-Q Zhou
  - Liver Cancer Institute, Zhongshan Hospital, Shanghai, China
10. Attia ABE, Bi R, Dev K, Du Y, Olivo M. Clinical noninvasive imaging and spectroscopic tools for dermatological applications: Review of recent progress. Translational Biophotonics 2020. DOI: 10.1002/tbio.202000010
Affiliation(s)
- Amalina Binte Ebrahim Attia
  - Lab of Bio-Optical Imaging, Singapore Bioimaging Consortium (SBIC), Agency for Science, Technology and Research (A*STAR), Singapore
- Renzhe Bi
  - Lab of Bio-Optical Imaging, Singapore Bioimaging Consortium (SBIC), Agency for Science, Technology and Research (A*STAR), Singapore
- Kapil Dev
  - Lab of Bio-Optical Imaging, Singapore Bioimaging Consortium (SBIC), Agency for Science, Technology and Research (A*STAR), Singapore
- Malini Olivo
  - Lab of Bio-Optical Imaging, Singapore Bioimaging Consortium (SBIC), Agency for Science, Technology and Research (A*STAR), Singapore
11. Gorzelanny C, Mess C, Schneider SW, Huck V, Brandner JM. Skin Barriers in Dermal Drug Delivery: Which Barriers Have to Be Overcome and How Can We Measure Them? Pharmaceutics 2020; 12:E684. PMID: 32698388; PMCID: PMC7407329; DOI: 10.3390/pharmaceutics12070684
Abstract
Although drugs are required in various skin compartments, such as the viable epidermis, dermis, or hair follicles, to efficiently treat skin diseases, drug delivery into and across the skin remains challenging. An improved understanding of skin barrier physiology is mandatory to optimize drug penetration and permeation. The various barriers of the skin must be known in detail, which means methods are needed to measure their functionality and the outside-in or inside-out passage of molecules through them. In this review, we summarize our current knowledge of the mechanical barriers, i.e., the stratum corneum and tight junctions, in the interfollicular epidermis, hair follicles, and glands. Furthermore, we discuss the barrier properties of the basement membrane and dermal blood vessels, and describe barrier alterations found in the skin of patients with atopic dermatitis. Finally, we critically compare the current applicability of several physical, biochemical, and microscopic methods, such as transepidermal water loss, impedance spectroscopy, Raman spectroscopy, immunohistochemical staining, optical coherence microscopy, and multiphoton microscopy, for distinctly addressing the different barriers and measuring permeation through them in vitro and in vivo.
Affiliation(s)
- Johanna M. Brandner
  - Department of Dermatology and Venerology, Center for Internal Medicine, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany (affiliation shared with C.G., C.M., S.W.S., and V.H.)