1
Ueda Y, Ogawa D, Ishida T. Patient Re-Identification Based on Deep Metric Learning in Trunk Computed Tomography Images Acquired from Devices from Different Vendors. J Imaging Inform Med 2024; 37:1124-1136. PMID: 38366292; PMCID: PMC11169436; DOI: 10.1007/s10278-024-01017-w.
Abstract
During radiologic interpretation, radiologists read patient identifiers from the metadata of medical images to recognize the patient being examined. However, it is difficult for radiologists to notice "incorrect" metadata and patient identification errors. We propose a method that uses a patient re-identification technique to link the correct metadata to a set of trunk computed tomography images whose metadata have been lost or wrongly assigned. The method is based on feature vector matching with a deep feature extractor adapted to the cross-vendor domain contained in the scout computed tomography image dataset. To identify "incorrect" metadata, we calculated the highest similarity score between a follow-up image and a stored baseline image linked to the correct metadata. Re-identification performance was evaluated by testing whether the image with the highest similarity score belongs to the same patient, i.e., whether the metadata attached to the image are correct. The similarity scores between follow-up and baseline images of the same "correct" patients were generally greater than those of "incorrect" patients. The proposed feature extractor was sufficiently robust to extract individually distinguishable features without additional training, even for unknown scout computed tomography images. Furthermore, the proposed augmentation technique, which incorporates changes in width magnification caused by changes in patient table height between examinations, further improved re-identification performance on the subset from different vendors. We believe that metadata checking using the proposed method would help detect metadata with an "incorrect" patient identifier assigned through unavoidable errors such as human error.
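The matching step described in this abstract can be illustrated with a minimal sketch: embeddings produced by a deep feature extractor are compared by cosine similarity, and the stored baseline with the highest score determines the patient identity attached to the metadata. The function name, threshold value, and data layout are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def reidentify(follow_vec, baseline_vecs, baseline_ids, threshold=0.5):
    """Match one follow-up embedding against stored baseline embeddings.

    Returns (patient_id, score) for the highest cosine similarity, or
    (None, score) when no baseline clears the acceptance threshold.
    """
    f = follow_vec / np.linalg.norm(follow_vec)
    b = baseline_vecs / np.linalg.norm(baseline_vecs, axis=1, keepdims=True)
    scores = b @ f                       # cosine similarity to every baseline
    best = int(np.argmax(scores))
    if scores[best] >= threshold:
        return baseline_ids[best], float(scores[best])
    return None, float(scores[best])
```

In a metadata check, a mismatch between the returned identifier and the identifier stored in the image metadata would flag the examination for review.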
Affiliation(s)
- Yasuyuki Ueda
- Division of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita, Osaka, 565-0871, Japan.
- Daiki Ogawa
- School of Allied Health Sciences, Faculty of Medicine, Osaka University, 1-7 Yamadaoka, Suita, Osaka, 565-0871, Japan
- Takayuki Ishida
- Division of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita, Osaka, 565-0871, Japan
2
Ueda Y, Morishita J. Patient Identification Based on Deep Metric Learning for Preventing Human Errors in Follow-up X-Ray Examinations. J Digit Imaging 2023; 36:1941-1953. PMID: 37308675; PMCID: PMC10501972; DOI: 10.1007/s10278-023-00850-9.
Abstract
Biological fingerprints extracted from clinical images can be used for patient identity verification to detect misfiled clinical images in picture archiving and communication systems. However, such methods have not been incorporated into clinical use, and their performance can degrade with variability in the clinical images. Deep learning can be used to improve the performance of these methods. A novel method is proposed to automatically identify individuals among examined patients using posteroanterior (PA) and anteroposterior (AP) chest X-ray images. The proposed method uses deep metric learning based on a deep convolutional neural network (DCNN) to overcome the extreme classification requirements of patient validation and identification. The method, trained on the NIH chest X-ray dataset (ChestX-ray8), comprises three steps: preprocessing, DCNN feature extraction with an EfficientNetV2-S backbone, and classification with deep metric learning. It was evaluated using two public datasets and two clinical chest X-ray image datasets containing data from patients undergoing screening and hospital care. A 1280-dimensional feature extractor pretrained for 300 epochs performed best, with an area under the receiver operating characteristic curve of 0.9894, an equal error rate of 0.0269, and a top-1 accuracy of 0.839 on the PadChest dataset, which contains both PA and AP view positions. These findings provide considerable insight into the development of automated patient identification to reduce the possibility of medical malpractice due to human errors.
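The equal error rate reported above is the operating point where the false accept rate equals the false reject rate. A minimal sketch of that computation over genuine (same-patient) and impostor (different-patient) similarity scores, with illustrative variable names:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Scan candidate thresholds and return the rate where the
    false accept rate and false reject rate are (nearly) equal."""
    genuine = np.asarray(genuine, float)
    impostor = np.asarray(impostor, float)
    best_gap, eer = np.inf, None
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = np.mean(impostor >= t)   # impostors wrongly accepted
        frr = np.mean(genuine < t)     # genuine pairs wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer
```

With well-separated score distributions the EER approaches zero; the paper's 0.0269 indicates a small but nonzero overlap between genuine and impostor scores.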
Affiliation(s)
- Yasuyuki Ueda
- Department of Medical Physics and Engineering, Area of Medical Imaging Technology and Science, Graduate School of Medicine, Division of Health Sciences, Osaka University, Osaka, Japan.
- Junji Morishita
- Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, Fukuoka, Japan
3
Ueda Y, Morishita J, Kudomi S. Biological fingerprint for patient verification using trunk scout views at various scan ranges in computed tomography. Radiol Phys Technol 2022; 15:398-408. PMID: 36155890; DOI: 10.1007/s12194-022-00682-2.
Abstract
Immediate verification of whether the patient being examined is correct is desirable, even if the scan range changes between examinations of the same patient. This study proposes an advanced biological fingerprint technique for rapid and reliable verification across various scan ranges in computed tomography (CT) of the torso of the same patient. The method comprises the following steps: geometric correction of different scans, local feature extraction, mismatch elimination, and similarity evaluation. In the first two steps, the geometric magnification was corrected according to the scanner table height, and local maxima were extracted as local features. In the third step, local features from the follow-up scout image are matched to those in the corresponding baseline scout image via template matching, and outliers are eliminated with a robust estimator. The ratio of inliers between the baseline and follow-up scout images was assessed as the similarity score. The clinical dataset, comprising chest, abdomen-pelvis, and chest-abdomen-pelvis scans, included 600 patients (372 men, 68 ± 12 years) who underwent two routine torso CT examinations. The highest area under the receiver operating characteristic curve (AUC) was 0.996, which was sufficient for patient verification. Moreover, the verification results were comparable to those of the conventional method, which uses scout images of the same scan range. Patient identity verification was thus achieved before the main scan, even in follow-up torso CT with different scan ranges.
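The inlier-ratio similarity score described above can be sketched as follows. The sketch assumes local features have already been paired by template matching and uses a median translation as a stand-in robust estimator; the paper's actual robust estimator, feature detector, and tolerance are not specified here, so all names and the pixel tolerance are illustrative.

```python
import numpy as np

def inlier_ratio(base_pts, follow_pts, tol=3.0):
    """Similarity score as the fraction of matched local features that
    agree with a robustly estimated global translation.

    base_pts[i] is assumed to have been matched to follow_pts[i] by
    template matching; both are (N, 2) arrays of (x, y) coordinates.
    """
    disp = follow_pts - base_pts                 # per-match displacement
    t = np.median(disp, axis=0)                  # robust translation estimate
    residual = np.linalg.norm(disp - t, axis=1)  # deviation from the model
    return float(np.mean(residual <= tol))       # inliers / all matches
```

Correct-patient pairs yield mostly consistent displacements (a ratio near 1), while wrong-patient pairs produce scattered matches and a low ratio.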
Affiliation(s)
- Yasuyuki Ueda
- Department of Medical Physics and Engineering, Area of Medical Imaging Technology and Science, Division of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita, Osaka, 565-0871, Japan.
- Junji Morishita
- Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, Fukuoka, 812-8582, Japan
- Shohei Kudomi
- Department of Radiological Technology, Yamaguchi University Hospital, 1-1-1 Minamikogushi, Ube, Yamaguchi, 755-8505, Japan
4
Morishita J, Ueda Y. New solutions for automated image recognition and identification: challenges to radiologic technology and forensic pathology. Radiol Phys Technol 2021; 14:123-133. PMID: 33710498; DOI: 10.1007/s12194-021-00611-9.
Abstract
This paper outlines the history of biometrics for personal identification, the current status of the initial biological fingerprint techniques for digital chest radiography, and patient verification during medical imaging, such as computed tomography and magnetic resonance imaging. Automated image recognition and identification developed for clinical images without metadata could also be applied to the identification of victims in mass disasters or other unidentified individuals. The development of methods that are adaptive to a wide range of recent imaging modalities in the fields of radiologic technology, patient safety, forensic pathology, and forensic odontology is still in its early stages. However, its importance in practice will continue to increase in the future.
Affiliation(s)
- Junji Morishita
- Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, Fukuoka, 812-8582, Japan.
- Yasuyuki Ueda
- Department of Medical Physics and Engineering, Area of Medical Imaging Technology and Science, Division of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita, Osaka, 565-0871, Japan.
5
Ueda Y, Morishita J, Hongyo T. Biological fingerprint using scout computed tomographic images for positive patient identification. Med Phys 2019; 46:4600-4609. PMID: 31442297; DOI: 10.1002/mp.13779.
Abstract
PURPOSE Management of patient identification is an important issue that should be addressed to ensure patient safety in modern healthcare systems. Patient identification errors can be mainly attributed to human errors or system problems. An error-tolerant system, such as a biometric system, should be able to prevent or mitigate potential misidentification. Herein, we propose the use of scout computed tomography (CT) images for biometric patient identity verification and present quantitative accuracy outcomes of this technique in a clinical setting. METHODS Scout CT images acquired from routine examinations of the chest, abdomen, and pelvis were used as biological fingerprints. We evaluated the resemblance of the follow-up image to the baseline image by comparing estimates of the image characteristics using local feature extraction and matching algorithms. Verification performance was evaluated using receiver operating characteristic (ROC) curves, the area under the ROC curve (AUC), and equal error rates (EER). Closed-set identification performance was evaluated using cumulative match characteristic curves and rank-one identification rates (R1). RESULTS A total of 619 patients (383 males, 236 females, age range 21-92 years) who underwent baseline and follow-up chest-abdomen-pelvis CT scans on the same CT system were analyzed for verification and closed-set identification. The highest AUC, EER, and R1 were 0.998, 1.22%, and 99.7%, respectively, in the considered evaluation range. Furthermore, to determine whether performance decreased in the presence of metal artifacts, the patients were classified into two groups, namely scout images with (255 patients) and without (364 patients) metal artifacts, and the two ROC curves were compared using the unpaired DeLong test. No significant differences were found between the ROC performances in the presence and absence of metal artifacts when a sufficient number of local features was used. The performance of our proposed technique was comparable to that of conventional biometric methods when using chest, abdomen, and pelvis scout CT images. Thus, the method has the potential to uncover inadequate patient information using available chest, abdomen, and pelvis scout CT images; moreover, it can be applied widely to routine adult CT scans in which no significant body structure changes due to illness or aging are present. CONCLUSIONS Our proposed method can obtain accurate patient information available at the point of care and help healthcare providers verify whether a patient's identity is matched accurately. We believe the method to be a key solution to patient misidentification problems.
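The two headline metrics above, AUC for verification and rank-one rate (R1) for closed-set identification, can be computed from raw similarity scores as follows. This is a generic sketch of the standard definitions (AUC via the Mann-Whitney formulation), not the study's own evaluation code; array layouts are assumptions.

```python
import numpy as np

def auc_mann_whitney(genuine, impostor):
    """AUC as the probability that a random genuine score exceeds a
    random impostor score (ties counted as half)."""
    g = np.asarray(genuine, float)[:, None]
    i = np.asarray(impostor, float)[None, :]
    return float(np.mean(g > i) + 0.5 * np.mean(g == i))

def rank_one_rate(score_matrix, true_idx):
    """Closed-set identification: fraction of probe rows whose highest
    score lands on the correct gallery column."""
    pred = np.argmax(score_matrix, axis=1)
    return float(np.mean(pred == np.asarray(true_idx)))
```

A perfectly separating system gives AUC = 1.0 and R1 = 1.0; the paper reports 0.998 and 99.7%, respectively.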
Affiliation(s)
- Yasuyuki Ueda
- Division of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita, Osaka, 565-0871, Japan
- Junji Morishita
- Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Tadashi Hongyo
- Division of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita, Osaka, 565-0871, Japan
6
Shimizu Y, Morishita J. Development of a method of automated extraction of biological fingerprints from chest radiographs as preprocessing of patient recognition and identification. Radiol Phys Technol 2017; 10:376-381. DOI: 10.1007/s12194-017-0400-y.
7
Evaluation of the usefulness of modified biological fingerprints in chest radiographs for patient recognition and identification. Radiol Phys Technol 2016; 9:240-4. DOI: 10.1007/s12194-016-0355-4.
8
9
Okumura E, Aridome K, Iwakiri C, Oda K, Nakamura K, Yamamoto M. [Development of an automated patient recognition method for chest CT images using a template-matching technique]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2014; 70:1125-34. PMID: 25327422; DOI: 10.6009/jjrt.2014_jsrt_70.10.1125.
Abstract
If patient information, such as the identification number or patient name, has been entered incorrectly in a picture archiving and communication system (PACS) environment, the image may be stored in the wrong place. To prevent such misfiling, we developed an automated patient recognition system for chest CT images. The image database consisted of 100 cases with current and previous chest CT images. A volume of interest (VOI) measuring 40 × 40 pixels was selected from the left lung region, bronchus region, and right lung region. Next, the overall lung region and these three regions in a current chest CT image were used as templates for determining the residual value with the corresponding four regions in previous chest CT images. To ensure separation between the same and different patients, we applied a combined analysis that employed a rule-based plus artificial neural network (ANN) method. The overall performance of the developed method was examined in terms of receiver operating characteristic (ROC) curves. The performance of the rule-based plus ANN method using a combination of the four regions was higher than that obtained using a rule-based method on these four regions separately. The automated patient recognition system using the rule-based plus ANN method achieved an area under the curve (AUC) value of 0.987. This automated patient recognition method for chest CT images is promising for helping to retrieve misfiled patient images, especially in a PACS environment.
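The residual-based comparison above can be sketched with a toy example: a mean absolute residual between corresponding VOIs, combined by a simple threshold rule per region. The ANN stage, the actual VOI selection, and the threshold values are all omitted or invented here; this is only an illustration of the template-residual idea, not the study's implementation.

```python
import numpy as np

def voi_residual(current_voi, previous_voi):
    """Mean absolute residual between two same-sized VOIs.
    Lower values indicate more similar anatomy."""
    return float(np.mean(np.abs(current_voi.astype(float) -
                                previous_voi.astype(float))))

def same_patient_rule(residuals, thresholds):
    """Toy rule-based combination: declare a match only when every
    regional residual falls below its (hypothetical) threshold."""
    return all(r < t for r, t in zip(residuals, thresholds))
```

In the paper, this rule-based stage is augmented with an ANN trained on the four regional residuals, which is what lifts the AUC to 0.987.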
Affiliation(s)
- Eiichiro Okumura
- Department of Medical Radiological Technology, Kagoshima Medical Technology College
10
A multiobserver study of the effects of including point-of-care patient photographs with portable radiography: a means to detect wrong-patient errors. Acad Radiol 2014; 21:1038-47. PMID: 25018076; DOI: 10.1016/j.acra.2014.03.006.
Abstract
RATIONALE AND OBJECTIVES To evaluate whether the presence of facial photographs obtained at the point of care of portable radiography leads to increased detection of wrong-patient errors. MATERIALS AND METHODS In this institutional review board-approved study, 166 radiograph-photograph combinations were obtained from 30 patients. Consecutive radiographs from the same patients yielded 83 unique pairs (i.e., a new radiograph and a prior, comparison radiograph) for interpretation. To simulate wrong-patient errors, mismatched pairs were generated by pairing radiographs from different patients chosen randomly from the sample. Ninety radiologists each interpreted a unique randomly chosen set of 10 radiographic pairs containing up to 10% mismatches (i.e., error pairs). Radiologists were randomly assigned to interpret radiographs with or without photographs. The number of mismatches identified and the interpretation times were recorded. RESULTS Ninety radiologists with 21 ± 10 (mean ± standard deviation) years of experience participated in this observer study. With the introduction of photographs, the proportion of errors detected increased from 31% (9 of 29) to 77% (23 of 30; P = .006). The odds ratio for detection of error with photographs versus without photographs was 7.3 (95% confidence interval: 2.29-23.18). Observer qualifications, training, or practice in cardiothoracic radiology did not influence sensitivity for error detection. There was no significant difference in interpretation time between studies without photographs and those with photographs (60 ± 22 vs. 61 ± 25 seconds; P = .77). CONCLUSIONS In this observer study, facial photographs obtained simultaneously with portable chest radiographs increased the detection of wrong-patient errors without a substantial increase in interpretation time. This technique offers a potential means of increasing patient safety through correct patient identification.
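The odds ratio and confidence interval above follow from the detected/missed counts in the abstract (23 of 30 with photographs, 9 of 29 without) via the standard Woolf log-scale interval. This is a generic epidemiological calculation, not code from the study:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a 95% Woolf (log-scale) CI.

    a/b: errors detected / missed with photographs,
    c/d: errors detected / missed without photographs.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

Plugging in a=23, b=7, c=9, d=20 reproduces the reported OR of 7.3 with CI 2.30-23.18.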