1. Zhang S. AI-assisted early screening, diagnosis, and intervention for autism in young children. Front Psychiatry 2025;16:1513809. PMID: 40297334; PMCID: PMC12036476; DOI: 10.3389/fpsyt.2025.1513809.
Abstract
Autism is a serious threat to an individual's physical and mental health. Early screening, diagnosis, and intervention can effectively reduce the level of deficits in individuals with autism. However, traditional methods of screening, diagnosis, and intervention rely on the professionalism of psychiatrists and require a great deal of time and effort, resulting in a large proportion of individuals with autism being diagnosed after the age of 6. Artificial intelligence (AI) combined with machine learning is being used to improve the efficiency of early screening, diagnosis, and intervention of autism in young children. This review aims to summarize AI-assisted methods for early screening, diagnosis, and intervention of autism in young children (infants, toddlers, and preschoolers). To achieve early screening and diagnosis of autism in young children, AI methods have built predictive models to improve the automation of early behavioral diagnosis, analyzed brain imaging and genetic data to break the age barrier for diagnosis, and established intelligent screening systems for early mass screening. For early intervention of autism in young children, AI methods built intelligent education systems to optimize the teaching and learning environment and provide individualized interventions, constructed intelligent monitoring systems for dynamic tracking, and created intelligent support systems to provide continuous support and meet the diverse needs of young children with autism. As AI continues to develop, further research is needed to build large, shared databases on autism, to generalize and transfer the effects of AI interventions, and to improve the appearance and performance of AI-powered robots, so as to reduce the failure rates and costs of AI technologies.
Affiliation(s)
- Sijun Zhang
- Institute of Educational Sciences, Hunan University, Changsha, China
2. Salles A, Farisco M. Neuroethics and AI ethics: a proposal for collaboration. BMC Neurosci 2024;25:41. PMID: 39210267; PMCID: PMC11360855; DOI: 10.1186/s12868-024-00888-7.
Abstract
The scientific relationship between neuroscience and artificial intelligence is generally acknowledged, and the role that their long history of collaboration has played in advancing both fields is often emphasized. Beyond the important scientific insights provided by their collaborative development, both neuroscience and AI raise a number of ethical issues that are generally explored by neuroethics and AI ethics. Neuroethics and AI ethics have been gaining prominence in the last few decades, and they are typically carried out by different research communities. However, considering the evolving landscape of AI-assisted neurotechnologies and the various conceptual and practical intersections between AI and neuroscience-such as the increasing application of AI in neuroscientific research, healthcare for neurological and mental diseases, and the use of neuroscientific knowledge as inspiration for AI-some scholars are now calling for a collaborative relationship between these two domains. This article seeks to explore how a collaborative relationship between neuroethics and AI ethics can stimulate theoretical and, ideally, governance efforts. First, we offer some reasons for calling for the collaboration of the ethical reflection on neuroscientific innovations and AI. Next, we explore some dimensions that we think could be enhanced by the cross-fertilization between these two subfields of ethics. We believe that, considering the pace and increasing fusion of neuroscience and AI in the development of innovations, broad and underspecified calls for responsibility that do not consider insights from different ethics subfields will only be partially successful in promoting meaningful changes in both research and applications.
Affiliation(s)
- Michele Farisco
- Centre for Research Ethics and Bioethics, Uppsala University, Uppsala, Sweden.
- Biogem, Biology and Molecular Genetics Research Institute, Bioethics Unit, Ariano Irpino, AV, Italy.
3. Ćosić K, Popović S, Wiederhold BK. Enhancing Aviation Safety through AI-Driven Mental Health Management for Pilots and Air Traffic Controllers. Cyberpsychol Behav Soc Netw 2024;27:588-598. PMID: 38916063; DOI: 10.1089/cyber.2023.0737.
Abstract
This article provides an overview of the mental health challenges faced by pilots and air traffic controllers (ATCs), whose stressful professional lives may negatively impact global flight safety and security. The adverse effects of mental health disorders on their flight performance pose a particular safety risk, especially in sudden, unexpected startle situations. Therefore, the early detection, prediction, and prevention of mental health deterioration in pilots and ATCs, particularly among those at high risk, are crucial to minimize potential air crash incidents caused by human factors. Recent research in artificial intelligence (AI) demonstrates the potential of machine and deep learning, edge and cloud computing, virtual reality, and wearable multimodal physiological sensors for monitoring and predicting mental health disorders. Longitudinal monitoring and analysis of pilots' and ATCs' physiological, cognitive, and behavioral states could help predict individuals at risk of undisclosed or emerging mental health disorders. Utilizing AI tools and methodologies to identify and select these individuals for preventive mental health training and interventions could be a promising and effective approach to preventing potential air crash accidents attributed to human factors and related mental health problems. Based on these insights, the article advocates for the design of a multidisciplinary mental healthcare ecosystem in modern aviation using AI tools and technologies, to foster more efficient and effective mental health management, thereby enhancing flight safety and security standards. This proposed ecosystem requires the collaboration of multidisciplinary experts, including psychologists, neuroscientists, physiologists, and psychiatrists, to address these challenges in modern aviation.
Affiliation(s)
- Krešimir Ćosić
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
- Siniša Popović
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
4. Hurley ME, Sonig A, Herrington J, Storch EA, Lázaro-Muñoz G, Blumenthal-Barby J, Kostick-Quenet K. Ethical considerations for integrating multimodal computer perception and neurotechnology. Front Hum Neurosci 2024;18:1332451. PMID: 38435745; PMCID: PMC10904467; DOI: 10.3389/fnhum.2024.1332451.
Abstract
Background: Artificial intelligence (AI)-based computer perception technologies (e.g., digital phenotyping and affective computing) promise to transform clinical approaches to personalized care in psychiatry and beyond by offering more objective measures of emotional states and behavior, enabling precision treatment, diagnosis, and symptom monitoring. At the same time, the passive and continuous nature by which they often collect data from patients in non-clinical settings raises ethical issues related to privacy and self-determination. Little is known about how such concerns may be exacerbated by the integration of neural data, as parallel advances in computer perception, AI, and neurotechnology enable new insights into subjective states. Here, we present findings from a multi-site NCATS-funded study of ethical considerations for translating computer perception into clinical care and contextualize them within the neuroethics and neurorights literatures.
Methods: We conducted qualitative interviews with patients (n = 20), caregivers (n = 20), clinicians (n = 12), developers (n = 12), and clinician-developers (n = 2) regarding their perspectives on using computer perception in clinical care. Transcripts were analyzed in MAXQDA using thematic content analysis.
Results: Stakeholder groups voiced concerns related to (1) the perceived invasiveness of passive and continuous data collection in private settings; (2) data protection and security, and the potential for negative downstream/future impacts on patients of unintended disclosure; and (3) ethical issues related to patients' limited versus hyper-awareness of passive and continuous data collection and monitoring. Clinicians and developers highlighted that these concerns may be exacerbated by the integration of neural data with other computer perception data.
Discussion: Our findings suggest that the integration of neurotechnologies with existing computer perception technologies raises novel concerns around dignity-related and other harms (e.g., stigma, discrimination) that stem from data security threats and the growing potential for reidentification of sensitive data. Further, our findings suggest that patients' awareness of and preoccupation with feeling monitored via computer sensors ranges from hypo- to hyper-awareness, with either extreme accompanied by ethical concerns (consent vs. anxiety and preoccupation). These results highlight the need for systematic research into how best to implement these technologies into clinical care in ways that reduce disruption, maximize patient benefits, and mitigate long-term risks associated with the passive collection of sensitive emotional, behavioral, and neural data.
Affiliation(s)
- Meghan E. Hurley
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
- Anika Sonig
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
- John Herrington
- Department of Child and Adolescent Psychiatry and Behavioral Sciences, Children’s Hospital of Philadelphia, Philadelphia, PA, United States
- Eric A. Storch
- Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine, Houston, TX, United States
- Gabriel Lázaro-Muñoz
- Center for Bioethics, Harvard Medical School, Boston, MA, United States
- Department of Psychiatry and Behavioral Sciences, Massachusetts General Hospital, Boston, MA, United States
- Kristin Kostick-Quenet
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
5. Sauerbrei A, Kerasidou A, Lucivero F, Hallowell N. The impact of artificial intelligence on the person-centred, doctor-patient relationship: some problems and solutions. BMC Med Inform Decis Mak 2023;23:73. PMID: 37081503; PMCID: PMC10116477; DOI: 10.1186/s12911-023-02162-y.
Abstract
Artificial intelligence (AI) is often cited as a possible solution to current issues faced by healthcare systems. This includes the freeing up of time for doctors and facilitating person-centred doctor-patient relationships. However, given the novelty of artificial intelligence tools, there is very little concrete evidence on their impact on the doctor-patient relationship or on how to ensure that they are implemented in a way which is beneficial for person-centred care. Given the importance of empathy and compassion in the practice of person-centred care, we conducted a literature review to explore how AI impacts these two values. Besides empathy and compassion, shared decision-making and trust relationships emerged as key values in the reviewed papers. We identified two concrete ways that can help ensure that the use of AI tools has a positive impact on person-centred doctor-patient relationships. These are (1) using AI tools in an assistive role and (2) adapting medical education. The study suggests that we need to take intentional steps in order to ensure that the deployment of AI tools in healthcare has a positive impact on person-centred doctor-patient relationships. We argue that the proposed solutions are contingent upon clarifying the values underlying future healthcare systems.
Affiliation(s)
- Aurelia Sauerbrei
- Ethox Centre, Nuffield Department of Population Health, University of Oxford, Big Data Institute, Old Road Campus, Oxford, OX3 7LF UK
- Angeliki Kerasidou
- Ethox Centre, Nuffield Department of Population Health, University of Oxford, Big Data Institute, Old Road Campus, Oxford, OX3 7LF UK
- Federica Lucivero
- Ethox Centre, Nuffield Department of Population Health, University of Oxford, Big Data Institute, Old Road Campus, Oxford, OX3 7LF UK
- Nina Hallowell
- Ethox Centre, Nuffield Department of Population Health, University of Oxford, Big Data Institute, Old Road Campus, Oxford, OX3 7LF UK
6. Neurorights – Do We Need New Human Rights? A Reconsideration of the Right to Freedom of Thought. Neuroethics 2023. DOI: 10.1007/s12152-022-09511-0.
Abstract
Progress in neurotechnology and Artificial Intelligence (AI) provides unprecedented insights into the human brain. There are increasing possibilities to influence and measure brain activity. These developments raise multifaceted ethical and legal questions. The proponents of neurorights argue in favour of introducing new human rights to protect mental processes and brain data. This article discusses the necessity and advantages of introducing new human rights, focusing on the proposed new human right to mental self-determination and the right to freedom of thought as enshrined in Art. 18 of the International Covenant on Civil and Political Rights (ICCPR) and Art. 9 of the European Convention on Human Rights (ECHR). I argue that the right to freedom of thought can be coherently interpreted as providing comprehensive protection of mental processes and brain data, thus offering a normative basis regarding the use of neurotechnologies. Besides, I claim that an evolving interpretation of the right to freedom of thought is more convincing than introducing a new human right to mental self-determination.
7. Rainey S. Neurorights as Hohfeldian Privileges. Neuroethics 2023. DOI: 10.1007/s12152-023-09515-4.
Abstract
This paper argues that calls for neurorights propose an overcomplicated approach. It does this through analysis of ‘rights’ using the influential framework provided by Wesley Hohfeld, whose analytic jurisprudence is still well regarded in its clarificatory approach to discussions of rights. Having disentangled some unclarities in talk about rights, the paper proposes that the idea of ‘novel human rights’ is not appropriate for what is deemed worth protecting in terms of mental integrity and cognitive liberty. That is best thought of in terms of Hohfeld’s account of ‘right’ as privilege. It goes on to argue that, as privileges, legal protections are not well suited to these cases. As such, they cannot be ‘novel human rights’. Instead, protections for mental integrity and cognitive liberty are best accounted for in terms of familiar and established rational and discursive norms. Mental integrity is best thought of as evaluable in terms of familiar rational norms, and cognitive freedom is constrained by appraisals of sense-making. Concerns about how neurotechnologies might pose particular challenges to mental integrity and cognitive liberty are best addressed through careful use of existing legislation on data protection, not novel rights, as it is via data that risks to integrity and liberty are manifested.
8. Ćosić K, Popović S, Šarlija M, Kesedžić I, Gambiraža M, Dropuljić B, Mijić I, Henigsberg N, Jovanovic T. AI-Based Prediction and Prevention of Psychological and Behavioral Changes in Ex-COVID-19 Patients. Front Psychol 2021;12:782866. PMID: 35027902; PMCID: PMC8751545; DOI: 10.3389/fpsyg.2021.782866.
Abstract
The COVID-19 pandemic has adverse consequences for human psychology and behavior long after initial recovery from the virus. These COVID-19 health sequelae, if undetected and left untreated, may lead to more enduring mental health problems and put vulnerable individuals at risk of developing more serious psychopathologies. Therefore, an early distinction of such vulnerable individuals from those who are more resilient is important to undertake timely preventive interventions. The main aim of this article is to present a comprehensive multimodal conceptual approach for addressing these potential psychological and behavioral mental health changes using state-of-the-art tools and means of artificial intelligence (AI). Mental health COVID-19 recovery programs at post-COVID clinics based on AI prediction and prevention strategies may significantly improve the global mental health of ex-COVID-19 patients. Most COVID-19 recovery programs currently involve specialists such as pulmonologists, cardiologists, and neurologists, but there is a lack of psychiatric care. The focus of this article is on new tools which can enhance the current limited psychiatric resources and capabilities in coping with the upcoming challenges related to widespread mental health disorders. Patients affected by COVID-19 are more vulnerable to psychological and behavioral changes than non-COVID populations and therefore deserve careful clinical psychological screening in post-COVID clinics. However, despite significant advances in research, the pace of progress in the prevention of psychiatric disorders in these patients is still insufficient. Current approaches to the diagnosis of psychiatric disorders largely rely on clinical rating scales, as well as self-rating questionnaires, that are inadequate for comprehensive assessment of ex-COVID-19 patients' susceptibility to mental health deterioration. These limitations can presumably be overcome by applying state-of-the-art AI-based tools in the diagnosis, prevention, and treatment of psychiatric disorders in the acute phase of disease to prevent more chronic psychiatric consequences.
Affiliation(s)
- Krešimir Ćosić
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
- Siniša Popović
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
- Marko Šarlija
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
- Ivan Kesedžić
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
- Mate Gambiraža
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
- Branimir Dropuljić
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
- Igor Mijić
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
- Neven Henigsberg
- Croatian Institute for Brain Research, University of Zagreb School of Medicine, Zagreb, Croatia
- Tanja Jovanovic
- Department of Psychiatry and Behavioral Neurosciences, Wayne State University School of Medicine, Detroit, MI, United States
9. Hildt E, Laas K, Sziron M. Editorial: Shaping Ethical Futures in Brain-Based and Artificial Intelligence Research. Sci Eng Ethics 2020;26:2371-2379. PMID: 32749648; DOI: 10.1007/s11948-020-00235-z.
Affiliation(s)
- Elisabeth Hildt
- Center for the Study of Ethics in the Professions, Illinois Institute of Technology, 10 W. 35th Street, Chicago, IL, 60616, USA.
- Kelly Laas
- Center for the Study of Ethics in the Professions, Illinois Institute of Technology, 10 W. 35th Street, Chicago, IL, 60616, USA
- Monika Sziron
- Center for the Study of Ethics in the Professions, Illinois Institute of Technology, 10 W. 35th Street, Chicago, IL, 60616, USA
10. Jotterand F, Bosco C. Keeping the "Human in the Loop" in the Age of Artificial Intelligence: Accompanying Commentary for "Correcting the Brain?" by Rainey and Erden. Sci Eng Ethics 2020;26:2455-2460. PMID: 32643058; DOI: 10.1007/s11948-020-00241-1.
Abstract
The benefits of Artificial Intelligence (AI) in medicine are unquestionable and it is unlikely that the pace of its development will slow down. From better diagnosis, prognosis, and prevention to more precise surgical procedures, AI has the potential to offer unique opportunities to enhance patient care and improve clinical practice overall. However, at this stage of AI technology development it is unclear whether it will de-humanize or re-humanize medicine. Will AI allow clinicians to spend less time on administrative tasks and technology related procedures and more time being present in person to attend to the needs of their patients? Or will AI dramatically increase the presence of smart technology in the clinical context to a point of undermining the humane dimension of the patient-physician relationship? In this brief commentary, we argue that technological solutions should be only integrated into clinical medicine if they fulfill the following three conditions: (1) they serve human ends; (2) they respect personal identity; and (3) they promote human interaction. These three conditions form the moral imperative of humanity.
Affiliation(s)
- Fabrice Jotterand
- Center for Bioethics and Medical Humanities, Medical College of Wisconsin, 8701 Watertown Plank Road, Milwaukee, WI, 53226, USA.
- Clara Bosco
- Center for Bioethics and Medical Humanities, Medical College of Wisconsin, 8701 Watertown Plank Road, Milwaukee, WI, 53226, USA