1
Maurya BM, Yadav N, Amudha T, Satheeshkumar J, Sangeetha A, Parthasarathy V, Iyer M, Yadav MK, Vellingiri B. Artificial intelligence and machine learning algorithms in the detection of heavy metals in water and wastewater: Methodological and ethical challenges. Chemosphere 2024; 353:141474. [PMID: 38382714] [DOI: 10.1016/j.chemosphere.2024.141474]
Abstract
Heavy metals (HMs) enter waterbodies through various routes and, when they exceed threshold limits, cause toxic effects both on the environment and on humans who ingest them. The rate of such HM influx incidents has increased in recent times, prompting this review of the challenges facing classical methods of HM detection and removal, and of the opportunities that artificial intelligence (AI) and machine learning (ML) offer for identifying HMs and remediating water and wastewater. The review focuses on such applications in conjunction with available in-silico models that produce worldwide data on HM levels. It also describes the effect of HMs on the progression of various diseases, along with a brief account of prediction models that analyse the health impact of HM intoxication. Finally, it discusses the ethical and other challenges associated with the use of AI and ML in this field, opening a wide scope of possibilities for improving wastewater treatment methodologies.
Affiliation(s)
- Brij Mohan Maurya
- Human Cytogenetics and Stem Cell Laboratory, Department of Zoology, School of Basic Sciences, Central University of Punjab, Bathinda, 151401, Punjab, India
- Nidhi Yadav
- Human Cytogenetics and Stem Cell Laboratory, Department of Zoology, School of Basic Sciences, Central University of Punjab, Bathinda, 151401, Punjab, India
- Amudha T
- Department of Computer Applications, Bharathiar University, Coimbatore, India
- Satheeshkumar J
- Department of Computer Applications, Bharathiar University, Coimbatore, India
- Sangeetha A
- Department of Computer Applications, Bharathiar University, Coimbatore, India
- Parthasarathy V
- Department of Computer Science and Engineering, Karpagam Academy of Higher Education, Pollachi Main Road, Eachanari Post, Coimbatore, 641021, Tamil Nadu, India
- Mahalaxmi Iyer
- Centre for Neuroscience, Department of Biotechnology, Karpagam Academy of Higher Education, Coimbatore, 641021, Tamil Nadu, India; Department of Microbiology, Central University of Punjab, Bathinda, 151401, Punjab, India
- Mukesh Kumar Yadav
- Department of Microbiology, Central University of Punjab, Bathinda, 151401, Punjab, India
- Balachandar Vellingiri
- Human Cytogenetics and Stem Cell Laboratory, Department of Zoology, School of Basic Sciences, Central University of Punjab, Bathinda, 151401, Punjab, India
2
Andhari S, Khutale G, Gupta R, Patil Y, Khandare J. Chemical tunability of advanced materials used in the fabrication of micro/nanobots. J Mater Chem B 2023. [PMID: 37163210] [DOI: 10.1039/d2tb02743g]
Abstract
Micro- and nanobots (MNBs) are unprecedented in their ability to be chemically tuned for autonomous tasks with enhanced targeting and functionality while maintaining their mobility. A myriad of chemical modifications involving a large variety of advanced materials have proved effective in the design of MNBs. Furthermore, MNBs can be controlled in their autonomous motion and in their ability to carry chemical or biological payloads, and they can be modified to achieve targetability with specificity for biological applications. By virtue of their chemical compositions, however, MNBs may be limited by their biocompatibility, tissue accumulation, poor biodegradability, and toxicity. This review presents a note on artificial intelligence materials (AIMs), their importance, and the dimensional scales at which intrinsic autonomy can be achieved for diverse utility. We briefly discuss the evolution of such systems with a focus on their advancements in nanomedicine. We highlight MNBs covering their contemporary traits and the emergence of a few start-ups in specific areas. Furthermore, we showcase various examples demonstrating that chemical tunability is an attractive primary approach for designing MNBs with immense capabilities in both biology and chemistry. Finally, we cover biosafety and ethical considerations in designing MNBs in the era of artificial intelligence for varied applications.
Affiliation(s)
- Saloni Andhari
- OneCell Diagnostics, Pune 411057, India
- OneCell Diagnostics, Cupertino, California 95014, USA
- Ganesh Khutale
- OneCell Diagnostics, Pune 411057, India
- OneCell Diagnostics, Cupertino, California 95014, USA
- Rituja Gupta
- School of Pharmacy, Dr. Vishwanath Karad MIT World Peace University, Kothrud, Pune 411038, India
- Yuvraj Patil
- School of Pharmacy, Dr. Vishwanath Karad MIT World Peace University, Kothrud, Pune 411038, India
- Jayant Khandare
- OneCell Diagnostics, Pune 411057, India
- OneCell Diagnostics, Cupertino, California 95014, USA
- School of Pharmacy, Dr. Vishwanath Karad MIT World Peace University, Kothrud, Pune 411038, India
- Actorius Innovations and Research, Pune 411057, India
- Actorius Innovations and Research, Simi Valley, CA 93063, USA
- School of Consciousness, Dr. Vishwanath Karad MIT World Peace University, Kothrud, Pune 411038, India
3
Génova G, Moreno V, González MR. Machine Ethics: Do Androids Dream of Being Good People? Science and Engineering Ethics 2023; 29:10. [PMID: 36952064] [PMCID: PMC10036453] [DOI: 10.1007/s11948-023-00433-5]
Abstract
Is ethics a computable function? Can machines learn ethics like humans do? If teaching consists in no more than programming, training, indoctrinating… and if ethics is merely following a code of conduct, then yes, we can teach ethics to algorithmic machines. But if ethics is not merely about following a code of conduct or about imitating the behavior of others, then an approach based on computing outcomes, and on the reduction of ethics to the compilation and application of a set of rules, either a priori or learned, misses the point. Our intention is not to solve the technical problem of machine ethics, but to learn something about human ethics, and its rationality, by reflecting on the ethics that can and should be implemented in machines. Any machine ethics implementation will have to face a number of fundamental or conceptual problems, which in the end refer to philosophical questions, such as: what is a human being (or more generally, what is a worthy being); what is human intentional acting; and how are intentional actions and their consequences morally evaluated. We are convinced that a proper understanding of ethical issues in AI can teach us something valuable about ourselves, and what it means to lead a free and responsible ethical life, that is, being good people beyond merely "following a moral code". In the end we believe that rationality must be seen to involve more than just computing, and that value rationality is beyond numbers. Such an understanding is a required step to recovering a renewed rationality of ethics, one that is urgently needed in our highly technified society.
Affiliation(s)
- Gonzalo Génova
- Computer Science and Engineering Department, Universidad Carlos III de Madrid, Avda. Universidad 30, 28911 Leganés, Madrid, Spain
- Valentín Moreno
- Computer Science and Engineering Department, Universidad Carlos III de Madrid, Avda. Universidad 30, 28911 Leganés, Madrid, Spain
- M. Rosario González
- Department of Educational Studies, Universidad Complutense de Madrid, Avda. Rector Royo Vilanova S/N, 28040 Madrid, Spain
4
From EU Robotics and AI Governance to HRI Research: Implementing the Ethics Narrative. Int J Soc Robot 2023. [DOI: 10.1007/s12369-023-00982-6]
Abstract
In recent years, the European Union has made considerable efforts to develop dedicated strategies and policies for the governance of robotics and AI. An important component of the EU's approach is its emphasis on the need to mitigate the potential societal impacts of the expected rise in the interactive capacities of autonomous systems. In the quest to define and implement new policies addressing this issue, ethical notions have taken an increasingly central position. This paper presents a concise overview of the integration of this ethics narrative in the EU's policy plans. It demonstrates how the ethics narrative aids the definition of policy issues and the establishment of new policy ideas. Crucially, in this context, robotics and AI are explicitly understood as emerging technologies. This implies many ambiguities about their actual future impact, which in turn results in uncertainty regarding effective implementation of policies that draw on the ethics narrative. In an effort to develop clearer pathways towards the further development of ethical notions in AI and robotics governance, this paper understands human-robot interaction (HRI) research as a field that can play an important role in the implementation of ethics. Four complementary pathways towards ethics integration in HRI research are proposed: providing insights for the improvement of ethical assessment, conducting further research into the moral competence of artificial agents, engaging in value-based design and implementation of robots, and participating in discussions on building ethical sociotechnical systems around robots.
5
Bricout J, Greer J, Fields N, Xu L, Tamplain P, Doelling K, Sharma B. The "humane in the loop": Inclusive research design and policy approaches to foster capacity building assistive technologies in the COVID-19 era. Assist Technol 2022; 34:644-652. [PMID: 34048326] [DOI: 10.1080/10400435.2021.1930282]
Abstract
The COVID-19 pandemic is emerging as a driver of greater reliance on wireless technologies, including intelligent assistive technologies, such as robots and artificial intelligence. We must integrate the humane "into the loop" of human-AT interactions to realize the full potential of wireless inclusion for people with disabilities and older adults. Embedding ethics into these new technologies is critical and requires a co-design approach, with end users participating throughout. Developing humane AT begins with a participatory, user-centered design embedded in an iterative co-creation process, and guided by an ethos prioritizing beneficence, user autonomy and agency. To gain insight into plausible AT development pathways ("futures"), we use scenario planning as a tool to articulate themes in the research literature. Four plausible scenarios are developed and compared to identify one as a desired "humane" future for AT development. Policy and practice recommendations derived from this scenario, and their implications for the role of AT in the advancement of human potential are explored.
Affiliation(s)
- John Bricout
- School of Social Work, University of Minnesota, Twin Cities, Minnesota, USA
- Noelle Fields
- School of Social Work, University of Texas at Arlington
- Ling Xu
- School of Social Work, University of Texas at Arlington
- Kris Doelling
- School of Social Work, University of Texas at Arlington
- Bonita Sharma
- School of Social Work, University of Texas at San Antonio
6
Pauketat JV, Anthis JR. Predicting the moral consideration of artificial intelligences. Computers in Human Behavior 2022. [DOI: 10.1016/j.chb.2022.107372]
7
Ruf E, Pauli C, Misoch S. Emotionale Reaktionen älterer Menschen gegenüber Sozial Assistiven Robotern [Emotional reactions of older people towards socially assistive robots]. Gruppe. Interaktion. Organisation. Zeitschrift für Angewandte Organisationspsychologie (GIO) 2022. [DOI: 10.1007/s11612-022-00641-w]
Abstract
This contribution to the journal Gruppe. Interaktion. Organisation. (GIO) describes the varied emotional reactions of older people to socially assistive robots (SAR) deployed in different settings. As a consequence of demographic change, a growing number of people of advanced age need support at home or in institutions. The use of robots for support is seen as one way of meeting these societal challenges, and SAR in particular are increasingly being trialled with and used by older people. Systematic reviews show the positive potential of SAR for older people with respect to (socio-)psychological and physiological parameters; at the same time, the use of SAR with older people has triggered an intense ethical debate. Users' emotions towards robots are a central focus here, since they are an important aspect of acceptance and effect, and questions connected with emotional attachment to the robot are discussed especially critically. The Institut für Altersforschung (IAF) of the Ostschweizer Fachhochschule (OST) has conducted field tests with different SAR across different user groups and areas of application. In a secondary analysis, a broad range of emotional reactions, up to and including attachment, was registered across the various user groups. It could be shown that SAR can satisfy users' socio-emotional needs, and that rejection can occur when these needs are not taken into account. Emotional attachments must nevertheless be considered in a differentiated way, because the use of SAR, particularly with vulnerable people, can also induce new negative feelings despite a functional attachment.
When SAR are used in practice, it is important to assess all of users' emotions towards the SAR at an early stage and to evaluate them with regard to possible undesired effects, such as (too) strong emotional attachment. The exploratory studies presented make it possible to define exemplary fields of application with positive potential, but also to describe ethically problematic situations so that these can be avoided in the future.
8
Tabudlo J, Kuan L, Garma PF. Can nurses in clinical practice ascribe responsibility to intelligent robots? Nurs Ethics 2022; 29:1457-1465. [PMID: 35727571] [DOI: 10.1177/09697330221090591]
Abstract
BACKGROUND The twenty-first century has seen exponential growth in the use of intelligent robots and artificial intelligence in nursing compared with previous decades. To the best of our knowledge, this article is the first to respond to the question, "Can nurses in clinical practice ascribe responsibility to intelligent robots and artificial intelligence when they commit errors?". PURPOSE The objective of this article is to present two worldviews (anthropocentrism and biocentrism), chosen on the basis of the roles of the entities involved in the use of intelligent robots and artificial intelligence in nursing, as responses to this question. METHODS The development of this article was motivated by recent discoveries, the current landscape, and nurses' roles in relation to advanced technologies in healthcare. The paper begins by situating the use of intelligent robots and artificial intelligence in nursing and healthcare and presenting its ethical and moral implications. We then present the two worldviews, anthropocentrism and biocentrism, and apply them to the question at hand. RESULTS Anthropocentrism places humans at the centre in terms of moral standing, so responsibility rests on them alone. Biocentrism holds that all creations deserve moral consideration, so responsibility is allocated equally to all entities. Within these two worldviews, consensus development is offered as a way of resolving these issues; consensus provides clarity and democracy between and among societies.
CONCLUSIONS The findings of this article can serve as a basis for (1) instituting mechanisms of robust peer review and a rigorous series of simulations before adopting or implementing intelligent robots and artificial intelligence in clinical practice; (2) educating and training highly specialized nurse practitioners who can act as focal persons in responding to ethical and moral issues concerning these advanced technologies; and (3) harmonizing robotics research, manufacturing, and clinical practice.
Affiliation(s)
- Jerick Tabudlo
- College of Nursing, University of the Philippines Manila, Manila, Philippines
- Letty Kuan
- College of Nursing, University of the Philippines Manila, Manila, Philippines
9
Trust and ethics in AI. AI & Society 2022. [DOI: 10.1007/s00146-022-01473-4]
10
Hu Z, Zhang Y, Li Q, Lv C. Human–Machine Telecollaboration Accelerates the Safe Deployment of Large-Scale Autonomous Robots During the COVID-19 Pandemic. Front Robot AI 2022; 9:853828. [PMID: 35494540] [PMCID: PMC9043527] [DOI: 10.3389/frobt.2022.853828]
Affiliation(s)
- Zhongxu Hu
- School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore
- Yiran Zhang
- School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore
- Qinghua Li
- Autonomous Driving Lab, Alibaba DAMO Academy, Hangzhou, China
- Chen Lv
- School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore
- Correspondence: Chen Lv
11
Brady A, Naikar N. Development of Rasmussen's risk management framework for analysing multi-level sociotechnical influences in the design of envisioned work systems. Ergonomics 2022; 65:485-518. [PMID: 35083958] [DOI: 10.1080/00140139.2021.2005823]
Abstract
Besides radically altering work, advances in automation and intelligent technologies have the potential to bring significant societal transformation. These transitional periods require an approach to analysis and design that goes beyond human-machine interaction in the workplace to consider the wider sociotechnical needs of envisioned work systems. The Sociotechnical Influences Space, an analytical tool motivated by Rasmussen's risk management model, promotes a holistic approach to the design of future systems, attending to societal needs and challenges, while still recognising the bottom-up push from emerging technologies. A study explores the concept and practical potential of the tool when applied to the analysis of a large-scale, 'real-world' problem, specifically the societal, governmental, regulatory, organisational, human, and technological factors of significance in mixed human-artificial agent workforces. Further research is needed to establish the feasibility of the tool in a range of application domains, the details of the method, and the value of the tool in design. Practitioner summary: Emerging automation and intelligent technologies are not only transforming workplaces, but may be harbingers of major societal change. A new analytical tool, the Sociotechnical Influences Space, is proposed to support organisations in taking a holistic approach to the incorporation of advanced technologies into workplaces and function allocation in mixed human-artificial agent teams.
Affiliation(s)
- Ashleigh Brady
- Defence Science and Technology Group, Melbourne, Australia
- Neelam Naikar
- Defence Science and Technology Group, Melbourne, Australia
12
Eiben ÁE, Ellers J, Meynen G, Nyholm S. Robot Evolution: Ethical Concerns. Front Robot AI 2021; 8:744590. [PMID: 34805290] [PMCID: PMC8603346] [DOI: 10.3389/frobt.2021.744590]
Abstract
Rapid developments in evolutionary computation, robotics, 3D-printing, and material science are enabling advanced systems of robots that can autonomously reproduce and evolve. The emerging technology of robot evolution challenges existing AI ethics because the inherent adaptivity, stochasticity, and complexity of evolutionary systems severely weaken human control and induce new types of hazards. In this paper we address the question of how robot evolution can be responsibly controlled to avoid safety risks. We discuss risks related to robot multiplication, maladaptation, and domination, and suggest solutions for meaningful human control. Such concerns may seem far-fetched now; however, we posit that awareness must be created before the technology becomes mature.
Affiliation(s)
- Ágoston E Eiben
- Department of Computer Science and Ecological Science, Vrije Universiteit Amsterdam, Amsterdam, Netherlands; Department of Electronic Engineering, University of York, York, United Kingdom
- Jacintha Ellers
- Department of Ecological Science, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Gerben Meynen
- Department of Philosophy, Vrije Universiteit Amsterdam, Amsterdam, Netherlands; Department of Law, Utrecht University, Utrecht, Netherlands
- Sven Nyholm
- Department of Philosophy and Religious Studies, Utrecht University, Utrecht, Netherlands
13
Guzhva O, Siegford JM, Lunner Kolstrup C. The Hitchhiker's Guide to Integration of Social and Ethical Awareness in Precision Livestock Farming Research. Frontiers in Animal Science 2021. [DOI: 10.3389/fanim.2021.725710]
Abstract
While fully automated livestock production may be considered the ultimate goal for optimising productivity at the farm level, the benefits and costs of such a development, at the scale at which it would need to be implemented, must also be considered from social and ethical perspectives. Automation resulting from Precision Livestock Farming (PLF) could alter fundamental views of human-animal interactions on farm and, further, potentially compromise human and animal welfare and health if PLF development does not include a flexible, holistic strategy for integration. To investigate topic segregation, the inclusion of socio-ethical aspects, and the consideration of human-animal interactions within the PLF research field, the abstracts of 644 peer-reviewed publications were analysed using recent advances in Natural Language Processing (NLP). Two Latent Dirichlet Allocation (LDA) probabilistic models with differing numbers of topics (13 for Model 1 and 3 for Model 2) were implemented to create a generalised overview of research topics. The visual representations of topics produced by LDA Model 1 and Model 2 revealed prominent similarities in the terms contributing to each topic, with only the weight of each term differing. The majority of terms in both models were process-oriented, obscuring the inclusion of social and ethical angles in PLF publications. A subset of articles (5%, n = 32) was randomly selected for manual examination of the full text to evaluate whether the abstract reflected the focus of the article as a whole. Few of these articles focused specifically on broader ethical or societal considerations of PLF (12.5%, n = 4) or discussed PLF with respect to human-animal interactions (9.4%, n = 3). While nearly half of the full texts examined (46.9%, n = 15) considered the impact of PLF on animal welfare and farmers, this was often limited to a few statements in passing. Further, these statements were typically general rather than specific and presented PLF as beneficial to human users and animal recipients. To develop PLF that is in keeping with the ethical values and societal concerns of the public and consumers, projects and publications that deliberately combine social context with technological processes and results are needed.
14
Salvini P, Paez-Granados D, Billard A. On the Safety of Mobile Robots Serving in Public Spaces. ACM Transactions on Human-Robot Interaction 2021. [DOI: 10.1145/3442678]
Abstract
Since 2014, a specific standard has been dedicated to the safety certification of personal care robots, which operate in close proximity to humans. These robots serve as information providers, object transporters, personal mobility carriers, and security patrollers. In this article, we point out the shortcomings of EN ISO 13482:2014, which encompasses guidelines regarding the safety and design of personal care robots. In particular, we argue that the current standard is not suitable for guaranteeing people's safety when these robots operate in public spaces; specifically, it lacks requirements to protect pedestrians and bystanders. The guideline implicitly assumes that public spaces present the same hazards as private spaces, such as households and offices. We highlight at least three properties pertaining to robots' use in public spaces: (1) crowds, (2) social norms and proxemics rules, and (3) people's misbehaviours, and we discuss how these properties impact robots' safety. This article aims to raise stakeholders' awareness of individuals' safety when robots are deployed in public spaces, which could be achieved by filling the gaps in EN ISO 13482:2014 or by creating a new dedicated standard.
Affiliation(s)
- Pericle Salvini
- LASA Laboratory, School of Engineering, EPFL Station 9, 1015 Lausanne, Switzerland
- Diego Paez-Granados
- LASA Laboratory, School of Engineering, EPFL Station 9, 1015 Lausanne, Switzerland
- Aude Billard
- LASA Laboratory, School of Engineering, EPFL Station 9, 1015 Lausanne, Switzerland
15
Brenna CT. Medical Machines: The Expanding Role of Ethics in Technology-Driven Healthcare. Canadian Journal of Bioethics 2021. [DOI: 10.7202/1077638ar]
Abstract
Emerging technologies such as artificial intelligence are actively revolutionizing the healthcare industry. While there is widespread concern that these advances will displace human practitioners within the healthcare sector, there are several tasks – including original and nuanced ethical decision making – that they cannot replace. Further, the implementation of artificial intelligence in clinical practice can be anticipated to drive the production of novel ethical tensions surrounding its use, even while eliminating some of the technical tasks which currently compete with ethical deliberation for clinicians’ limited time. A new argument therefore arises to suggest that although these disruptive technologies will change the face of medicine, they may also foster a revival of several fundamental components inherent to the role of healthcare professionals, chiefly, the principal activities of moral philosophy. Accordingly, “machine medicine” presents a vital opportunity to reinvigorate the field of bioethics, rather than withdraw from it.
16
Gorjup G, Gerez L, Liarokapis M. Leveraging Human Perception in Robot Grasping and Manipulation Through Crowdsourcing and Gamification. Front Robot AI 2021; 8:652760. [PMID: 33996927] [PMCID: PMC8116898] [DOI: 10.3389/frobt.2021.652760]
Abstract
Robot grasping in unstructured and dynamic environments is heavily dependent on the object attributes. Although Deep Learning approaches have delivered exceptional performance in robot perception, human perception and reasoning are still superior in processing novel object classes. Furthermore, training such models requires large, difficult to obtain datasets. This work combines crowdsourcing and gamification to leverage human intelligence, enhancing the object recognition and attribute estimation processes of robot grasping. The framework employs an attribute matching system that encodes visual information into an online puzzle game, utilizing the collective intelligence of players to expand the attribute database and react to real-time perception conflicts. The framework is deployed and evaluated in two proof-of-concept applications: enhancing the control of a robotic exoskeleton glove and improving object identification for autonomous robot grasping. In addition, a model for estimating the framework response time is proposed. The obtained results demonstrate that the framework is capable of rapid adaptation to novel object classes, based purely on visual information and human experience.
Affiliation(s)
- Gal Gorjup
- New Dexterity Research Group, Department of Mechanical Engineering, The University of Auckland, Auckland, New Zealand
- Lucas Gerez
- New Dexterity Research Group, Department of Mechanical Engineering, The University of Auckland, Auckland, New Zealand
- Minas Liarokapis
- New Dexterity Research Group, Department of Mechanical Engineering, The University of Auckland, Auckland, New Zealand
Collapse
|
17
|
Hurtado JV, Londoño L, Valada A. From Learning to Relearning: A Framework for Diminishing Bias in Social Robot Navigation. Front Robot AI 2021; 8:650325. PMID: 33842558; PMCID: PMC8024571; DOI: 10.3389/frobt.2021.650325.
Abstract
The exponentially increasing advances in robotics and machine learning are facilitating the transition of robots from being confined to controlled industrial spaces to performing novel everyday tasks in domestic and urban environments. In order to make the presence of robots safe as well as comfortable for humans, and to facilitate their acceptance in public environments, they are often equipped with social abilities for navigation and interaction. Socially compliant robot navigation is increasingly being learned from human observations or demonstrations. We argue that these techniques that typically aim to mimic human behavior do not guarantee fair behavior. As a consequence, social navigation models can replicate, promote, and amplify societal unfairness, such as discrimination and segregation. In this work, we investigate a framework for diminishing bias in social robot navigation models so that robots are equipped with the capability to plan as well as adapt their paths based on both physical and social demands. Our proposed framework consists of two components: learning which incorporates social context into the learning process to account for safety and comfort, and relearning to detect and correct potentially harmful outcomes before the onset. We provide both technological and societal analysis using three diverse case studies in different social scenarios of interaction. Moreover, we present ethical implications of deploying robots in social environments and propose potential solutions. Through this study, we highlight the importance and advocate for fairness in human-robot interactions in order to promote more equitable social relationships, roles, and dynamics and consequently positively influence our society.
|
18
|
Doorn N. Artificial intelligence in the water domain: Opportunities for responsible use. The Science of the Total Environment 2021; 755:142561. PMID: 33039891; PMCID: PMC7522739; DOI: 10.1016/j.scitotenv.2020.142561.
Abstract
Recent years have seen a rise of techniques based on artificial intelligence (AI). With that have also come initiatives for guidance on how to develop "responsible AI" aligned with human and ethical values. Compared to sectors like energy, healthcare, or transportation, the use of AI-based techniques in the water domain is relatively modest. This paper presents a review of current AI applications in the water domain and develops some tentative insights as to what "responsible AI" could mean there. Building on the reviewed literature, four categories of application are identified: modeling, prediction and forecasting, decision support and operational management, and optimization. We also identify three insights pertaining to the water sector in particular: the use of AI techniques in general, and many-objective optimization in particular, which allow for a pluralism of values and changing values; the use of theory-guided data science, which can avoid some of the pitfalls of strictly data-driven models; and the ability to build on experiences with participatory decision-making in the water sector. These insights suggest that the development and application of responsible AI techniques for the water sector should not be left to data scientists alone, but requires concerted effort by water professionals and data scientists working together, complemented with expertise from the social sciences and humanities.
Affiliation(s)
- Neelke Doorn: Delft University of Technology, School of Technology, Policy and Management, Department of Values, Technology and Innovation, PO Box 5015, 2600 GA Delft, the Netherlands
|
19
|
Abstract
Social robots that can interact and communicate with people are growing in popularity for use at home and in customer-service, education, and healthcare settings. Although growing evidence suggests that co-operative and emotionally aligned social robots could benefit users across the lifespan, controversy continues about the ethical implications of these devices and their potential harms. In this perspective, we explore this balance between benefit and risk through the lens of human-robot relationships. We review the definitions and purposes of social robots, explore their philosophical and psychological status, and relate research on human-human and human-animal relationships to the emerging literature on human-robot relationships. Advocating a relational rather than essentialist view, we consider the balance of benefits and harms that can arise from different types of relationship with social robots and conclude by considering the role of researchers in understanding the ethical and societal impacts of social robotics.
Affiliation(s)
- Tony J. Prescott: Department of Computer Science, University of Sheffield, Sheffield, UK
|
20
|
Zhang J, Oh YJ, Lange P, Yu Z, Fukuoka Y. Artificial Intelligence Chatbot Behavior Change Model for Designing Artificial Intelligence Chatbots to Promote Physical Activity and a Healthy Diet: Viewpoint. J Med Internet Res 2020; 22:e22845. PMID: 32996892; PMCID: PMC7557439; DOI: 10.2196/22845.
Abstract
BACKGROUND Chatbots empowered by artificial intelligence (AI) can increasingly engage in natural conversations and build relationships with users. Applying AI chatbots to lifestyle modification programs is one of the promising areas to develop cost-effective and feasible behavior interventions to promote physical activity and a healthy diet. OBJECTIVE The purposes of this perspective paper are to present a brief literature review of chatbot use in promoting physical activity and a healthy diet, describe the AI chatbot behavior change model our research team developed based on extensive interdisciplinary research, and discuss ethical principles and considerations. METHODS We conducted a preliminary search of studies reporting chatbots for improving physical activity and/or diet in four databases in July 2020. We summarized the characteristics of the chatbot studies and reviewed recent developments in human-AI communication research and innovations in natural language processing. Based on the identified gaps and opportunities, as well as our own clinical and research experience and findings, we propose an AI chatbot behavior change model. RESULTS Our review found a lack of theoretical guidance and practical recommendations for designing AI chatbots for lifestyle modification programs. The proposed AI chatbot behavior change model consists of the following four components to provide such guidance: (1) designing chatbot characteristics and understanding user background; (2) building relational capacity; (3) building persuasive conversational capacity; and (4) evaluating mechanisms and outcomes. The rationale and evidence supporting the design and evaluation choices for this model are presented in this paper.
CONCLUSIONS As AI chatbots become increasingly integrated into various digital communications, our proposed theoretical framework is the first step to conceptualize the scope of utilization in health behavior change domains and to synthesize all possible dimensions of chatbot features to inform intervention design and evaluation. There is a need for more interdisciplinary work to continue developing AI techniques to improve a chatbot's relational and persuasive capacities to change physical activity and diet behaviors with strong ethical principles.
Affiliation(s)
- Jingwen Zhang: Department of Communication, University of California, Davis, Davis, CA, United States; Department of Public Health Sciences, University of California, Davis, Davis, CA, United States
- Yoo Jung Oh: Department of Communication, University of California, Davis, Davis, CA, United States
- Patrick Lange: Department of Computer Science, University of California, Davis, Davis, CA, United States
- Zhou Yu: Department of Computer Science, University of California, Davis, Davis, CA, United States
- Yoshimi Fukuoka: Department of Physiological Nursing, University of California, San Francisco, San Francisco, CA, United States
|
21
|
Gerłowska J, Furtak-Niczyporuk M, Rejdak K. Robotic assistance for people with dementia: a viable option for the future? Expert Rev Med Devices 2020; 17:507-518. PMID: 32511027; DOI: 10.1080/17434440.2020.1770592.
Abstract
INTRODUCTION Demographic changes in society and fewer personnel working in healthcare services have resulted in an increase in the speed of development of safe, reliable robotic assistance technologies for patients with neurological diseases. This paper aims to advocate for the frailty of patients in light of the economic need for robotic assistance, discuss potential hazards, and outline related factors that influence positive outcomes. AREAS COVERED This article reviews the state of the art and perspectives regarding the use of robotics in older adults with dementia. We focus on current trends in the development of robotic technologies for these patients and discuss the potential hazards associated with the implementation of such cutting-edge technology in daily practice. EXPERT OPINION We envisage a gradual increase in the usage of robot-based devices for the management and support of patients with cognitive deficits. In particular, the introduction of artificial intelligence will enhance the functionality of these technologies, but also increase potential hazards resulting from human-robot interactions. The development of such technology must consider whether neurological syndromes are static or progressive. Progressive syndromes pose the biggest challenge since the functionality of robotic devices must adapt to patients' changing cognitive and motor performance profiles.
Affiliation(s)
- Konrad Rejdak: Department of Neurology, Medical University of Lublin, Lublin, Poland
|
22
|
Card D, Smith NA. On Consequentialism and Fairness. Front Artif Intell 2020; 3:34. PMID: 33733152; PMCID: PMC7861221; DOI: 10.3389/frai.2020.00034.
Abstract
Recent work on fairness in machine learning has primarily emphasized how to define, quantify, and encourage “fair” outcomes. Less attention has been paid, however, to the ethical foundations which underlie such efforts. Among the ethical perspectives that should be taken into consideration is consequentialism, the position that, roughly speaking, outcomes are all that matter. Although consequentialism is not free from difficulties, and although it does not necessarily provide a tractable way of choosing actions (because of the combined problems of uncertainty, subjectivity, and aggregation), it nevertheless provides a powerful foundation from which to critique the existing literature on machine learning fairness. Moreover, it brings to the fore some of the tradeoffs involved, including the problem of who counts, the pros and cons of using a policy, and the relative value of the distant future. In this paper we provide a consequentialist critique of common definitions of fairness within machine learning, as well as a machine learning perspective on consequentialism. We conclude with a broader discussion of the issues of learning and randomization, which have important implications for the ethics of automated decision making systems.
Affiliation(s)
- Dallas Card (corresponding author): Computer Science Department, Stanford University, Stanford, CA, United States
- Noah A. Smith: Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, United States; Allen Institute for AI, Seattle, WA, United States
|
23
|
What is the resource footprint of a computer science department? Place, people, and Pedagogy. Data & Policy 2020. DOI: 10.1017/dap.2020.12.
Abstract
Internet and Communication Technology/electrical and electronic equipment (ICT/EEE) form the bedrock of today’s knowledge economy. This increasingly interconnected web of products, processes, services, and infrastructure is often invisible to the user, as are the resource costs behind them. This ecosystem of machine-to-machine and cyber-physical-system technologies has a myriad of (in)direct impacts on the lithosphere, biosphere, atmosphere, and hydrosphere. As key determinants of tomorrow’s digital world, academic institutions are critical sites for exploring ways to mitigate and/or eliminate negative impacts. This Report is a self-deliberation provoked by the question: How do we create more resilient and healthier computer science departments, living laboratories for teaching and learning about resource-constrained computing, computation, and communication? Our response for University College London (UCL) Computer Science is to reflect on how, when, and where resources—energy, (raw) materials including water, space, and time—are consumed by the building (place), its occupants (people), and their activities (pedagogy). This perspective and attendant first-of-its-kind assessment outlines a roadmap and proposes high-level principles to aid our efforts, describing challenges and difficulties hindering quantification of the Department’s resource footprint. Qualitatively, we find a need to rematerialise the ICT/EEE ecosystem: to reveal the full costs of the seemingly intangible information society by interrogating the entire life history of paraphernalia from smartphones through servers to underground/undersea cables; another approach is demonstrating the corporeality of commonplace phrases and Nature-inspired terms such as artificial intelligence, social media, Big Data, smart cities/farming, the Internet, the Cloud, and the Web.
We sketch routes to realising three interlinked aims: cap annual power consumption and greenhouse gas emissions, become a zero waste institution, and rejuvenate and (re)integrate the natural and built environments.
|
24
|
Is Human Enhancement in Space a Moral Duty? Missions to Mars, Advanced AI and Genome Editing in Space. Camb Q Healthc Ethics 2019; 29:122-130. PMID: 31858939; DOI: 10.1017/s0963180119000859.
Abstract
Any space program involving long-term human missions will have to cope with serious risks to human health and life. Because currently available countermeasures are insufficient in the long term, there is a need for new, more radical solutions. One possibility is a program of human enhancement for future deep space mission astronauts. This paper discusses the challenges a space environment poses for long-term human missions, opening the possibility of serious consideration of human enhancement and fully automated space exploration based on highly advanced AI. The author argues that for such projects, there are strong reasons to consider human enhancement, including gene editing of germ line and somatic cells, as a moral duty.
|
25
|
Affiliation(s)
- Jairo Perez-Osorio: Istituto Italiano di Tecnologia, Social Cognition in Human Robot Interaction, Genova, Italy
- Agnieszka Wykowska: Istituto Italiano di Tecnologia, Social Cognition in Human Robot Interaction, Genova, Italy
|
26
|
Sennott SC, Akagi L, Lee M, Rhodes A. AAC and Artificial Intelligence (AI). Topics in Language Disorders 2019; 39:389-403. PMID: 34012187; PMCID: PMC8130588; DOI: 10.1097/tld.0000000000000197.
Abstract
Artificially intelligent tools have given us the capability to use technology to address ever more complex challenges. What are the capabilities, challenges, and hazards of incorporating and developing this technology for augmentative and alternative communication (AAC)? Artificial Intelligence can be defined as the capability of a machine to imitate human intelligence. The goal of artificial intelligence is to create machines that can use characteristics of human intelligence to solve problems and adapt to a changing environment. Harnessing the capabilities of AI tools has the potential to accelerate progress in serving individuals with complex communication needs. In this article, we discuss components of AI, including: (a) knowledge representation, (b) reasoning, (c) natural language processing, (d) machine learning, (e) computer vision, and (f) robotics. For each AI component, we delve into the implications, promise, and precautions of that component for AAC.
Affiliation(s)
- Samuel C Sennott: Universal Design Lab Director, College of Education, Portland State University, Post Office Box 751, Portland, Oregon, 97207
- Linda Akagi: Universal Design Lab, Portland State University
- Anthony Rhodes: Maseeh Department of Mathematics and Statistics, Portland State University
|
27
|
Pepito JA, Vasquez BA, Locsin RC. Artificial Intelligence and Autonomous Machines: Influences, Consequences, and Dilemmas in Human Care. Health (London) 2019. DOI: 10.4236/health.2019.117075.
|
28
|
|