1
Sardari S, Sharifzadeh S, Daneshkhah A, Nakisa B, Loke SW, Palade V, Duncan MJ. Artificial Intelligence for skeleton-based physical rehabilitation action evaluation: A systematic review. Comput Biol Med 2023;158:106835. [PMID: 37019012] [DOI: 10.1016/j.compbiomed.2023.106835]
Abstract
Performing prescribed physical exercises during home-based rehabilitation programs plays an important role in regaining muscle strength and improving balance for people with different physical disabilities. However, patients attending these programs cannot assess their own performance in the absence of a medical expert. Recently, vision-based sensors capable of capturing accurate skeleton data have been deployed for activity monitoring, and there have been significant advances in Computer Vision (CV) and Deep Learning (DL) methodologies. Together, these factors have enabled automatic models for monitoring patients' activity, and improving the performance of such systems to assist patients and physiotherapists has attracted wide interest from the research community. This paper provides a comprehensive and up-to-date literature review of the stages of skeleton data acquisition for physiotherapy exercise monitoring. It then reviews previously reported Artificial Intelligence (AI) based methodologies for skeleton data analysis, in particular feature learning from skeleton data, evaluation, and feedback generation for rehabilitation monitoring, together with the challenges associated with these processes. Finally, the paper puts forward several suggestions for future research directions in this area.
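A minimal, illustrative sketch of the kind of skeleton-based exercise evaluation this review surveys (not taken from any of the reviewed methods): it compares a patient's recorded joint trajectories against a clinician-provided reference sequence using dynamic time warping and maps the alignment cost to a 0-1 quality score. The array shapes, the frame_distance/dtw_distance/quality_score helpers, and the exponential scoring rule are all assumptions made for illustration.

```python
# Illustrative sketch only: score a rehabilitation exercise by aligning a
# patient's skeleton sequence with a reference recording via dynamic time
# warping (DTW) on 3D joint coordinates. Shapes and the scoring rule are
# assumptions, not the method of any specific reviewed paper.
import numpy as np

def frame_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two skeleton frames of shape (joints, 3)."""
    return float(np.linalg.norm(a - b))

def dtw_distance(seq_a: np.ndarray, seq_b: np.ndarray) -> float:
    """Length-normalised DTW cost between sequences of shape (frames, joints, 3)."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = frame_distance(seq_a[i - 1], seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m] / (n + m))

def quality_score(patient: np.ndarray, reference: np.ndarray, scale: float = 1.0) -> float:
    """Map alignment cost to a 0..1 score (1 = closely matches the reference)."""
    return float(np.exp(-dtw_distance(patient, reference) / scale))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(size=(50, 25, 3))   # e.g. 50 frames, 25 joints, xyz
    patient = reference + rng.normal(scale=0.05, size=reference.shape)
    print(f"quality score: {quality_score(patient, reference):.3f}")
```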
2
Beckers N, Siebert LC, Bruijnes M, Jonker C, Abbink D. Drivers of partially automated vehicles are blamed for crashes that they cannot reasonably avoid. Sci Rep 2022;12:16193. [PMID: 36171437] [PMCID: PMC9519957] [DOI: 10.1038/s41598-022-19876-0]
Abstract
People seem to hold the human driver primarily responsible when their partially automated vehicle crashes, but is this reasonable? The driver is often required to take over from the automation immediately when it fails, yet expecting the driver to remain vigilant throughout partially automated driving is unreasonable: drivers have difficulty taking over control when it is needed immediately, potentially resulting in dangerous situations. From a normative perspective, it would be reasonable to consider the impact of automation on the driver's ability to take over control when attributing responsibility for a crash. We therefore analyzed whether the public indeed considers driver ability when attributing responsibility to the driver, the vehicle, and its manufacturer. Participants blamed the driver primarily, even though they recognized the driver's diminished ability to avoid the crash. These results portend undesirable situations in which users of partial driving automation are held responsible, which may be unreasonable given the detrimental impact of driving automation on human drivers. Lastly, the outcome signals that public awareness of such human-factors issues in automated driving should be improved.
Affiliation(s)
- Niek Beckers: AiTech, Delft University of Technology, Delft, Netherlands; Cognitive Robotics, Faculty of Mechanical, Maritime, and Material Engineering, Delft University of Technology, Delft, Netherlands
- Luciano Cavalcante Siebert: AiTech, Delft University of Technology, Delft, Netherlands; Interactive Intelligence, Faculty of Electrical Engineering, Mathematics and Computer Science, Delft University of Technology, Delft, Netherlands
- Merijn Bruijnes: Public Governance and Management, Faculty of Law, Economics and Governance, Utrecht University, Utrecht, Netherlands
- Catholijn Jonker: AiTech, Delft University of Technology, Delft, Netherlands; Interactive Intelligence, Faculty of Electrical Engineering, Mathematics and Computer Science, Delft University of Technology, Delft, Netherlands
- David Abbink: AiTech, Delft University of Technology, Delft, Netherlands; Cognitive Robotics, Faculty of Mechanical, Maritime, and Material Engineering, Delft University of Technology, Delft, Netherlands
3
de Sio FS, Mecacci G, Calvert S, Heikoop D, Hagenzieker M, van Arem B. Realising Meaningful Human Control Over Automated Driving Systems: A Multidisciplinary Approach. Minds Mach (Dordr) 2022:1-25. [PMID: 35915817] [PMCID: PMC9330947] [DOI: 10.1007/s11023-022-09608-8]
Abstract
The paper presents a framework to realise "meaningful human control" over Automated Driving Systems. The framework is based on an original synthesis of the results of the multidisciplinary research project "Meaningful Human Control over Automated Driving Systems", led by a team of engineers, philosophers, and psychologists at Delft University of Technology from 2017 to 2021. Meaningful human control aims at protecting safety and reducing responsibility gaps. The framework rests on the core assumption that human persons and institutions, not hardware and software and their algorithms, should remain ultimately, though not necessarily directly, in control of, and thus morally responsible for, the potentially dangerous operation of driving in mixed traffic. We propose that an Automated Driving System is under meaningful human control if it behaves according to the relevant reasons of the relevant human actors (tracking) and if any potentially dangerous event can be related to a human actor (tracing). We operationalise the requirements for meaningful human control through multidisciplinary work in philosophy, behavioural psychology, and traffic engineering: the tracking condition via a proximal scale of reasons and the tracing condition via an evaluation cascade table. We review the implications and requirements for the behaviour and skills of human actors, in particular those related to supervisory control and driver education. We show how the evaluation cascade table can be applied in concrete engineering use cases, in combination with the definition of core components, to expose deficiencies in traceability and thereby avoid so-called responsibility gaps. Future research directions are proposed on expanding the philosophical framework and use cases, supervisory control and driver education, real-world pilots, and institutional embedding.
Affiliation(s)
- Giulio Mecacci: Delft University of Technology, Delft, The Netherlands; Donders Institute, Radboud University, Nijmegen, The Netherlands
- Bart van Arem: Delft University of Technology, Delft, The Netherlands
4
Abstract
There is widespread agreement that there should be a principle requiring that artificial intelligence (AI) be ‘explicable’. Microsoft, Google, the World Economic Forum, the draft AI ethics guidelines for the EU commission, etc. all include a principle for AI that falls under the umbrella of ‘explicability’. Roughly, the principle states that “for AI to promote and not constrain human autonomy, our ‘decision about who should decide’ must be informed by knowledge of how AI would act instead of us” (Floridi et al. in Minds Mach 28(4):689–707, 2018). There is a strong intuition that if an algorithm decides, for example, whether to give someone a loan, then that algorithm should be explicable. I argue here, however, that such a principle is misdirected. The property of requiring explicability should attach to a particular action or decision rather than the entity making that decision. It is the context and the potential harm resulting from decisions that drive the moral need for explicability—not the process by which decisions are reached. Related to this is the fact that AI is used for many low-risk purposes for which it would be unnecessary to require that it be explicable. A principle requiring explicability would prevent us from reaping the benefits of AI used in these situations. Finally, the explanations given by explicable AI are only fruitful if we already know which considerations are acceptable for the decision at hand. If we already have these considerations, then there is no need to use contemporary AI algorithms because standard automation would be available. In other words, a principle of explicability for AI makes the use of AI redundant.