1
Stirling L, Acosta-Sojo Y, Dennerlein JT. Defining a systems framework for characterizing physical work demands with wearable sensors. Ann Work Expo Health 2024; 68:443-465. [PMID: 38597679] [DOI: 10.1093/annweh/wxae024]
Abstract
Measuring the physical demands of work is important in understanding the relationship between exposure to these job demands and their impact on the safety, health, and well-being of working people. However, work is changing, and our knowledge of job demands should also evolve in anticipation of these changes. New opportunities exist for noninvasive long-term measures of physical demands through wearable motion sensors, including inertial measurement units, heart rate monitors, and muscle activity monitors. Inertial measurement units combine accelerometers, gyroscopes, and magnetometers to provide continuous measurement of a segment's motion and the ability to estimate orientation in 3-dimensional space. There is a need for a systems-thinking perspective on how and when to apply these wearable sensors within the context of research and practice surrounding the measurement of physical job demands. In this paper, a framework for measuring physical work demands is presented that can guide designers, researchers, and users in integrating and implementing these advanced sensor technologies in a way that is relevant to the decision-making needs of physical demand assessment. We (i) present a literature review of the way physical demands are currently being measured, (ii) present a framework that extends the International Classification of Functioning to guide how technology can measure the facets of work, (iii) provide a background on wearable motion sensing, and (iv) define 3 categories of decision-making that influence the questions that we can ask and the measures that are needed. By forming questions within these categories at each level of the framework, this approach encourages thinking about the systems-level problems inherent in the workplace and how they manifest at different scales.
Applying this framework provides a systems approach to guide study designs and methodological approaches to study how work is changing and how it impacts worker safety, health, and well-being.
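The abstract notes that inertial measurement units combine accelerometer, gyroscope, and magnetometer data to estimate segment orientation. As a purely illustrative sketch (not a method from the paper), a single-axis complementary filter shows the basic fusion idea: integrate the gyroscope rate, then correct its drift with the gravity-derived tilt. The function name, sample format, and the weight `alpha` are all assumptions:

```python
import math

def complementary_filter(acc_samples, gyro_samples, dt=0.01, alpha=0.98):
    """Estimate pitch (rad) by fusing accelerometer tilt with gyro rate.

    acc_samples: list of (ax, az) accelerations in m/s^2
    gyro_samples: list of angular rates (rad/s) about the pitch axis
    Returns the estimated pitch angle after the last sample.
    """
    # Initialize from the gravity vector of the first sample.
    theta = math.atan2(acc_samples[0][0], acc_samples[0][1])
    for (ax, az), omega in zip(acc_samples[1:], gyro_samples[1:]):
        theta_acc = math.atan2(ax, az)  # gravity-based tilt estimate
        # Trust the integrated gyro short-term, the accelerometer long-term.
        theta = alpha * (theta + omega * dt) + (1 - alpha) * theta_acc
    return theta
```

A static, level sensor (no rotation, gravity along z) should yield a pitch of zero, and a statically tilted one should converge to its tilt angle.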
Affiliation(s)
- Leia Stirling
- Industrial and Operations Engineering Department, Robotics Department, University of Michigan, Ann Arbor, MI 48109, United States
- Yadrianna Acosta-Sojo
- Industrial and Systems Engineering Department, Auburn University, Auburn, AL 36849, United States
- Jack T Dennerlein
- Sargent College of Health and Rehabilitation Sciences, Boston University, Boston, MA 02215, United States
2
Radwin RG, Hu YH, Akkas O, Bao S, Harris-Adamson C, Lin JH, Meyers AR, Rempel D. Comparison of the observer, single-frame video and computer vision hand activity levels. Ergonomics 2023; 66:1132-1141. [PMID: 36227226] [PMCID: PMC10130228] [DOI: 10.1080/00140139.2022.2136407]
Abstract
Observer, manual single-frame video, and automated computer vision measures of the Hand Activity Level (HAL) were compared. HAL can be measured three ways: (1) observer rating (HALO), (2) calculated from single-frame multimedia video task analysis for measuring frequency (F) and duty cycle (D) (HALF), or (3) from automated computer vision (HALC). This study analysed videos collected from three prospective cohort studies to ascertain HALO, HALF, and HALC for 419 industrial videos. Although the differences for the three methods were relatively small on average (<1), they were statistically significant (p < .001). Agreement between the HALC and HALF ratings within ±1 point on the HAL scale was the most consistent, where more than two thirds (68%) of all the cases were within that range and had a linear regression through the mean coefficient of 1.03 (R2 = 0.89). The results suggest that the computer vision methodology yields results comparable to single-frame video analysis. Practitioner summary: The ACGIH Hand Activity Level (HAL) was obtained for 419 industrial tasks using three methods: observation, calculation from single-frame video analysis, and computer vision. The computer vision methodology produced results that were comparable to single-frame video analysis.
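The ±1-point agreement statistic reported above (68% of HALC–HALF pairs) can be illustrated with a small helper; the function name and the rating pairs below are invented for demonstration and are not data from the study:

```python
def fraction_within_band(pairs, band=1.0):
    """Fraction of paired HAL ratings whose absolute difference is <= band."""
    return sum(1 for a, b in pairs if abs(a - b) <= band) / len(pairs)

# Hypothetical (HALC, HALF) pairs on the 0-10 HAL scale:
ratings = [(4.0, 4.5), (6.0, 7.2), (2.0, 2.8), (5.0, 6.4)]
```

Here two of the four invented pairs differ by at most one point, so the helper returns 0.5.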
Affiliation(s)
- Yu Hen Hu
- University of Wisconsin, Madison, WI, USA
- Oguz Akkas
- University of Wisconsin, Madison, WI, USA
- Stephen Bao
- SHARP Program, Washington State Department of Labor and Industries, Olympia, WA, USA
- Jia-Hua Lin
- SHARP Program, Washington State Department of Labor and Industries, Olympia, WA, USA
- Alysha R. Meyers
- Division of Field Studies and Engineering, National Institute for Occupational Safety and Health, Cincinnati, OH, USA
- David Rempel
- University of California-San Francisco, San Francisco, CA, USA
3
Azari DP, Frasier LL, Miller BL, Pavuluri Quamme SR, Le BV, Greenberg CC, Radwin RG. Modeling Performance of Open Surgical Cases. Simul Healthc 2021; 16:e188-e193. [PMID: 34860738] [DOI: 10.1097/sih.0000000000000544]
Abstract
INTRODUCTION Previous efforts used digital video to develop computer-generated assessments of surgical hand motion economy and fluidity of motion. This study tests how well previously trained assessment models match expert ratings of suturing and tying video clips recorded in a new operating room (OR) setting. METHODS Enabled through computer vision of the hands, this study tests the applicability of assessments born out of benchtop simulations to in vivo suturing and tying tasks recorded in the OR. RESULTS Compared with expert ratings, computer-generated assessments for fluidity of motion (slope = 0.83, intercept = 1.77, R2 = 0.55) performed better than motion economy (slope = 0.73, intercept = 2.04, R2 = 0.49), although 85% of ratings for both models were within ±2 of the expert response. Neither assessment performed as well in the OR as on the training data. Assessments were sensitive to changing hand postures, dropped ligatures, and poor tissue contact, features typically missing from training data. Computer-generated assessment of OR tasks was contingent on a clear, consistent view of both of the surgeon's hands. CONCLUSIONS Computer-generated assessment may help provide formative feedback during deliberate practice, albeit with greater variability in the OR compared with benchtop simulations. Future work will benefit from an expanded set of available bimanual video records.
Affiliation(s)
- David P Azari
- From the Department of Industrial and Systems Engineering (D.P.A., R.G.R.); Department of Surgery (S.R.P.Q., C.C.G.), Clinical Sciences Center; Department of Urology (B.V.L.); and Duane H. and Dorothy M. Bluemke Professor in the College of Engineering (R.G.R.), University of Wisconsin-Madison, Madison, WI; Department of Surgery (L.L.F.), Penn Medicine - University of Pennsylvania Health System, Philadelphia, PA; City of Hope National Comprehensive Cancer Center (B.L.M), Duarte, CA
4
Seidel DH, Heinrich K, Hermanns-Truxius I, Ellegast RP, Barrero LH, Rieger MA, Steinhilber B, Weber B. Assessment of work-related hand and elbow workloads using measurement-based TLV for HAL. Applied Ergonomics 2021; 92:103310. [PMID: 33352500] [DOI: 10.1016/j.apergo.2020.103310]
Abstract
Direct-measurement-based methods for assessing workloads of the hand or elbow in the field are rare. The aim of the study was to develop such a method based on the Threshold Limit Value for Hand Activity Level (TLV for HAL). Hence, HAL was quantified using kinematic data (mean power frequencies, angular velocities, and micro-pauses) and combined with electromyographic data (root-mean-square values) to generate a measurement-based TLV for HAL (mTLV for HAL). The multi-sensor system CUELA, including inertial sensors, potentiometers, and a 4-channel surface electromyography module, was used. For the wrist and elbow regions, associations between the mTLV for HAL and disorders/complaints (quantified by odds ratios (OR [95% confidence interval])) were tested exploratively within a cross-sectional field study with 500 participants. Higher workloads were in many cases significantly associated with arthrosis of the distal joints (9.23 [3.29-25.87]), wrist complaints (2.89 [1.63-5.11]), or elbow complaints (1.99 [1.08-3.67]). The new method could extend previous application possibilities.
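The associations above are reported as odds ratios with 95% confidence intervals. For readers unfamiliar with the computation, a minimal sketch of the standard 2x2-table odds ratio with a Woolf (log-scale) interval follows; the cell counts in the example are invented and unrelated to the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (Woolf method) for a 2x2 table.

    a = exposed cases,   b = exposed non-cases
    c = unexposed cases, d = unexposed non-cases
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) from the four cell counts.
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi
```

For example, `odds_ratio_ci(10, 90, 5, 95)` returns an OR slightly above 2 with an interval bracketing it, the same shape as the `OR [95%-CI]` figures quoted in the abstract.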
Affiliation(s)
- David H Seidel
- Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA), Alte Heerstrasse 111, Sankt Augustin, 53757, Germany; University Hospital Tuebingen, Institute of Occupational and Social Medicine and Health Services Research (IASV), Wilhelmstrasse 27, Tuebingen, 72074, Germany
- Kai Heinrich
- Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA), Alte Heerstrasse 111, Sankt Augustin, 53757, Germany
- Ingo Hermanns-Truxius
- Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA), Alte Heerstrasse 111, Sankt Augustin, 53757, Germany
- Rolf P Ellegast
- Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA), Alte Heerstrasse 111, Sankt Augustin, 53757, Germany
- Lope H Barrero
- Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA), Alte Heerstrasse 111, Sankt Augustin, 53757, Germany; School of Engineering, Department of Industrial Engineering, Pontificia Universidad Javeriana, Carrera 7 No. 40 - 62, Bogotá DC, 110231, Colombia
- Monika A Rieger
- University Hospital Tuebingen, Institute of Occupational and Social Medicine and Health Services Research (IASV), Wilhelmstrasse 27, Tuebingen, 72074, Germany
- Benjamin Steinhilber
- University Hospital Tuebingen, Institute of Occupational and Social Medicine and Health Services Research (IASV), Wilhelmstrasse 27, Tuebingen, 72074, Germany
- Britta Weber
- Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA), Alte Heerstrasse 111, Sankt Augustin, 53757, Germany
5
Ranavolo A, Ajoudani A, Cherubini A, Bianchi M, Fritzsche L, Iavicoli S, Sartori M, Silvetti A, Vanderborght B, Varrecchia T, Draicchio F. The Sensor-Based Biomechanical Risk Assessment at the Base of the Need for Revising of Standards for Human Ergonomics. Sensors 2020; 20:5750. [PMID: 33050438] [PMCID: PMC7599507] [DOI: 10.3390/s20205750]
Abstract
Due to the epochal changes introduced by “Industry 4.0”, it is becoming harder to apply the varying approaches for biomechanical risk assessment of manual handling tasks used to prevent work-related musculoskeletal disorders (WMSDs) considered within the International Standards for ergonomics. In fact, innovative human–robot collaboration (HRC) systems are widening the number of work motor tasks that cannot be assessed. On the other hand, new sensor-based tools for biomechanical risk assessment could be used both for quantitative “direct instrumental evaluations” and for “rating of standard methods”, allowing certain improvements over traditional methods. In this light, this Letter aims to identify the need to revise the standards for human ergonomics and biomechanical risk assessment by analyzing WMSD prevalence and incidence; additionally, the strengths and weaknesses of the traditional methods listed within the International Standards for manual handling activities, and the challenges to be addressed in their revision, are considered. As a representative example, the discussion refers to the lifting of heavy loads, where the revision should include the use of sensor-based tools for biomechanical risk assessment during lifting performed with exoskeletons, by more than one person (team lifting), and when the traditional methods cannot be applied. The wearability of sensing and feedback sensors, in addition to human augmentation technologies, allows workers' awareness of possible risks to be increased and enhances effectiveness and safety during the execution of many manual handling activities.
Affiliation(s)
- Alberto Ranavolo
- Department of Occupational and Environmental Medicine, Epidemiology and Hygiene, INAIL, Monte Porzio Catone, 00040 Rome, Italy
- Correspondence: Tel.: +39-043-224-0233
- Arash Ajoudani
- HRI2 Laboratory, Istituto Italiano di Tecnologia, 16163 Genova, Italy
- Matteo Bianchi
- Centro di Ricerca “Enrico Piaggio” and Department of Information Engineering, Università di Pisa, 56126 Pisa, Italy
- Lars Fritzsche
- Ergonomics Division, IMK Automotive GmbH, 09128 Chemnitz, Germany
- Sergio Iavicoli
- Department of Occupational and Environmental Medicine, Epidemiology and Hygiene, INAIL, Monte Porzio Catone, 00040 Rome, Italy
- Massimo Sartori
- Department of Biomechanical Engineering, University of Twente, 7522 NB Enschede, The Netherlands
- Alessio Silvetti
- Department of Occupational and Environmental Medicine, Epidemiology and Hygiene, INAIL, Monte Porzio Catone, 00040 Rome, Italy
- Bram Vanderborght
- Brubotics, Vrije Universiteit Brussel, 1050 Brussels, Belgium
- Flanders Make, Oude Diestersebaan 133, 3920 Lommel, Belgium
- Tiwana Varrecchia
- Department of Occupational and Environmental Medicine, Epidemiology and Hygiene, INAIL, Monte Porzio Catone, 00040 Rome, Italy
- Francesco Draicchio
- Department of Occupational and Environmental Medicine, Epidemiology and Hygiene, INAIL, Monte Porzio Catone, 00040 Rome, Italy
6
Azari DP, Hu YH, Miller BL, Le BV, Radwin RG. Using Surgeon Hand Motions to Predict Surgical Maneuvers. Human Factors 2019; 61:1326-1339. [PMID: 31013463] [DOI: 10.1177/0018720819838901]
Abstract
OBJECTIVE This study explores how common machine learning techniques can predict surgical maneuvers from a continuous video record of surgical benchtop simulations. BACKGROUND Automatic computer vision recognition of surgical maneuvers (suturing, tying, and transition) could expedite video review and objective assessment of surgeries. METHOD We recorded hand movements of 37 clinicians performing simple and running subcuticular suturing benchtop simulations, and applied three machine learning techniques (decision trees, random forests, and hidden Markov models) to classify surgical maneuvers every 2 s (60 frames) of video. RESULTS Random forest predictions of surgical video correctly classified 74% of all video segments into suturing, tying, and transition states for a randomly selected test set. Hidden Markov model adjustments improved the random forest predictions to 79% for simple interrupted suturing on a subset of randomly selected participants. CONCLUSION Random forest predictions aided by hidden Markov modeling provided the best prediction of surgical maneuvers. Training of models across all users improved prediction accuracy by 10% compared with a random selection of participants. APPLICATION Marker-less video hand tracking can predict surgical maneuvers from a continuous video record with similar accuracy as robot-assisted surgical platforms, and may enable more efficient video review of surgical procedures for training and coaching.
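The abstract describes smoothing random-forest window predictions with a hidden Markov model. As an illustrative sketch only (not the authors' implementation), a toy Viterbi decoder can smooth a noisy per-window label sequence over the three maneuver states; the "sticky" transition and emission probabilities below are invented:

```python
import math

STATES = ["suturing", "tying", "transition"]

def viterbi(obs, trans_p, emit_p, init_p):
    """Most likely state path for an observed label sequence (log domain)."""
    V = [{s: math.log(init_p[s]) + math.log(emit_p[s][obs[0]]) for s in STATES}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in STATES:
            # Best predecessor for state s at this step.
            prev, score = max(
                ((p, V[-1][p] + math.log(trans_p[p][s])) for p in STATES),
                key=lambda x: x[1])
            col[s] = score + math.log(emit_p[s][o])
            ptr[s] = prev
        V.append(col)
        back.append(ptr)
    # Trace back from the best final state.
    path = [max(STATES, key=lambda s: V[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Invented, "sticky" parameters: states persist, observed labels are 80% reliable.
trans_p = {s: {t: 0.8 if s == t else 0.1 for t in STATES} for s in STATES}
emit_p = {s: {o: 0.8 if s == o else 0.1 for o in STATES} for s in STATES}
init_p = {s: 1 / 3 for s in STATES}
```

With these parameters, an isolated one-window "tying" glitch inside a run of "suturing" windows is smoothed away, which is the kind of correction the abstract attributes to the HMM adjustment.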
Affiliation(s)
- Yu Hen Hu
- University of Wisconsin-Madison, USA
7
Azari DP, Frasier LL, Quamme SRP, Greenberg CC, Pugh C, Greenberg JA, Radwin RG. Modeling Surgical Technical Skill Using Expert Assessment for Automated Computer Rating. Ann Surg 2019; 269:574-581. [PMID: 28885509] [PMCID: PMC7412996] [DOI: 10.1097/sla.0000000000002478]
Abstract
OBJECTIVE Computer vision was used to predict expert performance ratings from surgeon hand motions for tying and suturing tasks. SUMMARY BACKGROUND DATA Existing methods, including the objective structured assessment of technical skills (OSATS), have proven reliable, but do not readily discriminate at the task level. Computer vision may be used for evaluating distinct task performance throughout an operation. METHODS Open surgeries were videoed and surgeon hands were tracked without using sensors or markers. An expert panel of 3 attending surgeons rated tying and suturing video clips on continuous scales from 0 to 10 along 3 task measures adapted from the broader OSATS: motion economy, fluidity of motion, and tissue handling. Empirical models were developed to predict the expert consensus ratings based on the hand kinematic data records. RESULTS The predicted versus panel ratings for suturing had slopes from 0.73 to 1, and intercepts from 0.36 to 1.54 (average R2 = 0.81). Predicted versus panel ratings for tying had slopes from 0.39 to 0.88, and intercepts from 0.79 to 4.36 (average R2 = 0.57). The mean squared error between predicted and expert ratings was consistently less than the mean squared difference between individual expert ratings and the eventual consensus ratings. CONCLUSIONS The computer algorithm consistently predicted the panel ratings of individual tasks, and was more objective and reliable than individual assessment by surgical experts.
Affiliation(s)
- David P. Azari
- Department of Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, WI
- Lane L. Frasier
- Wisconsin Surgical Outcomes Research (WiSOR) Program, Department of Surgery, University of Wisconsin-Madison, Madison, WI
- Caprice C. Greenberg
- Department of Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, WI
- Wisconsin Surgical Outcomes Research (WiSOR) Program, Department of Surgery, University of Wisconsin-Madison, Madison, WI
- Carla Pugh
- Wisconsin Surgical Outcomes Research (WiSOR) Program, Department of Surgery, University of Wisconsin-Madison, Madison, WI
- Jacob A. Greenberg
- Wisconsin Surgical Outcomes Research (WiSOR) Program, Department of Surgery, University of Wisconsin-Madison, Madison, WI
- Robert G. Radwin
- Department of Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, WI
- Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, WI
8
Greene RL, Azari DP, Hu YH, Radwin RG. Visualizing stressful aspects of repetitive motion tasks and opportunities for ergonomic improvements using computer vision. Applied Ergonomics 2017; 65:461-472. [PMID: 28284701] [DOI: 10.1016/j.apergo.2017.02.020]
Abstract
Patterns of physical stress exposure are often difficult to measure, and the metrics of variation and techniques for identifying them are underdeveloped in the practice of occupational ergonomics. Computer vision has previously been used for evaluating repetitive motion tasks for hand activity level (HAL) utilizing conventional 2D videos. The approach was made practical by relaxing the need for high precision and by adopting a semi-automatic approach for measuring spatiotemporal characteristics of the repetitive task. In this paper, a new method for visualizing task factors using this computer vision approach is demonstrated. After videos are made, the analyst selects a region of interest on the hand to track, and the hand location and its associated kinematics are measured for every frame. The visualization method spatially deconstructs and displays the frequency, speed, and duty cycle components of tasks that are part of the threshold limit value for hand activity, for the purpose of identifying patterns of exposure associated with specific job factors, as well as for suggesting task improvements. The localized variables are plotted as a heat map superimposed over the video and displayed in the context of the task being performed. Based on the intensity of the specific variables used to calculate HAL, we can determine which task factors contribute most to HAL, and readily identify those work elements in the task that contribute more to increased risk for an injury. Work simulations and actual industrial examples are described. This method should help practitioners more readily measure and interpret temporal exposure patterns and identify potential task improvements.
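The heat-map idea above, localizing kinematic intensity in the image, can be sketched by binning tracked hand positions into a coarse spatial grid and accumulating mean speed per cell. This illustrates the concept only; the grid size, frame rate, and track data are assumptions, not the paper's method:

```python
def speed_heatmap(track, width, height, nx=4, ny=3, fps=30.0):
    """Accumulate mean hand speed (px/s) into an ny-by-nx spatial grid.

    track: list of (x, y) pixel positions, one per video frame.
    """
    total = [[0.0] * nx for _ in range(ny)]
    count = [[0] * nx for _ in range(ny)]
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 * fps
        # Bin by the position at the end of the frame step.
        i = min(int(y1 / height * ny), ny - 1)
        j = min(int(x1 / width * nx), nx - 1)
        total[i][j] += speed
        count[i][j] += 1
    return [[total[i][j] / count[i][j] if count[i][j] else 0.0
             for j in range(nx)] for i in range(ny)]
```

The resulting grid could then be rendered semi-transparently over the video frame, which is the spirit of the superimposed heat map the abstract describes.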
Affiliation(s)
- Yu Hen Hu
- University of Wisconsin-Madison, United States
9
Radwin RG, Lee JD, Akkas O. Driver Movement Patterns Indicate Distraction and Engagement. Human Factors 2017; 59:844-860. [PMID: 28704631] [DOI: 10.1177/0018720817696496]
Abstract
Objective This research considers how driver movements in video clips of naturalistic driving are related to observer subjective ratings of distraction and engagement behaviors. Background Naturalistic driving video provides a unique window into driver behavior unmatched by crash data, roadside observations, or driving simulator experiments. However, manually coding many thousands of hours of video is impractical. An objective method is needed to identify driver behaviors suggestive of distracted or disengaged driving for automated computer vision analysis to access this rich source of data. Method Visual analog scales ranging from 0 to 10 were created, and observers rated their perception of driver distraction and engagement behaviors from selected naturalistic driving videos. Driver kinematics time series were extracted from frame-by-frame coding of driver motions, including head rotation, head flexion/extension, and hands on/off the steering wheel. Results The ratings were consistent among participants. A statistical model predicting average ratings from the kinematic features accounted for 54% of distraction rating variance and 50% of engagement rating variance. Conclusion Rated distraction behavior was positively related to the magnitude of head rotation and fraction of time the hands were off the wheel. Rated engagement behavior was positively related to the variation of head rotation and negatively related to the fraction of time the hands were off the wheel. Application If automated computer vision can code simple kinematic features, such as driver head and hand movements, then large-volume naturalistic driving videos could be automatically analyzed to identify instances when drivers were distracted or disengaged.
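The variance-explained figures above (54% and 50%) come from regression modeling of ratings on kinematic features. As a generic illustration (not the authors' statistical model), ordinary least squares for a single predictor and its R² can be computed as follows; the numbers in the usage example are synthetic:

```python
def ols_r2(xs, ys):
    """Simple linear regression y = b0 + b1*x; returns (b0, b1, r2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b1 = sxy / sxx
    b0 = my - b1 * mx
    # R^2 = 1 - residual sum of squares / total sum of squares.
    ss_res = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return b0, b1, 1 - ss_res / ss_tot
```

An R² of 0.54, as reported for the distraction model, means the fitted line accounts for 54% of the total variance in the ratings.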
10
Akkas O, Lee CH, Hu YH, Yen TY, Radwin RG. Measuring elemental time and duty cycle using automated video processing. Ergonomics 2016; 59:1514-1525. [PMID: 26848051] [PMCID: PMC5226076] [DOI: 10.1080/00140139.2016.1146347]
Abstract
A marker-less 2D video algorithm measured hand kinematics (location, velocity and acceleration) in a paced repetitive laboratory task for varying hand activity levels (HAL). The decision tree (DT) algorithm identified the trajectory of the hand using spatiotemporal relationships during the exertion and rest states. The feature vector training (FVT) method utilised the k-nearest neighbour classifier, trained using a set of samples or the first cycle. The average duty cycle (DC) error using the DT algorithm was 2.7%. The FVT algorithm had an average 3.3% error when trained using the first cycle sample of each repetitive task, and a 2.8% average error when trained using several representative repetitive cycles. Error for HAL was 0.1 for both algorithms, which was considered negligible. Elemental times, stratified by task and subject, were not statistically different from ground truth (p < 0.05). Both algorithms performed well for automatically measuring elapsed time, DC and HAL. Practitioner Summary: A completely automated approach for measuring elapsed time and DC was developed using marker-less video tracking and the tracked kinematic record. Such an approach is automatic, repeatable, objective and unobtrusive, and is suitable for evaluating repetitive exertions, muscle fatigue and manual tasks.
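Duty cycle in the sense above is the percentage of time spent in the exertion state. A minimal sketch, assuming exertion frames are identified by a simple speed threshold (the threshold and speed values are invented; the paper's DT and FVT algorithms are more sophisticated):

```python
def duty_cycle(speeds, threshold):
    """Percent of video frames classified as exertion (speed above threshold)."""
    exertion_frames = sum(1 for s in speeds if s > threshold)
    return 100.0 * exertion_frames / len(speeds)
```

With a tracked kinematic record, this kind of per-frame classification is what makes DC measurement automatic, repeatable, and unobtrusive.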
Affiliation(s)
- Oguz Akkas
- Department of Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, WI 53706, USA
- Cheng-Hsien Lee
- Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, WI 53706, USA
- Yu Hen Hu
- Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, WI 53706, USA
- Thomas Y. Yen
- Department of Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, WI 53706, USA
- Robert G. Radwin
- Department of Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, WI 53706, USA
- Corresponding author: Robert G. Radwin, PhD, Department of Industrial and Systems Engineering, 1550 Engineering Drive, Madison, WI 53706,
11
Radwin RG, Azari DP, Lindstrom MJ, Ulin SS, Armstrong TJ, Rempel D. A frequency-duty cycle equation for the ACGIH hand activity level. Ergonomics 2015; 58:173-183. [PMID: 25343340] [PMCID: PMC4302734] [DOI: 10.1080/00140139.2014.966154]
Abstract
A new equation for predicting the hand activity level (HAL) used in the American Conference of Governmental Industrial Hygienists threshold limit value® (TLV®) was based on exertion frequency (F) and percentage duty cycle (D). The TLV® includes a table for estimating HAL from F and D originating from data in Latko et al. (Latko WA, Armstrong TJ, Foulke JA, Herrin GD, Rabourn RA, Ulin SS. Development and evaluation of an observational method for assessing repetition in hand tasks. American Industrial Hygiene Association Journal, 58(4):278-285, 1997) and post hoc adjustments that include extrapolations outside of the data range. Multimedia video task analysis determined D for two additional jobs from Latko's study not in the original data-set, and a new nonlinear regression equation was developed to better fit the data and create a more accurate table. The equation, HAL = 6.56 ln D [F^1.31 / (1 + 3.18 F^1.31)], generally matches the TLV® HAL lookup table, and is a substantial improvement over the linear model, particularly for jobs with F > 1.25 Hz and D > 60%. The equation more closely fits the data and applies the TLV® using a continuous function.
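The equation transcribes directly into code. Here D is assumed to be the percentage duty cycle (e.g. 60 for 60%) and F the exertion frequency in exertions per second, matching the abstract's variables; the function name is ours, and any practical use should be checked against the published TLV® lookup table:

```python
import math

def hal(frequency_hz, duty_cycle_pct):
    """HAL = 6.56 ln(D) * [F^1.31 / (1 + 3.18 F^1.31)]

    frequency_hz:   exertion frequency F, in exertions/s
    duty_cycle_pct: duty cycle D, in percent (0-100)
    """
    f131 = frequency_hz ** 1.31
    return 6.56 * math.log(duty_cycle_pct) * f131 / (1 + 3.18 * f131)
```

As expected of the continuous form, the predicted HAL increases monotonically with both frequency and duty cycle instead of jumping between table cells.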
Affiliation(s)
- Robert G. Radwin
- Department of Industrial and Systems Engineering, University of Wisconsin-Madison
- Corresponding author: University of Wisconsin-Madison, 1550 Engineering Drive, Madison, WI 53706-1608,
- David P. Azari
- Department of Industrial and Systems Engineering, University of Wisconsin-Madison
- Mary J. Lindstrom
- Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison
- Sheryl S. Ulin
- Department of Industrial and Operations Engineering, University of Michigan
- David Rempel
- Department of Medicine, University of California, San Francisco
12
Chen CH, Azari D, Hu YH, Lindstrom MJ, Thelen D, Yen TY, Radwin RG. The accuracy of conventional 2D video for quantifying upper limb kinematics in repetitive motion occupational tasks. Ergonomics 2015; 58:2057-2066. [PMID: 25978764] [PMCID: PMC4684497] [DOI: 10.1080/00140139.2015.1051594]
Abstract
Marker-less 2D video tracking was studied as a practical means to measure upper limb kinematics for ergonomics evaluations. Hand activity level (HAL) can be estimated from speed and duty cycle. Accuracy was measured using a cross-correlation template-matching algorithm for tracking a region of interest on the upper extremities. Ten participants performed a paced load transfer task while varying HAL (2, 4, and 5) and load (2.2 N, 8.9 N and 17.8 N). Speed and acceleration measured from 2D video were compared against ground truth measurements using 3D infrared motion capture. The median absolute difference between 2D video and 3D motion capture was 86.5 mm/s for speed and 591 mm/s² for acceleration, and less than 93 mm/s for speed and 656 mm/s² for acceleration when camera pan and tilt were within ±30 degrees. Single-camera 2D video had sufficient accuracy (<100 mm/s) for evaluating HAL. Practitioner Summary: This study demonstrated that 2D video tracking had sufficient accuracy to measure HAL for ascertaining the American Conference of Governmental Industrial Hygienists Threshold Limit Value® for repetitive motion when the camera is located within ±30 degrees off the plane of motion, when compared against 3D motion capture for a simulated repetitive motion task.
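A cross-correlation template matcher of the kind described can be sketched in a few lines for small grayscale arrays. This illustrative version exhaustively scans the frame and scores each candidate window by normalized cross-correlation; production trackers (e.g. OpenCV's `matchTemplate`) do the same thing far more efficiently:

```python
def match_template(frame, patch):
    """Return (row, col) of the best normalized cross-correlation match."""
    fh, fw = len(frame), len(frame[0])
    ph, pw = len(patch), len(patch[0])
    pm = sum(sum(r) for r in patch) / (ph * pw)  # patch mean
    best, best_score = (0, 0), float("-inf")
    for i in range(fh - ph + 1):
        for j in range(fw - pw + 1):
            win = [[frame[i + y][j + x] for x in range(pw)] for y in range(ph)]
            wm = sum(sum(r) for r in win) / (ph * pw)  # window mean
            num = sum((win[y][x] - wm) * (patch[y][x] - pm)
                      for y in range(ph) for x in range(pw))
            den = (sum((win[y][x] - wm) ** 2
                       for y in range(ph) for x in range(pw))
                   * sum((patch[y][x] - pm) ** 2
                         for y in range(ph) for x in range(pw))) ** 0.5
            score = num / den if den else 0.0
            if score > best_score:
                best, best_score = (i, j), score
    return best
```

Repeating the search frame-by-frame around the previous location yields the region-of-interest track from which speed and acceleration are then differenced.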
Affiliation(s)
- Chia-Hsiung Chen
- Department of Electrical and Computer Engineering, University of Wisconsin-Madison
- David Azari
- Department of Industrial and Systems Engineering, University of Wisconsin-Madison
- Yu Hen Hu
- Department of Electrical and Computer Engineering, University of Wisconsin-Madison
- Mary J. Lindstrom
- Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison
- Darryl Thelen
- Department of Mechanical Engineering, University of Wisconsin-Madison
- Thomas Y. Yen
- Department of Industrial and Systems Engineering, University of Wisconsin-Madison
- Robert G. Radwin
- Department of Industrial and Systems Engineering, University of Wisconsin-Madison
- Corresponding author: Robert G. Radwin, PhD, 1550 Engineering Drive, University of Wisconsin-Madison, Madison, WI 53706, 608-263-6596, 608-262-8454,