
Somewhere on the University of Michigan campus, tucked away in a windowless basement lab, Associate Professor Sanghyun Lee and his team of postdocs, PhD students, and MSc students have been busy changing the future of ergonomics. Dr. Lee has spearheaded a sensorless motion-capture technology that makes conducting a musculoskeletal risk assessment faster, easier, and more accurate than human observation alone. With this tool, “ergonomics professionals can focus on the solution, which is value-added, and not spend their time and effort to collect the data needed for analysis,” he said in a recent interview.

“Smartphone-Based Ergonomics Risk Assessment” was a session presented by Dr. Lee at the most recent National Ergonomics Conference & Ergo Expo. Though the session was held on the final day of the conference, people gathered to learn what he set out to do ten years ago: develop a faster and easier way to conduct a musculoskeletal disorder (MSD) risk assessment.

This groundbreaking technology will be integrated into a new advanced assessment module for VelocityEHS Ergonomics that is set to release in the spring of 2019. Since we can’t wait, I sat down with Dr. Lee to learn more about his work. Here’s what he said.

How long have you been researching this technology and why?

I’ve been working on this for ten years. I was looking for an easy and affordable way to collect human motion data to understand how it impacts ergonomics (productivity and safety). I was a bit frustrated with existing risk assessment technologies, which are often hard or inconvenient to apply in the field. When I discovered that computer vision can do this without interrupting workers’ ongoing work, I wanted to develop something to apply immediately in the field of occupational health and safety.

What is the most important feature of this technology?

No sensors, suits, or equipment setup required. Just take your smartphone and start recording.

How does it work?

The video is uploaded to the VelocityEHS Ergonomics System, and then sent to Kinetica Labs’ engine to be processed. Through artificial intelligence and machine learning, the software recognizes body segments and records postures, joint angles, frequencies, and durations. Within just a few minutes, a whole-body assessment is completed.
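To make the joint-angle step concrete, here is a minimal sketch of how an angle at a joint can be computed once a pose-estimation model has returned 2D keypoints for a video frame. The `joint_angle` function and the example keypoint coordinates are hypothetical illustrations, not the actual Kinetica Labs engine:

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at point b, formed by points a-b-c,
    e.g. shoulder-elbow-wrist gives the elbow angle."""
    # Vectors from the joint (b) out to the two neighboring keypoints
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    # Clamp to [-1, 1] to guard against floating-point drift
    cos_angle = max(-1.0, min(1.0, dot / (math.hypot(*v1) * math.hypot(*v2))))
    return math.degrees(math.acos(cos_angle))

# Hypothetical (x, y) pixel keypoints from one frame
shoulder, elbow, wrist = (100, 50), (120, 100), (170, 110)
print(round(joint_angle(shoulder, elbow, wrist)))  # → 123
```

A real pipeline would run a calculation like this per joint on every frame, accumulating the posture frequencies and durations the assessment reports.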

How does motion capture know where to put a joint, like an elbow?

We train our engine to “learn” where the joints are located from many videos.

How many people are on your team?

My current team has 12 researchers. Among them are three PhD students and two MSc students.

How do you know it works?

We have conducted several validation studies comparing the results from motion capture to human observation. A white paper describing the validation techniques and results is being developed. This will be made available to the public.

What is one thing you want people to know?

We have only just begun to see this kind of technology in the field. More will come. We need to be open-minded and embrace such technologies to improve our practice.

What were the three main takeaways from your session at Ergo Expo?

There are many. If I must choose only three, they would be:

  1. Workers’ compensation costs approximately $15 to $20 billion a year.
  2. To reduce these costs, MSD risk assessments must be done accurately. Companies need to find the jobs that pose a risk for MSDs and fix them before people get injured.
  3. Manual, observation-based risk assessments take time, from 30 to 60 minutes per job. This new tool reduces the assessment time dramatically. More time can be spent improving jobs than assessing them.

Learn about the history of motion capture and the pioneering work of Eadweard Muybridge, the English photographer and artist. His photographic studies of human and animal locomotion are summarized in this month’s Ergo U blog post.