
Reactive Video playback that you control with your body

Computer scientists have developed an entirely new way of interacting with video content that adapts to, and is controlled by, your body movement.

Fitness videos and other instructional content that aims to teach viewers new martial arts skills, exercises or yoga positions have been popular since the VHS era of the 1980s and are abundant on internet platforms such as YouTube.

However, these traditional forms of instructional video can lead to frustration, and even physical strain, as novice viewers or those with limited physical mobility struggle to keep up with and mimic the movements of the expert instructors.

Now an international team of researchers from Lancaster University, Stanford University and FXPAL has created a solution that dynamically adapts to mirror the position of the viewer’s body and matches the speed of video playback to the viewer’s movements.

The system, called ‘Reactive Video’, uses a Microsoft Kinect sensor, the latest skeleton-tracking software and probabilistic algorithms to identify the position and movement of joints and limbs, such as elbows, knees, arms, hands, hips and legs. By working out the viewer’s movements, it can compare them with the movements of the instructor in the video footage. It then estimates the time the user will take to perform a movement and adjusts the video’s playback to match the viewer’s position and pace.
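As a rough illustration of that control loop (not the authors’ implementation), the Python sketch below compares the user’s current joint positions with the instructor’s pose in nearby video frames and nudges playback towards the best match. The joint list, the simple Euclidean pose distance, the frame-window search and the `gain` smoothing factor are all illustrative assumptions; the paper itself uses probabilistic methods for the matching and timing estimation.

```python
import numpy as np

# Illustrative sketch: given per-frame instructor joint positions extracted from
# the video, and a live stream of user joint positions from a depth sensor such
# as the Kinect, find the video frame whose pose best matches the user's current
# pose and move playback towards it.

JOINTS = ["elbow_l", "elbow_r", "knee_l", "knee_r",
          "hand_l", "hand_r", "hip_l", "hip_r"]  # assumed joint set

def pose_vector(joints):
    """Flatten a {joint_name: (x, y, z)} dict into a single vector."""
    return np.concatenate([np.asarray(joints[name], dtype=float) for name in JOINTS])

def pose_distance(user_joints, instructor_joints):
    """Euclidean distance between two poses (assumes both are normalised
    to a common body-centred coordinate frame)."""
    return np.linalg.norm(pose_vector(user_joints) - pose_vector(instructor_joints))

def best_matching_frame(user_joints, instructor_frames, current_frame, window=60):
    """Search a window of frames around the current playback position and
    return the index whose instructor pose is closest to the user's pose."""
    current = int(round(current_frame))
    lo = max(0, current - window)
    hi = min(len(instructor_frames), current + window)
    distances = [pose_distance(user_joints, instructor_frames[i]) for i in range(lo, hi)]
    return lo + int(np.argmin(distances))

def update_playback(current_frame, user_joints, instructor_frames, gain=0.2):
    """Move playback a fraction of the way towards the best-matching frame,
    so the video follows the user's pace rather than jumping abruptly."""
    target = best_matching_frame(user_joints, instructor_frames, current_frame)
    return current_frame + gain * (target - current_frame)
```

In the actual system the comparison and timing estimation are probabilistic rather than a simple nearest-pose search, but the control loop has the same shape: sense the user, locate them within the instructor’s movement, and let that drive the playback position.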

As well as providing a more immersive experience, Reactive Video also helps users to more accurately mimic and learn new movements.

The researchers tested the system on study participants performing tai chi and radio exercises — a form of callisthenics popular in Japan. The results from the study showed that the system could adapt to the users’ movements in both types of exercise.

Dr Christopher Clarke, a researcher at Lancaster University and co-author of the paper, said: “Since the 1980s, and especially now with the Internet, videos have helped people stay active and have offered a cheaper, more convenient alternative to gym memberships and personal trainers. However, traditional video players do have limitations — they can’t provide feedback, or adapt the pace and intensity of the physical movement to the user.

“We know performing movements in slow motion is beneficial for learning, by providing opportunities to analyse your movements and develop timing. We also know it can result in less physical strain for inexperienced users.

“For some people, keeping pace can be tricky — especially when learning something new, and for older people or those with impaired movement. Also, constantly reaching for a remote to pause, rewind and replay can be frustrating and breaks the immersion.

“Our system overcomes these issues by having the video automatically adjust itself to play back at the user’s speed, which is less stressful and more beneficial for learning.”

Don Kimber, co-author of the research, said: “Reactive Video acts and feels like a magic mirror: as you move, the video mirrors your movement, but with a cleaned-up version of the procedure, or position, performed correctly by an expert for the user to mimic and learn from.”

An additional benefit of Reactive Video, and something that sets it apart from exercise content developed for game consoles, is that it can be applied to existing footage of appropriate video content, removing the need to create bespoke, specially produced material.

“By using this system we can post-process existing instructional video content and enhance it to dynamically adapt to users, providing a fundamental shift in how we can potentially interact with videos,” said Dr Clarke.
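One way to picture that post-processing step, as a sketch rather than the authors’ pipeline, is to run a pose estimator over the existing footage once and store one instructor pose per frame, which can then be fed to playback-matching logic like the sketch above. Here `estimate_pose` is a hypothetical stand-in for whichever pose estimator is available.

```python
import cv2  # OpenCV, for reading existing video files

def extract_instructor_poses(video_path, estimate_pose):
    """Pre-compute one instructor pose per frame of an existing video.

    `estimate_pose` is a placeholder: it should map a video frame to a
    {joint_name: position} dict, or None if no person is detected.
    """
    capture = cv2.VideoCapture(video_path)
    poses = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break  # end of video
        poses.append(estimate_pose(frame))
    capture.release()
    return poses
```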

The team believe that with further research this kind of adaptive technology could be developed for sports and activities such as learning dance routines or honing golf swings.

The Reactive Video system was presented at UIST 2020, a leading academic conference in the field of Human-Computer Interaction.

It is detailed in the paper ‘Reactive Video: Adaptive Video Playback Based on User Motion for Supporting Physical Activity’.

The study’s authors are Christopher Clarke, of Lancaster University; Doga Cavdir of Stanford University; and Patrick Chiu, Laurent Denoue and Don Kimber, of FXPAL.

