
# Abstract

Analyzing human posture and precisely comparing it across different subjects is essential for accurate understanding of behavior and numerous vision applications such as medical diagnostics or sports. Motion magnification techniques help to see even small deviations in posture that are invisible to the naked eye. However, they fail when comparing subtle posture differences across individuals with diverse appearance. Keypoint-based posture estimation and classification techniques can handle large variations in appearance, but are invariant to subtle deviations in posture. We present an approach to unsupervised magnification of posture differences across individuals despite large deviations in appearance. We do not require keypoint annotation and visualize deviations on a sub-bodypart level. To transfer appearance across subjects onto a magnified posture, we propose a novel loss for disentangling appearance and posture in an autoencoder. Posture magnification yields exaggerated images that are different from the training set. Therefore, we incorporate magnification already into the training of the disentangled autoencoder and learn on real data and synthesized magnifications without supervision. Experiments confirm that our approach improves upon the state-of-the-art in magnification and on the application of discovering posture deviations due to impairment.

# Our Poster


## Our Approach

We facilitate the perception of subtle posture deviations between a query frame $$x^q$$ and a reference frame $$x^r$$ by magnifying their differences in the posture encoding. The subject in $$x^q$$ walks with legs further apart, highlighted by the red line in comparison to the green line of $$x^r$$. We first disentangle posture from appearance. Then, in the posture encoding space, we extrapolate from $$x^r$$ in the direction of $$x^q$$, beyond $$x^q$$ itself. The magnified images are generated by combining the appearance encoding of $$x^q$$ with the magnified posture encoding, using different magnification intensities $$\lambda$$. The generated images make the posture differences easier to perceive.
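The extrapolation step above can be sketched as a linear operation in the posture encoding space. This is a minimal illustration, not the trained model: the encoder networks are replaced by fixed vectors, and `magnify_posture` is a hypothetical helper name; only the extrapolation formula reflects the description above.

```python
import numpy as np

def magnify_posture(z_pose_ref, z_pose_query, lam):
    """Extrapolate from the reference posture encoding in the direction of
    the query encoding; lam = 1 reproduces the query, lam > 1 magnifies."""
    return z_pose_ref + lam * (z_pose_query - z_pose_ref)

# Stand-in posture encodings (in the real model, outputs of a learned encoder).
z_r = np.array([0.2, 0.5, 0.1])   # posture encoding of reference frame x^r
z_q = np.array([0.4, 0.5, 0.3])   # posture encoding of query frame x^q

z_mag = magnify_posture(z_r, z_q, lam=2.0)
# The decoder would then combine z_mag with the appearance encoding of x^q
# to generate the magnified image.
print(z_mag)  # [0.6 0.5 0.5]
```

With $$\lambda = 1$$ the query posture is reproduced unchanged; increasing $$\lambda$$ exaggerates the deviation from the reference while the appearance stays fixed.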

## Results and Comparison with Previous Work¹

### Transfer of Query Posture to Different Appearances

¹ T.-H. Oh, R. Jaroensri, C. Kim, M. Elgharib, F. Durand, W. T. Freeman, and W. Matusik. Learning-based video motion magnification. In ECCV 2018.

## Acknowledgement

This work has been supported in part by DFG grant 421703927, the German Federal Ministry for Economic Affairs and Energy (BMWi) within the project 'KI Absicherung', and a hardware donation from NVIDIA Corporation.
The authors are grateful to Linard Filli for providing the recordings used in HG2DB.