Pose tracking is the task of estimating multi-person human poses in videos and assigning a unique instance ID to each keypoint across frames. Accurate estimation of human keypoint trajectories is useful for human action recognition, human interaction understanding, motion capture, and animation.
Analyzing human motion with high autonomy requires advanced capabilities in sensing, communication, energy management, and AI. Wearable systems let us go beyond external cameras, enabling motion analysis in the wild. However, such systems are still only semi-autonomous: they require careful sensor calibration and precise positioning on the body throughout the motion. Moreover, they suffer from bulky batteries, time-synchronization issues, sensor noise, and drift. These restrictions hinder the use of wearable motion analysis in applications that rely on long-term tracking, such as everyday gait analysis, performance measurement in the wild, and full-body VR controllers. In this project, we aim to make wearable motion-analysis systems autonomous and non-intrusive. To do so, we extensively leverage efficient techniques in machine learning and systems design.