Open Access
LiftPose3D, a deep learning-based approach for transforming two-dimensional to three-dimensional poses in laboratory animals
Author(s) -
Adam Gosztolai,
Semih Günel,
Víctor Lobato-Ríos,
Marco Abrate,
Daniel Morales,
Helge Rhodin,
Pascal Fua,
Pavan Ramdya
Publication year - 2021
Publication title - Nature Methods
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 19.469
H-Index - 318
eISSN - 1548-7105
pISSN - 1548-7091
DOI - 10.1038/s41592-021-01226-z
Subject(s) - triangulation, artificial intelligence, computer science, computer vision, calibration, pose, deep learning, kinematics, mathematics, cartography, geography, statistics, physics, classical mechanics
Markerless three-dimensional (3D) pose estimation has become an indispensable tool for kinematic studies of laboratory animals. Most current methods recover 3D poses by multi-view triangulation of deep network-based two-dimensional (2D) pose estimates. However, triangulation requires multiple synchronized cameras and elaborate calibration protocols that hinder its widespread adoption in laboratory studies. Here we describe LiftPose3D, a deep network-based method that overcomes these barriers by reconstructing 3D poses from a single 2D camera view. We illustrate LiftPose3D's versatility by applying it to multiple experimental systems using flies, mice, rats and macaques, and in circumstances where 3D triangulation is impractical or impossible. Our framework achieves accurate lifting for stereotypical and nonstereotypical behaviors from different camera angles. Thus, LiftPose3D permits high-quality 3D pose estimation in the absence of complex camera arrays and tedious calibration procedures and despite occluded body parts in freely behaving animals.
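
The abstract describes "lifting" as regressing a 3D pose directly from 2D keypoints detected in a single camera view, replacing multi-view triangulation. The sketch below illustrates that general idea with a small residual MLP regressor in PyTorch; it is not the authors' released LiftPose3D implementation, and the class name `PoseLifter`, layer sizes, and keypoint count are illustrative assumptions.

```python
# Minimal sketch of a 2D-to-3D pose "lifting" regressor (illustrative only;
# not the published LiftPose3D code). Input: flattened 2D keypoints from one
# camera view; output: flattened 3D keypoints.
import torch
import torch.nn as nn


class PoseLifter(nn.Module):
    def __init__(self, n_keypoints: int = 38, hidden: int = 1024):
        super().__init__()
        in_dim = 2 * n_keypoints   # (x, y) per keypoint
        out_dim = 3 * n_keypoints  # (x, y, z) per keypoint
        self.inp = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # one residual block in the hidden space
        self.block = nn.Sequential(
            nn.Linear(hidden, hidden), nn.BatchNorm1d(hidden), nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(hidden, hidden), nn.BatchNorm1d(hidden), nn.ReLU(),
            nn.Dropout(0.5),
        )
        self.out = nn.Linear(hidden, out_dim)

    def forward(self, x2d: torch.Tensor) -> torch.Tensor:
        h = self.inp(x2d)
        h = h + self.block(h)  # residual connection
        return self.out(h)


if __name__ == "__main__":
    model = PoseLifter(n_keypoints=38)
    pts2d = torch.randn(16, 2 * 38)   # batch of 2D poses from a single view
    pts3d = model(pts2d)              # predicted 3D poses
    loss = nn.functional.mse_loss(pts3d, torch.randn_like(pts3d))
    loss.backward()
    print(pts3d.shape)  # torch.Size([16, 114])
```

In practice such a lifter is trained on paired 2D/3D poses (for example, 3D ground truth obtained once by triangulation), after which only a single uncalibrated camera is needed at inference time, which is the workflow the abstract motivates.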
