SLP Dataset for Multimodal In-Bed Pose Estimation

This research is funded by NSF Award #1755695. Special thanks also go to Amazon for the AWS Cloud Credits Award. This research was highlighted by News@Northeastern in August 2019, by Experience Magazine in October 2019, and by the Signal Processing Newsletter in January 2020. There is also an interview with the PI on Northeastern's Litmus podcast, "Invasion of the Bias Snatchers," which highlighted the research in March 2020.

This work involved human subjects in its research. Approval of all ethical and experimental procedures and protocols was granted by the Institutional Review Board (IRB) of Northeastern University under protocol IRB#17-06-04.

Although computer vision has been extremely successful in employing deep neural networks for many visual perception applications, its use is still hindered in practical medical applications. This shortcoming is due to factors that must be fully considered in the majority of medical applications, including: (1) the scarcity of sample data and the cost of data gathering and labeling, (2) patient privacy concerns, (3) patient safety and comfort considerations, and (4) the accessibility and unobtrusiveness of the data collection/monitoring technologies. All of these factors work against the direct employment of well-established computer vision approaches in the medical field. In this project, we focus on the critical healthcare application of in-bed patient monitoring and present a physics-inspired, vision-based approach, called Under the Cover Imaging via Thermal Diffusion (UCITD), to accurately estimate in-bed poses with high granularity under natural sleeping conditions. Our approach addresses all of the aforementioned issues by: (1) releasing a large-scale in-bed pose dataset in several modalities, (2) preserving patient privacy, (3) working under natural conditions (e.g., full darkness and heavy occlusion), and (4) being contactless, and therefore unobtrusive, as well as medically safe (radiation free).

Details on the forming of the SLP dataset and the training of several state-of-the-art in-bed pose estimation algorithms can be found in our paper "Simultaneously-Collected Multimodal Lying Pose Dataset: Towards In-Bed Human Pose Monitoring under Adverse Vision Conditions" (arXiv Preprint). We also provide demo code (Github link) for human pose estimation to demonstrate SLP's capabilities in in-bed human pose estimation. Details on our pose estimation algorithm can be found in our paper "Seeing Under the Cover: A Physics Guided Learning Approach for In-Bed Pose Estimation," published in MICCAI'19 (arXiv Preprint) (Code). Two related papers, "Heuristic Weakly Supervised 3D Human Pose Estimation in Novel Contexts without Any 3D Pose Ground Truth" [arXiv preprint at arXiv] and "Adapted Human Pose: Monocular 3D Human Pose Estimation with Zero Real 3D Pose Data" [arXiv preprint at arXiv], are under review and can be accessed on arXiv.

If you find our paper/code/data useful, please cite the following:

@article{liu2022vipcup,
   title={Privacy-Preserving In-Bed Human Pose Estimation: Highlights from the IEEE Video and Image Processing Cup 2021 Student Competition},
   author={Liu, Shuangjun and Huang, Xiaofei and Marcenaro, Lucio and Ostadabbas, Sarah},
   journal={IEEE Signal Processing Magazine},
   year={2022}
 }

@article{liu2021pressureeye,
   title={Pressure Eye: In-bed Contact Pressure Estimation via Contact-less Imaging},
   author={Liu, Shuangjun and Ostadabbas, Sarah},
   journal={arXiv preprint arXiv:2201.11828},
   year={2021}
 }

@article{liu2021Adapted,
   title={Adapted Human Pose: Monocular 3D Human Pose Estimation with Zero Real 3D Pose Data},
   author={Liu, Shuangjun and Ostadabbas, Sarah},
   journal={arXiv preprint arXiv:2105.10837},
   year={2021}
 }

@article{liu2020simultaneously,
  title={Simultaneously-Collected Multimodal Lying Pose Dataset: Towards In-Bed Human Pose Monitoring under Adverse Vision Conditions},
  author={Liu, Shuangjun and Huang, Xiaofei and Fu, Nihang and Li, Cheng and Su, Zhongnan and Ostadabbas, Sarah},
  journal={arXiv preprint arXiv:2008.08735},
  year={2020}
}

@inproceedings{liu2019seeing,
  title={Seeing Under the Cover: A Physics Guided Learning Approach for In-Bed Pose Estimation},
  author={Liu, Shuangjun and Ostadabbas, Sarah},
  booktitle={22nd International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2019), Shenzhen, China},
  year={2019}
}

@patent{ostadabbas2021pressureEye,
   title={{Pressure Eye: In-bed Contact Pressure Estimation via Contact-less Imaging}},
   author={Ostadabbas, Sarah and Liu, Shuangjun},
   year={2021},
   month={5},
   note={Invention Disclosure, INV-21122}
 }

@patent{ostadabbas2021robust3D,
   title={{A Robust 3D Human Pose Estimation System in the Wild}},
   author={Ostadabbas, Sarah and Liu, Shuangjun and Huang, Xiaofei and Fu, Nihang},
   year={2021},
   month={4},
   note={Invention Disclosure, INV-21105}
 }

@patent{ostadabbas2020high,
   title={{High Resolution Full-Body In-Bed Pose Estimation Platform using Multimodal Sensing}},
   author={Ostadabbas, Sarah and Liu, Shuangjun},
   year={2020},
   month={7},
   note={Invention Disclosure, INV-21014}
 }

@patent{ostadabbas2020methods,
   title={{Methods and Systems for In-Bed Pose Estimation}},
   author={Ostadabbas, Sarah and Liu, Shuangjun},
   year={2020},
   month={8},
   note={US Patent App. 16/778,459}
 }

SLP Dataset:

As part of this project, we also released the first-ever large-scale dataset of in-bed poses, called "Simultaneously-collected multimodal Lying Pose (SLP)" (pronounced "SLEEP").

SLP Dataset Description:

We set up two evaluation environments for our experiments: one in a lab setting arranged as a regular bedroom, and one in a simulated hospital room at the Northeastern University Health Science Department. Using an IRB-approved protocol, we collected pose data from 109 participants (102 in the home setting and 7 in the hospital setting), asking them to lie in the bed and randomly change their poses within three main categories: supine, left side, and right side. For each category, 15 poses were collected. Overall, 13,770 pose samples in the home setting and 945 samples in the hospital setting were collected in each of the 4 modalities. Moreover, we varied the cover condition from uncover, to cover one (a thin sheet, approx. 1 mm thick), to cover two (a thick blanket, approx. 3 mm thick). In each cover condition, we waited around 10–20 seconds to mimic a stabilized pose during a real-life monitoring scenario. We follow a pose definition with 14 joints.
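The reported totals follow directly from this protocol: 3 pose categories × 15 poses gives 45 poses per subject, each captured under 3 cover conditions, i.e., 135 samples per subject per modality. A quick arithmetic check in Python (a minimal sketch; all numbers come from the protocol described above):

    # Sample-count check for the SLP collection protocol.
    poses_per_category = 15
    categories = 3            # supine, left side, right side
    covers = 3                # uncover, cover one, cover two

    samples_per_subject = poses_per_category * categories * covers  # 135

    print(102 * samples_per_subject)  # 13770 home-setting samples per modality
    print(7 * samples_per_subject)    # 945 hospital-setting samples per modality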

SLP dataset features:

  • 2 Data Collection Settings: (a) Hospital setting: 7 participants (3 females), and (b) Home setting: 102 participants (28 females, age range: 20-40).
  • 4 Imaging Modalities: RGB (regular webcam), LWIR (FLIR LWIR camera), DEPTH (Kinect v2) and Pressure Map (Tekscan Pressure Sensing Map).
  • 3 Cover Conditions: No cover, bed sheet (cov1), and blanket (cov2).
  • Fully labeled poses with 14 joints.
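To make these features concrete, here is a minimal loading sketch in Python. The directory layout, modality folder names, and the annotation file joints_gt.npy are illustrative assumptions, not the released SLP format; the demo code linked above shows the actual interface.

    import os
    import numpy as np

    MODALITIES = ["RGB", "IR", "depth", "PM"]   # assumed folder names, one per modality
    COVERS = ["uncover", "cover1", "cover2"]    # assumed names for the 3 cover conditions

    def load_sample(root, subject, modality, cover, pose_idx):
        # Build the path to one image; the naming scheme is hypothetical.
        img_path = os.path.join(root, f"{subject:05d}", modality, cover,
                                f"image_{pose_idx + 1:06d}.png")
        # One (x, y) coordinate per joint, per the 14-joint pose definition
        # used by SLP; the (n_poses, 14, 2) array layout is assumed.
        joints = np.load(os.path.join(root, f"{subject:05d}", "joints_gt.npy"))
        return img_path, joints[pose_idx]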

SLP download. Please fill out the request form to receive the password. Please read the "Non-commercial Purposes" conditions stated below carefully. Also, make sure to use your institutional email rather than a personal email.

By downloading or using the SLP Dataset, you are agreeing to the "Non-commercial Purposes" condition. "Non-commercial Purposes" means research, teaching, scientific publication, and personal experimentation. Non-commercial Purposes include use of the Dataset to perform benchmarking for purposes of academic or applied research publication. Non-commercial Purposes do not include purposes primarily intended for or directed towards commercial advantage or monetary compensation, or purposes intended for or directed towards litigation, licensing, or enforcement, even in part. The SLP dataset is provided as-is, is experimental in nature, and is not intended for use by, with, or for the diagnosis of human subjects, nor for incorporation into a product.

Acknowledgment:

Graduate Student R&D Team: Shuangjun Liu (team lead) and Xiaofei Huang.

Also, special thanks to the student volunteers (Zhilan Li, Zhun Deng, Cheng Li, Zhongnan Su, Fei Xue, Fei Gao, and Yu Yin) who helped us collect and prepare the SLP dataset.
