SLP Dataset for Multimodal In-Bed Pose Estimation


This research is funded by NSF Award #1755695. Special thanks also to Amazon for the AWS Cloud Credits Award.

This research was also highlighted by News@Northeastern in August 2019.

Although computer vision has been extremely successful in employing deep neural networks for many visual perception applications, its use is still hindered in practical medical applications. This shortcoming stems from factors that must be fully considered in most medical applications, including: (1) the scarcity of sample data and the cost of data gathering and labeling, (2) patient privacy concerns, (3) patient safety and comfort considerations, and (4) the accessibility and unobtrusiveness of the data collection/monitoring technologies. All of these factors work against the direct adoption of well-established computer vision approaches in the medical field. In this project, we focus on the critical healthcare application of in-bed patient monitoring and present a physics-inspired vision-based approach, called Under the Cover Imaging via Thermal Diffusion (UCITD), to accurately estimate in-bed poses with high granularity under natural sleeping conditions. Our approach addresses all of the aforementioned issues in medical care by: (1) releasing a large-scale in-bed pose dataset in several modalities, (2) preserving patient privacy, (3) working under natural conditions (e.g., full darkness and heavy occlusion), and (4) being contactless and therefore unobtrusive, as well as medically safe (radiation free). Details on our pose estimation algorithm can be found in our paper "Seeing Under the Cover: A Physics Guided Learning Approach for In-Bed Pose Estimation," published in MICCAI'19 (arXiv preprint). We also provide demo code (GitHub link) that demonstrates our method's capabilities in in-bed human pose estimation.

If you found our paper/code/data useful, please cite the following:

@inproceedings{liu2019seeing,
  title={Seeing Under the Cover: A Physics Guided Learning Approach for In-Bed Pose Estimation},
  author={Liu, Shuangjun and Ostadabbas, Sarah},
  booktitle={22nd International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2019), Shenzhen, China},
  note={arXiv preprint arXiv:1907.02161},
  year={2019}
}

SLP Dataset:

As part of this project, we also released the first-ever large-scale dataset of in-bed poses, called "Simultaneously-collected multimodal Lying Pose (SLP)" (pronounced "SLEEP"). At this point, the dataset is partially released with two modalities (RGB and IR); the remaining modalities (depth and pressure map) will be released shortly.

SLP Dataset Description:

We set up two data collection environments for our experiments: one in a lab setting arranged as a regular living room (home setting) and one in a simulated hospital room at the Northeastern University Health Science Department (hospital setting). Using an IRB-approved protocol, we collected pose data from each participant while they lay in a bed and randomly changed their pose within three main categories: supine, left side, and right side. For each category, 15 poses were collected. Moreover, we changed the cover condition from uncover, to cover one (a thin sheet, approx. 1 mm thick), and then to cover two (a thick blanket, approx. 3 mm thick). In each cover condition, we waited around 10–20 seconds to mimic a stabilized pose during a real-life monitoring scenario. Overall, 13,770 pose samples for the home setting (102 participants × 45 poses × 3 cover conditions) and 945 samples for the hospital setting (7 participants × 45 poses × 3 cover conditions) were collected in each of the four modalities. Poses are labeled following a 14-joint definition (one common ordering is sketched below).
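For reference, the sketch below lists one common LSP-style 14-joint ordering. This ordering is an assumption made only for illustration; the exact joint order used in the SLP labels should be checked against the released annotation files.

# Hypothetical LSP-style 14-joint ordering; listed for illustration only.
# Verify against the SLP annotation files before using it in code.
SLP_JOINTS = [
    "right_ankle", "right_knee", "right_hip",
    "left_hip", "left_knee", "left_ankle",
    "right_wrist", "right_elbow", "right_shoulder",
    "left_shoulder", "left_elbow", "left_wrist",
    "neck", "head_top",
]
assert len(SLP_JOINTS) == 14  # one labeled location per joint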

SLP dataset features:

  • 2 Data Collection Settings: (a) Hospital setting: 7 participants, and (b) Home setting: 102 participants (29 females, age range: 20–40).
  • 4 Imaging Modalities: RGB (regular webcam), IR (FLIR LWIR camera), DEPTH (Kinect v2) and Pressure Map (Tekscan Pressure Sensing Map).
  • 3 Cover Conditions: uncover, bed sheet (cov1), and blanket (cov2).
  • Fully labeled poses with 14 joints.
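As a rough illustration of how the released modalities and cover conditions could be traversed, here is a minimal Python sketch. The directory and file layout it assumes (setting folders such as "danaLab"/"simLab", per-subject folders, and per-modality/per-cover image folders) is hypothetical and is not the dataset's documented structure; adjust it to match the released archive.

# Minimal sketch for iterating an ASSUMED on-disk layout of the SLP data.
# All folder names below are illustrative assumptions, not the official structure.
import os

SETTINGS = {"home": "danaLab", "hospital": "simLab"}  # assumed setting folders
MODALITIES = ["RGB", "IR"]                            # modalities released so far
COVERS = ["uncover", "cover1", "cover2"]              # the three cover conditions

def iter_samples(root, setting="home"):
    """Yield (subject, cover, modality, image_path) for every image found."""
    setting_dir = os.path.join(root, SETTINGS[setting])
    for subject in sorted(os.listdir(setting_dir)):
        for cover in COVERS:
            for modality in MODALITIES:
                img_dir = os.path.join(setting_dir, subject, modality, cover)
                if not os.path.isdir(img_dir):
                    continue
                for fname in sorted(os.listdir(img_dir)):
                    yield subject, cover, modality, os.path.join(img_dir, fname)

For example, list(iter_samples("/path/to/SLP", "home")) would enumerate every image in the home setting once, grouped by subject, cover condition, and modality.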

To get access to the SLP dataset, please download the zip file and fill out the survey below to receive the required password.

SLP Dataset Request

Acknowledgment:

Graduate Student R&D Team: Shuangjun Liu (team lead), and Xiaofei Huang.

Also, special thanks to the student volunteers (Zhilan Li and Zhun Deng) who helped us collect and prepare the SLP dataset.
