Human Pose Estimation in Extremely Low-Light Conditions

CVPR 2023


Sohyun Lee*,     Jaesung Rim*,     Boseung Jeong,     Geonu Kim,     ByungJu Woo,     Haechan Lee,     Sunghyun Cho,     Suha Kwak

* Equal contribution. Corresponding authors.


We study human pose estimation in extremely low-light images. This task is challenging for two reasons: collecting real low-light images with accurate labels is difficult, and the severely corrupted inputs degrade prediction quality significantly. To address the first issue, we develop a dedicated camera system and build a new dataset of real low-light images with accurate pose labels. Thanks to our camera system, each low-light image in our dataset is coupled with an aligned well-lit image, which enables accurate pose labeling and serves as privileged information during training. We also propose a new model and a new training strategy that fully exploit the privileged information to learn representations insensitive to lighting conditions. Our method demonstrates outstanding performance on real extremely low-light images, and extensive analyses validate that both our model and our dataset contribute to this success.

The ExLPose Dataset

Extremely Low-light human Pose dataset

Google Drive link (recommended)

POSTECH link

2,556 pairs of low-light images and their corresponding well-lit images.

360 low-light images captured with Sony A7M3 and RICOH3 cameras.

All annotations of the ExLPose and ExLPose-OC datasets.
We provide annotations of 14 human body joints and person bounding boxes in the COCO format.
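In the COCO convention, each person's joints are stored as a flat [x, y, visibility] list alongside an [x, y, width, height] bounding box. A minimal sketch of parsing such an annotation (the field names follow the COCO convention; the inline example values are illustrative, not real dataset entries):

```python
import json

# Illustrative COCO-style annotation with 14 joints per person
# (the real annotation files ship with the ExLPose dataset).
annotation_json = json.dumps({
    "annotations": [{
        "image_id": 1,
        "bbox": [10.0, 20.0, 50.0, 120.0],   # [x, y, width, height]
        "keypoints": [0.0, 0.0, 0] * 14,     # 14 joints as [x, y, v] triples
    }]
})

def parse_people(raw):
    """Extract (bbox, joints) per person; joints is a list of (x, y, v) tuples."""
    people = []
    for ann in json.loads(raw)["annotations"]:
        kps = ann["keypoints"]
        joints = [(kps[i], kps[i + 1], kps[i + 2]) for i in range(0, len(kps), 3)]
        people.append((ann["bbox"], joints))
    return people

people = parse_people(annotation_json)
```

The visibility flag `v` follows COCO semantics (0: not labeled, 1: labeled but occluded, 2: labeled and visible).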


The proposed model architecture and training strategy. Both the teacher (bottom) and the student (top) are trained with the same pose estimation loss, and the student receives additional supervision from the teacher through learning using privileged information (LUPI). The LUPI loss is applied to the feature maps of the first convolutional layer (i.e., C1) and the following four residual blocks (i.e., R1-R4) of a ResNet backbone. The teacher and the student share all parameters except the LSBNs.


Pose estimation accuracy (AP@0.5:0.95) on the ExLPose dataset. LL-N, LL-H, and LL-E denote the normal, hard, and extreme low-light splits, LL-A all low-light images, and WL well-lit images.

Methods LL-N LL-H LL-E LL-A WL
Baseline-low 32.6 25.1 13.8 24.6 1.6
Baseline-well 23.5 7.5 1.1 11.5 68.8
Baseline-all 33.8 25.4 14.3 25.4 57.9
LLFlow + Baseline-all 35.2 20.1 8.3 22.1 65.1
LIME + Baseline-all 38.3 25.6 12.5 26.6 63.0
DANN 34.9 24.9 13.3 25.4 58.6
AdvEnt 35.6 23.5 8.8 23.8 62.4
Ours 42.3 34.0 18.6 32.7 68.5


@inproceedings{lee2023human,
 title={Human Pose Estimation in Extremely Low-Light Conditions},
 author={Lee, Sohyun and Rim, Jaesung and Jeong, Boseung and Kim, Geonu and Woo, ByungJu and Lee, Haechan and Cho, Sunghyun and Kwak, Suha},
 booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
 year={2023}
}

Let's Get In Touch!

Please feel free to contact us with any feedback, questions, or comments.