Learning Human-Object Interaction for 3D Human Pose Estimation from LiDAR Point Clouds
Abstract
The Human-Object Interaction Learning (HOIL) framework addresses challenges in 3D human pose estimation from LiDAR point clouds by mitigating spatial ambiguity and class imbalance through contrastive learning and adaptive pooling.
Understanding humans from LiDAR point clouds is one of the most critical tasks in autonomous driving due to its close relationship to pedestrian safety, yet it remains challenging in the presence of diverse human-object interactions and cluttered backgrounds. However, existing methods largely overlook the potential of leveraging human-object interactions to build robust 3D human pose estimation frameworks. Two major challenges motivate the incorporation of human-object interaction. First, human-object interactions introduce spatial ambiguity between human and object points, which often leads to erroneous 3D human keypoint predictions in interaction regions. Second, there is severe class imbalance in the number of points between interacting and non-interacting body parts, with interaction-frequent regions such as the hands and feet being sparsely observed in LiDAR data. To address these challenges, we propose a Human-Object Interaction Learning (HOIL) framework for robust 3D human pose estimation from LiDAR point clouds. To mitigate the spatial ambiguity issue, we present human-object interaction-aware contrastive learning (HOICL), which enhances feature discrimination between human and object points, particularly in interaction regions. To alleviate the class imbalance issue, we introduce contact-aware part-guided pooling (CPPool), which adaptively reallocates representational capacity by compressing overrepresented points while preserving informative points from interacting body parts. In addition, we present an optional contact-based temporal refinement that corrects erroneous per-frame keypoint estimates using contact cues over time. As a result, HOIL effectively leverages human-object interaction to resolve spatial ambiguity and class imbalance in interaction regions. Code will be released.
Community
Learning Human-Object Interaction for 3D Human Pose Estimation from LiDAR Point Clouds. Posted on arXiv, 2026.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API.
- Beyond Static Frames: Temporal Aggregate-and-Restore Vision Transformer for Human Pose Estimation (2026)
- AHAP: Reconstructing Arbitrary Humans from Arbitrary Perspectives with Geometric Priors (2026)
- TeHOR: Text-Guided 3D Human and Object Reconstruction with Textures (2026)
- PIRATR: Parametric Object Inference for Robotic Applications with Transformers in 3D Point Clouds (2026)
- End-to-End Spatial-Temporal Transformer for Real-time 4D HOI Reconstruction (2026)
- Modeling 3D Pedestrian-Vehicle Interactions for Vehicle-Conditioned Pose Forecasting (2026)
- Open-Vocabulary Functional 3D Human-Scene Interaction Generation (2026)