FLEX: A Large-Scale Multi-Modal Multi-Action Dataset for Fitness Action Quality Assessment

1 School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, USTC
2 Suzhou Institute of Biomedical Engineering and Technology, CAS
3 Institute of High-Performance Computing, A*STAR
4 School of Psychology, Beijing Sports University
5 School of Competitive Sports, Beijing Sports University

*Indicates Equal Contribution


Under Review, 2025

Abstract

With increasing health awareness and the growing pursuit of an aesthetic physique, fitness has become a prevailing trend. However, the potential risks associated with fitness training, especially weight-loaded actions, cannot be overlooked. Action Quality Assessment (AQA), a technology that quantifies the quality of human actions and provides feedback, holds the potential to help fitness enthusiasts of varying skill levels achieve better training outcomes. Nevertheless, current AQA methodologies and datasets are limited to single-view competitive sports scenarios and the RGB modality, and lack professional assessment and guidance for fitness actions. To address this gap, we propose the FLEX dataset, the first multi-modal, multi-action, large-scale dataset that incorporates surface electromyography (sEMG) signals into AQA. FLEX uses high-precision motion capture (MoCap) to collect 20 weight-loaded actions performed by 38 subjects across 3 skill levels, comprising RGB video from 5 views, 3D pose, sEMG, and physiological information. Additionally, FLEX incorporates knowledge graphs into AQA, constructing annotation rules in the form of penalty functions that map weight-loaded actions, action keysteps, error types, and feedback. We evaluate various baseline methods on FLEX, demonstrating that multimodal data, multiview data, and fine-grained annotations significantly enhance model performance. FLEX not only advances AQA methodologies and datasets toward multi-modal and multi-action scenarios but also fosters the integration of artificial intelligence within the fitness domain.
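To make the penalty-function annotation idea concrete, the Python sketch below shows one way such rules could be organized: each action maps to keysteps, each keystep to error types with a penalty weight and corrective feedback, and a score is obtained by subtracting penalties for detected errors. This is purely illustrative, assuming a hypothetical schema and made-up action, error, and penalty values; it is not the released FLEX annotation format.

      from dataclasses import dataclass

      # Hypothetical illustration of penalty-function annotation rules:
      # each keystep of an action lists error types with a penalty weight
      # and corrective feedback. All names and values are illustrative.

      @dataclass
      class ErrorRule:
          error_type: str   # e.g., "knee_valgus"
          penalty: float    # points deducted when the error is detected
          feedback: str     # corrective guidance for the trainee

      # Example rule set for one hypothetical weight-loaded action.
      RULES = {
          "barbell_squat": {
              "descent": [
                  ErrorRule("knee_valgus", 2.0, "Keep knees tracking over the toes."),
                  ErrorRule("insufficient_depth", 1.5, "Lower hips until thighs are parallel to the floor."),
              ],
              "ascent": [
                  ErrorRule("hip_rise_first", 1.0, "Drive hips and shoulders up together."),
              ],
          }
      }

      def score_action(action, detected_errors, base_score=10.0):
          """Apply penalty rules to errors detected per keystep and collect feedback."""
          score, feedback = base_score, []
          for keystep, errors in detected_errors.items():
              for rule in RULES[action].get(keystep, []):
                  if rule.error_type in errors:
                      score -= rule.penalty
                      feedback.append(rule.feedback)
          return max(score, 0.0), feedback

      if __name__ == "__main__":
          final_score, notes = score_action("barbell_squat", {"descent": ["knee_valgus"]})
          print(final_score, notes)  # 8.0 ['Keep knees tracking over the toes.']

Linking keysteps, error types, penalties, and feedback in one structure is what lets a single rule set drive both quantitative scoring and human-readable guidance.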

BibTeX


      @article{yin2025flex,
        title={FLEX: A Large-Scale Multi-Modal Multi-Action Dataset for Fitness Action Quality Assessment},
        author={Hao Yin and Lijun Gu and Paritosh Parmar and Lin Xu and Tianxiao Guo and Weiwei Fu and Yang Zhang and Tianyou Zheng},
        journal={arXiv preprint arXiv:2506.03198},
        year={2025}
      }
      