Developing low-cost, easy-to-deploy table tennis training robots is a significant challenge, largely because traditional visual servoing systems require stringent, time-consuming, and error-prone camera calibration. This paper addresses the problem by proposing a robust, calibration-free visual control framework that enables a robotic manipulator to perform training tasks using a single monocular camera with completely unknown intrinsic and extrinsic parameters. At its core is a depth-independent visual feedback controller that translates the pixel error between the racket and the ball directly into control actions. To handle the unknown camera projection model, a computationally efficient adaptive law is designed: it uses a regression matrix to isolate the unknown parameters and updates them in real time, eliminating the need for any pre-calibration process. The stability of the closed-loop system is rigorously proven via the Lyapunov method, guaranteeing convergence of the tracking error. Extensive simulations on a manipulator validate the method's effectiveness and practicality: the system achieves rapid and precise tracking, with the image error converging to zero in under 0.8 seconds. The controller's robustness is further confirmed in scenarios with varying target positions and continuous multi-stroke sequences, demonstrating its suitability for dynamic, realistic training environments.
Citation: Quanyu Song, Rong Lu, Yaobo Long, Xiaobing Zheng. A calibration-free visual control approach for table tennis training robots using a positional monocular camera[J]. AIMS Mathematics, 2025, 10(11): 27364-27380. doi: 10.3934/math.20251203
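To make the calibration-free idea concrete, the following is a minimal one-dimensional sketch of uncalibrated visual servoing in the spirit of the paper's depth-independent controller. Note the hedge: this toy uses a classic Broyden-style online Jacobian estimate rather than the paper's regression-matrix adaptive law, and every name, gain, and camera parameter below (`servo`, `k`, `eta`, `a`, `b`) is an illustrative assumption, not something taken from the paper.

```python
# Toy 1-D analogue: the "camera" maps the racket position x to a pixel
# y = a*x + b, where (a, b) stand in for unknown intrinsic/extrinsic
# parameters the controller never sees.  The controller observes only
# pixels, keeps a running estimate j_hat of the image Jacobian dy/dx,
# and drives the pixel error between racket and ball to zero.

def servo(a=3.0, b=120.0, y_ball=400.0, iters=60, k=0.2, eta=1.0, eps=1e-9):
    x = 0.0          # racket position (the actuated variable)
    j_hat = 1.0      # initial Jacobian guess -- no calibration performed
    y = a * x + b    # first pixel observation
    for _ in range(iters):
        e = y - y_ball              # pixel error: racket vs. ball
        dx = -k * e / j_hat         # feedback step using the estimate only
        x += dx
        y_new = a * x + b           # new observation from the unknown camera
        dy = y_new - y
        # Broyden rank-one correction: adjust j_hat so it is consistent
        # with the pixel motion dy actually caused by the commanded dx.
        j_hat += eta * (dy - j_hat * dx) * dx / (dx * dx + eps)
        y = y_new
    return abs(y - y_ball)          # remaining pixel error

final_err = servo()
```

Even though the initial Jacobian guess is off by a factor of three, the estimate locks on after the first observed motion and the pixel error then contracts geometrically, with no camera parameters ever supplied; the paper's adaptive law plays the analogous role for the full projective camera model.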