Deep Learning YOLO Models in Hand Gesture Recognition

Authors

DOI:

https://doi.org/10.56286/08e3yp91

Keywords:

YOLO models, Hand gesture, YOLO11, YOLO10, AI.

Abstract

      Hand gesture detection is essential for improving human-computer interaction, considerably advancing the creation of more intuitive and effective interfaces. This study examines the efficacy of three advanced object detection models in recognizing hand gestures: YOLOv8, YOLOv10, and YOLOv11. A dataset was created for this purpose, augmented via Roboflow, and used to train and evaluate the models on metrics including mAP, Precision, Recall, and F1-score. Performance was tuned by varying the optimizer (Adam, SGD, and AdamW) and hyperparameters such as learning rate and number of epochs. The findings indicate outstanding performance: YOLOv8 and YOLOv10 attained mean Average Precisions of 99.2% and 99.1%, respectively, with precisions of 98.1% and 98.6% and recalls of 97.7% and 97.2%, while YOLOv11 achieved a mean Average Precision of 99.2%, a precision of 98.6%, and a recall of 98.4%. These findings demonstrate that YOLOv11 adeptly handles more complex datasets while maintaining an ideal balance between speed and accuracy.
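For reference, the Precision, Recall, and F1-score metrics named above are derived from true-positive, false-positive, and false-negative detection counts. A minimal sketch of these formulas follows; the counts used are illustrative and are not taken from the study:

```python
def precision(tp: int, fp: int) -> float:
    # Precision: fraction of predicted detections that are correct
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    # Recall: fraction of ground-truth objects that were detected
    return tp / (tp + fn)

def f1_score(p: float, r: float) -> float:
    # F1: harmonic mean of precision and recall
    return 2 * p * r / (p + r)

# Illustrative counts (hypothetical, not from the paper)
tp, fp, fn = 986, 14, 16
p = precision(tp, fp)   # 0.986
r = recall(tp, fn)      # ~0.984
print(f"Precision={p:.3f}, Recall={r:.3f}, F1={f1_score(p, r):.3f}")
```

mAP additionally averages the area under the precision-recall curve across classes and IoU thresholds, which detection frameworks typically compute internally.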


Published

2026-03-06

How to Cite

[1]
“Deep Learning YOLO Models in Hand Gesture Recognition”, NTU-JET, vol. 5, no. 1, pp. 53–61, Mar. 2026, doi: 10.56286/08e3yp91.
