Deep Learning YOLO Models in Hand Gesture Recognition
DOI:
https://doi.org/10.56286/08e3yp91

Keywords:
YOLO models, Hand gesture, YOLO11, YOLO10, AI.

Abstract
Hand gesture detection is essential for improving human-computer interaction, considerably advancing the creation of more intuitive and effective interfaces. This study examines the efficacy of three advanced object detection models, YOLOv8, YOLOv10, and YOLOv11, in recognizing hand gestures. A dataset was created for this purpose, augmented via Roboflow, and used for training and evaluating the models on metrics including mAP, precision, recall, and F1-score. Performance was tuned by varying the optimizers (Adam, SGD, and AdamW) and hyperparameters, including the learning rate and number of epochs. The findings indicate outstanding performance: YOLOv8 and YOLOv10 attained mean average precision (mAP) scores of 99.2% and 99.1%, respectively, with precisions of 98.1% and 98.6% and recalls of 97.7% and 97.2%. YOLOv11 achieved an mAP of 99.2%, a precision of 98.6%, and a recall of 98.4%. These results demonstrate that YOLOv11 handles more complex datasets well while preserving an effective balance between speed and accuracy.
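As a note on the evaluation metrics: the F1-score reported alongside precision and recall is their harmonic mean. A minimal Python sketch computing it from the per-model precision/recall values quoted above (the helper name is illustrative, not from the paper):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Precision/recall pairs as reported in the abstract.
results = {
    "YOLOv8":  (0.981, 0.977),
    "YOLOv10": (0.986, 0.972),
    "YOLOv11": (0.986, 0.984),
}
for model, (p, r) in results.items():
    print(f"{model}: F1 = {f1_score(p, r):.3f}")
```

Under these reported values, YOLOv11 yields the highest F1 (about 0.985), consistent with the abstract's conclusion that it best balances the two metrics.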
License
Copyright (c) 2026 Amina TH. Mohealdeen, Emad A. Mohammed

This work is licensed under a Creative Commons Attribution 4.0 International License.