Ingeniería en Sistemas, Electrónica e Industrial
Permanent URI for this community: http://repositorio.uta.edu.ec/handle/123456789/1
Item: Plataforma de interacción virtual para la navegación de visitantes primarios en los predios de la UTA del Campus Huachi usando un avatar y algoritmos de machine learning (Universidad Técnica de Ambato. Facultad de Ingeniería en Sistemas, Electrónica e Industrial. Carrera de Ingeniería en Sistemas Computacionales e Informáticos, 2023-09) Palate Amaguaña, Alexis Israel; Nogales Portero, Rubén Eduardo

People who are unfamiliar with a place of interest often have problems on their first visit, which makes for a poor experience. In this context, a person may look for alternatives that let them get to know such places virtually before visiting them physically. However, most of the available options do not reflect reality, limiting users to viewing the exteriors of a place. A virtual interaction platform is therefore proposed in which the user controls an avatar with hand gestures in order to explore a place virtually before visiting it physically, improving the experience of a first visit to places of interest. In this project, a 3D model of the Huachi campus of the Universidad Técnica de Ambato (UTA) is proposed. The platform works with two integrated systems running on the same computer. The first system is a hand gesture recognition model: it uses the Leap Motion Controller sensor to capture the gestures made by the user, and these data are sent to the model, which processes them with a KNN classifier to predict a label representing the gesture. The predicted label is then sent over the TCP/IP communication protocol to the local IP address of the computer where the platform is running. The second system receives the label and interprets it as an instruction that the avatar must carry out as an action within its virtual environment. The platform was tested by 15 people.
One group of 5 people already knew the UTA and another group of 10 people did not, giving priority to people unfamiliar with the UTA. The model was tested by the researcher, who performed 20 repetitions of each gesture. With k=5 and frames=400, the accuracy obtained was 100% for the Open Hand gesture, 85% for Close Hand, 65% for Wave In, 95% for Wave Out, and 10% for Pinch. These gestures drive the avatar's interaction with the virtual world; however, the action corresponding to the Pinch gesture is limited due to its low accuracy.

Item: Sistema inteligente usando video cámaras y motion capture para monitoreo de la rehabilitación de la mano derecha (Universidad Técnica de Ambato. Facultad de Ingeniería en Sistemas, Electrónica e Industrial. Carrera de Tecnologías de la Información, 2023-09) Bonilla Quishpe, Miguel Enrique; Nogales Portero, Rubén Eduardo

Rehabilitation monitoring has become a prominent advancement in the technological field thanks to the convergence of different technologies. Integrating these technologies allows more accurate and detailed monitoring of hand rehabilitation gestures. Through real-time data capture and analysis, information about rehabilitation progress is provided, enabling healthcare professionals and users to assess performance during the process. Moreover, the system not only offers visual tracking of daily progress but also motivates users to continue their treatment, contributing to the success of their rehabilitation. In this research project, several technologies are implemented to facilitate the monitoring and analysis of hand rehabilitation data. First, the Leap Motion Controller is used to capture hand position and direction data during the performed gestures. These data are processed with a custom acquisition system developed in Matlab, enabling their analysis and generating a dataset.
The dataset is preprocessed and relevant features are extracted for the classification of each gesture. Different models are implemented for this purpose, such as Artificial Neural Networks (ANN) and K-Nearest Neighbors (KNN), to obtain high classification accuracy. After training, the ANN, with 93% accuracy, was selected. Once trained, the model is integrated into Unity, where an interactive interface provides visual feedback to the user, allowing them to perform the gestures effectively and monitor their progress during rehabilitation.
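The first abstract describes a pipeline in which a KNN classifier (k=5) predicts one of five gesture labels from Leap Motion data and sends the label over TCP to the platform on the same machine. A minimal sketch of that idea follows, assuming scikit-learn; the feature dimensionality, the random placeholder data, the port number, and the label strings are all illustrative assumptions, not details from the thesis.

```python
import socket
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# The five gestures named in the abstract (label strings are assumed).
GESTURES = ["open_hand", "close_hand", "wave_in", "wave_out", "pinch"]

# Placeholder training data standing in for features extracted from
# Leap Motion frames; the real system captures 400 frames per gesture.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 12))                 # assumed feature size
y_train = rng.integers(0, len(GESTURES), size=100)   # assumed labels

# KNN classifier with k=5, as reported in the abstract.
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

def send_gesture_label(features, host="127.0.0.1", port=5005):
    """Predict a gesture label and send it over TCP to the platform
    process on the local machine (the port number is an assumption)."""
    label = GESTURES[int(knn.predict(features.reshape(1, -1))[0])]
    with socket.create_connection((host, port)) as sock:
        sock.sendall(label.encode("utf-8"))
    return label
```

On the receiving side, the second system (the Unity avatar) would read the label string from the socket and map it to an avatar action; that half is omitted here.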