Capstone Projects
Description: Nowadays, facial recognition—and particularly emotion recognition through facial expressions—has gained importance in various fields, including video games. This research focused on developing an adaptive Flappy Bird-style game prototype aimed at analyzing how automatic emotion detection can enhance player engagement by dynamically adapting the gaming experience. The study was conducted in two phases. In the first phase, a non-adaptive prototype was created to collect interaction data and facial expressions. A convolutional neural network was trained using the DAiSEE dataset to identify affective states, while DeepFace detected basic emotions. The collected data was compared with the Game Experience Questionnaire (GEQ) to find useful patterns for adaptive design. In the second phase, a new prototype with real-time adaptation mechanisms was built, based on the previously established relationships. This system adjusted game variables such as speed, difficulty, and visual/auditory stimuli according to the player’s emotional state. Experimental results showed that the adaptive version significantly improved engagement levels compared to the non-adaptive version. These findings demonstrated that integrating emotional recognition techniques with artificial intelligence is an effective approach to enrich human-computer interaction and opens new possibilities for designing emotionally responsive video games.
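As a rough illustration of the adaptation loop described above, the following sketch uses the DeepFace library to classify the dominant emotion in a captured frame and turns it into a game-speed multiplier. The frame file name, the emotion-to-speed mapping, and the multiplier values are illustrative assumptions, not the prototype's actual tuning.

```python
# Minimal sketch: map a detected facial emotion to a game-speed adjustment.
# Assumes the `deepface` package; the frame source and the emotion-to-speed
# mapping are illustrative, not the prototype's actual parameters.
from deepface import DeepFace

# Hypothetical mapping from basic emotions to a speed multiplier for the
# Flappy Bird-style prototype (values chosen only for illustration).
SPEED_BY_EMOTION = {
    "happy": 1.10,     # engaged: raise the pace slightly
    "neutral": 1.00,
    "surprise": 1.00,
    "sad": 0.90,       # reduce pressure
    "fear": 0.90,
    "disgust": 0.95,
    "angry": 0.85,     # likely frustrated: ease difficulty
}

def speed_multiplier(frame_path: str) -> float:
    """Return a game-speed multiplier from the dominant emotion in one frame."""
    result = DeepFace.analyze(
        img_path=frame_path,
        actions=["emotion"],
        enforce_detection=False,  # keep the game loop running if no face is found
    )
    face = result[0] if isinstance(result, list) else result  # recent versions return a list
    return SPEED_BY_EMOTION.get(face["dominant_emotion"], 1.0)

# Usage (illustrative): new_speed = base_speed * speed_multiplier("last_frame.jpg")
```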
Description: Soil degradation is an increasingly pressing global concern, with direct impacts on agriculture, climate stability, and ecosystem health. In regions such as the Ecuadorian Amazon, this issue is intensified by persistent cloud cover, which hinders consistent land monitoring using optical imagery. This context presents a significant technical challenge, as conventional methods based on optical images depend on favorable atmospheric conditions, limiting their ability to deliver reliable and frequent information in cloud-covered areas. The lack of updated data restricts the early detection of land-use changes and delays decision-making aimed at environmental conservation and sustainable land management. To address this limitation, this study proposes an alternative approach for change detection in land use by employing Synthetic Aperture Radar (SAR) imagery from the Sentinel-1 satellite, which enables surface observation regardless of cloud cover or time of day. A comprehensive process was developed, including image preprocessing, calculation of the Radar Vegetation Index (RVI), and the application of deep learning models, among which the Bi-temporal Adapter Network (BAN) stood out for its adaptability. The model was retrained using local data and validated through fieldwork. As a final outcome, an interactive visual interface was developed to explore detected changes intuitively, supporting its application in environmental monitoring and land-use planning.
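As a point of reference for the index computation mentioned above, the sketch below derives the dual-polarization Radar Vegetation Index from co-registered Sentinel-1 VV and VH bands, assuming the common dual-pol formulation RVI = 4*VH / (VV + VH) in linear power units. The array names and the dB input convention are illustrative.

```python
# Minimal sketch: dual-polarization RVI from Sentinel-1 VV/VH backscatter.
# Assumes input bands in dB; converts to linear power before applying
# RVI = 4*VH / (VV + VH). Array names are illustrative.
import numpy as np

def db_to_linear(band_db: np.ndarray) -> np.ndarray:
    """Convert backscatter from dB to linear power."""
    return np.power(10.0, band_db / 10.0)

def rvi_dual_pol(vv_db: np.ndarray, vh_db: np.ndarray) -> np.ndarray:
    """Compute the dual-pol RVI for co-registered VV and VH bands."""
    vv = db_to_linear(vv_db)
    vh = db_to_linear(vh_db)
    return 4.0 * vh / (vv + vh + 1e-12)  # small epsilon avoids division by zero

# A simple bi-temporal change layer is then the difference of two RVI maps:
# change = rvi_dual_pol(vv_t2, vh_t2) - rvi_dual_pol(vv_t1, vh_t1)
```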
Description: In contemporary organizational environments, efficient access to information has become a critical factor for strategic decision-making, process optimization, and service quality improvement. However, many organizations encounter challenges when querying both structured and unstructured data due to a reliance on specialized technical expertise. This dependency hinders the effective utilization of internal knowledge and limits the democratization of information access. To address this challenge, the present research proposes the development of a multi-agent chatbot with conversational artificial intelligence capabilities, based on the Retrieval-Augmented Generation (RAG) architecture and Large Language Models (LLMs). Unlike traditional rule-based systems, the proposed solution enables open-ended queries in natural language, facilitating user interaction with databases without requiring technical knowledge. The system was evaluated across multiple architectural configurations, including Vanilla RAG, Agentic RAG, and a fine-tuned variant, as well as with several open-source LLMs. Among these, the Mistral model stood out for its contextual accuracy. Additionally, techniques for the effective transformation of organizational data into semantic representations were validated, and optimal configurations for contextual retrieval and information structuring were identified. The results demonstrate the potential of this solution to democratize access to organizational knowledge and serve as a foundation for future advancements in applied artificial intelligence.
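To make the Vanilla RAG configuration concrete, here is a minimal sketch of the retrieve-then-generate flow: documents are embedded, the passages most similar to a question are retrieved, and the assembled context is passed to an LLM. The embedding model name, the sample documents, and the ask_llm placeholder are illustrative assumptions; the thesis evaluated several open-source LLMs, among which Mistral performed best.

```python
# Minimal sketch of a Vanilla RAG flow: embed, retrieve by cosine similarity,
# then generate from the retrieved context. Model name and documents are
# illustrative; `ask_llm` is a placeholder for whichever LLM client is used.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

documents = [
    "Purchase orders above $10,000 require board approval.",
    "Employee onboarding takes five business days.",
]
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question (cosine similarity)."""
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def ask_llm(prompt: str) -> str:
    # Placeholder for the generation step, e.g. a locally hosted Mistral model;
    # the actual serving stack used in the thesis is not assumed here.
    raise NotImplementedError("plug in your LLM client here")

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return ask_llm(prompt)
```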
Description: This thesis project proposes the development of an emotion recognition model based on EEG brain signals, captured through brain-computer interfaces (BCIs) using the OpenBCI Cyton + Daisy device. The research falls within the field of affective computing, aiming to enhance human-computer interaction through systems that understand and empathetically respond to the user’s emotional state. The study involves the design and implementation of an in-house EEG database, created with the participation of 37 volunteers. An experimental protocol based on the viewing of audiovisual stimuli was applied, accompanied by the SAM questionnaire for emotional labeling. The acquired signals underwent a rigorous preprocessing pipeline, which included techniques such as filtering, baseline correction, common average referencing (CAR), artifact removal, and independent component analysis (ICA). Subsequently, relevant features were extracted and used as input for various Machine Learning models: Support Vector Machines (SVM), Random Forests (RF), Multilayer Perceptron (MLP), and Transformer Encoders (TE). The models’ performance was evaluated using metrics such as accuracy, recall, and F1-score in both subject-independent and subject-dependent evaluation scenarios, and results were also compared with open databases such as DEAP. The proposed model demonstrated competitive and consistent results in the classification of emotional states, with an emphasis on the system’s scalability and accessibility through the use of open-source hardware and software. This research contributes to the advancement of inclusive and low-cost solutions in the field of emotion recognition based on neurotechnology.
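A minimal sketch of part of the pipeline described above (band-pass filtering, common average referencing, simple band-power features, and an SVM classifier) is shown below. The sampling rate, filter band, and feature choice are illustrative assumptions, and ICA and artifact removal are omitted for brevity.

```python
# Minimal sketch of an EEG preprocessing and classification pipeline:
# band-pass filter, common average reference (CAR), band-power features, SVM.
# Sampling rate, band limits, and shapes are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.svm import SVC

FS = 125  # Hz (illustrative; e.g. Cyton + Daisy at 16 channels)

def bandpass(eeg: np.ndarray, low: float = 1.0, high: float = 45.0, fs: int = FS) -> np.ndarray:
    """Band-pass filter each channel of a (channels, samples) trial."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=1)

def car(eeg: np.ndarray) -> np.ndarray:
    """Common average reference: subtract the mean across channels."""
    return eeg - eeg.mean(axis=0, keepdims=True)

def band_power_features(eeg: np.ndarray) -> np.ndarray:
    """Mean power spectral density per channel as a simple feature vector."""
    _, psd = welch(eeg, fs=FS, axis=1)
    return psd.mean(axis=1)

def train(X_trials: np.ndarray, y: np.ndarray) -> SVC:
    """X_trials: (n_trials, channels, samples); y: SAM-derived labels (illustrative)."""
    X = np.stack([band_power_features(car(bandpass(t))) for t in X_trials])
    return SVC(kernel="rbf").fit(X, y)
```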
Description: Effective faculty assignment to courses in higher education institutions represents a critical challenge for academic human talent management, directly impacting the quality of the teaching-learning process. Currently, manual assignments face limitations related to subjectivity, lack of standardization, and significant administrative workload, often resulting in suboptimal allocations. To overcome these barriers, this study proposes and validates a comprehensive recommender system that integrates sentiment analysis using locally adapted transformer language models (RoBERTuito) and mathematical optimization models to ensure precise alignment between faculty competencies and specific academic requirements. The methodology involved the creation of comprehensive faculty profiles by integrating historical evaluations, automatically categorized student feedback, and competencies defined according to the institution’s Teaching Competency Pentagon. Additionally, dynamic weights were implemented to progressively adjust the relative importance of pedagogical and specialized factors according to the academic cycle, accurately reflecting institutional expectations. The results show a high degree of alignment between the system’s recommendations and the manual assignments made by department heads, with particularly strong performance in technical programs such as Computer Science. Furthermore, a pilot evaluation with program directors from three engineering departments revealed strong acceptance and high perceived usefulness of the system, particularly in terms of the clarity, quality, and relevance of the generated profiles. The developed system not only significantly reduces the manual operational burden but also serves as a strategic tool for continuously optimizing academic and administrative processes in universities, suggesting its scalability and replicability in other educational contexts.
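Purely as an illustration of the optimization step, the sketch below combines a competency-match matrix and a sentiment-derived matrix into one suitability score, weights them by academic cycle, and solves the faculty-to-course assignment with the Hungarian algorithm. The weighting scheme and score scales are assumptions; the thesis's actual model and RoBERTuito pipeline are not reproduced here.

```python
# Minimal sketch: weighted suitability matrix + Hungarian assignment.
# Weights and scores are illustrative stand-ins for the thesis's dynamic weights.
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign(competency: np.ndarray, sentiment: np.ndarray, cycle: int) -> list[tuple[int, int]]:
    """Return (faculty, course) index pairs maximizing a weighted suitability score.

    competency, sentiment: (n_faculty, n_courses) matrices scaled to [0, 1].
    cycle: academic cycle; later cycles shift weight toward specialization here,
    a simple stand-in for the dynamic weighting described above.
    """
    w_spec = min(0.4 + 0.05 * cycle, 0.8)      # illustrative dynamic weighting
    score = w_spec * competency + (1 - w_spec) * sentiment
    rows, cols = linear_sum_assignment(-score)  # negate: the solver minimizes cost
    return list(zip(rows, cols))

# Example with 3 faculty and 3 courses (random illustrative data):
# pairs = assign(np.random.rand(3, 3), np.random.rand(3, 3), cycle=6)
```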
Description: The increase in climate variability and the growing frequency of extreme events represent significant challenges for society and the environment. Gaining a deep understanding of past anomalous events is essential for planning, adaptation, and mitigation of the impact of similar events in the future. Climate time series contain valuable information about these patterns, but analyzing and interpreting significant deviations requires advanced analytical tools. Although established methods exist for anomaly detection in time series, they often lack enriched contextual interpretation. The absence of this contextualization limits the practical value of anomaly detection for informed decision-making and anticipating future scenarios. This work proposes a process that integrates time series anomaly detection techniques with contextual information drawn from textual sources. Statistical approaches, machine learning, deep learning, and large language models are used to identify significant deviations in historical precipitation data. The detected anomalies are complemented with relevant information extracted from news sources through a manual and semi-automated process, and are stored in a structured relational database. Additionally, a numerical and visual similarity analysis is implemented to compare anomalies with one another, enabling the identification of recurring patterns over time and providing a more comprehensive understanding of extreme climate events.
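As a simple, hedged illustration of the detection and similarity steps, the sketch below flags anomalous values in a precipitation series with a robust z-score and compares anomaly windows by Euclidean distance. The threshold and window length are illustrative; the machine learning, deep learning, and LLM-based detectors used in the thesis are not shown.

```python
# Minimal sketch: robust z-score anomaly flagging on a precipitation series,
# plus Euclidean comparison of anomaly-centered windows. Parameters are illustrative.
import numpy as np

def robust_zscore(series: np.ndarray) -> np.ndarray:
    """Deviation from the median in MAD units (robust to outliers)."""
    med = np.median(series)
    mad = np.median(np.abs(series - med)) + 1e-9
    return (series - med) / (1.4826 * mad)

def anomalous_indices(series: np.ndarray, threshold: float = 3.0) -> np.ndarray:
    """Indices whose robust z-score exceeds the threshold in absolute value."""
    return np.where(np.abs(robust_zscore(series)) > threshold)[0]

def window_distance(series: np.ndarray, i: int, j: int, half: int = 3) -> float:
    """Euclidean distance between two windows centered on anomalies i and j."""
    a = series[max(i - half, 0): i + half + 1]
    b = series[max(j - half, 0): j + half + 1]
    n = min(len(a), len(b))
    return float(np.linalg.norm(a[:n] - b[:n]))
```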
Description: This study analyzes the feasibility and impact of Low-Code/No-Code platforms on the digitalization of administrative processes and data analysis in SMEs in Cuenca, Ecuador. Through a literature review, structured interviews, and a case study at the University of Cuenca, the main challenges, needs, and opportunities for technology adoption in local microenterprises and SMEs are identified. The results show that although there is interest in digitalization, barriers such as limited resources, lack of technical knowledge, and absence of clear strategies still persist. Low-Code/No-Code platforms emerge as a viable alternative to overcome these limitations, enabling the agile development of customized solutions, task automation, and data integration without requiring significant investments. The case study demonstrates that, through the use of these tools, it was possible to implement a solution for inventory data entry and visualization in an institutional environment, facilitating report generation and data-driven decision-making, which contributed to improved efficiency and transparency. It is concluded that the adoption of Low-Code/No-Code technologies can accelerate the digital transformation of SMEs, provided it is accompanied by training, support, and proper change management. Finally, recommendations are proposed to strengthen the digital culture and ensure the sustainability of the implemented solutions.
Description: Cervical cancer (CC), mainly caused by Human Papillomavirus (HPV), is one of the leading causes of death among women, particularly in rural areas with limited access to healthcare. HPV self-sampling is an effective and accepted alternative, but its adoption faces barriers such as lack of information and guidance. This thesis presents the development of a conversational Virtual Assistant (VA) integrated into an Android mobile app to educate women aged 30 to 65 about self-sampling and sexual and reproductive health. The VA combines a chatbot built with the Rasa framework and a large language model (LLM) to handle queries outside the trained dataset. A dataset of 212 validated question-answer pairs was created, incorporating colloquial language from the target population. Initially, a local LLM was implemented for offline use, but due to performance issues and app size, it was replaced with a rule-based chatbot. Laboratory and final usability tests with women from Baños parish showed high acceptance, with users highlighting the system’s clarity, usefulness, and ease of use. The SUS questionnaire yielded an average score of 90.6/100, indicating excellent usability. Overall, the developed system proves to be a viable solution for improving access to reliable medical information in rural settings.
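The hybrid design described above can be pictured as a Rasa custom action that handles queries falling outside the trained dataset. The sketch below is only illustrative: the action name and the fallback_answer helper (backed by the rule-based fallback, or by an LLM in the initial design) are assumptions, not the thesis's actual implementation.

```python
# Illustrative sketch: a Rasa custom action that routes out-of-scope queries
# to a fallback handler. Names and the stub helper are assumptions.
from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher

def fallback_answer(question: str) -> str:
    # Hypothetical stand-in: look up the validated question-answer pairs,
    # otherwise defer to the configured fallback model (rule-based in the final app).
    return "Lo siento, por ahora no tengo una respuesta para esa pregunta."

class ActionFallbackAnswer(Action):
    def name(self) -> str:
        return "action_fallback_answer"  # referenced from a fallback rule in rules.yml

    def run(self, dispatcher: CollectingDispatcher, tracker: Tracker, domain: dict):
        question = tracker.latest_message.get("text", "")
        dispatcher.utter_message(text=fallback_answer(question))
        return []
```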
Description: Modern business applications require software structures that allow for evolution, scalability, and maintenance without affecting the overall operation of the system. However, many current systems have limitations because they are based on monolithic architectures, making it difficult to adapt to new processes and technologies. In this context, microservices architecture is presented as an alternative for designing more modular and decoupled business applications. This thesis proposes the design of a microservices-based software architecture applied to a commercial management system. The design was developed using the 4SRS-MSLA method, which guides the transformation of functional requirements into logical components. Complementarily, SoaML notation was used to model participants, interfaces, and contracts between microservices, ensuring clarity in the specification of services and interaction flows. As part of the process, practical tools such as a block diagram and an API table were incorporated, which facilitated communication with the stakeholder and allowed for early validation of the expected services between modules. The detailed design of the most important microservices for the stakeholder was also carried out, and a functional prototype was built that integrated these microservices. Through a representative functional case, the interoperability between services was tested and the technical feasibility of the design was confirmed. The proposed architecture proves to be a feasible, modular, and scalable solution, capable of guiding the development of modern service-oriented business systems, thus fulfilling the objectives set out in this research.
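Purely to illustrate the kind of service contract recorded in the API table and the SoaML interfaces, the sketch below defines one endpoint of a hypothetical orders microservice with FastAPI; the service name, paths, and fields are assumptions rather than the system's actual design.

```python
# Illustrative sketch only: one endpoint of a hypothetical "orders" microservice,
# showing a request/response contract of the kind captured in an API table.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="orders-service")

class OrderRequest(BaseModel):
    customer_id: int
    product_id: int
    quantity: int

class OrderResponse(BaseModel):
    order_id: int
    status: str

@app.post("/orders", response_model=OrderResponse)
def create_order(order: OrderRequest) -> OrderResponse:
    # In a real deployment this would call the inventory and billing services
    # through their published interfaces; here we only return a stub.
    return OrderResponse(order_id=1, status="created")
```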
Description: Agriculture represents a fundamental sector in developing countries like Ecuador, where it is still practiced in an artisanal manner and with limited access to technologies that optimize production. Current IoT-based solutions have shown great potential to automate tasks such as irrigation, fertilization, or environmental monitoring. However, these solutions often rely on constant connectivity and energy supply, which limits their effectiveness in rural areas with poor infrastructure. In addition, they focus solely on process automation, without considering mechanisms that allow the capture and recognition of ancestral knowledge possessed by farmers regarding environmental conditions at the time of performing their tasks. This need motivated the design of a solution capable of operating under adverse conditions, automating the detection of agricultural activities through sensors while also documenting the farmers’ decisions. In this way, it facilitates the correlation between environmental conditions and traditional practices, enabling their analysis, validation, and replication. The proposal addresses current limitations through an architecture that combines automation with the active preservation of ancestral knowledge, even in contexts with intermittent connectivity. This work aimed to design a distributed software architecture for IoT applications in agricultural contexts with limited connectivity, focused on task monitoring and the preservation of ancestral knowledge. The proposal envisions a scalable architecture that enables: (1) detecting agricultural practices through environmental data captured by sensors, (2) recording decisions and techniques applied by farmers, and (3) correlating both sources to digitize ancestral knowledge, validate patterns with machine learning, and generate adaptive recommendations that integrate traditional wisdom and precision agriculture. The adopted methodology combined iterative modeling of functional and non-functional requirements, architectural design based on Kruchten’s 4+1 view model, and SOLID design principles. A functional prototype was later implemented using Flutter and Isar for the mobile application, Angular for the web environment, RabbitMQ for asynchronous messaging, and Spring Boot with PostgreSQL for the central server. The local node was developed in Python and deployed on a Raspberry Pi. The system was evaluated through functional and non-functional tests in controlled scenarios, using simulated data. The results highlight the system’s ability to operate autonomously even under intermittent connectivity, as well as its fault tolerance, scalability for the inclusion of new cultivation areas in environments with multiple nodes, and energy efficiency, achieved through intelligent assignment of responsibilities that optimizes the use of limited resources such as infrastructure, connectivity, and energy. The developed architecture constitutes a contribution that offers a modular, resilient, and adaptable solution for rural contexts with limited infrastructure. It facilitates the automated detection of agricultural practices through environmental data and allows recording farmers’ actions with their specific conditions, thus strengthening the preservation of ancestral knowledge. It integrates accessible digital tools that improve decision-making, promotes technological equity by providing operational autonomy to communities with low connectivity, and presents a replicable methodological framework to inspire innovative and sustainable solutions in similar environments.
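One way to picture the local node's tolerance to intermittent connectivity is a store-and-forward publisher: readings are buffered and flushed to RabbitMQ only when the broker is reachable. The sketch below is a simplified assumption of how the Python node might do this; the host, queue name, and in-memory buffer stand in for a real persistent queue on the Raspberry Pi.

```python
# Minimal sketch: store-and-forward publishing of sensor readings to RabbitMQ.
# Host, queue name, and the in-memory buffer are illustrative simplifications.
import json
import pika
from pika.exceptions import AMQPConnectionError

BUFFER: list[dict] = []  # stand-in for a persistent on-disk queue

def publish_readings(host: str = "rabbitmq.local", queue: str = "sensor_readings") -> None:
    """Try to flush buffered readings; keep them if the broker is unreachable."""
    try:
        connection = pika.BlockingConnection(pika.ConnectionParameters(host=host))
    except AMQPConnectionError:
        return  # offline: readings stay buffered until the next attempt
    channel = connection.channel()
    channel.queue_declare(queue=queue, durable=True)
    while BUFFER:
        reading = BUFFER.pop(0)
        channel.basic_publish(
            exchange="",
            routing_key=queue,
            body=json.dumps(reading),
            properties=pika.BasicProperties(delivery_mode=2),  # persist on the broker
        )
    connection.close()

# Example: BUFFER.append({"plot": "A1", "soil_moisture": 0.31}); publish_readings()
```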









