Abstract
This paper addresses the efficacy and significance of lightweight edge AI frameworks for detecting wear, degradation, and malfunctions in embedded systems. By employing pruning, quantization, and knowledge distillation, we adapted deep learning models to run efficiently on edge devices, achieving high accuracy and low latency while maintaining a small memory footprint and low power consumption. Edge AI reduces reliance on cloud architectures while supporting data privacy, system autonomy, and operational reliability in industrial systems, respiratory healthcare devices, and measurement instruments. The study offers an integrated vision of a real-time fault detection system that is scalable and deployable across a wide range of embedded applications. With the rapid growth of AI-enabled embedded healthcare devices raising the prospect of larger and more complex intelligent edge systems, this research also paves the way for fault detection solutions deployable directly on embedded hardware. Edge AI's ability to process data locally has become evident: it reduces latency, enhances privacy, and lowers system power consumption. This study focuses on the importance of edge AI in embedded systems, the mechanisms of software operation, and the design of a proposal that improves performance, particularly in practical applications such as healthcare devices (for breathing and heart rate monitoring), the Internet of Things (IoT), and robotics.
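As a minimal illustration of one of the compression techniques mentioned above, the following sketch shows symmetric per-tensor int8 post-training quantization of a weight matrix in NumPy. This is a generic textbook scheme, not the paper's exact pipeline; the weight shape and function names are illustrative assumptions.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization: map float32 weights to int8.

    The scale maps the largest absolute weight to 127, so every value
    fits in a signed 8-bit integer (4x smaller than float32)."""
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from the int8 codes."""
    return q.astype(np.float32) * scale

# Illustrative example: quantize a random "layer" weight tensor
# and measure the worst-case reconstruction error.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 3)).astype(np.float32)
q, scale = quantize_int8(w)
max_err = float(np.max(np.abs(w - dequantize(q, scale))))
```

The rounding step introduces an error of at most half a quantization step (`scale / 2`) per weight, which is the accuracy/memory trade-off that makes such models fit on microcontroller-class edge devices.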
The XGBoost, CNN, LSTM, and Random Forest algorithms were trained and tested on the same data splits to ensure a neutral comparison and evaluation, and normalization was applied before measuring differences in speed, accuracy, and memory. The experiments on the proposed model were conducted using Python 3.10 with the NumPy, Pandas, and scikit-learn libraries. Training was carried out on Linux machines with an Intel i7 processor and 16 GB of RAM; deployment targeted the Arduino Portenta and Raspberry Pi Pico. The 1D-CNN model comprises a Conv1D layer with 64 filters and a kernel size of 3, followed by MaxPooling, Flatten, and Dense layers for classification. The LSTM model uses the Adam optimizer and cross-entropy loss, trained for 50 epochs with a batch size of 32. The models were evaluated using F1-score, Recall, Precision, Accuracy, and a confusion matrix to diagnose classification accuracy and distinguish between normal and abnormal cases. The goal is to enable intelligent decision-making with less time and energy, ensuring the reliability and safety of industrial software and applications. The study also recommends addressing the challenges facing current trends, such as task scheduling and reliable, secure inference, while deepening interoperability with the open-source ecosystems on which edge AI relies.
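The evaluation metrics named above (Accuracy, Precision, Recall, F1-score, and the confusion matrix) are all available in scikit-learn, which the study lists among its libraries. The sketch below shows how they would be computed for a binary normal/abnormal classification; the label vectors here are invented placeholders, not the paper's data.

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)

# Hypothetical ground-truth and predicted labels:
# 1 = abnormal (fault detected), 0 = normal operation.
y_true = [0, 0, 1, 1, 1, 0, 1, 0]
y_pred = [0, 0, 1, 0, 1, 0, 1, 1]

acc  = accuracy_score(y_true, y_pred)
prec = precision_score(y_true, y_pred)   # of predicted faults, how many are real
rec  = recall_score(y_true, y_pred)      # of real faults, how many were caught
f1   = f1_score(y_true, y_pred)          # harmonic mean of precision and recall
cm   = confusion_matrix(y_true, y_pred)  # rows: true class, cols: predicted class
```

For fault detection, recall is usually the critical figure: a missed fault (false negative) in a healthcare or industrial device is costlier than a false alarm, which is why the study reports the full confusion matrix rather than accuracy alone.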