A review of on-device machine learning for IoT: An energy perspective
Abstract
Recently, there has been substantial interest in on-device Machine Learning (ML) models to provide intelligence for Internet of Things (IoT) applications such as image classification, human activity recognition, and
anomaly detection. Traditionally, ML models are deployed in the cloud or on centralized servers to take advantage of their abundant computational resources. However, sharing data with the cloud and third parties compromises privacy and may introduce propagation delays in the network due to the large amount of transmitted data, degrading the performance of real-time applications. To this end, deploying ML models on-device (i.e., on IoT devices), so that data does not need to be transmitted, becomes imperative. However, deploying and running ML models on already resource-constrained IoT devices is challenging and incurs considerable energy consumption. Numerous
works have been proposed in the literature to address this issue. Although considerable work discusses energy-aware ML approaches for on-device implementation, the literature still lacks a comprehensive review of this subject. In this paper, we review existing studies on on-device ML models for IoT applications from an energy-consumption perspective. One of the key contributions of this study is a taxonomy that categorizes the approaches in the literature for employing energy-aware on-device ML models on IoT devices. Based on our review, we present our key findings and discuss open issues that other researchers can investigate further. We believe that this study will serve as a reference for practitioners and researchers who want to employ energy-aware on-device ML models for IoT applications.