BORIS Theses
Bern Open Repository and Information System

Resource-Aware Distributed Machine Learning for Artificial Intelligence of Things

Samikwa, Eric (2024). Resource-Aware Distributed Machine Learning for Artificial Intelligence of Things. (Thesis). Universität Bern, Bern.

Text: 24samikwa_e_1_.pdf - Thesis (4MB)
Available under License Creative Commons: Attribution-NonCommercial-NoDerivatives 4.0 (CC BY-NC-ND 4.0).

Abstract

The integration of machine learning models within Internet of Things (IoT) ecosystems presents significant challenges, including the high cost of executing deep neural networks (DNNs) on resource-constrained devices, ensuring privacy in data-sensitive applications, and managing the dynamic and heterogeneous nature of the data and resources available on IoT devices. This thesis tackles these challenges by proposing a set of novel distributed machine learning strategies, each designed to optimize a different aspect of machine learning workflows in IoT systems. These strategies, Early Exit of Computation (EEoC), Distributed Micro-Split Deep Learning (DISNET), Adaptive Resource-Aware Split Learning (ARES), and Dynamic Federated Split Learning (DFL), offer solutions for efficiently managing inference and training processes across heterogeneous IoT networks. Through these approaches, the thesis contributes to the emergence of the Artificial Intelligence of Things (AIoT), where intelligent decision-making is integrated with IoT infrastructure.

EEoC introduces an adaptive mechanism for optimizing DNN inference by allowing early exits based on computation intensity and resource availability, thereby reducing latency and conserving energy on IoT devices. DISNET extends this efficiency to a broader scale with a micro-split deep learning approach that enables flexible, distributed, and parallel execution of DNN tasks, ensuring minimal inference latency and reduced energy consumption while maintaining accuracy across heterogeneous IoT devices.

Building on this foundation of efficient inference, ARES introduces a dynamic, resource-aware approach to the distributed training of machine learning models tailored to edge IoT environments. It significantly accelerates local training on resource-constrained devices and minimizes the effect of slower devices on global model performance through device-targeted split points, while adapting to time-varying training conditions. DFL complements ARES by extending the federated learning framework to accommodate the heterogeneity and dynamism of IoT devices and training data. It optimizes training efficiency through a resource-aware federated approach with similarity-based clustering, addressing heterogeneity in both training data and device resources.

Through comprehensive evaluations on IoT testbeds with heterogeneous data and device resources, this thesis demonstrates the effectiveness of these strategies in improving the efficiency, accuracy, and adaptability of machine learning models in IoT. The contributions significantly advance the field of distributed machine learning, offering scalable and efficient methods for deploying intelligent applications in varied IoT contexts. This research paves the way for next-generation AIoT ecosystems capable of supporting complex, data-driven applications with enhanced responsiveness, energy efficiency, and privacy.
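To make the early-exit idea described in the abstract concrete, the following is a minimal illustrative sketch of the general technique, not the thesis's EEoC implementation: a small network with an auxiliary classifier partway through returns the early prediction when its softmax confidence passes a threshold, and otherwise continues through the remaining layers. The architecture, layer sizes, and confidence threshold are assumptions chosen purely for illustration.

```python
# Illustrative early-exit inference sketch (hypothetical architecture, not EEoC itself).
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.block1 = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.exit1 = nn.Linear(16 * 14 * 14, num_classes)   # early (auxiliary) head
        self.block2 = nn.Sequential(
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.exit2 = nn.Linear(32 * 7 * 7, num_classes)      # final head

    def forward(self, x, confidence_threshold=0.9):
        h = self.block1(x)
        early_logits = self.exit1(h.flatten(1))
        confidence = F.softmax(early_logits, dim=1).max(dim=1).values
        # If the early head is confident enough, skip the remaining layers.
        if bool((confidence >= confidence_threshold).all()):
            return early_logits, "early_exit"
        h = self.block2(h)
        return self.exit2(h.flatten(1)), "full_path"

model = EarlyExitNet().eval()
with torch.no_grad():
    logits, path = model(torch.randn(1, 1, 28, 28))
print(path, logits.shape)
```

With untrained weights the early head is rarely confident, so the sample input usually takes the full path; in a trained model, easy inputs would exit early and skip the deeper, more expensive layers, which is what reduces latency and energy use on constrained devices.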
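The split-learning strategy underlying ARES and DFL can likewise be pictured with a short, generic sketch: the model is cut at a split point, the device runs the layers before the cut, the server runs the rest, and only the cut-layer activation and its gradient cross the boundary. This is a hedged illustration of the general technique under assumed layer sizes, a fixed split index, and hypothetical variable names; the thesis's resource-aware, device-targeted selection of split points is not shown.

```python
# Generic split-learning sketch (assumed sizes and split point; not ARES/DFL code).
import torch
import torch.nn as nn

layers = nn.ModuleList([
    nn.Sequential(nn.Linear(784, 256), nn.ReLU()),
    nn.Sequential(nn.Linear(256, 128), nn.ReLU()),
    nn.Sequential(nn.Linear(128, 10)),
])
split_point = 1  # a resource-aware scheduler would choose this per device

device_part = nn.Sequential(*layers[:split_point])   # runs on the IoT device
server_part = nn.Sequential(*layers[split_point:])   # runs on the edge server
opt = torch.optim.SGD(
    list(device_part.parameters()) + list(server_part.parameters()), lr=0.1)

x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))

# Device-side forward pass up to the split point.
activation = device_part(x)
# The detached activation is what would be transmitted over the network.
sent = activation.detach().requires_grad_(True)

# Server-side forward pass, loss, and backward pass down to the cut layer.
logits = server_part(sent)
loss = nn.functional.cross_entropy(logits, y)
loss.backward()

# The cut-layer gradient is returned to the device, which finishes backpropagation.
activation.backward(sent.grad)
opt.step()
print(f"loss={loss.item():.3f}")
```

Moving the split point deeper shifts computation from the server onto the device and changes the size of the activation that must be transmitted, which is exactly the trade-off a resource-aware split-point selection scheme manages.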

Item Type: Thesis
Dissertation Type: Single
Date of Defense: 12 August 2024
Subjects: 000 Computer science, knowledge & systems
500 Science > 510 Mathematics
Institute / Center: 08 Faculty of Science > Institute of Computer Science (INF)
Depositing User: Hammer Igor
Date Deposited: 22 Aug 2024 18:38
Last Modified: 22 Aug 2024 18:38
URI: https://boristheses.unibe.ch/id/eprint/5378
