BORIS Theses

Bern Open Repository and Information System

Novel Techniques for Robust and Generalizable Machine Learning

Lemkhenter, Abdelhak (2023). Novel Techniques for Robust and Generalizable Machine Learning. (Thesis). Universität Bern, Bern

23lemkhenter_a.pdf - Thesis
Available under License Creative Commons: Attribution (CC-BY 4.0).
Download (7MB)

Abstract

Neural networks have transcended their status as powerful proof-of-concept machine learning models to become a highly disruptive technology that has revolutionized many quantitative fields, such as drug discovery, autonomous vehicles, and machine translation. Today, it is nearly impossible to go a single day without interacting with a neural-network-powered application. From search engines to on-device photo processing, neural networks have become the go-to solution thanks to recent advances in computational hardware and an unprecedented scale of training data. Larger and less curated datasets, typically obtained through web crawling, have greatly propelled the capabilities of neural networks forward. However, this increase in scale amplifies certain challenges associated with training such models. Beyond toy or carefully curated datasets, data in the wild is plagued with biases, imbalances, and various sources of noise. Given the size of modern neural networks, such models risk learning spurious correlations that fail to generalize beyond their training data. This thesis addresses the problem of training more robust and generalizable machine learning models across a wide range of learning paradigms, applied to medical time series and computer vision tasks. The former is a typical example of a data modality with a low signal-to-noise ratio and a high degree of variability between subjects and datasets. There, we tailor the training scheme to focus on robust patterns that generalize to new subjects and to ignore noisier, subject-specific patterns. To achieve this, we first introduce a physiologically inspired unsupervised training task and then extend it by explicitly optimizing for cross-dataset generalization using meta-learning.
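As background for the meta-learning idea mentioned above, the sketch below shows a generic Reptile-style meta-update on toy scalar "datasets" with different optima. This is a minimal illustration of optimizing shared weights to adapt well across datasets; it is not the thesis's specific meta-learning scheme, and all function names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def sgd(w, grad_fn, lr=0.1, steps=5):
    """Plain gradient descent: adapt weights to one dataset's loss."""
    for _ in range(steps):
        w = w - lr * grad_fn(w)
    return w

def reptile(w, datasets, meta_lr=0.5, rounds=20):
    """Reptile-style meta-learning: nudge the meta-weights toward the
    weights obtained after adapting to each individual dataset."""
    for _ in range(rounds):
        for grad_fn in datasets:
            w_task = sgd(w, grad_fn)        # inner loop: adapt to one dataset
            w = w + meta_lr * (w_task - w)  # outer loop: move toward adapted weights
    return w

# Two toy "datasets": quadratic losses (w - c)^2 with different optima c.
datasets = [lambda w: 2.0 * (w - 1.0),   # gradient of (w - 1)^2
            lambda w: 2.0 * (w - 3.0)]   # gradient of (w - 3)^2

w_meta = reptile(np.float64(0.0), datasets)
```

The meta-weights settle between the two per-dataset optima, i.e. at an initialization from which a few gradient steps reach either optimum quickly; the thesis replaces this generic objective with one that explicitly targets cross-dataset generalization.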
In the context of image classification, we address the challenge of training semi-supervised models under class imbalance by designing a novel label-refinement strategy with higher local sensitivity to minority-class samples while preserving the global data distribution. Lastly, we introduce a new Generative Adversarial Network (GAN) training loss. Such generative models could be applied to improve the training of subsequent models in the low-data regime by augmenting the dataset with generated samples. Unfortunately, GAN training relies on a delicate balance between its components, making it prone to mode collapse. Our contribution consists of defining a more principled GAN loss whose gradients incentivize the generator to seek out missing modes in its distribution. All in all, this thesis tackles the challenge of training more robust machine learning models that can generalize beyond their training data. This necessitates the development of methods specifically tailored to handle the diverse biases and spurious correlations inherent in the data. Importantly, achieving greater generalizability goes beyond simply increasing the volume of data; it requires careful consideration of training objectives and model architecture. By tackling these challenges, this research contributes to advancing the field of machine learning and underscores the significance of thoughtful design in obtaining more resilient and versatile models.
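For context on the GAN loss the abstract refers to, the sketch below computes the standard discriminator loss and the common non-saturating generator loss from discriminator scores. This is textbook background only, assuming sigmoid discriminator outputs in (0, 1); it is not the more principled, mode-seeking loss proposed in the thesis.

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy over discriminator scores:
    push D(real) toward 1 and D(fake) toward 0."""
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def generator_loss_nonsaturating(d_fake):
    """Non-saturating generator loss: maximize log D(G(z)) rather than
    minimizing log(1 - D(G(z))), which gives stronger gradients early
    in training when the discriminator easily rejects fakes."""
    return -np.mean(np.log(d_fake))

# Toy discriminator scores on a batch of real and generated samples.
d_real = np.array([0.9, 0.8, 0.95])
d_fake = np.array([0.2, 0.1, 0.15])
print(discriminator_loss(d_real, d_fake))
print(generator_loss_nonsaturating(d_fake))
```

Both losses act only on per-sample scores, so nothing in their gradients tells the generator which modes of the data distribution it is missing; the thesis's contribution is a loss whose gradients supply exactly that signal.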

Item Type: Thesis
Dissertation Type: Cumulative
Date of Defense: 22 August 2023
Subjects: 000 Computer science, knowledge & systems
500 Science > 510 Mathematics
600 Technology
600 Technology > 620 Engineering
Institute / Center: 08 Faculty of Science > Institute of Computer Science (INF)
Depositing User: Sarah Stalder
Date Deposited: 05 Aug 2025 11:04
Last Modified: 06 Aug 2025 12:11
URI: https://boristheses.unibe.ch/id/eprint/6529
