
Presentation

Pareto Optimization of CNN Models via Hardware-Aware Neural Architecture Search for Drainage Crossing Classification on Resource-Limited Devices
Description
Embedded devices, constrained by limited memory and processing power, require deep learning models tailored to their specifications. This research explores customized model architectures for classifying drainage crossing images. Building on the foundational ResNet-18, this paper aims to maximize prediction accuracy, reduce memory footprint, and minimize inference latency. Various configurations were systematically explored using hardware-aware neural architecture search, accumulating 1,717 experimental results over six benchmarking variants. Analysis of the experimental data, augmented by nn-Meter, provided a comprehensive view of inference latency across four different predictors. Notably, a Pareto front analysis over three objectives (accuracy, latency, and memory) yielded five non-dominated solutions. These standout models offer efficiency while retaining accuracy, presenting a compelling alternative to the conventional ResNet-18 when deployed in resource-constrained environments. The presentation concludes by highlighting insights drawn from the results and suggesting avenues for future exploration.
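
The core of the Pareto analysis described above is filtering benchmarked configurations down to the non-dominated set over accuracy, latency, and memory. The snippet below is a minimal sketch of such a filter; the candidate names and numbers are hypothetical placeholders for illustration, not results reported in the paper.

from typing import List, NamedTuple

class Candidate(NamedTuple):
    name: str
    accuracy: float    # higher is better
    latency_ms: float  # lower is better
    memory_mb: float   # lower is better

def dominates(a: Candidate, b: Candidate) -> bool:
    """True if `a` is at least as good as `b` on every objective and strictly better on at least one."""
    at_least_as_good = (
        a.accuracy >= b.accuracy
        and a.latency_ms <= b.latency_ms
        and a.memory_mb <= b.memory_mb
    )
    strictly_better = (
        a.accuracy > b.accuracy
        or a.latency_ms < b.latency_ms
        or a.memory_mb < b.memory_mb
    )
    return at_least_as_good and strictly_better

def pareto_front(candidates: List[Candidate]) -> List[Candidate]:
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# Hypothetical benchmark results, for illustration only.
results = [
    Candidate("resnet18_baseline", accuracy=0.94, latency_ms=120.0, memory_mb=44.7),
    Candidate("nas_variant_a",     accuracy=0.93, latency_ms=45.0,  memory_mb=11.2),
    Candidate("nas_variant_b",     accuracy=0.90, latency_ms=30.0,  memory_mb=6.8),
    Candidate("nas_variant_c",     accuracy=0.89, latency_ms=60.0,  memory_mb=20.0),  # dominated by nas_variant_a
]
print(pareto_front(results))

In this toy example the filter returns three non-dominated models: the baseline keeps the best accuracy, while the two surviving search variants trade a small accuracy loss for much lower latency and memory, which mirrors the trade-off the presentation examines.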
Event Type
Workshop
Time
Sunday, 12 November 2023, 4:58pm - 5:18pm MST
Location
704-706
Tags
Accelerators
Codesign
Heterogeneous Computing
Task Parallelism
Registration Categories
W